Research in the field of machine learning and AI, now a key technology in virtually every industry and company, is far too voluminous for anyone to read in its entirety. This column, Perceptron (formerly Deep Science), aims to collect some of the most relevant recent discoveries and papers, particularly in artificial intelligence, and explain why they matter.

This week in AI, researchers discovered a method that could allow adversaries to track the movements of remotely controlled robots even when the robots’ communications are end-to-end encrypted. The co-authors, who come from the University of Strathclyde in Glasgow, said their study shows that adopting cybersecurity best practices is not enough to stop attacks on autonomous systems.

Remote control, or teleoperation, promises to let operators guide one or multiple robots from afar in a variety of environments. Startups like Pollen Robotics, Beam and Tortoise have demonstrated the usefulness of teleoperated robots in supermarkets, hospitals and offices. Other companies are developing remote-controlled robots for tasks such as defusing bombs or surveying radiation-heavy sites.

But the new research shows that teleoperation, even when supposedly “secure,” is susceptible to surveillance. In a paper, the Strathclyde co-authors describe using a neural network to infer information about the operations a remotely controlled robot is carrying out. After collecting and analyzing samples of TLS-protected traffic between the robot and the controller, they found that the neural network could identify movements roughly 60% of the time and could also reconstruct “warehousing workflows” (e.g., picking up packages) with “high accuracy.”

Image credits: Sha et al.
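The side channel the researchers exploit is easy to illustrate: TLS hides packet contents, but an eavesdropper still sees packet sizes and timing. The sketch below is not the Strathclyde team's model (they use a neural network); it is an invented toy that classifies synthetic "movement" traces by their packet-length fingerprints with a nearest-centroid rule, just to show why encrypted traffic still leaks.

```python
import random

def length_histogram(packet_lengths, bins=8, max_len=1500):
    """Bucket packet lengths into a normalized histogram (the traffic 'fingerprint')."""
    hist = [0.0] * bins
    for n in packet_lengths:
        hist[min(n * bins // max_len, bins - 1)] += 1
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def make_trace(movement, n=200, rng=random):
    """Synthetic encrypted traffic: each movement gets a distinct size profile."""
    base = {"pick": 300, "place": 700, "rotate": 1100}[movement]
    return [min(1499, max(60, int(rng.gauss(base, 80)))) for _ in range(n)]

rng = random.Random(0)
movements = ["pick", "place", "rotate"]
# "Training": average fingerprint per movement, from labeled captures.
centroids = {m: length_histogram(make_trace(m, n=2000, rng=rng)) for m in movements}

def classify(trace):
    """Assign an unlabeled trace to the nearest centroid fingerprint."""
    fp = length_histogram(trace)
    return min(movements,
               key=lambda m: sum((a - b) ** 2 for a, b in zip(fp, centroids[m])))

hits = sum(classify(make_trace(m, rng=rng)) == m
           for m in movements for _ in range(50))
print(f"identified {hits}/150 movement traces correctly")
```

On cleanly separated synthetic traces this trivial classifier is nearly perfect; the paper's point is that a neural network can pull the same kind of signal out of far messier real traffic.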
Alarming in a less immediate way is a new study from researchers at Google and the University of Michigan that explored people’s relationships with AI-powered systems in countries with weak legislation and “national optimism” about AI. The work surveyed “financially stressed” users of instant loan platforms in India that target borrowers with creditworthiness determined by risk-modeling AI. According to the co-authors, users experienced feelings of indebtedness for the “blessing” of instant loans and an obligation to accept strict terms, share excessively sensitive data and pay high fees.

The researchers argue that the findings illustrate the need for greater “algorithmic accountability,” particularly where AI is used in financial services. “We argue that accountability is determined by platform and user power relationships, and we urge policymakers to exercise caution in taking a purely technical approach to encouraging algorithmic accountability,” they wrote. “Instead, we call for situated interventions that enhance user agency, enable meaningful transparency, reconfigure designer-user relationships, and provoke critical reflection in practitioners toward broader accountability.”

In less sobering research, a team of scientists from TU Dortmund University, Rhine-Waal University and LIACS at Universiteit Leiden in the Netherlands developed an algorithm they claim can “solve” the game Rocket League. Motivated to find a less computationally intensive way to create game-playing AI, the team leveraged what they call a “sim-to-sim” transfer technique, which trained the AI system to perform in-game tasks such as goalkeeping and striking inside a reduced, simplified version of Rocket League. (Rocket League basically resembles futsal, except with cars instead of human players, in teams of three.)

Image credits: Pleines et al.

It wasn’t perfect, but the researchers’ Rocket League system managed to save nearly every shot taken against it when playing goalkeeper.
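The sim-to-sim recipe, tune a policy in a cheap simplified simulator and then run it unchanged in a richer one, can be caricatured with a toy goalkeeper. Everything here (the dynamics, the gains, the save threshold) is invented for illustration; the actual work trains a deep RL agent, not a grid search.

```python
import random

def simulate(gain, drag=0.0, noise=0.0, rng=random, trials=200):
    """Fraction of shots saved by a keeper that tracks the ball with a given gain."""
    saves = 0
    for _ in range(trials):
        aim = rng.uniform(-1, 1)     # where the shot is aimed on the goal line
        vy = aim / 20                # ball crosses the line in ~20 steps
        ball = keeper = 0.0
        for _ in range(20):
            ball += vy
            vy *= 1 - drag                                          # richer sim: drag...
            keeper += gain * (ball - keeper) + rng.gauss(0, noise)  # ...and noise
        if abs(keeper - ball) < 0.15:                               # close enough to save
            saves += 1
    return saves / trials

rng = random.Random(1)
# "Training": grid-search the tracking gain in the cheap sim (no drag, no noise).
best_gain = max((g / 10 for g in range(1, 10)), key=lambda g: simulate(g, rng=rng))
# "Transfer": evaluate the very same policy in the fuller simulator.
save_rate = simulate(best_gain, drag=0.05, noise=0.01, rng=rng)
print(f"gain={best_gain}, save rate in the fuller sim: {save_rate:.2f}")
```

The point of the exercise is the structure, not the numbers: the policy never sees the richer dynamics during training, yet still performs well, which is the property the Rocket League team relies on to avoid training in the expensive full game.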
When on offense, the system successfully scored 75% of its shots, a respectable record.

Simulators for human movement are also advancing apace. Meta’s work on tracking and simulating human limbs has obvious applications in its AR and VR products, but it could also be used more broadly in robotics and embodied AI. The research it showed off this week got a shout-out from none other than Mark Zuckerberg.
Skeletons and muscle groups simulated in MyoSuite.

MyoSuite simulates muscles and skeletons in 3D as they interact with objects and one another; this is important for agents learning how to hold and manipulate things properly without crushing or dropping them, and it also makes grips and interactions in a virtual world more realistic. It supposedly runs thousands of times faster than comparable simulators on certain tasks, letting simulated learning processes happen much more quickly. “We’re going to open up these models so researchers can use them to further advance the field,” says Zuck. And they did!

Many of these simulations are agent- or object-based, but this MIT project seeks to simulate an overall system of independent agents: self-driving cars. The idea is that if you have a good number of cars on the road, you can have them work together not only to avoid collisions, but also to prevent idling and unnecessary stops at the lights ahead.
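The phasing behavior can be caricatured in a few lines: followers that hear a V2V broadcast start slowing early and gently, while uninformed drivers brake hard only at the last moment. All the numbers below (speeds, gaps, thresholds) are invented for illustration and have nothing to do with MIT's actual model.

```python
def full_stops(v2v, n_cars=8, steps=120, dt=0.5):
    """Count cars that come to a complete stop while approaching a red light."""
    speed = [15.0] * n_cars                   # m/s
    gap = [45.0] + [50.0] * (n_cars - 1)      # m; gap[0] is distance to the light
    stopped = [False] * n_cars
    for t in range(steps):
        for i in range(n_cars):
            if i == 0:
                # the lead car must stop for the light, which turns green at t == 30
                target = 0.0 if (gap[0] < 40 and t < 30) else 15.0
            elif v2v:
                # broadcast heard: match the car ahead's speed early, keep rolling
                target = max(speed[i - 1], 2.0)
            else:
                # no broadcast: react only when the gap ahead closes, brake to zero
                target = 0.0 if gap[i] < 40 else 15.0
            accel = max(-3.0, min(2.0, (target - speed[i]) / dt))
            speed[i] = max(0.0, speed[i] + accel * dt)
            stopped[i] = stopped[i] or speed[i] == 0.0
        gap[0] -= speed[0] * dt               # the light doesn't move
        for i in range(1, n_cars):
            gap[i] += (speed[i - 1] - speed[i]) * dt
    return sum(stopped)

print("full stops without V2V:", full_stops(False))
print("full stops with V2V:   ", full_stops(True))
```

Without the broadcast, a stop wave propagates back through the whole queue; with it, only the lead car ever hits zero, which is exactly the effect the animation shows.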
If you look closely, only the front cars ever actually stop.

As you can see in the animation above, a set of autonomous vehicles communicating via V2V protocols can basically keep all but the very front cars from coming to a stop at all, by progressively slowing down behind one another, though never so much that they actually halt. This kind of hypermiling behavior may not seem like it would save much gas or battery power, but when you scale it up to thousands or millions of cars it does make a difference, and it might make for a more comfortable ride, too. Good luck getting everyone to approach the intersection perfectly spaced like this, though.

Switzerland is taking a close look at itself, using 3D scanning technology. The country is making a huge map using UAVs equipped with lidar and other tools, but there’s a catch: the drones’ movement (deliberate and accidental) introduces error into the point map that must be corrected manually. That’s not a problem if you’re only scanning a single building, but it is when you’re scanning an entire country. Fortunately, a team at EPFL is integrating an ML model directly into the lidar capture stack that can determine when an object has been scanned multiple times from different angles, and use that information to align the point map into a single cohesive mesh. The news article isn’t particularly illuminating, but the accompanying paper goes into more detail, and an example of the resulting map can be seen in the video above.

Finally, in unexpected but very pleasant AI news, a team from the University of Zurich has designed an algorithm to track animal behavior so zoologists don’t have to sift through weeks of footage to find the two examples of a courtship dance. It’s a collaboration with the Zurich Zoo, which makes sense when you consider the following: “Our method can recognize even subtle or rare behavioral changes in research animals, such as signs of stress, anxiety or discomfort,” said lab head Mehmet Fatih Yanik.
So the tool could be used both for learning and tracking behaviors in captivity, improving the welfare of captive animals in zoos, and for other forms of animal study as well. Researchers could use fewer study animals and get more information in less time, with less work from graduate students poring over video files late into the night. Sounds like a win-win-win-win situation to me.
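The core trick, flagging the rare stretches of footage that deviate from an animal's baseline so humans only review those, can be sketched with invented numbers. The Zurich system learns from raw video; this stand-in just z-scores a synthetic per-window "activity" signal to surface two hidden bursts.

```python
import math
import random

rng = random.Random(7)
# Synthetic activity signal: long stretches of routine movement, with two brief,
# unusual bursts (say, a rare courtship display) hidden inside.
signal = [rng.gauss(1.0, 0.2) for _ in range(1000)]
for start in (300, 800):
    for i in range(start, start + 10):
        signal[i] += 3.0

def flag_windows(signal, win=10, z_thresh=4.0):
    """Return start indices of time windows whose mean activity is anomalous."""
    means = [sum(signal[i:i + win]) / win for i in range(0, len(signal), win)]
    mu = sum(means) / len(means)
    sd = math.sqrt(sum((m - mu) ** 2 for m in means) / len(means))
    return [i * win for i, m in enumerate(means) if abs(m - mu) / sd > z_thresh]

print("windows flagged for human review:", flag_windows(signal))
```

Instead of watching 1,000 frames, a zoologist reviews two flagged windows, which is the labor saving the article describes, just in miniature.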
Image credits: Ella Marushenko / ETH Zurich.

Also, I love the illustration.
Perceptron: risky teleoperation, Rocket League simulation and multiplication of zoologists