Using an image of the destination as a reference -- in this case, a house on the far side of the site -- plus priors about how to navigate various kinds of terrain, gleaned from hours of recorded video, the driverless vehicle can find its way to a destination nearly two miles away.

Robots and self-driving cars share one very big challenge: how to navigate the world. AI often treats this task as a mapping problem -- building a precise geometric model of the scene before the vehicle traverses the terrain. There may be an easier way.

In a paper posted on arXiv in February, researchers at the University of California, Berkeley, describe a wheeled robot that traveled several kilometers over suburban terrain, sticking to paths and avoiding previously unseen obstacles. Importantly, it does not map its environment the way many other methods, such as self-driving AI programs, do. Instead, it relies on data extracted from 30 hours of previously recorded video, plus heuristics and coarse topographical maps, to build a schematic of interrelated waypoints along the way -- no full map required.

The study, titled "ViKiNG: Vision-Based Kilometer-Scale Navigation with Geographic Hints," was authored by Ph.D. candidate Dhruv Shah and UC Berkeley assistant professor Sergey Levine, who has been working with Google for years on bringing artificial intelligence to robotics. Levine laid out many of the key ideas behind this work last year in a piece titled "How to Train Your Robot," which focused on so-called reinforcement learning, a form of machine learning that trains neural networks in stages toward a goal.
Shah and Levine's latest work, ViKiNG, has important ties to reinforcement learning. ViKiNG builds on the pair's previous system, called RECON, which stands for "rapid exploration controllers for outcome-driven navigation." RECON's training consisted of driving a wheeled robot -- a Jackal ground vehicle manufactured by Clearpath Robotics, outfitted with RGB cameras, lidar, and GPS -- through multiple environments to collect hours of video.
ViKiNG builds on its predecessor RECON by adding "hints" in the form of satellite imagery or overhead schematic maps. RECON learns what the authors call a "navigation prior" through a convolutional neural network that compresses and then decompresses image data -- an approach related to the "information bottleneck," a method of processing signals introduced by Naftali Tishby and colleagues in 2000. RECON exploits the software's ability to represent the visual environment compactly: it compresses images and recalls what stands out.

At test time, RECON is shown an image of a goal -- say, a specific building -- and has to figure out on the fly how to navigate to that new place. As it goes, RECON builds a graph of steps along the path to the target: an impromptu map. Using these techniques, the Jackal robot was able to navigate to goals up to 80 meters away in environments it had never encountered before, succeeding where other existing approaches to robot navigation have fallen short.

In ViKiNG, Shah and Levine extend RECON in a specific way: hints. They supply the Jackal's software with satellite imagery or aerial maps of the new terrain. As Shah and Levine write, "In contrast to RECON, which performs an uninformed search, ViKiNG incorporates geographic hints in the form of approximate GPS coordinates and aerial maps. This enables ViKiNG to reach long-range goals up to 25 times farther than the farthest goals reported by RECON, and to reach goals 15 times faster than RECON when exploring new environments."
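RECON's "impromptu map" is essentially a topological graph: stored observations become nodes, and an edge exists wherever a learned model predicts one node is reachable from another; navigating is then just graph search over that structure. The sketch below caricatures the idea in plain Python -- the node names, 2-D "embeddings," and distance threshold are invented stand-ins for the learned latent codes and reachability model, not the paper's actual code.

```python
import heapq
import math

# Toy "embeddings" standing in for RECON's learned latent goal codes.
# The real system compresses camera images (information bottleneck);
# plain 2-D points are a stand-in here.
nodes = {
    "start":  (0.0, 0.0),
    "corner": (30.0, 5.0),
    "gate":   (55.0, 20.0),
    "goal":   (80.0, 25.0),   # RECON reached goals roughly 80 m away
}

# Hypothetical threshold: an edge exists if the (stand-in) reachability
# model predicts one observation is directly reachable from another.
REACHABLE = 40.0

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Build the topological graph -- the "impromptu map."
edges = {n: [] for n in nodes}
for u in nodes:
    for v in nodes:
        if u != v and dist(nodes[u], nodes[v]) <= REACHABLE:
            edges[u].append(v)

def shortest_path(start, goal):
    """Dijkstra over the topological graph of stored observations."""
    pq = [(0.0, start, [start])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt in edges[node]:
            if nxt not in seen:
                step = dist(nodes[node], nodes[nxt])
                heapq.heappush(pq, (cost + step, nxt, path + [nxt]))
    return None

print(shortest_path("start", "goal"))  # ['start', 'corner', 'gate', 'goal']
```

The point of the structure is that no metric map of the whole environment is ever built: only pairwise reachability between remembered observations.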
Outline of the ViKiNG system: sampling and hints from images allow the system to dynamically build a local topological map and chart a route to a destination.

To RECON's training data of random walks of camera observations, ViKiNG adds 12 hours of video from teleoperated driving, in which humans guided the Jackal along paths such as sidewalks and hiking trails to build up those prior examples. The neural network used to process all the training data is rather mundane: the familiar MobileNet convolutional neural network. This time, the Jackal goes well beyond RECON's 80 meters -- the distance from origin to destination is about 3 kilometers, or nearly 2 miles.

In a video featured on the project's blog page, Shah and Levine show how the ViKiNG-equipped Jackal figures out how to get around previously unseen obstacles, such as a parked vehicle blocking its path. A companion video explains the work, and you can check it out at the bottom of this post.

RECON explicitly employs elements of reinforcement learning, and ViKiNG borrows from it in some ways as well. When asked about the connection to RL, Levine told ZDNet in an email, "I would describe ViKiNG as a reinforcement learning method with higher-level planning on top of it" -- the key being a low-level learned control approach for navigation combined with RL-like high-level planning.
Explicit high-level planning, as Levine describes it, provides a good way to deal with very long horizons. "A good way to look at the method is that it uses model-free [RL] techniques to handle the low-level problem of local navigation (for example, how to drive around a tree) and planning for the high-level problem of charting a path to a distant goal," he said. "I think it's actually a pretty natural fit -- much like a person driving a car might not think through every turn they make, but will do some deliberate planning in their mind to decide which route to take to their destination, perhaps reasoning about landmarks."

Levine argues the approach is highly relevant to more sophisticated navigation such as self-driving cars. ViKiNG amounts to a "sidewalk delivery robot," he said, "but autonomous driving or other higher-risk tasks (or even real sidewalk delivery that has to deal with heavy traffic) would need additional mechanisms to handle safety and constraints" -- something the current approach does not directly address.

On safety, additional work could include explicit instructions from a human "co-pilot" to guide the robot away from harm. It could also include imitating existing policies that would build in some safeguards. However, Levine says more research is needed to deal with unseen factors such as high-speed vehicles and jaywalking pedestrians. "Of course, [building] systems that provide strict safety guarantees is a major open problem. I do think more work is needed to make such a system safe enough for a self-driving car."
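The contrast the authors draw between RECON's "uninformed search" and ViKiNG's hint-guided search can be caricatured with classic graph search: an approximate GPS coordinate plays the role of a heuristic that biases the high-level planner toward the goal, so it reaches the destination while considering fewer dead ends. The toy grid, traversability costs, and the Dijkstra-versus-A* comparison below are illustrative assumptions, not the paper's actual planner.

```python
import heapq
import math

# Toy traversability prior, as if read off an aerial map:
# 1 = easy path (sidewalk/trail), 9 = costly (dense brush).
GRID = [
    [1, 1, 1, 1, 1],
    [9, 9, 9, 9, 1],
    [1, 1, 1, 1, 1],
    [1, 9, 9, 9, 9],
    [1, 1, 1, 1, 1],
]
H, W = len(GRID), len(GRID[0])
START, GOAL = (0, 0), (4, 4)

def plan(use_hint):
    """A* when use_hint is True (GPS straight-line heuristic);
    otherwise plain Dijkstra, i.e. uninformed search.
    Returns (path cost, number of nodes expanded)."""
    def h(cell):
        # Euclidean distance to the goal's (approximate) coordinates.
        return math.hypot(cell[0] - GOAL[0], cell[1] - GOAL[1]) if use_hint else 0.0

    pq = [(h(START), 0.0, START)]
    best = {START: 0.0}
    expanded = 0
    while pq:
        _, g, cell = heapq.heappop(pq)
        if g > best.get(cell, math.inf):
            continue  # stale entry
        expanded += 1
        if cell == GOAL:
            return g, expanded
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < H and 0 <= nc < W:
                ng = g + GRID[nr][nc]  # cost of entering the next cell
                if ng < best.get((nr, nc), math.inf):
                    best[(nr, nc)] = ng
                    heapq.heappush(pq, (ng + h((nr, nc)), ng, (nr, nc)))

cost_u, exp_u = plan(use_hint=False)
cost_i, exp_i = plan(use_hint=True)
# Same optimal cost; the hint never increases the number of expansions.
print(cost_u == cost_i, exp_i <= exp_u)
```

Because the straight-line heuristic never overestimates the remaining cost, the informed search still finds an optimal route; the hint only changes how much of the space gets explored -- the same qualitative effect behind ViKiNG reaching goals faster than RECON.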