Deep Science: Robots, meet world

Research papers come out far too frequently for anyone to read them all. That’s especially true in the field of machine learning, which now affects (and produces papers in) practically every industry and company. This column aims to collect some of the most relevant recent discoveries and papers — particularly in, but not limited to, artificial intelligence — and explain why they matter.

This edition, we have a lot of items concerned with the interface between AI or robotics and the real world. Of course most applications of this type of technology are meant for use in the real world, but this research is specifically about the inevitable difficulties that arise from limitations on either side of the real-virtual divide.

One issue that constantly comes up in robotics is how slow things actually go in the real world. Naturally some robots trained on certain tasks can do them with superhuman speed and agility, but for most that’s not the case. They need to check their observations against their virtual model of the world so frequently that tasks like picking up an item and putting it down can take minutes.

What’s especially frustrating about this is that the real world is the best place to train robots, since ultimately they’ll be operating in it. One approach to addressing this is to increase the value of every hour of real-world testing you do, which is the goal of this project over at Google.

In a rather technical blog post, the team describes the challenge of using and integrating data from multiple robots learning and performing multiple tasks. It’s complicated, but the gist is a unified process for assigning and evaluating tasks, with future assignments and evaluations adjusted based on the results. More intuitively, they create a process by which success at task A improves the robots’ ability to do task B, even if the two tasks are different.

Humans do it — knowing how to throw a ball well gives you a head start on throwing a dart, for instance. Making the most of valuable real-world training is important, and this shows there’s lots more optimization to do there.
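The post itself doesn’t publish code, but a minimal Python sketch can make the data-sharing idea concrete. Everything here is hypothetical (the task names, the success detectors, the helper function are illustrations, not Google’s implementation): each real-world episode gets checked against every task’s success criterion and stored in every task’s training buffer, so one robot-hour yields labeled data for many tasks at once.

```python
from collections import defaultdict

# Hypothetical success detectors: one per task, each deciding whether
# a recorded episode counts as a success for that task.
success_detectors = {
    "pick_up_object": lambda ep: ep["object_lifted"],
    "place_object": lambda ep: ep["object_lifted"] and ep["object_placed"],
}

# One replay buffer per task; a single episode can land in several.
replay_buffers = defaultdict(list)

def share_episode(episode):
    """Relabel one real-world episode against every known task.

    An episode collected while attempting task A may incidentally
    succeed (or usefully fail) at task B, so it is stored with the
    appropriate success label in every task's buffer.
    """
    for task, is_success in success_detectors.items():
        replay_buffers[task].append(
            {"data": episode, "success": is_success(episode)}
        )

# Example: a pick attempt that lifted the object but never placed it
# is a positive example for "pick_up_object" and a negative one for
# "place_object" — two labels from one expensive real-world trial.
share_episode({"object_lifted": True, "object_placed": False})
```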

Another approach is to improve the quality of simulations so they’re closer to what a robot will encounter when it takes its knowledge to the real world. That’s the goal of the Allen Institute for AI’s THOR training environment and its newest denizen, ManipulaTHOR.


Simulators like THOR provide an analogue to the real world where an AI can learn basic knowledge like how to navigate a room to find a specific object — a surprisingly difficult task! Simulators balance the need for realism with the computational cost of providing it, and the result is a system where a robot agent can spend thousands of virtual “hours” trying things over and over with no need to plug them in, oil their joints and so on.
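Simulators like THOR expose that virtual world through an ordinary programmatic interface. As a rough illustration, here is a minimal exploration loop using the ai2thor Python package (assuming `pip install ai2thor`; the scene name and action strings follow the library’s documented defaults, and the random policy is a stand-in for a learned agent):

```python
import random
from ai2thor.controller import Controller

# Spin up a simulated room; "FloorPlan1" is one of THOR's stock kitchen
# scenes. No charging, oiling or physical maintenance required.
controller = Controller(scene="FloorPlan1", gridSize=0.25)

actions = ["MoveAhead", "RotateLeft", "RotateRight"]

# A trivial random-exploration loop standing in for a trained policy.
# A real agent would feed event.frame (the RGB observation) into a
# model and choose actions from its output instead of random.choice.
for step in range(1000):
    event = controller.step(action=random.choice(actions))
    if not event.metadata["lastActionSuccess"]:
        # The agent bumped into something; a learning system would
        # treat this as training signal rather than just logging it.
        print(f"step {step}: action failed")

controller.stop()
```

Running thousands of these loops in parallel is what turns “hours” of virtual experience into something cheap enough to spend freely.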
