Research papers come out far too rapidly for anyone to read them all. That is especially true in machine learning, a field that now affects (and generates papers in) practically every industry and company. This column aims to collect some of the most relevant recent discoveries and publications, particularly in but not limited to artificial intelligence, and explain why they matter.
This issue features several articles on the interface between AI or robotics and the real world. Most applications of this kind of technology are, of course, intended for real-world use, but this research specifically addresses the inevitable difficulties that arise from constraints on both sides of the real-virtual divide.
One problem that keeps coming up in robotics is how slowly things move in the real world. Some robots trained for specific tasks can perform them with superhuman speed and agility, but for most that isn't the case: they have to check their observations against their virtual model of the world so frequently that tasks like picking up an object and putting it down can take minutes.
What is particularly frustrating about this is that the real world is the best place to train robots, since it is where they will ultimately operate. One way to address this is to increase the value of every hour of real-world testing you do. That is the goal of this project at Google.
In a fairly technical blog post, the team describes the challenge of using and integrating data from multiple robots that are learning and performing multiple tasks. It's complicated, but they describe creating a unified process for assigning and scoring tasks, then adjusting future assignments and scores based on the results. More intuitively, they create a process through which success at Task A improves the robots' ability to perform Task B, even when the two tasks are different.
People do this too: if you know how to throw a ball well, you will have a head start on throwing a dart, for example. Making the most of valuable real-world training time is important, and this work shows there is still plenty of room for improvement there.
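To make the assign-and-score loop described above more concrete, here is a minimal sketch of what such a scheduler might look like. Everything here is an illustrative assumption (the class name, the optimistic initial scores, and the "headroom" scoring rule), not Google's actual system:

```python
# Hypothetical sketch, not Google's implementation: assign robots to tasks,
# score each task by how much learning headroom remains, and adjust future
# assignments based on observed success rates.

class TaskScheduler:
    def __init__(self, tasks):
        # Optimistic initial scores so every task gets tried at least once.
        self.scores = {t: 1.0 for t in tasks}
        self.attempts = {t: 0 for t in tasks}
        self.successes = {t: 0 for t in tasks}

    def assign(self):
        # Send the next robot to the task with the highest score,
        # i.e. the one with the most room left to improve.
        return max(self.scores, key=self.scores.get)

    def record(self, task, success):
        # Update the running success rate for this task.
        self.attempts[task] += 1
        self.successes[task] += int(success)
        rate = self.successes[task] / self.attempts[task]
        # Score = estimated headroom: nearly mastered tasks drop in priority,
        # so effort shifts to tasks the fleet still struggles with.
        self.scores[task] = 1.0 - rate


scheduler = TaskScheduler(["pick", "place"])
for _ in range(10):
    scheduler.record("pick", True)   # "pick" is going well
scheduler.record("place", False)     # "place" is not
next_task = scheduler.assign()       # the struggling task gets priority
```

In this toy version, a fleet that has mastered "pick" automatically spends its remaining real-world hours on "place"; the cross-task transfer itself (Task A helping Task B) would live in the shared model the robots train, which is beyond a sketch this size.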
Another approach is to improve the fidelity of simulations so they more closely match what a robot will experience when it brings its knowledge into the real world. That is the goal of the Allen Institute for AI's THOR training environment and its newest resident, ManipulaTHOR.
Simulators like THOR offer an analogue to the real world in which an AI can learn basic skills, such as navigating a room to find a specific object (a surprisingly difficult task!). Simulators balance the need for realism against the computational cost of providing it, and the result is a system in which a robotic agent can spend thousands of virtual "hours" trying things over and over without ever needing to be plugged in, have its joints lubricated, and so on.