VR as the Gateway Into a Robot’s Mind

Thomas De Moor
Published in OneBonsai · 5 min read · Jan 4, 2019


Photo by Liam Charmer on Unsplash

A robot needs two things: a physical structure to operate with and a computer system that allows it to carry out a series of actions automatically. By that definition, a self-driving car can be considered a robot, and so can the programmable coffee machine that makes you coffee at 6 AM every weekday.

Robots have been used for decades in manufacturing, the automotive industry, and the military. They're highly efficient at tasks that consist of a structured set of actions. But advances in artificial intelligence are making our robots capable of much more.

The most visible example of this progress comes from a company called Boston Dynamics. For decades, emulating human behavior that even a toddler is capable of, such as climbing stairs, was impossible for even the most advanced robot. This is slowly changing: Boston Dynamics' robots can climb stairs, open doors, and keep their balance when given a firm push.

No robots were harmed in this gif

But as the computer systems running these robots grow more complex, so does the challenge of training them efficiently. After all, a computer needs to be programmed and needs some form of input. Even a neural network that's capable of learning on its own requires some initial feedback to start learning.
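To make that concrete, here's a toy sketch (plain NumPy, entirely illustrative, not any real robot's training code) of why a learner can't improve until it's given demonstrated targets to compare itself against:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=3)            # untrained model: random behavior

def predict(x):
    return x @ weights

def train_step(x, target, lr=0.01):
    """One gradient step on squared error -- impossible without a target."""
    global weights
    error = predict(x) - target         # needs a demonstrated/desired output
    weights = weights - lr * error * x  # gradient of 0.5 * error**2

# Until demonstrations arrive, there is nothing to compute an error against.
demo_inputs = rng.normal(size=(100, 3))                  # observed states
demo_targets = demo_inputs @ np.array([1.0, -2.0, 0.5])  # demonstrated actions

for _ in range(10):                     # a few passes over the demonstrations
    for x, t in zip(demo_inputs, demo_targets):
        train_step(x, t)

print(weights)  # converges toward [1.0, -2.0, 0.5] as feedback accumulates
```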

This is where virtual reality provides a solution.

Teaching Artificial Intelligence What to Do

OpenAI is an AI research company co-founded by Elon Musk. In May 2017, OpenAI announced it had created a robotics system trained entirely in simulation. The system consisted of two AI algorithms: a vision network that works out where everything is, and an imitation network meant to replicate the changes the vision network sees.

A human wearing a VR headset performed an action in VR just once; the robotics system saw what was happening and physically replicated the action. The robot was then smart enough to reproduce that behavior in different real-world setups.

Stacking Demo (courtesy of OpenAI)
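As a rough illustration of that two-network split, here's a minimal PyTorch-style sketch. The class names, layer sizes, and state encodings are all hypothetical; OpenAI's actual architecture is far more sophisticated:

```python
import torch
import torch.nn as nn

class VisionNetwork(nn.Module):
    """Maps a camera image to estimated object positions."""
    def __init__(self, num_objects=3):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_objects * 3),   # an (x, y, z) estimate per object
        )

    def forward(self, image):
        return self.backbone(image)

class ImitationNetwork(nn.Module):
    """Maps the current scene state plus the demonstrated state to an action."""
    def __init__(self, state_dim=9, action_dim=7):
        super().__init__()
        self.policy = nn.Sequential(
            nn.Linear(2 * state_dim, 64), nn.ReLU(),
            nn.Linear(64, action_dim),        # e.g. 7 joint velocities
        )

    def forward(self, current_state, demo_state):
        return self.policy(torch.cat([current_state, demo_state], dim=-1))

# One control step: see the scene, compare it against the single VR
# demonstration, and produce the next motor action.
vision, imitation = VisionNetwork(), ImitationNetwork()
camera_image = torch.rand(1, 3, 64, 64)   # stand-in for the robot's camera
demo_state = torch.rand(1, 9)             # state extracted from the VR demo
action = imitation(vision(camera_image), demo_state)
```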

This is a highly effective way to train algorithms. You don't need physical resources, as you can build your training assets virtually at a much lower cost. You don't need to be physically present, as you can replicate the robot's environment in VR and demonstrate the desired actions from a distance. And you don't need to act out every possible scenario, as the robot's algorithms are intelligent enough to grasp your main purpose (stacking blocks, in OpenAI's example).
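One way those advantages play out in practice is scene randomization: cheaply generating thousands of varied virtual training scenes instead of staging them physically. A minimal sketch, with a made-up scene format:

```python
import random

def randomized_scene():
    """Return one randomly varied block-stacking scene description."""
    num_blocks = random.randint(2, 6)
    return {
        "num_blocks": num_blocks,
        "block_positions": [
            (random.uniform(-0.3, 0.3), random.uniform(-0.3, 0.3))
            for _ in range(num_blocks)
        ],
        "lighting": random.uniform(0.5, 1.5),        # vary rendering conditions
        "camera_jitter": random.uniform(-0.05, 0.05),
    }

# Thousands of training variations cost only compute time, not lab time.
training_scenes = [randomized_scene() for _ in range(10_000)]
```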

Operating a Robot Through VR

Yet even though AI is capable of more and more, there are still areas where a human does the job far better. Here, too, robots and VR can be of tremendous use. One example is bomb disposal: defusing a bomb requires human expertise to understand which wires to cut or which parts to deactivate, but it's also highly dangerous.

A bomb disposal robot

A remotely controlled robot can be sent in instead. Today, the people operating these robots use controllers and computer screens. But VR can place the human operator inside the robot's mind: a far more immersive environment that can surface information a flat screen would miss.

The robot can instantly replicate the operator's head movements, letting them look around and understand the environment. The operator can examine the bomb from multiple angles, and cut into it and defuse it more accurately.
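A sketch of what that head-tracking loop could look like. Both `read_hmd_orientation` and `RobotClient` are hypothetical stand-ins for a real VR SDK and robot API:

```python
import time

def read_hmd_orientation():
    """Hypothetical: would query the VR runtime for the headset's
    yaw and pitch, in degrees."""
    return 0.0, 0.0

class RobotClient:
    """Hypothetical: would send commands over the network to the robot."""
    def set_camera_angles(self, pan_deg, tilt_deg):
        print(f"pan={pan_deg:.1f}, tilt={tilt_deg:.1f}")

robot = RobotClient()
PAN_LIMIT, TILT_LIMIT = 170.0, 60.0   # clamp to the camera mount's real range

for _ in range(600):                  # ~10 seconds of tracking at 60 Hz
    yaw, pitch = read_hmd_orientation()
    pan = max(-PAN_LIMIT, min(PAN_LIMIT, yaw))
    tilt = max(-TILT_LIMIT, min(TILT_LIMIT, pitch))
    robot.set_camera_angles(pan, tilt)
    time.sleep(1 / 60)                # a high update rate keeps tracking fluid
```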

Being inside the mind of a robot and controlling its physical movements in real time is already possible with popular devices originally meant for entertainment, such as the Oculus Rift or the HTC Vive.

A similar application of VR works in healthcare too. While we're still a long way off from robots autonomously performing surgery, robot-assisted surgery (RAS) offers an opportunity for VR and AR to be integrated into the operating room.

Stryker is a Fortune 500 medical technology company that builds robots for surgeons. One of its products is a burring instrument: before the operation, the surgeon highlights on a 3D planning model the bone sections to be shaved away during surgery.

Highlighting bone sections on a 3D model before surgery

During the procedure, the handheld instrument maps that 3D model onto a computer screen and provides physical resistance if the surgeon accidentally moves outside the highlighted area. This stops surgeons from removing too much bone from a patient.
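Conceptually, that resistance behaves like a virtual spring around the planned region. A simplified sketch, assuming a spherical region and a made-up force function, not Stryker's actual control code:

```python
import numpy as np

PLAN_CENTER = np.array([0.0, 0.0, 0.0])   # centre of the planned bone region
PLAN_RADIUS = 0.02                        # 2 cm spherical region, for simplicity
STIFFNESS = 500.0                         # N/m of corrective force

def resistance_force(tool_tip):
    """Return a force pushing the tool back toward the planned region."""
    offset = tool_tip - PLAN_CENTER
    dist = np.linalg.norm(offset)
    if dist <= PLAN_RADIUS:
        return np.zeros(3)                # inside the plan: no resistance
    # Outside: spring-like force proportional to how far the tool strayed.
    return -STIFFNESS * (dist - PLAN_RADIUS) * (offset / dist)

print(resistance_force(np.array([0.03, 0.0, 0.0])))  # pushes back along -x
```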

While this information is still displayed on a computer screen today, it's a great opportunity for AR: overlay the plan directly onto the patient's body, and the surgeon wouldn't even have to look up at the screen anymore.

In Conclusion

Virtual reality is a cost-effective way to teach an algorithm what to do. It also lets people accurately control robots from a distance, in a more immersive environment than computer screens can ever provide. And AR can give surgeons vital information in the operating room, helping them do their work more accurately.

There are many other use cases where VR and AR can integrate with robotics for a more enhanced experience. This integration will only deepen as both technologies improve.

If you’re curious to know how OneBonsai helps manufacturing and healthcare companies integrate VR into their workflow, contact us using the info below:

Website: onebonsai.com
Twitter: https://twitter.com/One_Bonsai
Facebook: https://www.facebook.com/onebonzai
Linkedin: https://www.linkedin.com/company/onebonsai
Youtube: https://www.youtube.com/channel/UCaibxtMmO4fvXr7S9h0TufQ
Blog: https://lab.onebonsai.com
