June 17, 2019
Now your phone can become a robot that does the boring work
What you do is what the robot does
WEST LAFAYETTE, Ind. — If any factory worker could program low-cost robots, then more factories could actually use robotics to increase worker productivity.
This is because workers would be able to shift to taking on more varied and higher-level tasks, and factories could produce a greater variety of products.
That’s the idea behind a prototype smartphone app Purdue University researchers have developed that allows a user to easily program any robot to perform a mundane activity, such as picking up parts from one area and delivering them to another.
The setup could also take care of household chores – no more plants dying because you forgot to water them.
Purdue researchers present their research on the embedded app, called V.Ra, on June 23 at DIS 2019 in San Diego. The platform is patented through the Purdue Research Foundation Office of Technology Commercialization, with plans to make it available for commercial use.
“Smaller companies can’t afford software programmers or expensive mobile robots,” said Karthik Ramani, Purdue’s Donald W. Feddersen Professor of Mechanical Engineering. “We’ve made it to where they can do the programming themselves, dramatically bringing down the costs of building and programming mobile robots,” he said.
Using augmented reality, the app allows the user to either walk out where the robot should go to perform its tasks, or draw out a workflow directly into real space. The app offers options for how those tasks can be performed, such as under a certain time limit, on repeat or after a machine has done its job.
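For illustration, a task plan like the one described above can be thought of as a sequence of waypoints (captured as the user walks or draws the path) and actions carrying modifiers such as repetition or a time limit. The following sketch is hypothetical; V.Ra's actual data model is not public, and every class and field name here is invented.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Union

# Hypothetical sketch (not the actual V.Ra data model): a task plan as a
# sequence of waypoints, captured while the user walks or draws the path,
# plus actions with optional modifiers such as repetition, a time limit,
# or waiting on another machine to finish.

@dataclass
class Waypoint:
    x: float  # meters, in the AR session's world frame
    y: float

@dataclass
class Action:
    name: str                             # e.g. "pick_up", "water_plant"
    repeat: int = 1                       # perform the action this many times
    time_limit_s: Optional[float] = None  # abort the action if exceeded
    wait_for: Optional[str] = None        # e.g. a machine that must finish first

@dataclass
class TaskPlan:
    steps: List[Union[Waypoint, Action]] = field(default_factory=list)

    def add(self, step):
        self.steps.append(step)
        return self  # allow chaining while authoring

plan = TaskPlan()
plan.add(Waypoint(0.0, 0.0)).add(Waypoint(2.5, 1.0))
plan.add(Action("pick_up")).add(Waypoint(0.0, 3.0))
plan.add(Action("drop_off", time_limit_s=60.0))

print(len(plan.steps))  # prints: 5
```

Once docked, an executor on the phone would simply walk this list in order, driving to each waypoint and performing each action.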
After programming, the user drops the phone into a dock attached to the robot. While the phone needs to be familiar with the type of robot it’s “becoming” to perform tasks, the dock can be wirelessly connected to the robot’s basic controls and motor.
The phone is both the eyes and brain for the robot, controlling its navigation and tasks.
“As long as the phone is in the docking station, it is the robot,” Ramani said. “Whatever you move about and do is what the robot will do.”
To get the robot to execute a task that involves wirelessly interacting with another object or machine, the user simply scans that object's QR code while programming, effectively creating a so-called "Internet of Things" network. Once docked, the phone (as the robot) uses information from the QR code to work with those objects.
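Conceptually, the QR code only needs to carry enough information to identify the device and how to reach it; scanning then amounts to adding an entry to a registry the docked phone can consult later. A minimal sketch under that assumption (the payload format, field names, and registry are all made up for illustration):

```python
import json

# Hypothetical sketch: a scanned QR code carries a small JSON payload that
# identifies an IoT object and how to reach it. "Scanning" here just decodes
# the payload and records the device in a registry the docked phone can use
# later to issue commands.

registry = {}

def register_from_qr(payload: str) -> str:
    """Decode a QR payload and add the device to the registry."""
    device = json.loads(payload)
    registry[device["id"]] = device
    return device["id"]

# Payload the QR code on a 3D printer might encode (invented for this example).
qr_payload = json.dumps({
    "id": "printer-01",
    "type": "3d_printer",
    "address": "192.168.1.42",
    "commands": ["start_job", "stop_job", "report_status"],
})

device_id = register_from_qr(qr_payload)
print(device_id, registry[device_id]["commands"][0])  # prints: printer-01 start_job
```

With a registry like this, a task plan only needs to store the device ID; the connection details travel with the scanned code rather than being programmed by hand.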
The researchers demonstrated this with robots watering a plant, vacuuming and transporting objects. The user can also monitor the robot remotely through the app and make it start or stop a task, such as to go charge its battery or begin a 3D-printing job. The app provides an option to automatically record video when the phone is docked, so that the user can play it back and evaluate a workflow.
By building on so-called "simultaneous localization and mapping" (SLAM) algorithms, which are also used in self-driving cars and drones, Ramani's lab enabled the app to navigate and interact with its environment according to what the user specifies.
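A SLAM system continuously estimates the phone's position and heading in the mapped space; steering toward the next waypoint then reduces to computing a distance and a heading correction from that estimate. The helper below is an illustrative sketch of that step only, not the system's actual control loop, which is not public:

```python
import math

# Illustrative only: given a pose estimate from a SLAM system, as
# (x, y, heading_radians), compute the distance and heading correction
# needed to reach the next waypoint. Names and conventions are assumed.

def steer_to_waypoint(pose, waypoint):
    """Return (distance, heading_error_radians) from pose to waypoint."""
    px, py, heading = pose
    wx, wy = waypoint
    dx, dy = wx - px, wy - py
    distance = math.hypot(dx, dy)
    target_heading = math.atan2(dy, dx)
    # Normalize the heading error into (-pi, pi] so the robot turns
    # the short way around.
    error = (target_heading - heading + math.pi) % (2 * math.pi) - math.pi
    return distance, error

dist, err = steer_to_waypoint((0.0, 0.0, 0.0), (1.0, 1.0))
print(round(dist, 3), round(err, 3))  # prints: 1.414 0.785
```

In practice a controller would call something like this every frame, turning to reduce the heading error and driving forward until the distance falls below a threshold.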
A YouTube video is available at https://www.youtube.com/watch?v=_VCIHPDbcLk.
“We don’t undervalue the human. Our goal is for everyone to be able to program robots, and for humans and robots to collaborate with each other,” Ramani said.
Since creating the prototype, Ramani’s lab has been testing it in real factory settings to evaluate user-driven applications. Ultimately, the app is a step toward future “smart” factories, powered by artificial intelligence and augmented reality, that complement workers and increase their productivity rather than replace them, Ramani said.
The work aligns with Purdue's Giant Leaps celebration, acknowledging the university’s global advancements made in artificial intelligence as part of Purdue’s 150th anniversary. This is one of the four themes of the yearlong celebration’s Ideas Festival, designed to showcase Purdue as an intellectual center solving real-world issues.
A grant from the National Science Foundation’s Future of Work at the Human-Technology Frontier program is supporting the lab’s continued research in enabling humans to more easily create, program and collaborate with robots.
Media contact: Kayla Wiles, 765-494-2432, email@example.com
Source: Karthik Ramani, 765-494-5725, firstname.lastname@example.org
Note to Journalists: For a copy of the paper, please contact Kayla Wiles, Purdue News Service, at email@example.com. A YouTube video is available at https://www.youtube.com/watch?v=_VCIHPDbcLk and other multimedia can be found in a Google Drive folder at https://bit.ly/2QOO6Jt. The video was prepared by Jared Pike, communications specialist for Purdue University’s School of Mechanical Engineering.
V.Ra: An In-Situ Visual Authoring System for Robot-IoT Task Planning with Augmented Reality
Yuanzhi Cao1, Zhuangying Xu1, Fan Li2, Wentao Zhong1, Ke Huo1, Karthik Ramani1
1Purdue University, West Lafayette, IN, USA
2Tsinghua University, Beijing, China
We present V.Ra, a visual and spatial programming system for robot-IoT task authoring. In V.Ra, programmable mobile robots serve as binding agents that link stationary IoT devices and perform collaborative tasks. We establish an ecosystem that coherently connects the three key elements of robot task planning, the human, the robot, and the IoT devices, with a single mobile AR device. Users author tasks with the handheld Augmented Reality (AR) interface; placing the AR device onto the mobile robot then directly transfers the task plan in a what-you-do-is-what-robot-does (WYDWRD) manner. The mobile device mediates the interactions between the user, the robot, and the IoT-oriented tasks, and guides path-planning execution with its embedded simultaneous localization and mapping (SLAM) capability. Through various use cases and preliminary studies, we demonstrate that V.Ra enables instant, robust, and intuitive room-scale authoring of navigation and interaction tasks.