October 22, 2019
Watch your ghost teach a robot how to tag-team
WEST LAFAYETTE, Ind. — When Tesla failed to hit weekly production targets in the first quarter of 2018, chief executive Elon Musk blamed it on “excessive automation.” The robots were slowing things down and “underrated” humans could do better.
Musk resorted to pulling all-nighters and sleeping at the factory so that customer deliveries wouldn’t be further delayed.
For those manufacturing jobs where humans have an edge, Purdue University engineers have introduced “GhostAR”: an augmented reality platform that turns both the user and the robot into “ghosts.” The user can then plan out how to collaborate with the robot and work out kinks before actually performing a task.
“The dexterity of the human hand may never be replaced by the robot. But there are a lot of other tasks that are much more easily done by a robot, such as heavy lifting or precise manipulation,” said Yuanzhi Cao, a Ph.D. student in mechanical engineering at Purdue. “If we put their strengths together, productivity would increase.”
The researchers presented GhostAR on Tuesday (Oct. 22) at the 32nd ACM User Interface Software and Technology Symposium in New Orleans.
The technology solves a big engineering conundrum: for humans and robots to truly collaborate, each needs to know exactly what the other is doing.
“Factories currently prefer for humans and robots to work separately because they would need an extremely robust method to sync a human with a robot. That’s not the case for current technology,” Cao said.
With GhostAR, whatever plan a user makes with the ghost form of the robot while wearing an augmented reality headset is communicated to the real robot through a cloud connection, allowing both the user and the robot to know what the other is doing as they perform a task.
The system also lets the user plan a task directly in time and space, without any programming knowledge. A YouTube video is available at https://www.youtube.com/watch?v=CikZMBcKLIE.
“It’s totally codeless. The user programs the robot by demonstrating in an augmented reality environment how the two will work together,” said Karthik Ramani, the Donald W. Feddersen Professor of Mechanical Engineering.
First, the user acts out the human part of the task to be completed with a robot. The system then captures the human’s behavior and displays it to the user as an avatar ghost, representing the user’s presence in time and space.
Using the human ghost as a time-space reference, the user programs the robot via its own ghost to match up with the human’s role. The user and robot then perform the task as their ghosts did.
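The two-step ghost workflow described above can be sketched with a toy data model. This is a hypothetical illustration, not GhostAR's actual representation: the `Ghost` class, the pose labels, and the timings are all assumptions chosen for demonstration.

```python
# Sketch of the ghost-based authoring flow (hypothetical data model;
# not the GhostAR implementation).
from dataclasses import dataclass, field

@dataclass
class Ghost:
    """A recorded actor trajectory: timestamped poses in task space."""
    name: str
    keyframes: list = field(default_factory=list)  # (time_s, pose) pairs

    def record(self, time_s, pose):
        self.keyframes.append((time_s, pose))

# 1. The user acts out the human part of the task; the system captures
#    it and displays it back as an avatar ghost.
human_ghost = Ghost("human")
for t, pose in [(0.0, "reach"), (1.0, "grasp"), (2.0, "lift")]:
    human_ghost.record(t, pose)

# 2. Using the human ghost as a time-space reference, the user authors
#    the robot's ghost so its actions line up with the human's role.
robot_ghost = Ghost("robot")
for t, pose in [(0.0, "wait"), (1.0, "support"), (2.0, "hold_steady")]:
    robot_ghost.record(t, pose)

# 3. At run time, user and robot perform the task as their ghosts did:
#    replay the time-aligned keyframes side by side.
for (t, h_pose), (_, r_pose) in zip(human_ghost.keyframes,
                                    robot_ghost.keyframes):
    print(f"t={t:.1f}s  human: {h_pose:<6}  robot: {r_pose}")
```

The point of the sketch is the pairing: both actors' plans live on the same timeline, so each recorded human keyframe has a matching robot keyframe to play against.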
GhostAR gives the user the option of planning tasks to be completed either simultaneously or one after the other. The researchers say the system could also scale to planning the tasks of hundreds of robots at once.
The team will next explore the use of GhostAR for training other humans.
A patent has been filed for this technology through the Purdue Research Foundation Office of Technology Commercialization. The work was funded by the National Science Foundation under grants FW-HTF 1839971, IIS (NRI) 1637961 and IIP (PFI:BIC) 1632154.
Media contact: Steve Tally, 765-494-9809, firstname.lastname@example.org
Writer: Kayla Wiles
Sources: Karthik Ramani, 765 494-5725, email@example.com
Yuanzhi Cao, firstname.lastname@example.org
Note to Journalists: For a copy of the paper, please contact Steve Tally, Purdue News Service, at email@example.com. A YouTube video is available at https://www.youtube.com/watch?v=CikZMBcKLIE. The video was produced by Jared Pike, communications specialist for Purdue University’s School of Mechanical Engineering.
We present GhostAR, a time-space editor for authoring and acting out Human-Robot-Collaborative (HRC) tasks in-situ. Our system adopts an embodied authoring approach in Augmented Reality (AR) for spatially editing the actions and programming the robots through demonstrative role-playing. We propose a novel HRC workflow that externalizes the user’s authoring as a demonstrative and editable AR ghost, allowing for spatially situated visual referencing, realistic animated simulation, and collaborative action guidance. We develop a dynamic time warping (DTW) based collaboration model which takes the real-time captured motion as input, maps it to the previously authored human actions, and outputs the corresponding robot actions to achieve adaptive collaboration. We emphasize in-situ authoring and rapid iteration of joint plans without an offline training process. Further, we demonstrate and evaluate the effectiveness of our workflow through HRC use cases and a three-session user study.
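The DTW-based collaboration model in the abstract can be illustrated with a minimal sketch: compute the dynamic time warping distance between the live motion and each previously authored human action, pick the closest match, and emit the robot action authored alongside it. The 1-D motion traces, the action names, and the lookup table below are assumptions for demonstration, not the authors' implementation.

```python
# Minimal sketch of a DTW-based collaboration model (illustration only;
# the traces, action names, and mapping are assumptions).

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# Previously authored human actions (as 1-D motion traces), each paired
# with the robot action authored to accompany it.
AUTHORED = {
    "reach_left":  ([0.0, 0.2, 0.5, 0.9, 1.0], "hold_part_left"),
    "reach_right": ([1.0, 0.8, 0.5, 0.1, 0.0], "hold_part_right"),
}

def robot_action_for(live_motion):
    """Map real-time captured motion to the closest authored human
    action via DTW, and return the corresponding robot action."""
    best = min(AUTHORED.items(),
               key=lambda kv: dtw_distance(live_motion, kv[1][0]))
    return best[1][1]

print(robot_action_for([0.1, 0.3, 0.6, 1.0]))  # closest to "reach_left"
```

Because DTW aligns sequences elastically in time, the live motion can be faster or slower than the authored demonstration and still map to the right action, which is what makes the collaboration adaptive.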