Artificial Intelligence

Enhanced Robot "Vision" Enables More Natural Interaction With Humans

[Photo: Robot shaking hands]

A wide-eyed, soft-spoken robot named Pepper motors around the Intelligent Systems Lab at Rensselaer. One of the researchers tests Pepper, making various gestures as the robot accurately describes what he’s doing. When he crosses his arms, the robot identifies from his body language that something is off.

“Hey, be friendly to me,” Pepper says.

Pepper’s ability to pick up on nonverbal cues comes from the enhanced “vision” the lab’s researchers are developing. Using advanced computer vision and artificial intelligence, the team is improving the ability of robots like this one to interact naturally with humans.

“What we have been doing so far is adding visual understanding capabilities to the robot, so it can perceive human action and can naturally interact with humans through these nonverbal behaviors, like body gestures, facial expressions, and body pose,” says Qiang Ji, professor of electrical, computer, and systems engineering, and the director of the Intelligent Systems Lab.

With the support of government funding over the years, researchers at Rensselaer have mapped the human face and body so that computers, aided by cameras built into the robots and machine-learning techniques, can perceive nonverbal cues and identify human action and emotion.
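To give a flavor of what body-pose perception involves, here is a minimal sketch using the open-source MediaPipe library and a webcam at index 0. The library choice, the camera setup, and the crossed-arms heuristic are all illustrative assumptions; the article does not describe the lab’s actual models or pipeline.

```python
import cv2
import mediapipe as mp

# A minimal sketch of body-pose perception, assuming the open-source
# MediaPipe library and a camera at index 0. Purely illustrative --
# the article does not describe the lab's actual models.
mp_pose = mp.solutions.pose

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()

if ok:
    with mp_pose.Pose(static_image_mode=True) as pose:
        # MediaPipe expects RGB; OpenCV captures BGR.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        lm = results.pose_landmarks.landmark
        left_wrist = lm[mp_pose.PoseLandmark.LEFT_WRIST]
        right_wrist = lm[mp_pose.PoseLandmark.RIGHT_WRIST]
        # Crude crossed-arms cue: the subject's left wrist appears to the
        # image-left of the right wrist (assumes a non-mirrored camera).
        if left_wrist.x < right_wrist.x:
            print("Arms look crossed: 'Hey, be friendly to me.'")
```

A real system would track landmarks over many frames and classify gestures with a learned model rather than a single geometric rule like this one.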

Among other things, Pepper can count how many people are in a room, scan an area to look for a particular person, estimate an individual’s age, recognize facial expressions, and maintain eye contact during an interaction.
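As a rough illustration of the people-counting capability, a face-detection pass over a single camera frame can approximate a head count. The Haar-cascade detector below, bundled with OpenCV, is a stand-in assumption, not the detector Pepper actually uses.

```python
import cv2

# Rough head count from one frame using OpenCV's bundled Haar cascade.
# A stand-in for whatever detector Pepper actually runs.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)  # camera index 0 assumed
ok, frame = cap.read()
cap.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print(f"People visible: {len(faces)}")
```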

Ji sees computer vision as the next step in developing technologies that people interact with in their homes every day. Currently, most popular AI-enabled virtual assistants rely almost entirely on vocal interactions.

“There’s no vision component. Basically, it’s an audio component only,” Ji says. “In the future, we think it’s going to be multimodal, with both verbal and nonverbal interaction with the robot.”

The team is working on other vision-centered developments, such as technology that can track eye movement, which could be applied to smartphones and tablets.
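Eye-movement tracking needs specialized gaze models, but a common first step is simply locating the eye regions in a frame. The sketch below uses OpenCV’s bundled eye cascade purely as an assumed stand-in for that first stage.

```python
import cv2

# First step toward gaze tracking: locate eye regions in a frame.
# OpenCV's bundled eye cascade is an assumed stand-in; real eye-movement
# tracking would fit a gaze model on top of regions like these.
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml"
)

cap = cv2.VideoCapture(0)  # camera index 0 assumed
ok, frame = cap.read()
cap.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in eye_cascade.detectMultiScale(gray, 1.1, 5):
        print(f"Eye region at ({x}, {y}), size {w}x{h}")
```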

Ji says the research in his lab is currently supported by the National Science Foundation and the Defense Advanced Research Projects Agency. The Intelligent Systems Lab has also received funding over the years from public and private sources, including the U.S. Department of Defense, the U.S. Department of Transportation, and Honda.
