Sept. 9, 2021

A Deep Dive into Humanoid Robots


It’s time for another deep dive! This week we will be exploring humanoid robots: their potential benefits, the safety concerns they raise, and, of course, their creepy factor.  

What Humanoid Robots Can Do for Us

Robot assistants and partners have long been a part of science fiction (e.g., C-3PO from Star Wars, Number Five from Short Circuit, and the T-800 from The Terminator); it comes as no surprise that engineers and scientists around the world are working hard to bring those dreams to life. 

One of the ways companies are getting people excited about this is by focusing on robots that can do tasks that humans do not want to do, or would be unsafe for us to do. Robots also help to address labor shortages in critical areas, such as manufacturing, healthcare, and childcare. 

At this stage in the robot game, most robots are still very robot-y in appearance. This is not necessarily a problem: it is important to design the robot to fit the need it is being built for. As an example, vacuum robots look like moving discs, which helps them to navigate easily around rooms and objects. They are also tall enough to not get stuck under appliances like refrigerators and ovens, but short enough to get under most furniture like beds and couches. 

Today we are going to look at some non-humanoid robots and the tasks they do that may, in the future, be done by humanoid robots, and we will look at what current humanoid robots are doing. 

On Our Way There

Non-humanoid robots do many jobs in our society, whether we see them doing those jobs or not. As artificial intelligence (AI) and robotic technology develop, the price of these robots will go down, and we will start seeing them in even more places. 

Last year we talked about robot lifeguards and autonomous wheelchairs. The YMCA began using the Manta 3,000 to help its lifeguards identify people who may be drowning in the pool. With drowning being the third leading cause of unintentional injury death in the United States, this type of robot will literally be a lifesaver as it is implemented in more facilities. 

In early 2020, British Airways began trials of an autonomous wheelchair by Japanese startup WHILL. The goal of the autonomous wheelchair implementation is to provide better service for people who have disabilities. These wheelchairs require only the tap of a button to take the user where they want to go and can be rerouted for coffee or restroom stops on the way to the gate. After the user leaves the chair for their plane at their gate, the wheelchair automatically returns to a charging station. 

We have also talked about exosuits and exoskeletons on the show, particularly about medical exosuits and industrial exoskeletons. Soft robotic exosuits were lab-tested with stroke survivors in a collaboration between Harvard and Boston University. They found that the use of the exosuits helped the stroke survivors recover from paralysis faster. Participants were able to walk faster and farther than they would have without incorporating the exosuits into their rehabilitation program. 

Delta Airlines and Sarcos partnered together to develop and test the Sarcos Guardian XO, a battery-powered full-body robotic suit that is designed to boost employee performance, endurance, and safety. The suit allows employees to lift up to 200 pounds repeatedly for up to eight hours. A perhaps more hidden benefit of this type of robotic exoskeleton is that it allows people who might otherwise not have been qualified for heavy lifting jobs due to strength requirements to enter the talent pool. 

Robot security guards have made strides in the last few years as well. A major feature of robotic security guards is surveillance, with these robots providing video footage and movement monitoring as they patrol a set area or along a programmed patrol route. Some of the security guard robots can send alarms to the human monitor’s phone, blare out their own loud alarm, or provide a microphone capability that allows the human monitor to speak to the intruder. 

We are also seeing advances in search and rescue robots. These robots provide search and rescue support in many forms: providing video feed as they search, bringing rescue supplies to people who are stuck, and entering tricky terrain or unsafe disaster sites. Though search and rescue robots can sometimes be a bit off-putting in appearance, they help to provide safety for both those in need of rescuing and the rescuers. 

Humanoid Robots in Our Lives

Though there are fewer humanoid robots in our lives than non-humanoid ones, they are still having an impact in many places and their popularity only seems to be increasing as technology improves. 

We have talked a bit before about humanoid hospital robots. One particular hospital robot, Moxi, was introduced to Texas hospitals in 2019. Moxi was designed to perform the roughly 30% of a nurse’s duties that do not involve patient interaction, such as running errands or dropping off specimens for analysis. With Moxi helping out, the cognitive load on the nurses was reduced and they were able to spend more time with patients, increasing the quality of care provided. 

Similarly, humanoid service robots are finding popularity in care for the elderly. Despite their upfront cost, service robots reduce the overall cost of eldercare, which is massive. They can fill in the service provider shortage gap that many nations are experiencing with aging populations. The service robots are able to help with small tasks and provide social and emotional support. Some of these robots are specifically designed to assist with mobility and transportation, as well. Though not humanoid, pet-style robots have been found to help dementia patients who might get more agitated by a human caregiver. 

On the other end of the age spectrum, we have childcare humanoid robots. Two such robots are the Vevo and the iPal. Vevo is able to recognize faces and greet people it recognizes. However, its major selling point is that Vevo monitors the body temperature, heartbeat, and amount of movement in children, and warns the parents or caregivers if it detects irregularities in body readings that might require medical attention or other help. 

iPal is a humanoid childcare robot that is more intended as a way to provide interactive entertainment for the children it is meant to care for. It speaks Chinese and English, dances, sings, plays simple games, and answers simple questions. iPal can even assist with math lessons for young children and tells jokes. 

Remember Rosey from The Jetsons? Humanoid robot maids are gaining in popularity, especially as they gain more features. One such robot maid, the Aeolus Robot, cleans, moves furniture, fetches drinks from the fridge, and uses AI to adapt to the home’s daily schedule. It connects to an information-sharing network to learn about household objects, remembers where lost things are, and links objects to the people they belong to. It also responds to verbal commands, recognizes changes in posture (e.g., if someone falls), and has Amazon’s personal assistant, Alexa, built in, so it can be controlled from the Alexa app. 

Are We Safe With Robots?

Image source: Nubia Navarro (nubikini) | Pexels

Of course, many people still have significant concerns when it comes to having robots in our homes, hospitals, childcare facilities, and other areas. These concerns are primarily around safety.  

Several stories about robot and AI safety have been featured on Human Factors Cast - autonomous car safety concerns, human-robot interaction with automated aircraft systems, and the hacking of smart home devices. Researchers have found that semi-autonomous driving systems, like self-driving Teslas, can be easily tricked if their driver-assistance systems pick up certain light projections. Quick flashes of light, an image projected onto the road, or even a few frames in a digital billboard can cause the semi-autonomous vehicle to apply the brakes or swerve with no warning to the driver or passengers. These sudden stops or movements can lead to injury or even death. 

After two Boeing 737 Max crashes in 2018 and 2019, the Federal Aviation Administration (FAA) grounded the 737 Max while it began a new assessment of the latest in automated aviation systems. At the time, it appeared that the crashes were caused by inaccurate sensor readings that triggered the flight-control system, the Maneuvering Characteristics Augmentation System (MCAS). We now know that this was essentially what happened: the crashes were caused by both faulty readings and overpowered changes in the plane’s horizontal stabilizer. These safety issues were out of the control of the pilots or anyone else on board, and the amount of media coverage they received made the crashes and concerns about automated aviation systems widely known. 

Smart homes have become relatively commonplace around the world, especially in America. In the United States, smart home devices have a penetration rate of 32.4% and there is an estimated total of 41.3 million smart homes in the nation. These numbers make it even more concerning when there is news of successful smart home hacks. 

In 2019, it was discovered that smart devices could easily be hacked by using lasers to inject inaudible commands into them. Hackers could then cause the devices to unlock doors, visit websites, and locate, unlock, and start vehicles, all without the owners realizing it. 

Companies that own smart devices, such as Amazon (owner of the Alexa smart home assistant), have been found to be recording information, such as private conversations, through the devices, unbeknownst to the home occupants. 

Home robots come with their own additional concerns as well, including potential physical injury and psychological torment. Physical attacks from home robots, especially if a home has more than one robot, are a possibility. We also know that people can get emotionally attached to service robots, particularly the elderly and children, which means that those bonds present a risk of psychological attack via the robot. 

The Uncanny Valley

Image source: Elina Krima | Pexels

Have you ever seen a character in a movie or show that gave you the feeling of nails on a chalkboard, where something about it did not seem quite right in the worst way? Or maybe, in a Halloween store, some of the decorations and costumes gave you a feeling of nausea and unease? If so, you have experienced the uncanny valley. 

The uncanny valley is a phenomenon we experience when we see something that looks close to human, but not quite close enough or far enough for our brains to place firmly in one category or the other. This can present a problem when it comes to humanoid robots, especially as the use of humanoid robots in a variety of industries and environments increases in popularity. 

Of particular concern is the uncanny valley’s impact on humanoid robots in more delicate areas such as hospitals and schools. To help with acceptance in these domains, it is important that we consider the impact of the uncanny valley and work to better understand how we perceive and relate to humanoid robots. By doing more research in these areas, we may also find a better understanding of mind-blindness, which can show up as having difficulty distinguishing between humans and robots. 

A 2020 study that looked at uncanniness and human, android, and mechanical-looking robot faces found that perceived animacy decreased as exposure time increased only for android faces, not for human or mechanical-looking robot faces. The study also found that manipulation of the spatial frequency of faces eliminated the decrease in android faces’ perceived animacy and reduced their perceived uncanniness, but had no impact on participants’ perception of human or mechanical-looking robot faces. This means that regardless of whether the images’ fine (high spatial frequency) or coarse (low spatial frequency) information was preserved, the uncanny valley effect of the android faces was reduced. The findings seemed to indicate that how we perceive uncanniness is related, at least in part, to the temporal dynamics of face animacy perception. 

We should note that the uncanny valley is not restricted to faces, but can apply to any other body part that resembles those of a human. Prosthetic limbs, such as for legs or hands, can also produce the uncanny valley effect in us. One of the earliest recorded examples of a prosthetic limb is the iron hand of the Roman general Marcus Sergius. He lost his right hand in the Second Punic War (which lasted from 218 to 201 BC) and got the iron hand prosthetic so that he could hold his shield. Since then, prosthetics have gotten more realistic, and advances in technology have made them more versatile. 

The Human Factors Connection

Image source: Pavel Danilyuk | Pexels

We have discussed many of the potential uses and benefits of humanoid robots, as well as some challenges they may have on the road to broad acceptance by the general population. Now we will dive a little deeper and look at some of the research and psychology behind these potential hurdles, and how human factors can help. 

In our 2019 HFES Conference Preview episode, we discussed when we blame humans and when we blame robots for workplace accidents that involved human-robot interaction. It may come as no surprise that we tend to blame humans when a human was operating the robot and we blame the robot when we know that it was autonomous. The trade-off between blame and efficiency will be something for workplaces to consider as automation continues to gain ground. 

Many humanoid robots designed with childcare in mind have some sort of screen on them. These screens might display words, games, videos, educational material, or even robot faces to help the robots connect emotionally with the children. When considering humanoid robots for childcare purposes, we need to keep in mind that we still do not fully understand how screen time impacts a child’s development. A 2017 study found that more touchscreen use by toddlers was linked to a reduction in hours of sleep, but was careful to note that only correlation, not causation, was established. The study authors stressed that more research is needed to establish causation and to rule out other factors that may have impacted the amount of sleep. 

Although having such a wide variety of choices in childcare robots and AI devices seems great, it has been observed that this can lead to analysis paralysis for parents. With so many tools available to help with childcare, parents can get lost trying to pick out the best ones for their situation. They can get overwhelmed by too much data and the fear of making preventable mistakes simply because they chose the wrong robot or AI device. When designing humanoid robots for childcare, we will need to consider ways we can mitigate this analysis paralysis so that parents are encouraged to integrate these tools into their lives rather than be intimidated by them. 

We have talked quite a bit about humanoid robots in the context of robots taking over tasks completely. It is important to note that some experts believe that the best role for humanoid robots, particularly those integrated into service environments such as homes, daycares, and hospitals, is as a partner or assistant. In other words, the robot would collect data and provide caregivers with analyses of it, but the humans would be responsible for decisions based on this data. This means that the clarity of the presentation and visuals of the data and data analyses will be crucial: they will need to be as easy to understand as possible so that we can help caregivers make the best decisions with the information the robot can provide. 

A 2020 survey from Partners for Automated Vehicle Education found that 48% of Americans say they would never get in autonomous taxis or rideshare vehicles, and that 20% of Americans say that they believe autonomous vehicles will never be safe. However, the survey data also showed that more than half of respondents believed that knowing more about the technology and experiencing it for themselves would likely change their minds. We can use this knowledge to design materials to communicate information about the technology used for humanoid robots and to design experiences that would expose consumers to humanoid robots so that they can become more comfortable with them and secure in their safety. 

In a similar vein, researchers have found that our perception of robots revolves around three key aspects: time, proximity, and long-term interactions. The more time we spend with robots, the less uncanny we perceive them to be. Researchers have also found that in terms of mitigating the uncanniness of robots, a robot’s personality has a larger impact than its appearance. In terms of proximity, it was determined that virtual robots were perceived as less uncanny than robots that were in the same room with participants. Uncanniness can also be lessened as familiarity with the robot increases, but only if that familiarity is achieved through multiple interactions over the span of at least several days. Greater familiarity also led to decreased feelings of discomfort and threat in participants. This information can, again, help us design experiences with humanoid robots that can mitigate the uncanny valley phenomenon and lead to greater acceptance of these robots in a variety of settings. 

These are only some of the ways we can use research and our knowledge of psychology to guide us as we continue working on the acceptance and integration of humanoid robots in our lives. As the use of humanoid robots becomes more widespread and branches out into different fields, we will be able to gain more information about how to ease this transition and the impact it can have on people’s lives. These are exciting times!

As we mentioned in last week’s episode, we highly recommend reading through Thomas Sheridan’s article in Human Factors: The Journal of the Human Factors and Ergonomics Society, Human–Robot Interaction: Status and Challenges. There are many human factors challenges we will need to face with humanoid robots, and there is a lot of opportunity for human factors research in this area.


For more Human Factors Cast content, check back every Tuesday for our news roundups and join us on Twitch every Thursday at 4:30 PM PST for our weekly podcast. If you haven't already, join us on Slack and Discord or on any of our social media communities (LinkedIn, Facebook, Twitter, Instagram).



Photo by Kindel Media from Pexels (left)

Photo by Alex Knight from Pexels (right)