Are There Robots in the Future of Wellness?

Posted by Joel Bennett

By Dr. Gale Lucas, OWLS Director of Research
Protect the children because they – and robots – are the future. ~ Adam Carolla
From futurists and science fiction writers to computer scientists and engineers (and even comedians), people predict that robots will play a big role in shaping our future. Some play on our fears that these mechanical counterparts will take over many services, put people out of jobs, and create an unhealthy dependence. Many also believe that a future with robots looks bright, efficient, and productive. But could it also be filled with increased wellness?

A Wellness Dilemma: Failure to disclose health information

Patients sometimes shoot themselves in the foot, medically. Not that they literally end up in the ER with a self-inflicted GSW between metatarsals. Rather, people do themselves a huge disservice when they fail to provide fully honest responses in medical interviews. When patients respond less honestly, healthcare professionals get a less accurate clinical picture and medical history. This can have serious health consequences. Popular medical dramas frequently highlight this plight, pulling off an extra twist or two in the weekly plot. For example, the infamous Gregory House (played by Hugh Laurie) categorically assumes all patients lie. In the real world, doctors and nurses are often frustrated by patients’ reluctance to share accurate medical histories. Such reticence can compromise medical care and, ultimately, health. Accordingly, research has considered how to gain more detailed and honest medical histories from patients, especially when sensitive information is involved.


Computers increase disclosure of health information

In recent research, we have uncovered a way that human-looking computers (much like robots) could pave the way for greater wellness: they may lead patients to behave more openly in medical interviews. I conducted a study to test this possibility. Everyone in the study spoke to a virtual human (an animated character that interacts with people in a natural way, i.e., via speech) about their health and mental health. They answered very personal questions on a range of sensitive topics. As you read these sample questions, consider how you might feel giving your answers to a computer:
  • Have you experienced symptoms of mental illness (intrusive thoughts, avoidance behaviors)?
  • Have you ever been diagnosed with depression?
  • What are your views on therapy?
  • What are your deepest regrets?
  • Have you experienced traumatic events that you wish you could forget?

Participants were told one of two things about their interviewer, the one asking these sensitive questions: that it was run entirely by the computer (Condition 1) or that it was operated by a human being in another room (Condition 2). The results? Participants were not only more willing to share personal details when they thought their interviewer was "just a computer" (Condition 1); they also reported being more comfortable in the "robot" interview. Specifically, they scored lower on fear of negative evaluation and on impression management. People often hold back information during medical interviews because they fear the healthcare professional is viewing them negatively; virtual interviewers reduce this fear, allowing people to open up. Likewise, patients tend to disclose only the information that will lead healthcare providers to view them positively; using virtual humans for medical interviews could reduce this tendency as well, allowing patients to give more honest answers.


Study participants’ comments underscored the value of computers for making patients feel more comfortable disclosing sensitive information:

  • "This is way better than talking to a person. I don't really feel comfortable talking about personal stuff to other people."
  • "A human being would be judgmental. I shared a lot of personal things, and it was because of that.”
  • "It was helpful to have someone listen to me non-judgmentally."

Future possibilities

This benefit of robots and virtual humans could extend to other areas of wellness, such as financial wellness. In an interview with Bryan Borzykowski of BBC Capital, I proposed that “People who talk to a virtual agent know their data is anonymous and safe and that no one is going to judge them… The effect on rates of honest responding is especially strong when the information is illegal, unethical, or culturally stigmatized. Finances would fall into that latter category. [People] are very embarrassed and don’t want to admit how much credit card debt they have. It’s anxiety producing, so to have someone you can talk to where it’s safe to say that you’re worried that you’ll never get out from all of that debt is important.” Virtual agents can provide that safe place to talk about debt, which recent polls suggest has become a ubiquitous source of shame.

Other aspects of wellness are also ripe for this benefit. For example, people often become defensive when questions about alcohol or prescription drug misuse come up. OWLS has developed interventions, such as Team Resilience and Health Consciousness and Prescription Drug Misuse, specifically designed to help peers talk to each other about these sensitive topics.

Ponder

  • Is it possible that, in conjunction with these programs, robots and virtual agents could help us to open the discussion about such issues plaguing the health of our nation?
  • In this way, as the technology develops, could robots become essential to the future of wellness?
Your ideas are welcome and will be responded to by a virtual human programmed to appreciate diverse points of view.
