One company's experiment with artificial intelligence in digital mental health care is shedding light on ethical gray areas surrounding the use of the technology.

Rob Morris, co-founder of Koko, a free mental health service and nonprofit that partners with online communities to find and treat at-risk individuals, wrote in a Twitter thread on Friday that his company used GPT-3 to help develop responses to 4,000 users.

Morris said in the thread that the company tested a "co-pilot approach with humans supervising the artificial intelligence as needed" in messages sent via Koko peer support, a platform he described in an accompanying video as "a place where you can get help from our network or help someone else."

"We make it very easy to help other people and with GPT-3 we're making it even easier to be more efficient and effective as a help provider," Morris said in the video

GPT-3 is a large language model created by OpenAI that generates human-like text based on prompts.

Once people learned the messages were co-created by a machine, Morris said, it didn't work.

Simulated empathy feels strange and empty, Morris wrote in the thread: when machines say "that sounds hard" or "I understand," it sounds inauthentic, and a chatbot response generated in three seconds feels cheap.

Morris offered some important clarifications on Saturday.

"We weren't putting people up to chat with GPT 3 without their knowledge, in retrospect, I could have worded my firsttweet to better reflect this."

The feature was opt-in, and everyone knew about it while it was live.

Morris said Friday that he had pulled the feature from the platform. He said the technology had cut response times, and that the AI-assisted messages were rated higher than those written by humans on their own.

Ethical and legal concerns 

The company was accused of violating informed consent law, a federal policy that mandates human subjects give consent before being involved in research.

Eric Seufert, a media strategist, said on Saturday that the experiment was profoundly unethical.

Christian Hesketh, who describes himself as a clinical scientist, said on Friday that he would not have admitted to this publicly, adding that participants should have given informed consent and that the study should have passed through an institutional review board (IRB).

Morris said the option to use the technology was removed after the company realized it made for an inauthentic experience.

He said the company was giving peer supporters the chance to use GPT-3 to help compose their responses, with suggestions offered to help them write more supportive messages.

According to Morris, the study is exempt from informed consent law, and he pointed to previously published research by the company that was also exempt.

Every individual has to give consent to use the service, he said, and if this were a university study, it would fall under an "exempt" category of research.

He said the experiment imposed no further risk to users and involved no deception, and that the company does not collect any personally identifiable information or personal health information.

A woman seeks mental health support on her phone.
Beatriz Vera/EyeEm/Getty Images

ChatGPT and the mental health gray area

The experiment is raising questions about ethics and the gray areas surrounding the use of artificial intelligence in healthcare.

In an email to Insider, Arthur Caplan, professor of bioethics at New York University's Grossman School of Medicine, said that using artificial intelligence without telling users is grossly unethical.

The intervention is not a standard of care, Caplan said, and no mental health or psychological group has laid out its potential risks.

People with mental illness need special sensitivity in any experiment, he added, including close review by a research ethics committee or institutional review board before, during, and after the intervention.

Caplan said the use of GPT-3 technology could affect the future of the healthcare industry more broadly.

He said that many artificial intelligence programs like this one may well have a future, but what happened here could make that future more complicated.

Morris told Insider that he wanted to emphasize the importance of humans in the discussion of artificial intelligence.

He hopes that doesn't get lost in the shuffle.