
How careful use of AI can benefit mental health services

Prof Pepijn van de Ven from UL discusses his research, which involves using simple AI models to improve mental health interventions.

AI applications in healthcare have been a hot topic in the tech world lately.

Last month, prominent AI companies OpenAI and Anthropic both launched healthcare-focused services built around their chatbots.

Although both features – ChatGPT Health and Claude for Healthcare – are designed to help users with tasks such as understanding test results and preparing for appointments, others are looking at the power of AI in more focused areas of healthcare.

One such researcher is Pepijn van de Ven, a professor in the Department of Computer Engineering at the University of Limerick (UL).

With a background in electronic engineering – and a PhD in artificial intelligence – van de Ven is currently the leader of Ireland’s National Master’s course in AI, delivered by UL in partnership with ICT Skillnet, and the founding director of UL’s D2iCE research center, which conducts research into the ethical, sustainable and societally beneficial development and deployment of AI.

Currently, van de Ven’s research focuses on the use of AI in mental health interventions.

“I’ve been very fortunate and had the opportunity to work with some of the best in what we call Internet intervention, which is any intervention delivered over the web,” he told SiliconRepublic.com.

“Over the past 15 years, I have been involved in research projects focusing on the use of smart technology in the delivery of mental health services with colleagues across Europe, Australia, North and South America, and Ireland.”

He explains that the contributions he and his team have made to these projects are about using artificial intelligence to improve the delivery of these interventions.

“For example, we have shown that AI can perform time-consuming patient assessments that a clinician would otherwise have to do, thereby freeing that person to interact with patients,” he said. “Such screening interviews often use a large number of questions that can be a real burden for patients. We are doing a lot of work in terms of analyzing the questionnaires that are commonly used in mental health during screening to see if they can be shortened.”

‘We will need to think carefully about the use of AI, wherever we consider using it, to prevent unintended consequences.’

Benefits and precautions

Van de Ven considers his research important because of its potential to help an area of healthcare that has long suffered from a lack of proper attention.

“Unfortunately, there is still a lot of stigma around mental health, and services are often under-resourced. The well-thought-out use of AI has the potential to reduce the barriers to accessing these services and can make their provision more efficient.

“As our population grows, the demand for healthcare services – including, of course, mental healthcare services – will only increase. I think it’s a simple truth that the only way we can ensure quality services for everyone is through the use of AI.”

Another misconception he encounters about his work is the belief that “AI equates to generative technologies like ChatGPT”.

“This misconception, given all the amazing advances in generative AI, has led to a lot of skepticism about the use of AI,” he says. “The models we use are really simple compared to ChatGPT.”

He explains that using simple AI models in such a sensitive environment reduces the risk of harm to patients – adding that he warns against the use of generative AI and large language models to replace human workers in tasks such as counseling.

“We have to be very careful,” he says.

“We’ve all heard stories of people using generative models like ChatGPT to discuss their mental health issues and really confide in these AI models. And unfortunately, this has led to disastrous results in some cases.”

For example, in December OpenAI was sued over allegations that ChatGPT encouraged a mentally ill man to kill his mother and himself.

“As it is, we cannot guarantee how a generative model will respond, and for this reason such use needs more research and careful testing before it becomes commonplace.

“While any AI model can cause harm, like many other technologies, the simple models we’re building perform very limited tasks and often do so in a way that a clinician can understand,” he says. “As a result, their ability to do damage is limited and well understood.”

Personae

Another project that van de Ven and his team are involved in – as the only non-Danish partner, he adds – is the Personae project, which aims to adapt an online mental health service already used in the Danish healthcare system, one built on what van de Ven calls “the stepped care model”.

He explains that this model provides support for patients in three different steps, or levels.

At the lowest level, patient involvement is self-directed, while the second level takes a blended approach in which patients receive self-directed therapy while also having access to a therapist during online sessions.

The last step, or level, is the “traditional method”, he says, where patients see a counselor in each session, albeit in an online format.

“The expectation is that this approach to care will lead to more efficient use of health resources, making it possible to treat more people with the resources available,” he said. “Our role in this project is to build AI models that can predict what kind of intervention a patient needs based on the information people provide when they enter the service.

“Down the line, the hope is that our models can also inform which step in the stepped care model a patient should receive.”

Regarding the current progress of Personae, van de Ven tells us that his project partners in Denmark have created a new intervention suitable for delivery at these three different levels, as well as a brand new mobile platform to support the delivery of the intervention.

“After two years of hard work, the trial has just started and is going well. In the near future we hope to get more interesting data to further improve the performance of our AI models.”

Speaking of the future, what are van de Ven’s hopes for the long-term impact of his work?

“I hope we can do good for mentally ill patients and their loved ones by improving the services provided to them,” he said. “Internet interventions and AI will play an important role in this process, but AI is a double-edged sword.

“We will need to think carefully about the use of AI, wherever we consider using it, to prevent unintended consequences.”
