
AI in mental health

The COVID-19 pandemic has triggered a paradigm shift across the healthcare industry, thrusting every aspect of care delivery into a new era.

COVID-19 upended the logistics of healthcare, forcing the medical industry to find new ways of delivering quality care to patients while maintaining safety and regulatory standards. Many providers saw going digital, through telehealth services, as the answer to their woes.

Nowhere has this been more applicable than in mental health, a field where physical proximity between patient and practitioner has never been a strict prerequisite. Mental health is one area of medicine where the quality of care, and the insight practitioners gain into a patient's condition, is not compromised by telehealth. While it is not a bulletproof solution, it can still accomplish a great deal.

Recent research echoes this digital shift as well. A RAND study found that the sharp rise in telehealth use at the height of the pandemic was driven more by patients seeking treatment for psychological conditions than for physical ailments. The isolation imposed by the pandemic has been psychologically taxing on us all.

The spread of digital tools in mental health care has also stimulated the use of a technology that has seldom been applied in medicine, where it has long remained something of an enigma: artificial intelligence.

Over the past few years, the field has adopted chatbots and virtual assistants as a practical way to test the waters of AI. With the unexpected COVID-19 pandemic and its psychological strain, institutions have turned to AI to widen the accessibility and availability of mental health care.

The Trevor Project, a suicide prevention and crisis intervention organization for LGBTQ youth, and its leadership have recognized the need for broader access to digital mental health services during this pressing time.

Research indicates that 1.8 million LGBTQ young people in the critical 13-to-24 age group seriously consider suicide each year. These are troubling numbers. Even more worrying, within this group one young person attempts suicide every 45 seconds.

The crucial challenge is reaching every LGBTQ youth in this category so that effective interventions can be staged. Understandably, intake of patients seeking mental health support has doubled over the course of the pandemic. To address this, The Trevor Project has partnered with Google to launch the Crisis Contact Simulator, a pioneering AI-driven training tool.

The system simulates digital conversations with LGBTQ youth in crisis scenarios, letting aspiring counselors prepare for the real thing through realistic training conversations. The model will help the organization train a larger number of volunteers and update its training on an ongoing basis.

While these strategies have proven effective, there is also a unique opportunity to use AI to increase the number of trained counselors. The technology can also make training more flexible and improve its quality: roughly 70 percent of digital volunteer counselors take shifts on nights and weekends, and they can now be trained during those hours as well.

Researchers are also exploring AI in mental health care. A study recently published in JMIR examined the utility of Woebot, an AI-driven chatbot designed to address substance use disorders.

Substance abuse and mental health conditions tend to feed off each other, and the COVID-19 pandemic has only intensified both. Research from the NYU School of Global Public Health showed that individuals with anxiety and depression were more likely to report increased drinking during the pandemic than those without mental health issues.

The pandemic is a breeding ground for worsening substance abuse: people are isolated and under mounting stress, whether over their health, finances, politics, or society at large. The strains of 2020 have been compounded by social isolation and reduced access to in-person mental health care and 12-step programs.

In the JMIR study, the team showed that Woebot use was associated with significant improvements in reducing substance use, increasing confidence, and mitigating cravings, depression, and anxiety. The findings suggest that chatbots such as Woebot could help reduce the toll of substance use disorders.

Woebot can reach out to affected individuals and initiate a conversation, or individuals can use it of their own accord. The current offering includes cognitively based psychoeducation lessons, check-ins that ask people about their mood and anxiety levels, and tools for managing anxious thoughts and cravings.
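To make the check-in idea concrete, here is a minimal, hypothetical sketch of a mood and anxiety check-in flow. It is not Woebot's actual implementation; the scales, rules, and replies are assumptions chosen purely for illustration of the pattern described above.

```python
# Hypothetical sketch of a daily mood/anxiety check-in flow; not any real
# product's implementation, just an illustration of the pattern.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class CheckIn:
    timestamp: datetime
    mood: int      # 1 (very low) .. 5 (very good)
    anxiety: int   # 1 (calm) .. 5 (very anxious)
    craving: bool  # whether the user reports an active craving


def respond(check_in: CheckIn) -> str:
    """Pick a short, CBT-flavored reply based on the self-reported scores."""
    if check_in.craving:
        return ("Cravings pass. Try naming the thought behind the urge, "
                "then do a 5-minute distraction activity and check back in.")
    if check_in.anxiety >= 4:
        return ("That sounds stressful. Let's try a brief breathing exercise "
                "and write down the thought that is worrying you most.")
    if check_in.mood <= 2:
        return ("Thanks for sharing that. What is one small activity that "
                "usually lifts your mood, even a little?")
    return "Glad to hear it. Keep noting what is working for you."


# Example usage
today = CheckIn(datetime.now(), mood=2, anxiety=4, craving=False)
print(respond(today))
```

Even a simple rules-based flow like this illustrates why check-in data is valuable: logged over time, it gives both the user and a clinician a running picture of mood, anxiety, and cravings.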

If there is one prime advantage to using AI in medicine, it is the technology's capacity to draw insight from big data. This benefit is especially relevant in mental health care, where several innovations have reshaped the landscape.

To start with, the genetics revolution has produced a wealth of new genetic information that has proven instrumental in understanding mental health. Neuroimaging has helped us understand how the brain functions, and smartphone and sensor data, for example from wearables, adds another layer. We know these sources contain critical data that could be used to personalize care, but the challenge has been turning them into new clinical insights on an ongoing basis.

AI systems can help practitioners sift through this mass of information and surface clinically actionable findings that support patient care. With them, we can deliver more personalized and preventative care, and the hope is that we can tackle psychiatric illness in a more focused way.

AI-powered chatbots are a prime example of using the technology to personalize care. Woebot is grounded in cognitive behavioral therapy (CBT) principles and includes an empathy component that adapts to the messages the patient sends.

It was developed to help manage cravings and to build the patient's self-awareness of their thought patterns: how their thoughts relate to their mood, anxiety, and depression, and what drives them to use.

Enhanced accessibility is another advantage of AI tools. Chatbots communicate with patients in real time, are available around the clock, are free of charge, and reduce the stigma attached to seeking treatment. Whether used as standalone interventions or as a supplement to conventional care, they deliver additional therapeutic content.

Advanced analytics can also reduce access barriers for marginalized populations. AI can improve equity of access to mental health services, particularly for LGBTQ youth living at the intersection of multiple identities, a group that routinely faces injustice and discrimination in daily life. The problem is only compounded by race, ethnicity, socioeconomic status, gender identity, and more.

AI has the potential to lower barriers of convenience, accessibility, and privacy. Having these services available around the clock, on demand, and across multiple platforms gives people room to have the tough conversations they need to have, about topics they may not be comfortable discussing out loud.

AI brings many advantages, but care strategies must stay centered on the patient-practitioner relationship. Human connection is always essential, particularly in mental health care. AI's role should not be to replace humans but to supplement them, and these technologies have been designed with AI in a supporting role. That is its most relevant application in the mental health space.

Therapists often lack a crucial kind of insight into their sessions: the perspective of a third party. After their education, practitioners are deployed into the field and largely work alone, listening to patient concerns and relying on their own judgment and training. AI can tap into these patient communications, analyze patterns, detect tone, and do much else at a finer level of precision than a human, complementing practitioner care with a different perspective.
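As a rough illustration of what "detecting tone" can mean in practice, here is a minimal sketch that scores the emotional tone of utterances with an off-the-shelf sentiment model (NLTK's VADER). This is an assumption-laden toy example, not how any clinical product actually works; real tools would use purpose-built, validated models and strict privacy controls.

```python
# Illustrative sketch only: scoring the emotional tone of utterances with a
# general-purpose sentiment model. Not representative of any clinical system.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

utterances = [
    "I haven't been sleeping and everything feels pointless.",
    "Work was actually okay this week, and I got outside a few times.",
]

for text in utterances:
    # polarity_scores returns neg/neu/pos components plus a compound score in [-1, 1]
    scores = analyzer.polarity_scores(text)
    print(f"{scores['compound']:+.2f}  {text}")
```

Tracked across a whole session, even coarse scores like these could give a practitioner a second, quantitative lens on how a conversation's tone shifted over time.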

There are also challenges to using AI in healthcare. Engagement is critical to the effectiveness of the technology, whether chatbots or mobile apps, and sustained engagement is a major challenge for mobile apps in particular. Generally, the more a patient uses a tool, the more benefit they get from it, so whatever can be done to encourage engagement should improve outcomes in terms of accuracy and effectiveness.

Developers and researchers also need to build these tools with appropriate protections for at-risk populations. Woebot has some of these safeguards built in, such as language detection and risk-management rules, but at its core it is not a suicide prevention tool.

Inclusion and exclusion criteria were carefully considered in the study. For instance, if practitioners judged an individual to be at risk of overdose, that person was not included in the preliminary evaluation.

The data used to train AI models is a critical determinant of their clinical utility, and data quality is a particular challenge in mental health care. AI can only be as effective as the training it receives and the people who use it, which raises several important considerations.

When it comes to what AI is trained on, we have to ask what gold standard we are using. A true gold standard is especially hard to come by in medicine, because the clinical definitions we rely on are imperfect. We need a thoughtful strategy for how we train these algorithms, and it will not be as simple as quantifying an individual's symptoms.

Tackling simple problems does not evolve the system or move care forward. An algorithm that identifies whether someone has schizophrenia is useful, but it will not advance the field; it is a marginal improvement.

These models also need to be trained on diverse samples: if we draw data from only one region, one clinic, or one population, the resulting algorithms will have limited utility. They must be built with diversity in mind from the start, working with patients and incorporating feedback from practitioners, data scientists, and regulators.
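One concrete way to surface this "limited utility" problem is to evaluate a trained model separately for each subgroup it will serve. The sketch below is hypothetical: the dataset file, the column names (phq9_score, gad7_score, clinic, diagnosis), and the simple logistic model are assumptions for illustration, not drawn from any study mentioned here.

```python
# Hypothetical sketch: checking whether a screening model performs evenly
# across patient subgroups (e.g., different clinics or populations).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("screening_data.csv")          # assumed dataset with features, labels, subgroup
features = ["phq9_score", "gad7_score", "age"]  # assumed feature columns

X_train, X_test, y_train, y_test, grp_train, grp_test = train_test_split(
    df[features], df["diagnosis"], df["clinic"], test_size=0.3, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Report discrimination (AUC) per subgroup: large gaps between groups suggest
# the model's usefulness is restricted to the populations it was trained on.
for clinic, idx in X_test.groupby(grp_test).groups.items():
    auc = roc_auc_score(y_test.loc[idx], model.predict_proba(X_test.loc[idx])[:, 1])
    print(f"{clinic}: AUC = {auc:.2f} (n = {len(idx)})")
```

Per-subgroup reporting of this kind is only a starting point, but it makes gaps in training data visible before a tool ever reaches patients.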

Building partnerships with stakeholders, patients, and practitioners around the data needed to develop these tools is essential, as is securing buy-in from patients and other stakeholders. The chat software a patient uses captures their information, and the people behind that platform want to use it to improve care; patients have to be comfortable with that. Likewise, practitioners are not thrilled at the prospect of having their sessions recorded, so the question becomes how to win their buy-in. Without it, the system will not work.

Looking forward, the role of AI in any area of healthcare remains unclear, and this is especially true in mental health. Using AI in therapy depends on many factors, and even if the industry overcomes these obstacles, the technology is unlikely to appear at the front line of psychiatric care delivery any time soon.

AI can, however, assist in research, sifting through data to identify new patterns that help us understand how psychiatric illnesses develop, how they recur, and what steps we can take to minimize or, better yet, eradicate them.

For AI to play a key role, researchers need to refine the research and analysis in this area. There has been sustained interest in chatbots, and patients have shown a matching willingness to try them. The hard questions still to be answered are: how well do they actually work, and can we conduct higher-quality research? The interest is there, and the hope is that it will spur further development.

Woebot itself is due for further efficacy testing. The recent study was the first of three envisioned as a program of research to build its evidence base; from there, improvements can be made on an ongoing basis.

One of the most important things to keep in mind is that developing these tools should be an ongoing effort, not a one-time event. The use and application of AI should evolve with us.

What holds true today will not always hold true in the future. One need only look at the COVID-19 pandemic to see how dramatically, and how quickly, things can change. There will always be new models, strategies, and knowledge.
