
Need a Psychiatrist? New AI Assistants: Good or Bad Idea?

Some people now turn to artificial intelligence for advice, treating it as a confidant or even a substitute for relationships. With NVIDIA's CEO predicting that by 2025 everyone will have a personal AI accompanying them everywhere, fostering self-reflection and self-validation, a question arises: in the face of an explosion of mental health issues, could AI be an appropriate response?

Psychiatric services are overwhelmed, wait times to see a specialist are increasing, and the need for psychological support has never been more urgent. In this context, AI is sometimes presented as a future solution, capable of addressing the shortage of professionals and offering accessible support to all. But can this technology truly meet expectations, or is it merely an illusion of progress in a struggling system?

The World Health Organization estimates that one billion people live with a mental disorder. Most of them are unaware of it or receive no treatment, and the COVID-19 pandemic has only deepened these inequalities. Against this backdrop, AI-based initiatives, such as therapeutic chatbots drawing on cognitive-behavioral therapy techniques, raise questions. Studies indicate that these tools can provide temporary support and help reduce certain depressive symptoms. Promising, isn't it?

Current AI operates on predictive models and learning algorithms that grasp neither personal context nor human emotional complexity. A programmed response, however sophisticated, remains standardized and stereotyped; even when nuanced, it does not stem from a genuine understanding of an individual's unique experience. Moreover, many mental health applications on the market have not undergone rigorous scientific evaluation, raising questions about their reliability and long-term effectiveness, and about the excessive economic liberalism that lets them circulate unchecked.

Medications go through a control and validation process, imperfect as it is, before reaching the market. Yet an app that can significantly affect mental health faces no such process. This raises the question of what counts as an intrusion into the body.

Influencing means acting on someone else's body, physically or psychologically.

From an ethical standpoint, major concerns arise: data confidentiality, the security of sensitive information, and responsibility in case of misinterpretation or inappropriate recommendations. Can an AI be held accountable for a misdiagnosis? What happens if a person in distress relies solely on a chatbot that fails to detect the urgency of their situation? These questions are far from theoretical; harmful viral trends, such as challenges involving ingesting laundry pods, remind us how real the stakes are.

AI could, however, play a relevant role as a complementary tool. Integrated into a care pathway supervised by professionals, it could improve access to information, ensure follow-up between consultations, and provide a first level of support through supervised, carefully framed messages. Early adopters include Woebot, Wysa, and AssistantPsy.fr.

Believing that technology could replace human relationships in a field as delicate as mental health would be a risky oversimplification. The therapeutic alliance relies on elements that no algorithm can yet reproduce: transference, intuition, nuance, the ability to adjust interactions based on subtle signals, and, above all, authentic engagement in a helping relationship.

AI is therefore a tool, not a solution in itself. The real question is not whether it is good or bad, but how it can be used and, above all, regulated wisely, keeping in mind that no technology can compensate for a lack of human connections or replace a deeper reflection on the organization of mental health care.

Perhaps the greatest danger would be to imagine that access to psychological support for the poor could be provided by AI apps costing 10 euros, and that our mental health problems could be solved by technology alone, without a profound reevaluation of our lifestyles and societal priorities.

"Excellence is the result of consistent improvement."

Philippe Vivier

© Philippevivier.com. All rights reserved.

Article L122-4 of the Code of Intellectual Property: "Any representation or reproduction in whole or in part without the consent of the author [...] is illegal. The same applies to translation, adaptation or transformation, arrangement or reproduction by any art or process."

History & Infos


Practice founded in 2004.
Website and content redesigned in 2012.
SIRET number: 48990345000091

Legal information.

