
We are being immersed in a world with all types of artificial intelligence (AI). Most of us feel amazed and astonished, and sometimes aghast and alarmed. Experts predict (and we can all see) that AI in our daily lives is here to stay.
There is AI used by the general public, such as ChatGPT or Alexa, as well as more sophisticated AI used by specialists in medicine or computer science. Professionals have wondered whether AI will lead to competition or collaboration, and we all likely wonder how it will shape the professions and careers of future generations.
Psychologists and mental health providers are also starting to adopt AI tools in their practices, primarily for routine administrative work, but how AI might eventually be used for clinical assistance in ethical, appropriate ways is still evolving. Just like most people, psychologists, too, are considering how AI might be helpful.
Mixed Feelings About AI
Some of the high-tech advancements may remind us of amazing features seen in childhood sci-fi stories like The Jetsons or Star Trek; yet the tragic cases of AI-gone-wrong leave us in shock. Humanized products seem amazingly engaging at first, but the other side of the coin can be deeply disturbing.
We like how Siri and Alexa can understand our commands, but it’s a bit strange when our conversations seem to have been inadvertently overheard. GPS or Grammarly can be immensely helpful, yet similar computerized algorithms that know our recent purchases or travel plans can feel a bit creepy. We might love the magical way that image recognition can organize our pictures or how Spotify can make song recommendations, but it is more unsettling to know that personal photos or musical preferences are being stored somewhere and might eventually be monetized.
We want our doctors to have access to surgical robots for quicker, smoother procedures, but we get frustrated when clunky customer service bots misunderstand us. Managing email is made easier with spam filters, yet knowing that our mail is being sorted in some amorphous tech cloud can feel a bit unnerving. We might appreciate how AI research saves us time, but we also bemoan the diminishing skills of individual inquiry and exploration.
The Both-And Approach
Just as the initial arrival of cars, TVs, comic books, and cell phones led to broad-reaching warnings and predictions of danger, AI warning bells are sounding. We have all experienced some positive gains, but there is also concern when things go off track or are questionably handled. And, in a startlingly short period of time, we are already moving from early adoption to reliance.
Humans often struggle with things that are new and different, and some of this likely relates to the growing pains associated with transitions. And despite liking and craving novelty at times, we can also have mixed feelings about things that stretch beyond our usual comprehension and over which we have limited control.
Because changes are rarely all good or all bad, we usually need to apply the both-and approach. This means accepting that we might be nervous, worried, and concerned while also being curious, interested, and excited. As with most any new life experience, mixed and multiple emotions are normative and expected. We can mindfully appreciate both sides.
Impact on Mental Health
Psychologists and mental health providers have been paying close attention to how the world of mental health is being impacted by AI. It has become clear that some people find it easier to ask about personal mental health issues online and might appreciate the options for seemingly private, personalized 24/7 care. But we are also increasingly aware that there are risks to sharing information in cyberspace.
Parenting in the digital world is also challenging, with families having to stay attentive to their own and their children’s use of screens as a potential source of avoidance, excessive social comparison, interrupted sleep, and blocked experiences. This is a whole new domain about which enhanced understanding is needed.
The increasing use of AI companions is another area of significant mental health impact. Relying on an AI bot as one’s sole confidante is problematic. Human relationships are not, and should not be, ever-validating and free of variation in mood. AI’s tendency to act as a sycophantic echo chamber is becoming increasingly well understood, but its allure can be particularly strong when someone is feeling vulnerable.
Fortunately, some AI tools can foster community, lessen isolation, and increase validation, and AI-based information can offer quick access to important tips and helpful strategies in the domain of mental health. AI can indeed bridge some gaps in care, especially for those without access to services. AI advancements have also enabled exciting treatment applications, like virtual exposure therapy for certain fears and phobias, which were not possible as recently as a decade ago.
There is something about a speedy, black-and-white response that can seem more credible, yet strengthening media literacy skills is key. We need to be critical consumers, staying alert to how the advice received online may be misguided. This might sometimes feel like flight attendants delivering safety briefings that fall on deaf ears, but we must continue to provide solid, not sensationalized, cautions and education.
Resources and Rehearsal, Not Replacement
Turning to AI to brainstorm ideas, seek out resources, or engage in interpersonal rehearsal can all be excellent uses of this available assistance. Family, friends, and therapists may not always have access to some of the latest research available on a particular topic, nor be immediately available to role-play a conversation. However, humans still need humans.
We can use the AI tools that work, such as helpful apps and informational assistance, while also holding onto the irreplaceable real-person connection. Although we might initially like the ever-enthusiastic, validating engagement we currently receive from AI bots, it is ultimately neither realistic nor sustainable. Humans are indeed complex, moody, and imperfect, but they are still the best beings with whom to connect.
Using AI to increase coping and plan concrete action can be beneficial, while spiraling into complaining and further isolation is not. Moving forward effectively in real-life relationships and situations helps us harness the good of AI without becoming overdependent. Training wheels on a bike can be helpful for a child first learning to ride, but the goal is for the child to eventually balance on their own.
Psychology Can Play a Role in Future Enhancements
Psychologists and others who specialize in the science of human behavior can help programmers design AI that supports people in more realistic ways. Drawing on concepts from psychological science, knowledge of interpersonal needs, and evidence-based therapies can help create appropriate guardrails for safety. And ongoing work, especially with young people, to uphold critical consumption, fact-checking, and media literacy is key.
