AI and Psychology

Like It or Not, AI Is Coming… It Is Here!

Here are some of the things people are saying about AI and psychology:

In psychology practice, artificial intelligence (AI) chatbots can make therapy more accessible and less expensive. AI tools can also improve interventions, automate administrative tasks, and aid in training new clinicians. On the research side, synthetic intelligence is offering new ways to understand human intelligence, while machine learning allows researchers to glean insights from massive quantities of data. Meanwhile, educators are exploring ways to leverage ChatGPT in the classroom. (Craig, L. 2024)

“People get resistant, but this is something we can’t control. It’s happening whether we want it to or not,” said Jessica Jackson, PhD, a licensed psychologist and equitable technology advocate based in Texas. “If we’re thoughtful and strategic about how we integrate AI, we can have a real impact on lives around the world.”

Despite many professionals’ fears about its potential dangers, AI has arrived. Looking at AI from the perspective of a therapist or others who offer counseling, here are a few of the unexpected issues you might encounter.

There is still cause for concern. AI tools used in health care have discriminated against people based on their race and disability status (Grant, C. 2022).

Be aware that AI can spread misinformation; chatbots have inappropriately expressed love to users and even sexually harassed minors. In fact, it was concern about issues like these that prompted a March 2023 call to pause the use and development of AI until more research had been done on the harms that might accrue to clients.

“A lot of what’s driving progress is the capacities these systems have—and that [capacity of AI] is outstripping how well we understand how they work,” said Tom Griffiths, PhD, a professor of psychology and computer science who directs the Computational Cognitive Science Lab at Princeton University. He went on to say, “What makes sense now is to make a big parallel investment in understanding these systems, something psychologists are well positioned to help do.”

How Safe is the Combination of AI and Psychology?

As we ponder the safety of combining AI and psychology, some questions rise to the surface. Is it ethical? What protections could help ensure privacy, transparency, and equity as these tools are increasingly used across society? Can society protect itself with rules and regulations?

Psychologists may be among the most qualified to answer those questions.

“One of the unique things psychologists have done throughout our history is to uncover the harm that can come about by things that appear equal or fair,” said Adam Miner, PsyD, a clinical assistant professor of psychiatry and behavioral sciences at Stanford University, citing the amicus brief filed by Kenneth Clark, PhD, and Mamie Phipps Clark, PhD, in Brown v. Board of Education.

Counselors, psychologists, psychiatrists, and others who provide therapy can use their expertise to examine how AI works for, with, and on their clients. Because AI is created by human beings, it can perpetuate the unintentional biases of its developers.

When it comes to AI and psychology, psychologists have the expertise to question assumptions about new technology and examine its impact on users. Psychologist Arathi Sethumadhavan, PhD, the former director of AI research for Microsoft’s ethics and society team, has researched DALL-E 2, GPT-3, Bing AI, and others. Sethumadhavan said psychologists can help companies understand the values, motivations, expectations, and fears of diverse groups that new technologies might impact. They can also help recruit participants with rigor based on factors such as gender, ancestry, age, personality, years of work experience, privacy views, neurodiversity, and more.

Psychologists are also taking a close look at human–machine interaction to understand how people perceive AI and what ripple effects such perceptions could have across society. One study by psychologist Yochanan Bigman, PhD, an assistant professor at the Hebrew University of Jerusalem, found that people are less morally outraged by gender discrimination caused by an algorithm than by discrimination committed by humans (Journal of Experimental Psychology: General, Vol. 152, No. 1, 2023). Study participants also felt that companies held less legal liability for algorithmic discrimination.

To truly democratize AI, the infrastructure must support a simple interface that allows users to query data and run complex tasks via natural language. “The architecture is moving in a way that supports the democratization of analytics,” said Richard Spencer Schaefer, Chief Health Informatics Officer, US Department of Veterans Affairs.

“The technologies that we’re putting in place are enabling physicians to be a part of the development of AI, and because of the level of validation involved, I think there will be more trust in the models we develop,” Schaefer added.

“If you were one of those people who learned how to work with computers, you had a very good career. This is a similar turning point: as long as you embrace the technology, you will benefit from it,” said Andrew Blyton, Vice President and Chief Information Officer, DuPont Water & Protection. A bit of a cavalier attitude, I believe. However, if we embrace the technology, keeping in mind the same ethics we observe everywhere else in our practices, we will eventually find we are as comfortable with AI as we are with all the electronic helpers we use today.
