Is AI good or bad for employee health and wellbeing? Can you answer that one? You better learn how to!

Current research provides a very mixed and conflicting picture as to whether AI is having a positive or negative impact on employee health and wellbeing.

For instance, a study by Magnus Lodefalk, an Economics Researcher at Örebro University in Sweden, found that using AI tools supported wellbeing by reducing stress among employees performing many or new tasks. This, he said, was because it helped them undertake such activities more quickly and efficiently.

But an article published in the Harvard Business Review entitled ‘Using AI at Work Makes Us Lonelier and Less Healthy’ found that, although employees became more efficient using the technology, they also felt more isolated. The upshot was that they were more likely to engage in negative behaviors, such as turning to alcohol, and to experience insomnia.

So, what is the truth of the matter here? Dr Hidenori Tanaka, Group Leader in AI Research at the CBS-NTT Program on the Physics of Intelligence at Harvard University’s Center for Brain Science, says:

In psychology, there’s something called ‘Theory of Mind’. This is about recognizing other people have thoughts and feelings that are different from our own, which means we adapt our communication depending on who we’re speaking to. AI does the same: it’s a mirror, so in a sense, when you’re talking to AI, you’re talking to yourself as it tries to mimic you. It’s why AI will support your opinions. This means whether the technology is good for you or not is partly due to the human in the equation – although culture and organizational systems make a lot of difference too.

Dr Shonna Waters, Co-Founder and Chief Executive of human capital performance advisory and consulting firm Fractional Insights, comes to not dissimilar conclusions:

The studies aren’t just measuring one experience of AI or how it’s implemented… There are multiple sub-populations in a company and depending on how much anxiety, fear, worry, or optimism they feel, they’ll react differently to the choices leaders make and how they deploy AI. Some of it also depends on how much agency they have in the face of fear.

Four ways of responding to AI

Her research has identified four different psychological profiles of how people respond to AI. These comprise:

  1. Visionaries: These people are early adopters who are optimistic about AI and less worried about its possible negative impacts on themselves and their lives than others are. Out of a global sample of 2,000 employees, they make up 39% of the total
  2. Disrupters: While members of this group see AI’s potential, they are also concerned about the effect it could have in future (33%)
  3. Endangered: These individuals do not fully understand the technology’s potential and pace of adoption. Therefore, they are not appropriately worried about the disruption it is likely to cause or the value it could bring (17%)
  4. Complacent: This group is currently not seeing a clear effect from AI either with regards to themselves or their work environment. As a result, they act like ostriches and refuse to worry until they see an impact (10%).

Paradoxically, Waters says, the more people are afraid of the technology, the more they tend to use it as a means of trying to protect themselves and their jobs. Some 71% of those questioned, for instance, believed that AI was essential for career progression, while 42% expressed fear their roles would be eliminated. Waters points out:

Part of what’s happening in mental health and wellbeing terms is about identity and job security issues. So, it isn’t just about personality. It’s also about organizational design. Someone that’s a Visionary in one company might be Endangered in another due to how much they trust in leadership competence and care. Where trust exists, there’s a 32% performance advantage over a situation in which trust is low.

Challenges in AI implementation

A further important factor when implementing AI is to ensure clarity over what behavior is allowed and expected. Fractional Insights’ study indicates, for example, that while 94% of respondents use the technology, 45% hide the fact. As Waters says:

We’re in the weird situation where CEOs are saying, ‘You’ll be fired if you don’t use it’, but you’re punished if you do. There’s a lack of clarity here as everything is so dynamic, but the situation makes it harder to govern effectively in terms of safe usage and setting the rules of the road, which creates more anxiety.

Unsurprisingly given the confusion, some 70% of employees indicated this lack of clarity was a key reason for not being able to adopt AI with confidence. Being empowered with the necessary tools, time, and sense of psychological safety to actually use them, on the other hand, was vital.

Another particularly challenging element of AI compared with other technologies is the pace at which it is evolving. Waters explains:

Change is always challenging for people, especially if there’s ambiguity. But AI is changing so rapidly that we’re not just talking about needing a period of upskilling, adjustment, and resetting norms. It’s continuous, and that’s a huge challenge for the frontline, managers and leaders. We generally talk about leaders providing a clear future vision of where we’re going, but the truth is most of them don’t know. So, they’re frozen into silence and, in the absence of information, employees assume the worst.

Even if their employer commits to human-centric AI adoption or a no-layoff policy, employees still watch the news and see AI-related redundancies taking place elsewhere. Unsurprisingly then, 61% are experiencing “meaningful AI angst”. As Waters says:

It’s not just about jobs or tasks. AI is threatening identity. It’s about cognitive work expertise, so ‘is my contribution still valuable if AI can analyze something in seconds that took me days?’. It’s making a significant number of people worry that even if they have a job, their skills will become obsolete. It also makes them question what is unique about being human and what value they can create and provide. So, the acceleration brought about by AI is creating pressure, which means people have less time to process the psychological impact of it and adapt…It also gives leaders less time to be thoughtful about change management and how they integrate tools into workflows.

AI and mental health

This accelerated adoption curve is particularly damaging for people with underlying mental health conditions, believes Tanaka:

AI has potential upsides and downsides for mental health. The downside is potential psychosis for those with schizophrenia or a serious detachment condition as they tend to be very insecure. AI is sycophantic, so if the two come together, it can generate consequences, including addiction problems. If someone talks to AI for hours for many days straight, it makes the brain very tired and then bad things happen. It can lead to existential crises and there have been lawsuits after people committed suicide. So, the safety of AI can depend on whether an individual is in an emotionally stable state. If they have a mental health condition, that matters.

The problem of addiction is a particular potential challenge in a startup context, Tanaka adds. Here, overworking and constant overstimulation of the brain from interacting with an AI that never sleeps can lead to a dopamine deficit. This makes people work even harder to get a hit, which can result in addictive behavior.

To make matters worse, over-relying on an AI that tells you what you want to hear can cause some people to reduce how much they interact with other humans, who may be perceived as more problematic. This, in turn, can bring about isolation and loneliness, which generates its own mental health issues.

 

Tomorrow, I’ll explore what action employers can take to ensure the emotional wellbeing of their employees as automation increases.

 
