Future of AI

When AI Becomes a Therapist: Why Workplace Well-being Needs More Trust, Not More Tech

By Dr. Serena H. Huang, Founder, Data with Serena

When an employee confides that they prefer ChatGPT over their company’s Employee Assistance Program (EAP) because it “won’t rat them out,” we need to stop and ask what’s broken. 

This is not just about data or privacy. It is about trust. And without trust, AI will fail the very people it claims to help. 

A recent Harvard Business Review article found that therapy and companionship are now the top use cases for GenAI tools. Not coding help. Not drafting emails. Emotional connection.  

When I asked audiences at AI and data conferences a year ago whether they expected "AI romantic partners" to become a reality, very few raised their hands. Yet here we are.

AI is not replacing human support because it is better. It is replacing it because it feels safer and more accessible. Employees are turning to AI when they do not trust the systems, processes, or people around them. That’s the real crisis. 

The Gap AI is Filling 

AI is filling a void left by under-resourced, stigmatized, or misaligned workplace well-being systems. In many environments, people do not feel safe seeking care. They worry that asking for help could lead to being flagged, judged, or even terminated. 

The result is a shift in behavior. Instead of reaching out to a manager or using employer-sponsored support programs, people turn to chatbots. They talk to GenAI models like ChatGPT about their stress, burnout, even trauma. Not because it is ideal. Because it feels like the only option that will not come with consequences. 

The Business Case for AI in Employee Well-being 

Companies are increasingly turning to AI to support employee well-being because a healthy workforce is also a productive one. The benefits are not hypothetical. Through my work, I have seen three key applications gain traction recently. 

  1. Early Detection of Burnout and Sentiment Change

AI can identify employee burnout before it escalates into crisis. By analyzing data from calendars, communication patterns, and pulse surveys, AI tools can spot patterns that indicate rising stress. 

Take a look at your calendar. Is it sustainable to be in 10 or more meetings every day? Do you know which of your team members are experiencing collaboration overload? When this data is connected to other metrics such as absenteeism, performance trends, and turnover, leaders gain valuable insights. 
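
To make the calendar signal concrete, here is a minimal sketch of what an overload flag could look like, assuming a hypothetical export with one row per meeting attendance and purely illustrative thresholds. It is a starting point for the kind of signal described above, not a production model.

```python
# Minimal sketch: flag potential collaboration overload from calendar data.
# Assumes a hypothetical export with one row per meeting attendance and the
# columns employee_id, date, duration_minutes. Thresholds are illustrative only.
import pandas as pd

MEETINGS_PER_DAY_THRESHOLD = 10   # echoes the "10 or more meetings a day" question above
HOURS_PER_DAY_THRESHOLD = 6       # illustrative ceiling on daily meeting hours

def flag_overload(meetings: pd.DataFrame) -> pd.DataFrame:
    """Return one row per employee-day that exceeds either threshold."""
    daily = (
        meetings.groupby(["employee_id", "date"])
        .agg(meeting_count=("duration_minutes", "size"),
             meeting_hours=("duration_minutes", lambda m: m.sum() / 60))
        .reset_index()
    )
    mask = (daily["meeting_count"] >= MEETINGS_PER_DAY_THRESHOLD) | (
        daily["meeting_hours"] >= HOURS_PER_DAY_THRESHOLD
    )
    return daily[mask]

if __name__ == "__main__":
    # Tiny synthetic calendar export: employee "a" attends 11 meetings in one day.
    sample = pd.DataFrame({
        "employee_id": ["a"] * 11 + ["b"] * 3,
        "date": ["2025-03-10"] * 14,
        "duration_minutes": [30] * 14,
    })
    print(flag_overload(sample))  # employee "a" is flagged
```

Flags like these are only a prompt for a human conversation; they say nothing about why someone's calendar looks the way it does.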

One of the biggest changes in the last two years has been the rise of natural language processing tools. These systems analyze employee feedback, performance reviews, and even internal communication in Slack channels to detect sentiment shifts. Unlike traditional HR metrics, which are often lagging indicators, these tools offer a proactive lens into team morale and energy.
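
As an illustration of the sentiment-shift idea, the sketch below scores anonymized comments from two survey periods with an off-the-shelf sentiment model; the comments, time buckets, and alert threshold are assumptions for demonstration, not a recommended analysis pipeline.

```python
# Minimal sketch: track sentiment shift in anonymized employee comments over time.
# Uses the Hugging Face transformers sentiment pipeline; the comment data,
# time buckets, and alert threshold are illustrative assumptions.
from statistics import mean
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # downloads a default English model

def average_score(comments: list[str]) -> float:
    """Map each comment to +score (POSITIVE) or -score (NEGATIVE) and average."""
    results = sentiment(comments)
    signed = [r["score"] if r["label"] == "POSITIVE" else -r["score"] for r in results]
    return mean(signed)

# Hypothetical anonymized pulse-survey comments from two consecutive periods
last_quarter = ["Proud of what the team shipped", "Workload feels manageable"]
this_quarter = ["Back-to-back meetings every day", "Too exhausted to focus"]

shift = average_score(this_quarter) - average_score(last_quarter)
if shift < -0.5:  # illustrative threshold for a meaningful negative swing
    print(f"Sentiment dropped by {abs(shift):.2f}; worth a human check-in.")
```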

When used responsibly, this information empowers leaders to act early and support their team before the issues become unmanageable. 

  2. Personalized Recommendations for Well-being

Well-being is not one-size-fits-all. Some employees may simply need a reminder to take breaks or stretch. Others may need urgent access to counseling. AI can help personalize these recommendations based on data patterns, such as sleep and activity data from wearables. 
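
At its simplest, that personalization can be a handful of transparent rules mapping assumed wearable signals to light-touch suggestions, with anything serious routed to a human. The sketch below is hypothetical: the field names and thresholds are made up for illustration, and consent and privacy controls are presumed upstream.

```python
# Minimal sketch: rule-based personalization of well-being nudges from wearable data.
# The WearableSnapshot fields and the thresholds below are illustrative assumptions,
# not a production triage model; consent and privacy controls are presumed upstream.
from dataclasses import dataclass

@dataclass
class WearableSnapshot:
    avg_sleep_hours: float            # rolling 7-day average
    active_minutes_per_day: float
    resting_heart_rate_trend: float   # positive = rising over recent weeks

def recommend(snapshot: WearableSnapshot) -> list[str]:
    """Return light-touch suggestions; anything serious should route to a human."""
    suggestions = []
    if snapshot.avg_sleep_hours < 6:
        suggestions.append("Consider a later first meeting and a wind-down reminder.")
    if snapshot.active_minutes_per_day < 20:
        suggestions.append("Try a short walking break between long meetings.")
    if snapshot.resting_heart_rate_trend > 3:
        suggestions.append("Sustained strain detected; offer a confidential counseling option.")
    return suggestions or ["Keep current routine; no changes suggested."]

print(recommend(WearableSnapshot(5.5, 12, 4.0)))
```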

Personalization of well-being should also operate at the community level. In cultures where mental health is stigmatized, even the wording of an outreach message can determine whether someone engages with care. This is where collaboration between well-being teams and employee resource groups (ERGs) becomes essential.

A major financial services company shared with me that its partnership with Black and Latino employee groups helped tailor outreach in culturally relevant ways. That collaboration resulted in a more successful implementation of the company-sponsored well-being programs. 

  3. Expanding Access to Mental Health Support

AI-powered chatbots can help employees navigate benefits, book appointments, and even provide some emotional support. These tools are not intended to replace therapists, but they can fill critical gaps, especially during off-hours or in locations where licensed professionals are less available.

Tools like Woebot are rule-based AI systems. Their responses are designed by psychologists and continuously refined to reduce risk. Systems like these provide responses that are predictable and controlled, reducing the risk of providing harmful advice to users. 
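
To show why this design is predictable, here is a minimal sketch of a rule-based flow: every intent maps to a pre-written response (standing in for content a clinical team would author), crisis language always escalates to a human, and nothing is generated on the fly. The intents and wording are illustrative, not Woebot's actual content.

```python
# Minimal sketch of why rule-based well-being chatbots behave predictably:
# every recognized intent maps to a response written and reviewed in advance,
# so nothing is generated on the fly. Intents and wording are illustrative only.
SCRIPTED_RESPONSES = {
    "book_appointment": "I can help you schedule time with a counselor. Which day works best?",
    "benefits_question": "Here is a summary of the mental health benefits in your plan.",
    "feeling_overwhelmed": "That sounds hard. Would you like a short breathing exercise, "
                           "or should I connect you with a licensed counselor?",
}
CRISIS_KEYWORDS = {"hurt myself", "self-harm", "suicide"}

def respond(intent: str, message: str) -> str:
    # Crisis language always escalates to a human, never to a scripted tip.
    if any(keyword in message.lower() for keyword in CRISIS_KEYWORDS):
        return "I'm connecting you with a crisis counselor right now."
    # Unknown intents get a safe fallback instead of an improvised answer.
    return SCRIPTED_RESPONSES.get(intent, "I didn't catch that. Would you like to talk to a person?")

print(respond("feeling_overwhelmed", "I'm exhausted and behind on everything"))
```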

However, not all AI tools are built this way. Tessa, a chatbot developed to help patients with eating disorders, was taken offline after it gave harmful advice to a patient. Its GenAI model had pulled content from the internet that contradicted the clinical guidance of the development team.  

GenAI is powerful but can be very unpredictable. Without strict safeguards and continued monitoring, these tools can amplify bias or, worse, offer guidance that is inappropriate in high-stakes situations like self-harm.

The risk is not that employees will use AI tools. The risk is that they already are.

The RAI Framework: Responsible AI for Workforce Well-being 

To avoid common pitfalls and design AI tools that truly support employees, I advocate for a 10-part set of Responsible and Ethical AI Principles. These principles come directly from my work with Fortune 100 companies and are detailed in my new book, The Inclusion Equation: Leveraging Data and AI for Organizational Diversity and Well-being.

  1. Nonmaleficence: Ensure that AI systems do not cause harm to employees. This principle emphasizes the importance of designing and deploying AI tools that prioritize employee safety and well-being. 
  2. Transparency and Explainability: Clearly communicate to employees how AI-powered well-being tools work, what data are collected, and how that data will be used. Ensure that AI-powered well-being tools provide transparent and explainable outputs, enabling employees to understand the reasoning behind recommendations or interventions. 
  3. Privacy and Consent: Implement robust safeguards to protect employee privacy, including data encryption, secure storage, and access controls. Obtain explicit consent from employees before collecting and analyzing their data. 
  4. Autonomy: Prioritize employee autonomy, allowing individuals to opt out of AI-powered well-being tools or request human support if desired.
  5. Fairness: Ensure that AI systems are designed to be fair and unbiased, avoiding perpetuation of existing inequalities. 
  6. Accountability and Human Oversight: Establish clear lines of accountability for AI-powered well-being tools, including designated responsibilities and consequences for misuse. Implement human oversight and review processes to detect and correct potential biases or errors in AI decision-making. 
  7. Monitoring: Regularly monitor and evaluate the effectiveness and ethics of AI-powered well-being tools, making adjustments as needed. 
  8. Collaboration: Collaborate with experts in AI ethics, data privacy, and mental health to ensure that AI-powered well-being tools are developed and implemented responsibly. 
  9. Inclusivity: Actively involve diverse groups of employees in the design, development, deployment, and oversight of AI systems. This ensures that the tools are equitable and consider the needs of all employees, especially those from underserved communities.
  10. Governance: Establish clear governance structures for AI systems, including policies, procedures, and oversight mechanisms to ensure compliance with ethical standards as well as global and regional regulations. 

The Human Is Not Optional 

As organizations rush to embed AI across the employee experience, we must resist the temptation to automate humanity out of the equation. Emotional well-being is not a productivity metric to optimize. It is a fundamental human need. AI can play a critical role in building healthier workplaces, but it cannot replace the leadership, empathy, and trust that well-being ultimately depends on.  

Remember: AI can scale support, but only humans can truly care. 

About the Author: 

Dr. Serena Huang is revolutionizing how organizations approach talent, well-being, and DEI through data and AI. A Fortune 100 AI consultant and career strategist, she is the founder of Data with Serena and author of The Inclusion Equation: Leveraging Data and AI for Organizational Diversity and Well-being.
