Saturday, November 8

Neurodiverse professionals may see unique benefits from artificial intelligence tools and agents, research suggests. With AI agent creation booming in 2025, people with ADHD, autism, dyslexia and other conditions report a more level playing field in the workplace thanks to generative AI.

A recent study from the UK’s Department for Business and Trade found that neurodiverse workers were 25% more satisfied with AI assistants, and more likely to recommend them, than neurotypical respondents.

“Standing up and walking around during a meeting means that I’m not taking notes, but now AI can come in and synthesize the entire meeting into a transcript and pick out the top-level themes,” said Tara DeZao, senior director of product marketing at enterprise low-code platform provider Pega. DeZao, who was diagnosed with ADHD as an adult, has combination-type ADHD, which includes both inattentive symptoms (time management and executive function issues) and hyperactive symptoms (increased movement).

“I’ve white-knuckled my way through the business world,” DeZao said. “But these tools help so much.”

AI tools in the workplace run the gamut and can have hyper-specific use cases, but solutions like note takers, schedule assistants and in-house communication support are common. Generative AI happens to be particularly adept at skills like communication, time management and executive functioning, creating a built-in benefit for neurodiverse workers who’ve previously had to find ways to fit in among a work culture not built with them in mind.

Because of the skills that neurodiverse individuals can bring to the workplace — hyperfocus, creativity, empathy and niche expertise, just to name a few — some research suggests that organizations prioritizing neuroinclusion generate nearly 20% higher revenue.

AI ethics and neurodiverse workers

“Investing in ethical guardrails, like those that protect and aid neurodivergent workers, is not just the right thing to do,” said Kristi Boyd, an AI specialist with the SAS data ethics practice. “It’s a smart way to make good on your organization’s AI investments.”

Boyd referred to an SAS study which found that companies investing the most in AI governance and guardrails were 1.6 times more likely to see at least double ROI on their AI investments. But Boyd highlighted three risks that companies should be aware of when implementing AI tools with neurodiverse and other individuals in mind: competing needs, unconscious bias and inappropriate disclosure.

“Different neurodiverse conditions may have conflicting needs,” Boyd said. For example, while people with dyslexia may benefit from document readers, people with bipolar disorder or other mental health neurodivergences may benefit from AI-supported scheduling to make the most of productive periods. “By acknowledging these tensions upfront, organizations can create layered accommodations or offer choice-based frameworks that balance competing needs while promoting equity and inclusion,” she explained.

Regarding AI’s unconscious biases, algorithms can be (and have been) unintentionally taught to associate neurodivergence with danger, disease or negativity, as outlined in Duke University research. And even today, neurodiversity can still be met with workplace discrimination, making it important for companies to provide safe ways to use these tools without forcing any individual worker to disclose a diagnosis.

‘Like somebody turned on the light’

As businesses take accountability for the impact of AI tools in the workplace, Boyd says it’s important to remember to include diverse voices at all stages, implement regular audits and establish safe ways for employees to anonymously report issues.

The work to make AI deployment more equitable, including for neurodivergent people, is just getting started. The nonprofit Humane Intelligence, which focuses on deploying AI for social good, released its Bias Bounty Challenge in early October, inviting participants to identify biases with the goal of building “more inclusive communication platforms — especially for users with cognitive differences, sensory sensitivities or alternative communication styles.”

For example, emotion AI — AI that identifies human emotions — can help people who have difficulty reading emotions make sense of their meeting partners on video conferencing platforms like Zoom. Still, this technology requires careful attention to bias, ensuring AI agents recognize diverse communication patterns fairly and accurately rather than embedding harmful assumptions.

DeZao said her ADHD diagnosis felt like “somebody turned on the light in a very, very dark room.”

“One of the most difficult pieces of our hyper-connected, fast world is that we’re all expected to multitask. With my form of ADHD, it’s almost impossible to multitask,” she said.

DeZao says one of AI’s most helpful features is its ability to receive instructions and do its work while the human employee can remain focused on the task at hand. “If I’m working on something and then a new request comes in over Slack or Teams, it just completely knocks me off my thought process,” she said. “Being able to take that request and then outsource it real quick and have it worked on while I continue to work [on my original task] has been a godsend.”

https://www.cnbc.com/2025/11/08/adhd-autism-dyslexia-jobs-careers-ai-agents-success.html
