Your company knows you’re reading this story at work
CNN
—
Last month, news surfaced that major companies like Walmart, Starbucks, Delta and Chevron were using AI to monitor employee communications. The reaction online was swift, with employees and workplace advocates worrying about a loss of privacy.
But experts say that while AI tools might be new, watching, reading and tracking employee conversations is far from novel. AI might be more efficient at it — and the technology might raise some new ethical and legal challenges, as well as risk alienating employees — but the fact is workplace conversations have never really been private anyway.
“Monitoring employee communications isn’t new, but the growing sophistication of the analysis that’s possible with ongoing advances in AI is,” said David Johnson, a principal analyst at Forrester Research.
“What’s also evolving is the industry’s understanding of how monitoring in this way impacts employee behavior and morale under various circumstances, along with the policies and boundaries for acceptable use within the workplace.”
A recent study by a company called Qualtrics, which uses AI to help filter employee engagement surveys, found that managers are bullish on AI software but that employees are nervous, with 46% calling its use in the workplace “scary.”
“Trust is lost in buckets and gained back in drops, so missteps in applying the technology early will have a long tail of implications for employee trust over time,” said Johnson, even as he called a future of AI-powered employee monitoring “inevitable.”
How it’s used
One company bringing AI into common work-related software, including Slack, Zoom, Microsoft Teams and Meta’s Workplace platform, is seven-year-old startup Aware.
Aware is working with Starbucks, Chevron, Walmart and others; the startup says its product is meant to pick up on everything from bullying and harassment to cyberattacks and insider trading.
Data stays anonymous until the technology finds instances that it’s been asked to highlight, Aware says. If there’s an issue, it will then be flagged to HR, IT or legal departments for further investigation.
A Chevron spokesperson told CNN the company is using Aware to help monitor public comments and interactions on its internal Workplace platform, where employees can post updates and comments.
Meanwhile, a Starbucks spokesperson said it uses the technology to improve its employees’ experience, including watching its internal social platforms for trends or feedback.
Walmart told CNN it uses software to keep its online internal communities safe from threats or any other inappropriate behavior as well as to track trends among employees.
Delta said it uses the software to moderate its internal social platform, routinely monitor trends and sentiment, and retain records for legal purposes.
Other monitoring services exist, too. Cybersecurity company Proofpoint uses similar technology to help monitor cyber risks, such as incoming phishing scams or if an employee is downloading and sending sensitive work data to their personal email account. (Disclosure: CNN’s parent company Warner Brothers Discovery is a subscriber.)
Proofpoint, which is used by many Fortune 100 companies, recently rolled out newer capabilities to restrict the use of AI tools, such as ChatGPT, on company systems if it’s against company policy. This would prevent employees from sharing sensitive company data with an AI model, where it could resurface in future responses.
Still, the inclusion of AI in the workplace raises concerns for employees who may feel like they’re under surveillance.
Reece Hayden, a senior analyst at ABI Research, said it’s understandable that some workers could feel a “big brother effect.”
“This could have an impact on willingness to message and speak candidly with colleagues over internal messaging services like Microsoft Teams,” he said.
A new spin on an old method
Social media platforms have long used similar methods. Meta, for example, uses content moderation teams and related technologies to manage abuse and harmful behavior on its platforms. (In fact, Meta has recently been heavily criticized over allegations of inadequate moderation, particularly around child sex abuse.)
At the same time, employee behavior has been monitored on work systems since the dawn of email. Even when employees are not on a secure work network, companies are able to monitor activity through browsers. (Aware, however, only works on corporate communications services, not browsers.)
“Trying to understand employee patterns is not a new concept,” Hayden said, pointing to companies tracking things like log-on times and meeting attendance.
But what’s changing with this process is applying more advanced AI tools directly into employee workflows. AI software could allow companies to quickly analyze thousands of data points and key words to give insight into trends and what workers are discussing in real time.
Hayden said companies may want to track employee conversations, but not because they care about anyone’s weekend plans or latest Netflix binge.
“This will help gain more granular, real-time insights on employees,” Hayden said.
He added that this can help companies better shape internal messaging, policies and strategies, based on what the software is learning about its workforce.
The trust factor
Although the rise of AI in the workplace could introduce legal and ethical challenges, along with issues around accuracy and relevancy, Johnson at Forrester Research said he views the biggest complication ahead as gaining employee trust in both the short and long term.
Simply put, people don’t want to feel like they’re being watched.
He said organizations need to be careful about how they embrace the technology. If a company uses it to gauge how productive employees are, or whether workers are unhappy, and follows up with disciplinary action or termination, it could be years before employees trust the company again.
“It’s critically important to be cautious and deliberate” in using this technology, he said.