Keep Your Employees Happy and Save Costs with Your Own AI

Introduction

In today's interconnected digital workplaces, internal communication platforms serve as crucial conduits for collaboration and information exchange. However, with the proliferation of digital interactions comes the risk of workplace incivility and inappropriate behavior, ranging from disrespectful remarks to unethical conduct.

Research has shown that uncivil and inappropriate behavior carries a serious financial impact. Employees subjected to such behavior can become distracted from work and cause delays, at an estimated cost of USD $14,000 annually per affected employee (Pearson, 2010). "According to a study conducted by Accountemps and reported in Fortune, managers and executives at Fortune 1,000 firms spend 13 percent of their work time—the equivalent of seven weeks a year—mending employee relationships and otherwise dealing with the aftermath of incivility" (Porath & Pearson, 2013). These recurring costs add up quickly.

$14,000 per employee per year: the cost of uncivil and inappropriate behavior

Recent advances in AI, particularly large language models (LLMs) such as the widely known GPT family, unlock new capabilities for analyzing large amounts of text and detecting subtle hints of uncivil and inappropriate behavior. This whitepaper explores the potential of leveraging AI to detect and mitigate such behaviors across large volumes of communication. By proactively fostering a more respectful and inclusive work environment, organizations can improve productivity, retain good employees, and save costs.

Supercharge Management with a New Generation of AI

The deployment of Large Language Models (LLMs), such as GPT (Generative Pre-trained Transformer), offers a significant advantage over traditional AI methods in detecting uncivil and inappropriate workplace behavior. Unlike conventional AI systems that rely on predefined rules or patterns, LLMs possess advanced natural language understanding capabilities, enabling them to comprehend context, tone, and nuances within workplace communications. GPT, in particular, has demonstrated remarkable proficiency in processing and generating human-like text across various domains (Brown et al., 2020).

By leveraging its vast pre-trained knowledge and contextual understanding, GPT can detect subtle linguistic cues indicative of uncivil or inappropriate behavior, even in complex and dynamic workplace interactions. Moreover, LLMs excel in adapting to evolving language patterns and can continuously improve their detection accuracy over time through fine-tuning and continuous learning (Radford et al., 2019). Thus, the utilization of LLMs like GPT presents a superior solution for proactively identifying and addressing workplace behavior issues, ultimately contributing to a healthier and more inclusive work environment.

Costs and Benefits with AI

With new capabilities comes a new set of costs and benefits. Obviously, the benefits must outweigh the costs for a solution to be desirable.

Costs

Manpower and computation power are the two main contributing factors of cost. Managers and employee success specialists will need to spend time reading and acting on reports generated by the AI system, and IT administrators will need to implement, maintain, and troubleshoot it. These are costs added on top of the existing workforce's duties. An organization may decide to establish a dedicated unit to manage the AI system and its output more efficiently, though this incurs additional headcount.

Computation power is a primary concern in implementing any AI system. By carefully budgeting and optimizing for processing latency and accuracy, it is possible to lower the cost well below that of systems requiring interactive responses. In some cases, the workload can even run on ordinary CPUs, which can lower the cost by a factor of 4 to 10.

Benefits

The most obvious benefit of AI is the volume of communication it can process compared to a human in the same amount of time. Armed with this superior processing capacity, management can offload most of the review work to AI and spend time only when a warning comes up. Recall that managers and executives at Fortune 1,000 firms spend 13 percent of their work time, the equivalent of seven weeks a year, mending employee relationships and otherwise dealing with the aftermath of incivility (Porath & Pearson, 2013). If we assume managers spend 30 minutes each week going through an AI-generated report, that totals only 24 hours a year, the equivalent of a single day, spent proactively preventing incidents. Compared to the 49 days (seven weeks) spent handling the aftermath, that is a 49x saving in time spent.
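The arithmetic behind this comparison can be made explicit. The figures below are the whitepaper's own assumptions (30 minutes of review per week over 48 working weeks, and seven calendar weeks of aftermath handling), not measured data:

```python
# Back-of-envelope comparison of proactive review vs. aftermath handling.
# All figures are assumptions stated in the text, not measurements.

REVIEW_MINUTES_PER_WEEK = 30   # manager time spent on the AI report
WEEKS_PER_YEAR = 48            # working weeks behind the 24-hour total
AFTERMATH_WEEKS = 7            # "seven weeks a year" (Porath & Pearson)

review_hours = REVIEW_MINUTES_PER_WEEK * WEEKS_PER_YEAR / 60   # 24 hours
review_days = review_hours / 24                                # 1 calendar day
aftermath_days = AFTERMATH_WEEKS * 7                           # 49 calendar days

print(f"Proactive review: {review_hours:.0f} h ≈ {review_days:.0f} day/year")
print(f"Aftermath handling: {aftermath_days} days/year")
print(f"Time saving: {aftermath_days / review_days:.0f}x")
```

Note that the 49x figure compares calendar days; measured in working hours the saving is closer to 12x (280 hours of aftermath versus 24 hours of review), still a substantial reduction.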

Implementation Considerations

Proper implementation of LLMs for analyzing internal communication necessitates careful consideration of various factors to ensure effectiveness, legality, and ethicality.

Data Privacy

Firstly, organizations must prioritize data privacy and security, particularly when handling sensitive employee communications. Implementing robust encryption protocols and access controls can safeguard against unauthorized access and mitigate the risk of data breaches. Additionally, adherence to relevant regulatory frameworks, such as GDPR or HIPAA, is imperative to maintain compliance and protect employee rights.

Transparency

Transparent communication with employees about the purpose and scope of LLM analysis is essential to foster trust and alleviate concerns regarding privacy invasion. Providing clear guidelines on acceptable communication practices and the consequences of misconduct can also serve as a deterrent against inappropriate behavior while promoting a culture of accountability.

Bias Mitigation

Furthermore, organizations must address potential biases inherent in LLMs to ensure fair and equitable analysis of internal communication. Bias can manifest in various forms, including gender, race, or cultural biases, which may skew the detection of uncivil or inappropriate behavior. Regular audits and validation checks can help identify and mitigate biases within LLM algorithms, promoting fairness and accuracy in detecting workplace behavior issues.

Human Oversight

Despite the capabilities of LLMs, human oversight remains essential. Automated detection systems should complement rather than replace human judgment, with trained personnel responsible for reviewing flagged instances and determining appropriate responses.

Interpretability and Explainability

The decisions made by LLM-based detection systems must be interpretable and explainable. Employees should understand the rationale behind flagged behaviors and have recourse to challenge or appeal decisions.

Continuous Model Improvement

Additionally, ongoing monitoring and feedback mechanisms enable organizations to continuously refine LLM models and adapt to evolving language dynamics within the workplace.

By prioritizing transparency, fairness, and continuous improvement, organizations can harness the power of LLMs to effectively analyze internal communication while upholding ethical standards and promoting a positive work environment.

A Reference Implementation with Bovo AI

In this section, we present a robust reference implementation facilitated by Bovo AI, meticulously designed to address the imperatives of data privacy, transparency, bias mitigation, human oversight, interpretability, and continuous improvement within organizational communication monitoring systems.

Figure: Bovo Reference Architecture

How It Works

Every component shown above, with the exception of Cloud LLMs, can be deployed internally to an organization's network without sending any data to external third parties.

  1. A configurable filter screens the data, facilitating the elimination of sensitive or irrelevant information prior to further processing.

  2. A preprocessor optimizes the communication for comprehension by Large Language Models (LLMs).

  3. A configurable router allows for the selection of appropriate LLMs based on diverse criteria such as budget considerations, detection accuracy, and compliance requisites.

  4. Behavioral signals within the communication are scrutinized and stored, with the communication discarded promptly thereafter.

  5. Aggregated signals can be queried on the dashboard, or be configured to alert management according to severity.

Data Privacy

Our reference implementation caters to varying levels of data privacy requirements, offering options ranging from internal deployment to cost-effective cloud solutions. By confining all components within the organizational network, data transmission to external entities is obviated, thereby minimizing any impact on existing compliance frameworks.

Transparency

Configurable dashboards furnish team members with aggregated metrics pertaining to communication performance. This transparency empowers individuals to introspect and enhance their communication practices proactively.

Bias Mitigation

The system incorporates mechanisms for the sampling of communication inputs and associated behavioral signals, facilitating auditing processes. This functionality enables auditors to discern and rectify any inherent biases in behavioral signal detection.
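One way an auditor might use such sampled signals is to compare flag rates across groups, a minimal sketch assuming hypothetical group labels and the "four-fifths" disparity rule of thumb; none of these field names or thresholds are part of the Bovo AI product:

```python
# Hypothetical audit sketch: compare how often messages are flagged
# across two groups to surface potential bias in signal detection.
# The synthetic sample and the 0.8 threshold are illustrative only.
import random

random.seed(0)

# Sampled (group, flagged) pairs drawn from stored behavioral signals;
# here synthesized with different underlying flag probabilities.
sample = [("A", random.random() < 0.10) for _ in range(500)] + \
         [("B", random.random() < 0.20) for _ in range(500)]

def flag_rate(group: str) -> float:
    rows = [flagged for g, flagged in sample if g == group]
    return sum(rows) / len(rows)

rate_a, rate_b = flag_rate("A"), flag_rate("B")
disparity = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"flag rate A={rate_a:.2%}, B={rate_b:.2%}, disparity={disparity:.2f}")
if disparity < 0.8:  # four-fifths rule of thumb
    print("Potential bias: route these samples to human auditors")
```

A disparity well below 1.0 does not prove bias, but it tells auditors where to direct the human review described in the next section.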

Human Oversight

Instances of flagged uncivil or inappropriate signals undergo meticulous human inspection, ensuring the preservation of ethical standards and fostering trust in the system's operations.

Interpretability and Explainability

Behavioral signals are accompanied by elucidating explanations, courtesy of advancements in generative AI technology. This enhancement endows stakeholders with unprecedented clarity regarding the rationale behind flagged messages, previously unattainable with conventional approaches.

Continuous Model Improvement

An iterative feedback loop is established, wherein anonymized communication data and behavioral signals are sampled for human evaluation. Insights gleaned from this process inform ongoing enhancements to the AI model, thereby perpetuating a cycle of continuous improvement.

Conclusion

In conclusion, large language models offer unprecedented potential as scalable tools for promoting respectful workplace communication. By harnessing LLMs' capabilities in language understanding, generation, and analysis, organizations can proactively address the lost productivity and operational cost caused by uncivil or inappropriate workplace behavior. Integrating LLM-based solutions can play a pivotal role in fostering a culture of respect and belonging among employees.

References

  • Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., ... & Amodei, D. (2020). Language models are few-shot learners. arXiv preprint arXiv:2005.14165.

  • Pearson, C. (2010). The cost of bad behavior: How incivility is damaging your business and what to do about it. Human Resource Management International Digest, 18, 23–25. doi:10.1108/hrmid.2010.04418fae.002

  • Porath, C., & Pearson, C. (2013). The price of incivility. Harvard Business Review, 91, 114–121.

  • Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI Blog, 1(8), 9.