Recent studies reveal that a majority of organisations worldwide are considering banning ChatGPT in the workplace. However, assessments of employee sentiment suggest this might not be the most prudent course of action.
According to a recent report by BlackBerry, leaders worldwide are exploring the idea of restricting the use of ChatGPT and similar generative AI platforms. A survey of 2,000 IT decision-makers across various countries, including Australia, the US, Canada, the UK, France, Germany, the Netherlands and Japan, found that 75 per cent of employers are either contemplating or actively enforcing bans on the technology in their workplaces.
The primary reasons cited for a ban on generative AI are the potential risks to data security and privacy (67 per cent) and the potential impact on corporate reputation (57 per cent).
The rapid pace of AI advancements has left many employers grappling with understanding and mitigating emerging threats, prompting them to consider a blanket ban on the technology as they strive to gain a better grasp of its risks.
However, Dr Sean Gallagher, Director at the Centre for the New Workforce and AHRI Future of Work Advisory Panel Member, warns against overlooking the relentless pace of AI progress. Speaking at the AHRI National Convention and Exhibition last August, he highlighted AI's disruptive nature, noting that it has reached milestones in human creativity earlier than anticipated.
"This is arguably the most transformative technology we've ever encountered," Gallagher emphasised.
Bill Gates has even suggested that AI, particularly generative AI, will wield power comparable to that of the mobile phone or the internet. Some experts go even further, likening it to electricity, a universal technology poised to revolutionise every aspect of our lives, Gallagher added.
What are the implications of implementing a ban on ChatGPT?
Banning AI and pushing its use underground poses a significant risk, says Gallagher.
Referring to a US study, Gallagher notes that a clear majority of employees who use ChatGPT at work, seven out of 10, choose not to disclose this to their employers. Although Australian workers are more transparent about their AI use, fewer than half (43 per cent) are willing to tell their supervisors.
Given the widespread availability of ChatGPT and similar generative AI tools, Gallagher contends that it has become exceedingly challenging for leaders to effectively enforce a blanket ban on their usage.
He cautions that leaders who enforce such prohibitions may find employees resorting to using these platforms clandestinely via their mobile devices, subsequently transferring the output to their computers.
While certain web developers have devised applications theoretically capable of detecting text generated by ChatGPT, users can easily circumvent these measures by simply altering a few words or sentences in AI-generated content.
Even within the education sector, where concerns about AI's impact on academic integrity abound, policymakers are increasingly recognising that hastily imposed bans on AI are futile and counterproductive.
Establishing Measures to Address AI Concerns
To address apprehensions surrounding generative AI, Gallagher proposes a more effective strategy: developing comprehensive policies and training protocols for its use.
He highlights that a significant factor in employee unease about AI is that it is not openly embraced in the workplace. Recent research conducted jointly by the University of Queensland and KPMG found that 75 per cent of people expressed concerns about the risks associated with using AI at work.
Gallagher advocates for a proactive role for HR in aiding employees in exploring and implementing AI models. This proactive stance can help assuage apprehensions about the technology and cultivate an environment conducive to openness and curiosity regarding AI.
To ensure the effective enforcement of company policies on AI, Gallagher recommends a gradual, bottom-up implementation approach. Essentially, employers should initiate testing of AI in small-scale, repetitive, and routine tasks while diligently monitoring associated risks and challenges.
The Outlook for Employment in the AI Era
In addition to being challenging to implement, a ban on generative AI also deprives organisations of the significant productivity enhancements it can deliver. By assuming responsibility for mundane, time-consuming tasks, this technology can liberate employees to concentrate on more impactful endeavours.
Gallagher cites an MIT study in which two cohorts of professionals completed writing assignments, with only one group permitted to use ChatGPT for assistance. The participants who used ChatGPT completed their tasks 11 minutes faster, with an 18 per cent improvement in output quality.
"When provided with the opportunity to utilise these tools, workers reported substantially higher levels of job satisfaction and self-efficacy," states Gallagher.
To harness the advantages securely, he recommends allocating time for experimentation and peer-to-peer training with these tools, fostering a psychologically safe environment for continual learning as AI capabilities advance.
With this in mind, organisations integrating generative AI into their operations should consider how they want employees to use the time it frees up, he suggests.
Gallagher’s vision of the future of human work in the AI age boils down to four main areas:
Manoeuvring Through Uncertainty
AI thrives on copious amounts of historical data and patterns, enabling it to effortlessly handle repetitive tasks. However, in the face of unforeseen or unprecedented circumstances, humans are better equipped to take the reins.
"We can chart a course for the future by collaborating with diverse individuals who offer unique perspectives and viewpoints," remarks Gallagher.
Embracing Abstract Thinking
Similarly, humans' capacity for conceptual reasoning makes their contributions preferable to AI's when it comes to innovative problem-solving.
By relieving employees of monotonous tasks and engaging them in endeavours requiring abstract thinking, organisations can foster a culture rich in creativity and curiosity.
Cultivating a Profound Understanding of People
With AI assuming responsibility for tasks less centred on human interaction, individuals can concentrate on refining their understanding of relationships with stakeholders.
"Identifying shifts in customer and client demand patterns, and devising strategies to meet them amidst various circumstances... that represents invaluable and distinctly human work," Gallagher explains.
Context, Significance, and Judgement
While AI serves as a valuable decision-making tool, the ultimate authority should rest with human judgment.
"It's implicit... Humans are the arbiters. Every word written, every line of code, every image created. Individuals must take ownership, even if generative AI plays a significant role," Gallagher emphasises.
By prioritising training and cultivating a culture around these four domains, organisations can enhance their capacity for strategic initiatives and safeguard their employees' job security.
"Fortunately, these activities hold the utmost value within every organisation," Gallagher concludes.
In considering their strategies concerning ChatGPT, leaders should bear in mind that the future of work is inherently intertwined with AI. Successfully integrating AI requires thoughtful consideration for the mutual benefit of businesses and their workforce.