Published on June 28th, 2023
Release AI From the Shadows, Argues Wharton Professor
Workers are using AI tools to boost individual productivity but keeping their activity on the down-low, which could be hurting the overall performance of their organizations, a professor at the Wharton business school contends in a blog posted Sunday.
“Today, billions of people have access to large language models (LLMs) and the productivity benefits that they bring,” Ethan Mollick wrote in his One Useful Thing blog. “And, from decades of research in innovation studying everyone from plumbers to librarians to surgeons, we know that, when given access to general-purpose tools, people figure out ways to use them to make their jobs easier and better.”
“The results are often breakthrough inventions, ways of using AI that could transform a business entirely,” he continued. “People are streamlining tasks, taking new approaches to coding, and automating time-consuming and tedious parts of their jobs. But the inventors aren’t telling their companies about their discoveries; they are the secret cyborgs, machine-augmented humans who keep themselves hidden.”
Mollick maintained that the traditional ways that organizations respond to new technologies don’t work well for AI and that the only way for an organization to benefit from AI is to get the help of their “cyborgs” while encouraging more workers to use AI.
That will require a major change in how organizations operate, Mollick contended. Those changes include corralling as much of the organization as possible into the AI agenda, decreasing the fears associated with AI use, and providing incentives for AI users to come forward and encourage others to use AI.
Companies also need to act quickly on some basic questions, Mollick added. What do you do with the productivity gains you might achieve? How do you reorganize work and kill processes made hollow or useless by AI? How do you manage and control work that might include risks of AI-driven hallucination and potential IP concerns?
Disrupting Business
As beneficial as bringing AI out of the shadows may be, it could be very disruptive to an organization.
“AI can have a 30% to 80% positive impact on performance. Suddenly, a marginal employee with generative AI becomes a superstar,” observed Rob Enderle, president and principal analyst of the Enderle Group, an advisory services firm in Bend, Ore.
“If generative AI isn’t disclosed, it can raise questions about whether an employee is cheating or whether they were slacking off earlier,” he told TechNewsWorld.
“The secrecy part isn’t as disruptive as it is potentially problematic for both the manager and the employee, particularly if the company hasn’t yet set policy on AI use and disclosure,” Enderle added.
AI use could generate an unrealistic view of an employee’s knowledge or capability that could lead to dangerous expectations down the road, said Shawn Surber, senior director of technical account management at Tanium, a provider of converged endpoint management, in Kirkland, Wash.
He cited the example of an employee who uses an AI to write an extensive report on a subject for which they have no deep expertise. “The organization may see them as an expert, but really, they just used an AI to write a single report,” he told TechNewsWorld.
Problems can also arise if an employee is using AI to produce code or process documentation that feeds directly into an organization’s systems, Surber added. “Large language model AIs are great at generating voluminous amounts of information, but if it’s not carefully checked, it could create system problems or even legal problems for the organization,” he explained.
Mindless AI Usage
“AI, when used well, will give workers a productivity boost, which isn’t inherently disruptive,” maintained John Bambenek, principal threat hunter at Netenrich, an IT and digital security operations company in San Jose, Calif.
“It is the mindless use of AI by workers that can be disruptive: simply not reviewing the output of these tools and failing to filter out nonsensical responses,” he told TechNewsWorld.
Understanding the logic behind generative AI results often requires specialized knowledge, added Craig Jones, vice president of security operations at Ontinue, a managed detection and response provider in Redwood City, Calif.
“If decisions are blindly driven by these results, it can lead to misguided strategies, biases, or ineffective initiatives,” he told TechNewsWorld.
Jones asserted that the clandestine usage of AI could cultivate an environment of inconsistency and unpredictability within an organization. “For instance,” he said, “if an individual or a team harnesses AI to streamline tasks or augment data analysis, their performance could significantly overshadow those not employing similar resources, creating unequal performance outcomes.”
Additionally, he continued, AI utilized without managerial awareness can raise serious ethical and legal quandaries, particularly in sectors like human resources or finance. “Unregulated AI applications can inadvertently perpetuate biases or infringe on regulatory requirements.”
Banning AI Not a Solution
As disruptive as AI might be, banning its use by workers is probably not the best course of action. Because “AI provides a 30% to 80% increase in productivity,” Enderle reiterated, “banning the tool would, in effect, make the company unable to compete with peers that are embracing and using the technology properly.”
“It is a potent tool,” he added. “Ignore it at your peril.”
An outright ban might not be the right way to go, but setting guidelines for what can and can’t be done with public AI is appropriate, noted Jack E. Gold, founder and principal analyst at J. Gold Associates, an IT advisory company, in Northborough, Mass.
âWe did a survey of business users asking if their companies had a policy on the use of public AI, and 75% of the companies said no,â he told TechNewsWorld.
“So the first thing you want to do if you’re worried about your information leaking out is set a policy,” he said. “You can’t yell at people for not following policy if there isn’t one.”
Data leakage can be a considerable security risk when using generative AI applications. “A lot of the security risks from AI come from the information people put into it,” explained Erich Kron, security awareness advocate at KnowBe4, a security awareness training provider in Clearwater, Fla.
“It’s important to understand that information is essentially being uploaded to these third parties and processed through the AI,” he told TechNewsWorld. “This could be a significant issue if people aren’t thinking about sensitive information, PII, or intellectual property they’re providing to the AI.”
In his blog, Mollick noted that AI is here and already having an impact in many industries and fields. “So, prepare to meet your cyborgs,” he wrote, “and start to work with them to create a new and better organization for our AI-haunted age.”