
CHALLENGING OUR NARRATIVES ABOUT THE FUTURE OF WORK

By Richard Yonck



Good forecasting and scenario building require us to remain open to the trends that influence them, so that we continually challenge our assumptions. Yet everyone, futurists included, tends to get stuck in certain narratives.


What are some of the assumptions surrounding the future of work that are not being thoroughly challenged today, and how can we use this knowledge to more accurately anticipate relevant futures?


For centuries, probably millennia, people have worried about how progress and change will affect their livelihoods. From the Luddites fearing the automation of the textile industry to dockworkers fighting containerization to knowledge workers feeling threatened by the digital revolution, we've seen this apprehension played out time and again. 


Now, we find ourselves facing a new era of unemployment anxiety, with concerns that artificial intelligence is poised to replace workers by the tens of millions, leading to massive unemployment and the possible elimination of half the population’s means of livelihood.


When the Oxford Martin School published its Future of Employment study in 2013, it forecast that AI and automation would replace 47% of all jobs over the next 20 years. Understandably, people were alarmed. Subsequent studies over the next few years echoed these concerns. For instance, the 2015 Robot Revolution Report by Bank of America Merrill Lynch said 50% of jobs would be replaced by the mid-2030s. A similar report from PwC in 2017 put the number at just under 40%.


Each of these reports stated that close to half of all current jobs would disappear or be replaced with very different jobs by sometime around 2035. And not just information economy jobs — we’re also talking about jobs in the construction and building trades, retailing, the hospitality industry, healthcare, and more. The implication is that, were this to happen, we would need a major overhaul of how we educate and retrain our workforce. Alternatively, we would need to transform unemployment and social support systems everywhere. Either way, the impact on national and global economies would be devastating. While some people, particularly tech leaders, have suggested Universal Basic Income (UBI) could be a partial solution, it would probably be entirely inadequate for addressing such a major transformation.


To be fair, there has been considerable criticism of and pushback against these reports since their release, as well as subsequent corrections. Nevertheless, the fears and narratives they stoked persist, especially among the general public. Many of the narratives from that time are still with us today, regardless of their validity.


I frequently like to point out that we are now midway into this supposed future of catastrophic job loss and replacement. Yet during that same timeframe, we’ve seen nothing close to this scenario playing out. The jobs are neither disappearing nor being replaced on a massive scale with entirely different jobs. In the U.S., for example, millions of jobs have been created each year since 2013, with the singular exception of 2020, the height of the COVID-19 pandemic. If AI is going to meet its projected deadline, it had better get to work!


AI has been developing for some 75 years, incrementally gaining in its abilities and applications. From expert systems to scanners with optical character recognition to credit card fraud detection, it's been insinuating its way into our businesses and personal lives. As it’s done this, something interesting has been happening. In each affected application or area, AI gradually becomes increasingly capable until we stop being aware of it. At this point, the only time we really take notice is when it doesn’t function properly. Once it reaches a certain level of reliability, it tends to disappear and fade into the background, getting out of the way so we can get on with the work people do best.


In other words, our work and our daily lives are already teeming with AI. 


Recently, we've seen the arrival of a new form of artificial intelligence: generative AI. Much of this takes the form of large language models (LLMs) and large multimodal models (LMMs), models trained on vast collections of data that have typically been scraped from across the web. Using transformer architectures, techniques such as retrieval-augmented generation (RAG), and other generative methods, this new generation of AI produces results unlike anything we've ever seen.
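
To make the RAG idea just mentioned a bit more concrete, here is a deliberately minimal sketch of the pattern in Python: retrieve the documents most relevant to a question, then hand them to a generative model as added context. The tiny corpus, the word-overlap scoring and the generate() stub are illustrative stand-ins of my own, not any particular vendor's API.

    corpus = [
        "AI has been used for credit card fraud detection since the 1990s.",
        "Optical character recognition turns scanned pages into machine-readable text.",
        "Large language models are trained on text scraped from across the web.",
    ]

    def retrieve(query, documents, k=2):
        """Rank documents by naive word overlap with the query and return the top k."""
        query_words = set(query.lower().split())
        ranked = sorted(documents,
                        key=lambda doc: len(query_words & set(doc.lower().split())),
                        reverse=True)
        return ranked[:k]

    def generate(prompt):
        """Stand-in for a call to a large language model."""
        return "[model response conditioned on a prompt of %d characters]" % len(prompt)

    question = "How do large language models learn from the web?"
    context = "\n".join(retrieve(question, corpus))
    print(generate("Context:\n" + context + "\n\nQuestion: " + question))

In a production system the word-overlap ranking would be replaced by embedding-based similarity search and generate() by a call to an actual LLM, but the division of labor is the same: retrieve first, then generate.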


As a result, many of us frequently interpret the resulting outputs as representative of creativity, self-awareness, consciousness or even sapience. But generative AI is none of these things. It is simply you, me and our data reflected back on us. It’s all of the creative and uncreative acts we've uploaded to the Internet these past three decades, repackaged anew using automation. We are seeing mirror images of ourselves transformed through the algorithms of a novel data management system, the workings of which we don’t yet fully understand. As I explored in a previous Compass article, we will probably be incorrectly ascribing human traits to AI for decades to come.


These programs are not replacement workers coming for our jobs, though this is a narrative that serves the interests of many of their purveyors. They are tools we will use, hopefully to allow us to perform our work in new and better ways.


UP FOR ADOPTION


Something else we may have misjudged is the rate at which this new form of AI will be adopted and incorporated into business practices. For several years now, developers, evangelists and social media influencers have breathlessly assured us that anyone who doesn’t adopt these new tools immediately will be left in the dust. While I believe in progress and recognize AI as a huge part of our future, every technology goes through much the same life cycle. Efforts to short-circuit or accelerate this cycle frequently lead to failure – or worse, a loss of belief in the technology itself.


We’ve seen this cycle of overhype occur again and again, both with technologies that aren’t ready for prime time and with individual products. It happened with the latest iteration of virtual reality (VR), aka the Metaverse, with NFTs, and with AI assistants such as Humane’s AI Pin and the Rabbit r1.


Over the past year, there have also been several reports that push back on the idea that business is rushing to adopt generative AI, though the data can be confusing. According to McKinsey, two-thirds of respondents surveyed said they regularly use generative AI, as did 75% of global knowledge workers surveyed by LinkedIn. 


Meanwhile, researchers at Goldman Sachs recently reported that companies have been taking their time to adopt generative AI and that the labor impact isn’t yet significant. While business investments in AI have skyrocketed in the past few years, only 5.4% of companies say they have used it to produce goods or services, based on U.S. Census Bureau data. According to The Economist, there’s been little evidence of increased productivity from generative AI, with real output per employee in median rich countries essentially unchanged.


Issues around reliability, implementation challenges and identifying suitable use cases are among the most commonly cited reasons for the slow adoption. 


It’s also likely that the amount of time needed to routinely double-check AI-generated output isn’t being fully considered. If we complete a task in half the time, only to need double that to ensure it’s been done properly, that’s not a real gain.
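
As a rough, back-of-the-envelope illustration of that point (the numbers are assumptions for the sake of the arithmetic, not measurements): if a one-hour task is drafted in half an hour with AI assistance but then needs twice that long to verify, the "faster" workflow is actually slower.

    baseline_hours = 1.0                  # doing the task entirely by hand
    ai_draft_hours = baseline_hours / 2   # the AI-assisted draft takes half the time
    review_hours = ai_draft_hours * 2     # but double-checking the output takes twice that

    total_with_ai = ai_draft_hours + review_hours
    print("Manual: %.1f h, with AI plus review: %.1f h" % (baseline_hours, total_with_ai))
    # Manual: 1.0 h, with AI plus review: 1.5 h -- no net gain once verification is counted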


ROLES VS. TASKS


When we talk about workers being replaced by AI, what’s frequently absent from the conversation is the difference between roles and tasks. In discussions around generative AI, what’s often touted are the huge productivity gains for this or that task. But employees aren’t tasks. They perform roles – in their jobs, in their departments, in their companies. These roles are made up of a multitude of tasks, typically more than can readily be listed. Importantly, many of these tasks remain outside the current ability of AI, and probably will for the foreseeable future.


Furthermore, many of these tasks are people skills — interacting with fellow employees, customers, vendors and other stakeholders, as well as providing services that directly involve caring, human interaction. Then there are the creative tasks, those duties that entail real creativity rather than the ersatz recombination that generative AI performs. This is really important to keep in mind as we think about the ways such technologies will and won’t disrupt the workforce.


This isn’t to say that generative AI is not a great tool, because it is. Used correctly, it can make us more efficient, spark and spur new ideas, and help us avoid our blind spots. But we need to use it correctly if we don’t want it to produce more problems than it solves.


IMPACT ON HUMAN SKILLS AND APTITUDE


One of the prime narratives around AI, particularly generative AI, is the idea that it will make us more efficient and improve our output. I’ve actually been a big believer in the idea that these next few decades will be driven by hybrid AI, the blending of human and artificial intelligence to leverage the strengths of each. But there are signs suggesting this might not be how the future of work actually develops.


In a recent study by Harvard University and Boston Consulting Group, researchers measured the impact of generative AI on consultant efficiency. For the study, 758 BCG consultants were given 18 realistic consulting tasks. Consultants performing common tasks with GPT-4 completed 12.5% more of them and did so 25% more quickly. However, for uncommon tasks, consultants using AI were 19% less likely to produce correct solutions.


Perhaps not surprisingly, the less skilled consultants saw a much greater improvement in their performance (43%), while the most experienced and skilled consultants saw the least gain (17%). Combine this with other studies showing that skill levels drop as workers become more reliant on AI to perform their tasks, and the downstream risks become apparent. Rather than producing better employees, could AI, combined with human nature, drive all of us toward the mean?


Will AI – particularly generative AI – make us better workers, or lazier, less skilled ones? Economics and improved efficiency metrics seem like straightforward calculations, but when you add in the idiosyncrasies of human behavior, the result may not be the one we expect.


CONCLUSION


The impact of AI on work will continue to be full of surprises over the coming decades. There is little doubt that artificial intelligence in its many forms will continue to dramatically affect our society, our workplaces and our lives. However, the pace at which this takes place and the level of disruption it causes should be regularly questioned throughout the course of its development. 


The strategies and policies we implement in order to prepare, educate and support our workforce need to recognize that there will always be things we can’t adequately anticipate in the present. What seemed like self-evident, commonsense decisions only a few years ago may be far more questionable in the light of present knowledge. We need to continually and actively challenge the narratives that we build around AI to ensure the course we chart for it reflects our current understanding of the technology. In other words, there’s still a lot of work ahead of us as we prepare ourselves for the future of work.


REFERENCES

  1. Ahmad, S.F., et al., “Impact of artificial intelligence on human loss in decision making.” Humanities & Social Sciences Communications. 2023. https://pubmed.ncbi.nlm.nih.gov/37325188/

  2. Bank of America Merrill Lynch, “Robot Revolution – Global Robot and AI Primer.” 2015.

  3. Carnegie, M. “Does Using AI Make Me Lazy?”, Wired. Sept 21, 2023. https://www.wired.com/story/does-using-ai-make-me-lazy/

  4. Dell’Acqua, Fabrizio, et al., “Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality.” SSRN, Sept 18, 2023.

  5. Frey, C.B. and Osborne, M.A., “The Future of Employment: How Susceptible are Jobs to Computerization?” University of Oxford. 2013.

  6. McKinsey, “The state of AI in early 2024: Gen AI adoption spikes and starts to generate value.” Aug 2024.

  7. Microsoft and LinkedIn, “2024 Work Trend Index Annual Report.” https://www.microsoft.com/en-us/worklab/work-trend-index/ai-at-work-is-here-now-comes-the-hard-part. May 2024.

  8. U.S. Bureau of Labor Statistics. https://www.bls.gov/data.

  9. The Economist, “What happened to the artificial-intelligence revolution?” July 2, 2024.


 

Richard Yonck is a Seattle-based futurist, author and keynote speaker, who helps organizations and audiences explore, anticipate and plan for future change. He’s the author of two books about the future of artificial intelligence: Heart of the Machine and Future Minds. He’s also written for a wide range of publications including Scientific American, Fast Company, Wired, GeekWire, World Future Review, The Futurist, Salon, and many others. He’s a member of the Association of Professional Futurists, the World Future Studies Federation, the National Association of Science Writers and a TEDx speaker.



