
QUESTIONING THE PRIMACY OF TECHNOSCIENCE

Do advanced IT systems support or undermine humanity’s natural desire for positive, humanly compelling futures?


By Richard A. Slaughter



Few would deny that, during recent decades, successive waves of increasingly powerful technologies have emerged almost without warning. While, on the one hand, they’ve produced many useful products and services, on the other, they’ve not only disrupted many long-standing assumptions and ways of life, but also created entirely new hazards and dilemmas. 


This is particularly true of Information Technology (IT). This article, however, is one of a series that seeks to question the primacy of technoscience. In so doing it suggests that technoscientific innovation in general, and aspects of IT in particular, are neither ‘neutral’ nor ‘value-free.’ Rather, they are permeated by unacknowledged ideological and market-oriented imperatives. If humanity is to avoid the repressive, Dystopian futures now coming into view, such hidden factors need to be brought to full awareness where they can be more widely understood, evaluated and, where necessary, modified or replaced.


THE EARLY INTERNET

More than 30 years ago, early internet pioneers believed that they were creating a global capability for innovation that would enhance freedom and democracy around the world (Naughton, 2024a). What actually happened is that, following initial government-funded R&D, the emerging oligarchs pursued a series of radical innovations and patented them in conditions of extreme secrecy. This enabled them to invade “unprotected human space” for their own ends (growth, profit, power) long before anyone understood what was happening. What author Shoshana Zuboff calls “surveillance capitalism” soon became normalised. Policy settings were established within corporate hierarchies that were, and remain, entirely unequipped to make wise, long-term decisions in the wider public interest*.

* This process is described in detail in Zuboff (2019). Also see Slaughter (2020) for an in-depth review of this vital text.


FAILURES OF GOVERNANCE

The role and responsibility of any competent government is to define the ground rules and regulations required to sustain civilised life. Thus far, however, the U.S. government has failed to adequately manage or regulate IT innovations. Viewed from overseas, the primary driver of this incapacity appears to be that government autonomy was undermined by Neoliberal ideology, whose proponents had successfully lobbied for “small government” and the primacy of “the market.” Then, following 9/11, the government formed an alliance with Google for reasons of “national security,” which again restricted its ability to adequately represent the broader public interest (Taplin, 2017).


While the peak of Neoliberalism may have passed, the U.S. corporate sector still retains a questionable and unwise degree of autonomy regarding its own limited interests and their wider implications. This is essentially why projects of dubious social value, such as Large Language Models (LLMs) and so-called Artificial Intelligence (AI), are being single-mindedly pursued for economic returns despite the chaos and “boom and bust” outlook that accompanies them (Naughton, 2024b). The IT oligarchs became highly profitable in the short term but largely unaccountable for the wider consequences.

The point is not that Silicon Valley (SV) and its competitors are, in any sense, “bad.” Rather, it is that they all promulgate thin, one-sided and deeply problematic views of human life and the wider world (Cook, 2020). These views, for example, serve to obscure the distinction between genuine human needs and endlessly manufactured wants. While the myth of “technological neutrality” certainly did not begin in these places, it continues to provide a convenient cover story for a naive and often confused public.


RISE OF BAD ACTORS

Some of the most avid adopters of new technology keep a low profile in civil society but have expanded without limit in the dark spaces beyond it. For them, a “supercharged” internet driven by high-tech apps has enabled an explosion of criminality and danger. It is here, in the digital underworld of the Dark Web, that criminals of all kinds, the Mafia and countless secret military operations constantly work to scam, steal, threaten and undermine civilisation as we have known it (Glenny, 2009).


What this reflects is the sobering fact that “the Achilles Heel of any and all advanced technology is that it increases the power, reach and destructive capacity of ‘bad actors’ everywhere.” Hence, whatever ‘good’ positive innovation may bestow in one or more locations can be readily cancelled out by the enabling of destructive ends elsewhere. In a divided and fractured world this is a true dilemma and, as such, a recipe for disaster (Slaughter, 2023a).


HUBRIS, NEMESIS, DENIAL OF LIMITS

Contrary to the assertions of high-tech enthusiasts, concerns about the “perils of science-led progress” have not been absent over the last two centuries. From Thoreau’s Walden and the novels of H.G. Wells to the unstoppable rise of Dystopian science fiction, critics of technology-led development have offered many and varied expressions of a single key idea: one foreshadowed long ago in the ancient myth of Icarus (who burned when he flew too close to the sun) and shrewdly summarised for post-modern times in Brian Aldiss’ succinct definition of Sci-Fi. Namely, that the underlying “story” of this civilisation is that of “hubris clobbered by nemesis.”

Then, as futurists and others have documented, the mid-20th Century saw change processes accelerating, along with signature events and dilemmas. Among the former were the use of the first nuclear weapons, the spread of plastic and other pollutants around the world and early signs that global limits were being transgressed. Dilemmas accumulated as the full costs of power, growth, nature as object, and the denial of limits became unavoidable.*

* Yet a great deal of time, effort and money went into denying them anyway. See Slaughter (2022).


UNDERSTANDING TECHNOSCIENCE

More recently, however, fresh light has been shed on the human predicament that reveals new options for informed hope and effective action (Slaughter, 2023b). For example, King’s ground-breaking work on how technoscience is conducted makes clear how it operates as a covert ideology. It then becomes easier to consider strategies for returning the initiative to democratic review and decision-making. Among his suggestions, consider the following:

  • No technology is neutral. All technologies come equipped with a dense array of ideological assumptions, values and social interests embedded within them.


  • Technologies cannot be properly assessed and should certainly not be rushed into practice without careful evaluation of these hidden factors.


  • Technology interests and promoters actively avoid such issues, and strongly resist suggestions that early designs and design intentions be subject to thorough-going assessment.


  • Such interests like to keep things simple. For example, they like to represent the transition from analog to digital as a straightforward and largely beneficial process.


  • In reality this is a transition between eras: from the thousands of years within which human life, history, identity and civilisation itself developed to a very different reality (King, 2023).

A further implication that’s seldom mentioned is that the digital realm is fundamentally alien to humans, whose lives are necessarily grounded in the continuing physicality of a strictly analog world. What might be called “digital reality” lies, by definition, beyond human senses. It is accessible only via extensive high-end tech infrastructures owned and controlled by the world’s largest organisations (i.e., governments and corporations). The resulting power imbalance is radical and obvious, yet widely overlooked.*

*Slaughter (2023b) is an open-access text available for free download. It provides a short, readable summary of how the present situation came into being over recent decades. The conclusion also outlines solutions under way and available now. For a more in-depth account see Slaughter (2024), forthcoming in Futures.



AMBIGUOUS PRODUCTS, UNKNOWN RISKS

A truly enormous range of issues confronts us in this unstable context. Consider:

Few, for example, can have failed to notice that banks, other financial interests and mainstream media argue that analog money (coins, notes) should finally be abolished and all transactions moved over to digital systems (Wilson, 2024).


Little or no mention is made of the fact that digital networks play neatly into a corporate strategy for financial dominance and control. Nor do financial interests readily acknowledge that these very same networks are already under direct and continuing threat, not only from criminals but also from delinquent nation states.


Drones were initially portrayed as harmless tools for civic and recreational use. However, a familiar shift is currently underway as military applications accelerate the slide from human-controlled weapons to semi-autonomous and fully autonomous systems. This, coupled with facial recognition technology in the wrong hands, is as Dystopian as it gets.


Looking a little further ahead, we know that teams of scientists around the world are competing to design and produce the first quantum computers. The day that one is proven to work will, apparently, be known as “Q-day.” But, apart from the fact that they would “disrupt existing digital networks” and compromise online security, little attention is being devoted to understanding exactly what other risks and dangers are involved, let alone how they could be managed (Holland et al., 2024).


Then, at a deeper level, we also have a right to ask: what evidence is there that humanity is in a fit state to colonise and manipulate any part of the very foundations of physical existence? The same could be said of Nick Bostrom and other high-tech boosters who champion what they refer to as Artificial General Intelligence, or AGI (Bostrom, 2013).*

*They argue that it will benefit humanity, but the risks are existential, and few can be confident that they are in a position to fully evaluate the consequences.


BACK TO WHAT FUTURE?

One of the early temples at Delphi, Greece, has a portico with two prominently displayed injunctions. One simply says, ‘Know Thyself.’ The other: ‘Everything in Moderation.’ Though such propositions echo across the centuries, our over-confident, chronically out-of-balance culture seems to have lost sight of what they mean or why they were once viewed as significant.


It should no longer appear controversial to acknowledge that our obsession with “knowledge for power” continues its unprecedented assault upon the natural world, and upon our vulnerable selves. In this context, “business-as-usual” means allowing the owners of capital, and their raw, unfinished version of technoscientific progress, to lure us ever further toward overshoot-and-collapse.


TIME FOR REFLECTION

Rapid technoscientific innovation in a divided and unstable world cannot but undermine human prospects for liveable futures. It creates a continuing series of existential hazards that humanity, at its present stage of development, is ill-equipped to understand or manage. Rigorous, foresightful, multi-disciplinary evaluation and assessment, coupled with governance reform and determined conflict resolution, remain rare, but all are part of any long-term solution. One of the most effective actions that the U.S. government could undertake would be finally to assert its duty and repeal Section 230 of the 1996 Communications Decency Act.*

*This section of the Act frees digital platforms from any responsibility they might otherwise have for the material placed on them by users. It also means that they cannot be prosecuted for such material, regardless of how dangerous it may be. This single failure is at the root of much online abuse and dysfunctionality. The failure to act decisively has meant, in turn, that other nations have had to work even harder to catch up with, let alone begin to counter, a host of malign online abuses.


It’s time to pause, to take stock; not to follow blindly into the chaos and inhumanity of ever more technically compromised futures but, rather, to reflect, to better understand ourselves and each other, and to engage more fully with the broader spectrum of distinctively human capability and awareness. In so doing we can better appreciate that technology can serve multiple ends, some of which are best avoided entirely.* It is appropriate to leave the last words to Lewis Mumford who, long before the IT revolution began, understood what was at stake. He wrote:

Nothing that humans have created is outside their capacity to challenge, to re-mould, to supplant, or to destroy ... machines are no more sacred than the dreams from which they originated (Mumford, 1944).

*See Mumford (1944). Loeb (2021) offers a fascinating summary of what Mumford referred to as ‘the deal’ that societies make, often unconsciously, with technology, especially, but not only, in present times. Also see Le Guin (1986), Always Coming Home. Her novel provides a rich portrait of ‘The Kesh,’ who elected to place technology at the periphery and to focus more centrally on ritual and meaning. As the child of anthropologists, Le Guin offers an account that is richer and more grounded than most.


REFERENCES:


  1. Bostrom, N. (2013). Superintelligence. Oxford: OUP.

  2. Cook, K. (2020). The Psychology of Silicon Valley. Palgrave Macmillan. https://link.springer.com/book/10.1007/978-3-030-27364-4

  3. Glenny, M. (2009). McMafia: Seriously Organised Crime. London: Vintage.

  4. Holland, A., Graham, J. & Dalton, A. (2024). Up and atom. The Age, Melbourne, May 12.

  5. King, R. (2023). Here Be Monsters: Is Technology Reducing Our Humanity? Melbourne: Monash University Publishing.

  6. Le Guin, U. (1986). Always Coming Home. London: Gollancz.

  7. Loeb, Z. (2021). The Magnificent Bribe. Real Life, Oct 25. https://reallifemag.com/tag/syllabus-for-the-internet

  8. Mumford, L. (1944). The Condition of Man. New York: Harcourt Brace, p. 415.

  9. Naughton, J. (2024a). The internet is in decline – it needs rewilding. Guardian, May 5. https://www.theguardian.com/commentisfree/article/2024/may/04/the-internet-is-in-decline-it-needs-rewilding

  10. Naughton, J. (2024b). From boom to burst, the AI bubble is only heading in one direction. Guardian, April 14. https://www.theguardian.com/commentisfree/2024/apr/13/from-boom-to-burst-the-ai-bubble-is-only-heading-in-one-direction

  11. Slaughter, R. (2024). Human agency and the technoscientific dilemma: Contesting the role of technology in shaping our collective futures. Futures, in press.

  12. Slaughter, R. (2023b). Deleting Dystopia: Asserting Human Values in the Age of Surveillance Capitalism. University of Southern Queensland. Open access text. https://usq.pressbooks.pub/deletingdystopia/

  13. Slaughter, R. (2023a). Contesting technoscience for human futures. APF Compass, December. https://foresightinternational.com.au/wp-content/uploads/2023/12/Slaughter-Contesting-Technoscience-APF-Compass-Dec-2023.pdf

  14. Slaughter, R. (2022). Future-making against the odds. APF Compass, September. https://foresightinternational.com.au/wp-content/uploads/2022/09/Slaughter-Future_Making_Against_the_Odds_Reflections_on_the_LtG_Compass_Sep22.pdf

  15. Slaughter, R. (2020). Confronting a high-tech nightmare: Review of Zuboff’s The Age of Surveillance Capitalism. Journal of Futures Studies, 24(4), 92-104. https://foresightinternational.com.au/wp-content/uploads/2020/07/Slaughter_Confronting_High_Tech_Nightmare_JFS_24_4_2020.pdf

  16. Taplin, J. (2017). Move Fast and Break Things. New York: Little, Brown & Co.

  17. Wilson, B. (2024). Cash is dead. Why do we pretend it isn’t? The Age, Melbourne, May 12.

  18. Zuboff, S. (2019). The Age of Surveillance Capitalism. London: Profile Books.


 

Richard A. Slaughter completed his Ph.D. in Futures Studies at Lancaster University in 1982. He later became internationally recognized as a futurist / foresight practitioner, author, editor, teacher and innovator. During the early 2000s he was Foundation Professor of Foresight at Swinburne University of Technology, Melbourne. He currently works out of Foresight International, Brisbane, Australia and can be reached at: foresightinternational.com.au. Readers may like to hear two episodes of FuturePod (113 & 116) on the skewed narratives of affluent nations and avoiding a Digital Dystopia, at futurepod.org. His recent book, Deleting Dystopia, can be downloaded from: https://usq.pressbooks.pub/deletingdystopia/



