
WHAT WE OWE THE FUTURE, A MILLION-YEAR VIEW

Book review of William MacAskill’s What We Owe The Future: A Million-Year View


By Sami Makelainen, IFTF Affiliate



As foresight professionals, we are comfortable thinking about the future. Still, the majority of futures we explore tend to sit in the 10- to 20-year horizon, with the occasional 50- or 100-year project. Rarely do we delve, in any meaningfully concrete manner, into the really long-term future.


This is where William MacAskill’s book What We Owe The Future: A Million-Year View challenges us to think further — much further. MacAskill grapples with the profound question of our obligations to future generations. This thought-provoking book delves into the concept of long-termism, challenging readers to consider the far-reaching consequences of our actions not just decades, but centuries and millennia into the future. It's a philosophical exploration, but one that comes with immensely practical implications for foresight professionals and general readers alike.


It’s worth noting that effective altruism and long-termism, while thought-provoking, are distinct, yet related, concepts that also attract controversy. Critics argue that these frameworks can prioritize speculative future benefits over pressing current needs, potentially justifying actions that may not be prosocial or beneficial in practice. 


However, MacAskill presents a well-reasoned case for considering the long-term impacts of our choices and advocates for expanding our moral circle to include future generations.

MacAskill's approach is interdisciplinary, weaving together strands of moral philosophy, economics, and historical analysis. His central argument is both simple and challenging: 


Our decisions today have immense implications for the future, and we have a moral duty to consider the long-term well-being of those who will come after us. 


This responsibility, he argues, extends far beyond obvious issues like climate change and environmental stewardship — it encompasses every societal decision we make, from how we structure our governments to how we prioritize scientific research.


CHANGE WILL SLOW DOWN

One of the book's most compelling insights is the recognition that societies undergo periods of rapid change in values and norms, followed by periods of relative stability. 


MacAskill argues that we are currently living in an era of extraordinary fluidity. I tend to agree, as evidenced by the dizzying pace of technological, social, and cultural change that many find unsettling — a phenomenon Alvin Toffler famously termed "Future Shock" decades ago. We are living it now, and the speed of change is challenging — and one might argue surpassing — society’s capacity to deal with it.


This rapid rate of change is, however, not sustainable indefinitely. Technologists sometimes say that “the pace of change will never be as slow as it is today,” a view that is patently false. Eventually, our values and norms will ossify once more, becoming "locked in" for a period of time — how long that period lasts depends on many factors. Once this occurs, altering them can be exceedingly difficult, even if they have landed in a suboptimal or problematic place. The author explores several developments that could precipitate a global values lock-in, including the advent of artificial general intelligence (AGI).


If we achieve AGI, it could enshrine our current values for an extended period — possibly for an extraordinarily long time, given AGI's potential to surpass human intelligence. How might this happen? If an AGI decides that whatever values it has initially been imbued with are the gold standard, it might “decide” to never change them; and, like social media on steroids, it would have the power to spread those values globally, both covertly and overtly.


Given our history of past and ongoing moral errors (such as slavery, genocide, and the subjugation of women), it's highly likely that we are committing grave ethical missteps even now, although we may not recognize them as such. Locking in our present values under such circumstances would be catastrophic, perpetuating injustices and curtailing positive moral progress.


This realization underscores the urgent need to design institutions that can oversee the transition to a post-AGI world in a way that preserves a diversity of values and allows for beneficial moral evolution. Even amid the rapid changes we are going through now, we urgently need mechanisms to prevent a premature values lock-in and to ensure that the values we eventually do lock in are the best possible ones. This is a daunting challenge, but one that MacAskill argues is among the most crucial facing humanity today.


WE SHOULD GO TO SPACE, JUST NOT RIGHT NOW

One potential critique of the book's million-year perspective is that it may lead us to prioritize the wrong issues. For example, some may argue that space exploration should be a top priority, given the arguably beneficial goal of becoming a multiplanetary species. The logic is that by spreading to other planets, we can reduce existential risk and ensure the long-term survival of humanity.


Space exploration is indeed something MacAskill makes a moral case for — but, thankfully, he points out that this doesn’t mean we should pursue it now.


Prematurely focusing on becoming a multiplanetary species, with woefully inadequate institutions and social structures for stability, would be irresponsible and likely dangerous. Our current focus should be squarely on addressing fundamental challenges here on Earth, such as improving governance, reducing poverty and inequality, and developing crucial technologies. 


If we fail to get our house in order on this planet, spreading to others will only export our problems, not solve them.


Moreover, becoming a multiplanetary species is a much more distant prospect than many realize. The technical, economic, and logistical hurdles are immense, and we are likely centuries away from substantial human settlement on other worlds. In the meantime, we must ensure that Earth remains habitable and that human civilization thrives here.


HOW RESILIENT IS KNOWLEDGE?

One point I found myself disagreeing with was MacAskill’s overly optimistic assumptions on the robustness of human knowledge. He claims there are reasons to believe that civilization would not collapse even if most people on Earth died due to events such as nuclear war, catastrophic climate change, or a pandemic. 


Further, MacAskill argues that even in the case of 99% of people perishing, “most knowledge would be preserved, in the minds of those still alive, in digital storage, and in libraries.”

 

To me, this fundamentally underestimates the complexity and depth of our specialization, and entirely misses the distinction between tacit and explicit knowledge, or “work as imagined” and “work as done.” I believe civilization’s knowledge is much more fragile than that; we have lost important knowledge a number of times, sometimes taking centuries or millennia to rediscover it. In the event of a dramatic simplification of our civilization, this is likely to happen again. And our civilization now relies on many more pockets of deep specialization, each maintained by only a handful of experts and sometimes dependent on single physical points of failure.


For example, Spruce Pine, North Carolina, holds the only two mines in the world that supply the high-purity quartz needed for semiconductors and solar panels. If something were to happen to those mines, it would end computer chip manufacturing as we know it. Humans could eventually adapt, but it would be painful for several years (as Ed Conway describes in Material World).


Much of our most important knowledge is tacit or embodied, residing in the skills and experiences of practitioners rather than in static records. If we lose the people who understand and can use this knowledge, it can be incredibly difficult to recreate, as the history of lost technologies demonstrates; and for many deeply specialized skills, we have far fewer practitioners than most realize, even globally. Ensuring the continuity of knowledge across generations is thus another key long-termist challenge, but one that isn’t given enough thought in this book.


CASE FOR LONG-TERMISM

Despite these caveats, What We Owe The Future is ultimately an optimistic book, and one I would recommend reading. It encourages readers to expand their ethical circle of consideration to include future generations, and to make choices that positively shape the long-term trajectory of humanity. At the same time, the book does not shy away from the immense complexity of this task, acknowledging the difficult trade-offs and uncertainties involved. But merely starting to expand your horizons can result in better thought-out ideas and actions.


While the book's primary focus is on the long term, it does also offer some short-term, tactical guidance for personal life choices. The advice centers on the importance of exploration and iteration — trying out different paths, learning from the results, and adjusting course as needed. This approach, it is argued, is more likely to lead to fulfillment and positive impact than rigid, long-term planning. Amidst the changes we are going through at present, it’s hard to disagree with this position.


In an era of rapid change and uncertainty, adaptability is key. By cultivating a mindset of experimentation and continuous learning, individuals can navigate the challenges of the present while keeping an eye on the long-term future. This means being open to new ideas, seeking out diverse perspectives, and being willing to change one's mind in light of new evidence.


The writing is engaging and accessible, but the subject matter is weighty and thought-provoking; readers should come prepared to grapple with challenging ideas and to question long-held assumptions.


Actionable “Now what?” questions often elude books of this kind, but MacAskill provides both societal and personal calls to action, giving readers concrete ways to start applying long-termist thinking in their own lives and communities. Whether it's choosing a career path that aligns with long-termist goals, supporting organizations that prioritize future generations, or advocating for policies with long-term benefits, there are myriad ways to put these ideas into practice.


What We Owe The Future is a significant work that makes a compelling case for considering the long-term consequences of our actions. By expanding our moral horizons to encompass future generations, we can make better decisions today that positively shape the trajectory of human civilization. The book challenges us to be good ancestors (which incidentally makes Roman Krznaric's The Good Ancestor a good companion read) and to steward the world and its resources with the interests of future generations in mind.


It is a reminder that our legacy is not just what we achieve in our own lifetimes, but what we set in motion for centuries and millennia to come. In a time of great uncertainty and rapid change, this perspective is more important than ever.

 

Sami Makelainen brings to IFTF expertise in strategic foresight and the ethical operational integration of emerging technologies. His journey in the tech world started with the development of online commerce and banking platforms in the early 1990s, progressing through significant roles in the mobile industry at Nokia and leading the Strategic Foresight practice at Telstra Corporation in Australia.


At IFTF, Sami is involved in a number of research projects, serves as an instructor of Foresight Essentials, and is co-creator and instructor of the Three Horizons of AI course.


Sami holds an MSc in Computer Science, and his broader engagements include a role as Senior Industry Fellow at RMIT FORWARD and involvement with the University of Melbourne's Centre for AI and Digital Ethics (CAIDE). Through his consultancy Transition Level, Sami continues to influence the field of AI governance, helping organizations worldwide navigate the evolving tech landscape with foresight and responsibility and guiding them to implement Generative AI without accidentally creating a SciFi dystopia in the process.


Sami is an extreme learner and an avid reader, with interests ranging from aviation to resilience engineering. His current personal research project is on the theme of 'managing collapse' – a study into the resilience, adaptability and massive change of systems in the face of potential societal, economic, or technological upheavals.



