Annette L. Gardner, PhD, MPH

How Are We Doing? Engaging in High-Quality, Effective Foresight Work


Image Source: Canva

A perennial concern of futurists and foresight practitioners is whether foresight work is having the desired impact.


In his 1981 article, “How to Tell Good Work from Bad,” Roy Amara argued that the futures/foresight field needed to develop criteria for judging the quality of its work if it was going to advance. He proposed general criteria for evaluating futures/foresight work, all of which are relevant today: conceptual explicitness, analytical clarity, and utilitarian objectives. And: does the work produce or guide action?[1]


Martijn van der Steen and Patrick van der Duin (2012)[2] argued that systematic evaluation can improve the quality and effectiveness of foresight, benefiting both clients and the profession. Annette Gardner and Peter Bishop (2019)[3], Alex Fergnani and Thomas Chermack (2020)[4], and others have echoed these concerns, questioning whether futures studies (and by extension, foresight) can be viewed as credible if the field is not willing to take a hard look at itself.


At one level, the barriers to evaluating foresight are intellectual: whether foresight is evaluable at all given its long time horizon, the lack of appropriate evaluation constructs, designs, and methods, the difficulty of attribution and proving causal linkages, and questions about how foresight evaluation fits with scientific inquiry and with theory building and testing.[5] A major challenge is identifying impacts, because foresight work, such as alternative scenarios and foresight targeting policy, can take years to manifest its effects. At another level, there is not a culture of evaluation, and some futurists and foresight practitioners do not see the value in evaluating their work, which may reflect a lack of understanding of why and how evaluation is carried out.


Despite these challenges and slow progress, the knowledge base of foresight evaluation is significant and growing. These advances include:


  • Two themed journal issues on foresight evaluation models and impacts, as well as published findings from evaluations of alternative scenario projects, national foresight programs, corporate foresight, environmental and horizon scanning systems, and foresight education, trainings, and workshops.

  • A robust body of work on evaluating foresight in the public sector, at the local and national levels, including identifying impacts.

  • Considerable work completed to develop impact schemas and typologies.

  • Foresight evaluation being incorporated as standard practice in mainstream foresight guides, such as the ‘Evaluating Impact’ section of the 2020 RSA report, A stitch in time? Realizing the value of futures and foresight.[6]

  • Foresight organizations developing internal evaluation capacity, such as The Finnish Innovation Fund’s (Sitra) comprehensive process, outcomes, and impact evaluation approach.[7]

  • Futurist and foresight associations, such as the World Futures Studies Federation and the Association of Professional Futurists (APF), committing to strengthening foresight evaluation.

  • Evaluators beginning to learn about foresight and its role in supporting strategy, adopting a forward-thinking mindset.

Increasingly, the evaluation arena is applying systems thinking and complex adaptive system approaches to evaluating transformative initiatives, as well as adding foresight methods, such as scenario planning and the futures wheel, to evaluation practice. The bridge between the two disciplines is being built from both sides.


Taking Action

In 2019, World Futures Review published a special issue on foresight evaluation that presented many examples of foresight evaluation (designs, methods, and lessons learned) in multiple arenas, including education, health, and government, as well as organizational and individual foresight thinking and behavior. However, while examples of foresight evaluation exist, Annette Gardner and Peter Bishop concluded that they were by no means the majority and that there was considerable room for improvement.[8]


In January 2021, the Association of Professional Futurists launched the APF Foresight Evaluation Task Force, a 24-member group of futurists, foresight practitioners, foresight evaluators, and evaluators. The overarching aim was to standardize the quality of foresight practice and support achievement of foresight aims. The Task Force was chaired by Annette Gardner, PhD (the author) and supported by former APF Chair and Board Member Jay Gary, PhD.


APF Foresight Evaluation Task Force Members

The Task Force engaged in a thoughtful examination of the state of foresight evaluation and the ways in which evaluation can address the challenges raised by foresight work, such as the long time horizon for detecting impact. It assembled evaluation constructs and methods to tackle many of the challenges posed by foresight and to strengthen the implementation and quality of foresight work. As a result, the Task Force developed a model of the benefits of including evaluation as a key component in futures/foresight projects: supporting practitioner excellence, improved program outcomes, and field building.


The process model describes the Task Force’s approach, outputs, and outcomes over the 18-month period.

Benefits of Evaluation in Foresight

The Task Force also explored the challenges and opportunities for expanding foresight evaluation capacity, including surveying the APF membership on existing evaluation capacity. It used this information to guide the development of useful and accessible evaluation capacity building resources.


Task Force participants were divided into four Work Groups, three of which drew on the findings from the APF survey on member evaluation capacity conducted by Work Group 1. The Groups developed specific resources that could be completed in a 12-month period: an online curated foresight evaluation bibliography, a foresight evaluation guide, and an online foresight evaluation toolkit. Each Work Group also developed a set of recommendations, many of them action steps that APF could undertake, as well as suggestions for refining Work Group products.


The findings from the Foresight Evaluation Capacity Survey administered to APF members in September 2021 confirmed an observation of the committee members: many foresight practitioners do not evaluate and/or do not know how to evaluate their futures/foresight work. However, more foresight practitioners said they do or can evaluate their work than anticipated. Respondents also identified specific foresight activities where they would like evaluation support, including alternative scenarios, scanning, and foresight trainings/workshops; these priorities guided development of the online foresight evaluation toolkit.




Work Group 2 developed a curated foresight evaluation bibliography to assess the state of foresight evaluation, identify resources that could inform Task Force thinking and activities, and create a resource that APF members could use in the near term. Informed by the survey findings, the Work Group collected a robust set of resources (peer-reviewed articles and reports) and populated an online bibliography in Zotero, an accessible platform.
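
For readers who want to work with such a bibliography programmatically, Zotero also exposes a public web API. The snippet below is a minimal sketch, assuming a public Zotero group library and the third-party pyzotero wrapper; the group ID and API key shown are placeholders for illustration, not the Task Force’s actual library credentials.

```python
# Minimal sketch: reading a curated Zotero bibliography through the
# Zotero web API via the third-party pyzotero wrapper (pip install pyzotero).
# GROUP_ID and API_KEY are placeholders, not the Task Force's real library.
from pyzotero import zotero

GROUP_ID = "1234567"      # hypothetical Zotero group library ID
API_KEY = "your-api-key"  # read-scoped key created at zotero.org/settings/keys

zot = zotero.Zotero(GROUP_ID, "group", API_KEY)

# Fetch top-level items (the bibliography entries themselves, not
# attachments or notes) and print a simple author/date/title line for each.
for item in zot.top(limit=50):
    data = item["data"]
    authors = ", ".join(c.get("lastName", "") for c in data.get("creators", []))
    print(f"{authors} ({data.get('date', 'n.d.')}): {data.get('title', '')}")
```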


Work Group 3 prepared three frameworks to help with the evaluation of foresight:

  • The challenges of evaluating foresight and practical actions to address them;

  • Different approaches to foresight evaluation for different foresight approaches; and

  • Questions that might be asked to evaluate foresight with different purposes that seek to achieve impact at different levels.

Based on the APF survey findings, Work Group 4 designed and developed an online evaluation toolkit. It drew on other online evaluation and foresight toolkits and designed a landing page that could be included on the BetterEvaluation website. Work Group members developed two subsections that provide guidance and resources for evaluating alternative scenarios and scanning.


Summary

The time has never been better for solidifying and building on the existing scholarship and practice in futures/foresight evaluation. Private sector and government clients want to see the value that foresight adds to decision-making and operations. Evaluation resources (designs, methods, and cases) are readily available, and foresight practitioners can realize the benefits of evaluating the quality of their foresight work, increasing its effectiveness and impact.


The APF Foresight Evaluation Task Force was an important vehicle for exploring foresight evaluation and applying this knowledge to capacity-building products and recommendations for building a culture of evaluation. Two lessons stand out. First, the APF survey showed there is no ‘one-size-fits-all’ approach to providing resources: to address the diversity in practitioners’ evaluation expertise, resources could be clustered by three skill levels of evaluation capacity. Second, APF is well positioned to integrate and leverage evaluation to support the professionalization of foresight and strengthen the knowledge base of futures studies, supporting its mission:


To advance the practice of professional foresight by fostering a dynamic, global, diverse, and collaborative community of professional futurists and those committed to futures thinking who expand the understanding, use, and impact of foresight in service to their stakeholders and the world.


Further Reading

If you would like to learn more about evaluation in futures/foresight and appropriate evaluation guides, I recommend the following resources:


  • BetterEvaluation Website, a knowledge platform and global community. https://www.betterevaluation.org/

  • Gary, Jay. “Foresight Training: Moving from Design to Evaluation.” World Futures Review 11, no. 4 (2019).

  • Georghiou, L., and Keenan, M. “Evaluation of national foresight activities: Assessing rationale, process, and impacts.” Technological Forecasting and Social Change 73 (2006): 761-777.

  • Johnston, R. “Developing the capacity to assess the impact of foresight.” Foresight 14, no. 1 (2012): 56-68.

  • Robinson, S., Professional Development Program Evaluation for the Win! How to Sleep Well at Night Knowing Your Professional Learning is Effective. (2018) Frontline Education. https://www.frontlineeducation.com/program-evaluation/

  • Rohrbeck, R., and Kum, M.E. “Corporate foresight and its impact on firm performance: A longitudinal analysis.” Technological Forecasting & Social Change 129 (2018): 105-116.


REFERENCES:

  1. Amara, R. “How to Tell Good Work from Bad.” The Futurist 15, no. 2 (April 1981): 63-71.

  2. Van der Steen, M., and Van der Duin, P. “Learning ahead of time: How evaluation of foresight may add to increased trust, organizational learning and future oriented policy and strategy.” Futures 44 (2012): 487-493.

  3. Gardner, A., and Bishop, P. “Expanding Foresight Evaluation Capacity.” World Futures Review 11, no. 4 (2019): 287-291.

  4. Fergnani, A., and Chermack, T. “The resistance to scientific theory in futures and foresight, and what to do about it.” Futures & Foresight Science (2020).

  5. Rowland, N.J., and Spaniol, M.J. “On inquiry in futures and foresight science.” Futures & Foresight Science (2020).

  6. Shallow, A., et al. A stitch in time? Realizing the value of futures and foresight. RSA, October 2020.

  7. Link to the Sitra approach: https://www.sitra.fi/app/uploads/2022/04/sitra-evaluation_framework_december_2021-006.pdf

  8. Gardner, A., and Bishop, P. “Expanding Foresight Evaluation Capacity.” World Futures Review 11, no. 4 (2019): 287-291.


 

Dr. Annette L. Gardner

Annette L. Gardner, PhD, MPH, is a political scientist and has worked as a professional futurist since 1985. She was a Senior Associate at the Institute for Alternative Futures and a foresight educator at the University of California, San Francisco. She is an experienced researcher and evaluator and has conducted evaluations in different areas, including health care reform, emerging models of health care, information technology, and advocacy and policy change initiatives. Additionally, she and Claire Brindis, DrPH, wrote the definitive evaluation guide, Advocacy and Policy Change Evaluation: Theory and Practice (Stanford University Press, 2017). An evaluation educator, Gardner regularly conducts courses and workshops on evaluation designs and concepts.
