Læringsseminar 4: Hacking By Prompt

(This session will be held in English)

  • Timing: 2 sessions, 1 hour and 20 minutes each 
  • Level: Open to everyone. No prior AI knowledge is needed, but experts are welcome! (We might consider grouping participants with different levels of expertise into two groups, but for now I assume each session will include participants with varied expertise.)
  • Learning Objectives: Gain a better understanding of how to use AI tools creatively for evaluation, including conversational AI (e.g., ChatGPT, Claude), image generation, and voice recognition technologies. Develop deeper insight into AI’s inherent biases and the knowledge base from which AI derives its answers, in order to critically assess the validity and limitations of AI responses in evaluation contexts. Remain aware of the ethical considerations involved in using these technologies responsibly.
  • Approach: The sessions will be very interactive. They will revolve around conversations with AI, alternated with some practical case studies. Throughout the session, I will provide informal guidance to help participants refine their questioning techniques and deepen their analysis of AI responses. At strategic points, short interludes (brief presentations or videos) will highlight key learning points. This dynamic format aims to blend hands-on practice with targeted instruction, making complex concepts accessible and applicable for participants regardless of their prior experience with AI.

At a time when the full capabilities of AI are still being uncovered through trial and error, the best way to explore the potential of conversational AIs for our work is to “hack” them—to experiment actively. This experimentation requires a deep understanding of “AI personalities” and a keen awareness of the ethical and epistemological risks and challenges involved. Our sessions will be highly practical, and we will explore the potential of AI for evaluation through conversations. 

By directly asking AI, “How can you help us in the evaluation process?” we aim to

  • understand what AI can do for evaluation. 
  • enhance our skills by testing AI tools and practising prompting, hands-on. 
  • gain perspective: delve into the ethical implications of using AI, critically assess AI’s potential biases and influences, and question how to responsibly integrate AI into evaluation practices. 

This exploration will also hold a mirror to ourselves, prompting us to reflect on our own practices and assumptions. We’ll question the fundamental nature of our work: Is AI simply likely to reinforce what we already do, or can it help us explore new, transformative approaches (such as participatory, feminist, and complexity-aware evaluations)? These discussions will help us determine how AI can best be used, not just as a tool for efficiency but as a catalyst for meaningful change in evaluation practices, and how we can help promote such responsible use through our individual and institutional practices. 

Bio

With over 20 years of experience in evaluating humanitarian, development, and peacebuilding projects across more than 50 countries, I’ve collaborated with a diverse range of organizations, from grassroots groups to the UN. I hold a deep belief that accountability and learning can coexist and that evaluation should be empowering, enjoyable, and transformative. I’ve been fortunate to work alongside commissioners and organizations that share this vision, turning each evaluation into a chance to pilot new approaches. Beyond evaluations, my work has involved developing M&E frameworks, methodologies, and training materials linking evaluative thinking to resilience, accountability, participation, and feminist approaches. I’m constantly seeking to infuse evaluation with fresh ideas and perspectives, from complexity-driven approaches to integrating cutting-edge technologies like artificial intelligence. I enjoy bridging traditional divides, such as melding participation with technology or blending theoretical analysis with practical application. I firmly believe in the power of effective communication in evaluation, having extensively explored various methods, including multimedia, real-time blogging, and cartooning, to ensure our insights lead to tangible change.