Presenting

Unmaking AI

Engaging Critically and Creatively with GenAI

A three-hour workshop for attendees of OzCHI 2024 held at the
University of Queensland, St Lucia, Meanjin/Brisbane

How can researchers engage with AI in creative and critical ways? Generative AI offers new approaches but also introduces significant social, cultural, political, and environmental impacts. Grasping these possibilities and problems is key. In this workshop, participants will be introduced to AI models, will see how other researchers use them, and will carry out hands-on “unmaking” activities using a custom card deck designed for experimentation and reflection. The workshop is intentionally “no tech,” requiring no devices, formal training, or prior knowledge of technical systems.

1 Dec 2024

1:00-4:30 PM

Room TBD

Beyond the Black Box

Generative AI (GenAI) models are rapidly being rolled out, disrupting industries and playing key roles in high-stakes areas. Yet while these powerful models have potential, they also raise new problems: harvesting artistic work without consent, internalising toxic values, and reproducing stereotypes, amongst others. Given these stakes, we urgently need to develop a critical understanding of the operations, limitations, and societal impacts of these models. Yet models, and AI more broadly, are often pervaded by an array of myths and misconceptions. The rapid advancement of GenAI presents an urgent challenge: how can we devise methods and tools to better understand these opaque systems?

Beyond Bias

While critical AI research has moved towards opening these black boxes in recent years, much of this work has focused on “bias” narrowly understood. Numerous studies have highlighted bias across gender, race, class, disability, and other categories, which translates into social harms as models are adopted and employed. In response, some models now include self-evaluations of bias in various forms or disclaimers around their use (e.g., Google Gemini, OpenAI’s ChatGPT). Yet this framing and its responses often result in merely tweaking parameters or applying makeshift “bubble gum and tape” fixes. Models are artefacts shaped by data curation, training setups, developer cultures, and business models: a result of decisions and forces that can be identified and understood. By tracing these decisions and forces, we seek to anchor model bias, harm, and risk within a more generalised analytical framework.

Towards “Unmaking” AI

How can we open up these black boxes and engage more substantively? In this workshop, we introduce a framework for “Unmaking AI” and a card-based toolkit for engaging creatively and critically with generative AI. Our Unmaking AI framework comprises four distinct components. Unmaking the Ecosystem analyses the values, structures, and incentives surrounding the model’s production. Unmaking the Context explores how users, communities, and specific problem settings shape AI usage. Unmaking the Data analyses the images and text the model draws upon, with their attendant particularities and biases. And Unmaking the Output analyses the model’s generative results, revealing its logics through prompting, reflection, and iteration.

Hello AI Card-Deck

To operationalise this framework in an accessible and practical way, we have developed a card-based design tool: Hello AI. Action cards provide concrete activities for participants. Reflection cards provide provocations and key questions for discussion. And Consideration cards aim to catalyse debate and further inquiry among users. For instance, participants may pose a thorny question to an LLM, reflect on disparities between the result and their expectations, and consider the implications of such claims for their discipline and society more broadly. Cards can be chosen and combined in many different ways, forming a flexible and enjoyable way to develop critical technical literacy. Together, the Unmaking AI framework and the complementary Hello AI deck structure activities and cultivate technical literacy. They “unmake” not only in the sense of unpacking the black box, but also of unravelling the misconceptions that continue to surround AI technologies.

Workshop Outcomes

This workshop aims to catalyse a community of researchers and practitioners interested in understanding, critically engaging with, and dismantling the black box that is generative AI, and in shaping discourse around it. Building on this workshop, we plan to develop a set of resources focused on Unmaking AI. These include an assortment of Unmaking AI design tools; a reading library and reference list; an Unmaking AI playbook containing guardrails, principles, and helpful techniques; and a catalogue of use cases that can inform real-world contexts and be instructive for both practitioners and researchers. These resources aim to provide scaffolding and support for HCI researchers and practitioners to engage critically, reflexively, and creatively with generative AI tools so we can better shape them.

Workshop Timetable

  • Introduction (30 min)
    A critical introduction to AI using the “unmaking” framework (unmaking the ecosystem, unmaking the context, unmaking the data, and unmaking the output), ensuring all participants are on the same page
  • Inspiration (30 min)
    Lightning talks from three researchers who describe how they’ve used AI in creative and critical ways in real-world projects 
  • (Afternoon tea break)
  • How not to use GenAI: a negative case study (20 min)
    Presents a fictional researcher who curates an exhibition on the “Australian Identity” using GenAI in problematic ways; workshop participants identify these issues.
  • Introducing the Hello AI card deck (20 min)
    Introduces the deck, its aims, and how to use it. Steps groups through two cards.
    “Research Suggests”: Ask an LLM a controversial question from your discipline.
    “Portrait Gallery”: Prompt a model to generate an “accurate” portrait of your research subjects using key adjectives.
  • Hands on with Hello AI (45 min)
    Participants choose three cards and carry out the Action using the tools provided (2 laptops with ChatGPT, 2 with EasyDiffusion). They then discuss their results, guided by the Reflection and Consideration cards.
  • Plenary (25 min)
    Each table reports back insights, findings, and any challenges. This feedback will be used to further refine the cards and develop resources (tools, readings, techniques) to support HCI practitioners to use AI in creative and critical ways.

Workshop Organizers

Luke Munn

Luke Munn is a media studies scholar based at UQ in Meanjin/Brisbane. His wide-ranging research investigates the social, political, and environmental impacts of digital technologies and has been featured in highly regarded journals such as Cultural Politics and Big Data & Society, as well as popular forums like The Guardian and The Washington Post. He has written six books: Unmaking the Algorithm (2018), Logic of Feeling (2020), Automation is a Myth (2022), Countering the Cloud (2022), Red Pilled (2023), and Technical Territories (2023). His work combines diverse digital methods with critical analysis that draws on media, race, and cultural studies. His recent work has pursued creative and critical engagements with AI technologies, including “The uselessness of AI ethics” in AI and Ethics, “Truth Machines” in AI and Society, and a chapter on “Digital Labor, Platforms, and AI” (2024).

Awais Hameed Khan

Awais Hameed Khan is a Research Fellow at the University of Queensland node of the ARC Centre of Excellence for Automated Decision-Making & Society (ADM+S). Awais is a design researcher and practitioner, who is interested in the democratisation of technology through participatory and user-centred approaches. His work focuses on designing mechanisms that enable users to have greater autonomy, agency, and control over the systems they use. Awais has a PhD in Human-Centred Computing and Design, Master of Interaction Design, and Bachelor of Business Administration (Hons.). His research interests include: service design; digital development; data trails and digital privacy; design methods and practices; social and tangible computing; speculative design; and new and emergent technologies. He has published and presented research on these topics in leading international HCI and design research venues.

Danula Hettiachchi

Danula Hettiachchi is a Lecturer at the School of Computing Technologies, RMIT University and a researcher interested in Crowdsourcing, Social Computing, Responsible AI and Human-Computer Interaction. Danula is an Associate Investigator at the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S). He has served as a program committee member at a range of premier conferences, including as an Associate Chair at CHI. Danula has co-organised several academic workshops at CHI and CSCW.

Samar Sabie

Samar Sabie is Assistant Professor at the Institute of Communication, Culture, Information and Technology at the University of Toronto where she directs the Open Design Colaboratory. Her research examines how the diversity of urban communities requires re-examining our normative design methods, and how work in other fields such as STS, political philosophy, and sociology could help us re-operationalize these design methods in more just and inclusive ways. She is a co-editor of the ACM TOCHI special issue Unmaking & HCI: Techniques, Technologies, Materials, and Philosophies Beyond Making (2024) and was a co-organizer of the 2022 and 2024 CHI workshops on unmaking.

Lida Ghahremanlou

Lida Ghahremanlou is an Affiliate of the ARC Centre of Excellence for Automated Decision-Making & Society (ADM+S) from Microsoft. With over 10 years of experience in academia and industry, Lida is an AI Researcher and Data Scientist Lead at Microsoft, where she utilises LLMs for data analytics of employee experience surveys. She has a PhD in Computer Science from RMIT University, and is a member of the RMIT Industry Advisory Board for the Center for Industrial AI Research and Innovation. Lida also collaborates with Western Sydney University as a Research Partner Investigator.

Saarim Saghir

Saarim Saghir works in Strategy at Google, where he focuses on solving complex business problems using technology and the latest AI breakthroughs. Saarim has more than 10 years of experience working in strategy across the technology, consulting, development, and consumer goods sectors. He has an insatiable curiosity for exploring novel ways in which technology can be used to support users. He is captivated by the quest to make using technology feel like an effortless extension of our human selves.

Nicholas Lambourne

Nicholas Lambourne is a senior machine learning engineer at Canva, where he is part of its ML Platform group. His role encompasses facilitating the prototyping of machine learning applications by more than 100 machine learning professionals around the globe, whose work reaches more than 150 million customers worldwide. He holds degrees in finance, psychology, and computer science from The University of Queensland, where he also previously served as a senior research assistant in the Human Centered Computing lab, working on automatic speech recognition applications. Nicholas’s past research has also included work at the intersection of automata theory and quantum computing.

Liam Magee

Liam Magee is Professor of Education Policy, Organization and Leadership at the University of Illinois and an Associate Investigator in ADM+S. Encompassing digital, media and urban studies, his research examines how digital technologies reshape conditions of knowledge, social relations and cultural form. His books include Towards a Semantic Web: Connecting Knowledge in Academic Research and Interwoven Cities. He has co-authored articles for Futures, Big Data & Society, and Geoforum. His current research investigates how AI works across different scales of human subjectivity, social stratification and geopolitical organisation. He has contributed to studies of intersectional bias and cultural understandings of AI, and techniques for analysing AI via interviews, media analysis and code experiments.