MDRAI → Master in Design for Responsible AI

What does it take to design AI systems responsibly and creatively?
We prepare professionals with the skills to anticipate risks, maximize opportunities, and implement AI systems ethically, responsibly, and imaginatively across organizations and society.

Visit to IAAC-Valldaura

Next academic year 26-27
Start — Late Sept  26
End — Mid June 27
Grad Ceremony — Late July 27
Course length — 400 hours
ECTS — 60 credits
Campus — Barcelona
Language — English
Format — Blended learning
In-person sessions in Barcelona
Kick-off:
1 week late Sept 26
Term 2 residency:
8 weeks between Feb-Mar 27 
Final projects + Grad show: 
1 week mid June 27
Online synchronous sessions 
Oct 26 — Jan 27
April — Early June 27
Schedule
Online:
Mon-Thu, 4 to 7 pm CET
In-person:
Mon-Thu, 4 to 8 pm CET
Dedication:
Mandatory sessions: 12 h/week
Asynchronous: 8 h/week
Tuition fee
Academic year 26-27
13.450€
plus 500€ in registration fees
Payment methods:
Single or multiple installments.
More info
Direction
Andrés Colmenares
Coordination
Martí Ramírez

Introduction

MDRAI offers an eclectic, rigorous and evolving context to reimagine how we design—and redesign—AI systems for more just, transparent, and sustainable futures.

A pioneering program for future-driven Responsible AI leaders

MDRAI is a pioneering blended learning program created in collaboration with IAM, a creative research and strategic design lab that works with organisations to develop responsible innovation using futures as design tools, with over a decade of experience developing collaborative learning experiences in tech & society.

A novel way of learning to think through AI, beyond the hype

The Master in Design for Responsible AI is focused on developing the critical skills and knowledge needed to design, develop and integrate AI systems in ethical, responsible and creative ways.

Pushing Responsible AI, beyond checklists

You will learn how to anticipate and mitigate the risks and negative impacts of AI, while also maximizing the opportunities and positive outcomes that AI systems can have for companies, governments and society at large.

Preparing professionals for systems-level change, beyond a black box

By exploring the evolving social, cultural, ecological, ethical and technical dimensions of different types of AI systems, participants learn how to design implementations, creative strategies, methodologies, literacy tools and responsible tech narratives.

Who is MDRAI for

Designed for future leaders of Responsible AI looking to navigate complexity and shape AI systems with responsibility, creativity, and cross-disciplinary insight.

For professionals seeking to navigate AI’s complexity

The Master in Design for Responsible AI is designed for individuals who want to engage deeply with the social, ethical, and technical dimensions of emerging technologies. MDRAI is for designers, technologists, strategists, and researchers who recognize that responsible innovation requires critical thinking, collaborative practice, and the ability to work across disciplines.

For career shifters and up-skillers in Responsible AI
The program is ideal for people looking to expand or transition their careers toward Responsible AI, whether their background is in design, business, engineering, data science, policy, or the humanities. Participants share a commitment to understanding how AI systems function, how they shape society, and how to guide their adoption responsibly in business or public-sector environments.

For working professionals who need flexibility and depth

With its blended format, MDRAI supports those balancing professional commitments while seeking rigorous, community-centered learning. Weekly online sessions and in-person residencies create a dynamic rhythm that fosters continuous engagement, cross-disciplinary collaboration, and hands-on experimentation with peers from around the world.

For future leaders of responsible and equitable tech

This program is for individuals who want to go beyond principles and checklists to create tangible, value-driven impact with AI systems. It is designed for people motivated to design, critique, and govern AI systems with accountability and equity in mind—preparing them to influence how AI is imagined, built, and deployed across diverse sectors.

Faculty

more faculty

Lauren Benjamin Mushro, PhD

Sapien AI, Museum of Science, Aspen Institute

Advisor and course leader on AI Governance

Ariel Guersenzvaig, PhD

Elisava

Leads course on Ethics of Technology and Design at MDRAI and AI Ethics and Philosophy at MAIAD.

Dasha Simons

University of Amsterdam & IBM

Leads course on AI Governance and AI Ethics in corporations

Andrés Colmenares

IAM

Program director and facilitator of multiple modules

Ayşegül Güzel

AI of Your Choice

Leads Skills Lab on risk and impact assessments

Kasia Odrozek

UNESCO

Advisor and lead of AI Ethics and Impacts of AI Systems

Leandro Ucciferri

Ranking Digital Rights

Leads seminar on human and digital rights

Buse Çetin

AI Forensics

Leads course on AI Policy

Jillian Powers, PhD

Slalom

Leads course on Responsible AI Implementation Strategies

Eryk Salvaggio

University of Cambridge

Leads courses on Critical AI

Martín Pérez Comisso, PhD

Universidad de Chile

Leads course on socio-technical systems

Caroline Sinders

Convocation Design + Research

Leads course on human rights centered design

Abdelrahman (Abdo) Hassan

Decathlon

Leads courses on decision intelligence and critical making

Katrin Fritsch

Green Web Foundation

Guest lecturer and advisor on Environmental Impacts of Tech

Approach

Embracing complexity through collaboration

The MDRAI approach has a strong focus on transdisciplinary and collaborative learning, offering participants a unique context to develop the skills and capabilities required to understand and analyse complex topics while specialising in the design and implementation of strategies, methodologies and decision-making techniques for Responsible AI systems in business or public-sector organizations.

Bridging theory and practice in responsible tech

The curriculum is designed to create a flow between theory and practice through creative research, critical storytelling and strategic design, enabling participants to both explore and understand the complex social, ethical, and technical dimensions of AI, and to translate these insights into clear, impactful narratives and projects that contribute to the development of Responsible Tech, engaging with diverse audiences.

Designing as decision-making 

The program frames design as a decision-making practice, where every choice in the research, strategy, or implementation of AI systems carries ethical, social, and technical implications. Participants learn to approach design systematically, considering trade-offs, anticipating risks, and embedding values throughout socio-technical systems, ensuring that decisions are informed, accountable, and aligned with the principles of Responsible AI.

Pushing Responsible AI beyond checklists

MDRAI approaches Responsible AI as an active, practice-based discipline rather than just a static checklist of principles. We emphasize continuous engagement, reflection, and adaptation, integrating ethical, social, and technical considerations into every stage of design and deployment. Participants learn to navigate complexity, make informed trade-offs, and translate abstract guidelines into tangible strategies, systems, and interventions that have real-world impact.

Internet Tour by Mario Santamaría, MDRAI + MDEF 

Format

MDRAI is a blended learning program, offering flexibility in terms of location and schedule while maximizing learning outcomes, collaboration, and the capacity to produce meaningful, responsible AI design work.

MDRAI adopts a blended‑learning format that intentionally merges the flexibility of online sessions with immersive in-person engagement. Delivered over nine months (end of September until mid-June), the program allows part‑time study, making it ideal for working professionals: it requires a commitment of approximately 20 hours per week, 12 of which are dedicated to synchronous sessions from Monday to Thursday (16:00-20:00 CET). This flexibility supports diverse lifestyles and enables learners to balance professional commitments with academic growth.

Each week participants engage in synchronous online sessions that foster continuous engagement, peer collaboration, and cross-disciplinary exchange regardless of location. This mix of modalities can increase engagement, motivation, and knowledge retention compared with traditional formats. Online resources remain accessible and learners can revisit materials on demand, supporting deeper understanding and long-term learning.

The blended format is complemented by three in-person residencies in Barcelona, offering immersive collaboration and hands-on work. The kick-off week at the end of September sets the foundations for the collaborative learning journey; an 8-week residency in February-March enables weekly workshops and design sprints for final projects, running Monday to Thursday (16:00-20:00 CET); and a final week in mid-June focuses on final project presentations and participation in Elisava’s Grad Showcase.

By combining the scalability and convenience of online learning with the richness of face‑to‑face interaction, the program cultivates a strong sense of community, peer support, and shared creative research culture. Hybrid learning models like this foster better peer interaction and communal learning, bridging the advantages of remote access with the social and motivational benefits of in-person engagement.

Upcoming events

Jan 21 & 22, 2026  

Conference & workshops

prompt:UX 2026

Wed, Feb 11, 2026

Masters’ Talks

Karel Martens & Thomas Castro

Unbound

February 16 — 20, 2026  

MIWW workshop

Tereza Ruller, The Rodina

The Synthesized Self

Mon, Mar 2, 2026

Beyond Sessions 

Gaston Welisch

Augury Birdwalk

Program structure

1 Creative Research & Imagination Collaboratory

In this module we investigate and analyze complex topics related to the sociological imagination of AI using creative, collaborative, and intuitive methods. Through reflective critical thinking, you will examine the narratives that shape technological futures and develop responsible alternatives, encouraging pluralistic visions of AI that question hidden assumptions.

1.1
Better metaphors of AI

In this track we develop a collaborative research project to investigate how metaphors influence the ways AI is understood, communicated, and designed. Participants explore past and present social meanings of AI metaphors to imagine alternative ones that align with more equitable, ecologically aware, and culturally grounded visions of the concept of artificial intelligence.

1.2
Digital garden(ing)

In this course we develop the practice of cultivating a personal online space for ideas to grow, evolve, and interconnect over time. Through hands-on exercises, collaborative exploration, and reflective writing, participants learn how to organize knowledge, build creative workflows, and design digital ecosystems that support curiosity, learning, and long-term thinking.

1.3
Read/Write club

In this course we engage with readings that critically examine key underlying concepts of AI through artistic, cultural, and philosophical lenses. Participants write essays individually and collaboratively, exploring how artists and thinkers interrogate AI’s aesthetics, politics, and impacts in social imaginaries. Discussions and shared reflections deepen understanding and foster creative, critical communication of insights.

2 Decoding AI Systems

In this module we provide a multidimensional understanding of AI as complex systems. Through socio-technical analysis, temporality studies, critical AI investigation, and business-focused courses, participants explore AI’s capabilities, limitations, and impacts, developing foundations for responsible adoption, decision-making, and explainable communication across diverse contexts.

2.1
Introduction to socio-technical systems

In this course we explore AI from a socio-technical perspective, examining the design, functioning, use, and lifecycle of technology. Through reflection and analysis, participants consider the diverse social, cultural, and technical dimensions that shape how these systems operate and impact society.

2.2
Temporality of AI

In this course we explore our relationship with time as a hidden axis of AI, design, and society. Participants examine multiple temporalities, trace AI’s material and social rhythms, and learn from afro-diasporic and Indigenous perspectives. Through reflection, case studies, and speculative exercises, participants develop tools for integrating temporality into responsible, decolonial, and pluriversal AI design.

2.3
AI Fundamentals for Business: Decision Intelligence

In this course we develop a grounded literacy of human and artificial intelligence, tracing AI’s historical, social, and political roots. Participants reframe intelligence as co-decision-making between humans, data, and institutions, exploring the concept of decision intelligence, and applying ecosystem thinking to critically navigate technological infrastructures, power, and bias, fostering responsible decision-making in complex contexts.

2.4
AI Fundamentals for Business: AI adoption in corporations

In this course we examine how corporations adopt, govern, and debate AI in practice. Through case studies, debates, and experiential activities, participants develop critical insights and practical strategies for responsible AI adoption in business contexts.

2.5
AI Fundamentals for Business: Trends in AI Implementation

In this course we explore AI as an emerging economic force, examining narratives and movements shaping business, society, and climate futures. Using project-based methods, participants analyze trends, critique assumptions, and develop counter-scenarios. Guest sessions and hands-on exercises guide participants in translating AI narratives into responsible business cases with risk analysis and planetary KPIs.

2.6
Critical AI Studies

In this course we critically examine Large Language Models and Natural Language Processing, exploring their technical, historical, and social dimensions. Participants learn to demystify AI systems, analyzing their infrastructure and social impacts, and situating technologies within broader cultural and historical contexts, developing a foundational practice in critical, socio-technical AI analysis.

3 Analyzing Impacts of AI

In this module we equip participants to analyze social, cultural, and ecological impacts of AI systems through ethics, human rights, and justice lenses. Combining conceptual frameworks, risk analysis, and other practical skills, participants explore digital rights, epistemic justice, digital sustainability and anticipatory ethics, applying critical tools and methods to assess, audit, and test AI systems.

3.1
Frameworks for Ethics, Human Rights, and Justice

In this track we explore ethical, human rights, and justice frameworks for evaluating AI’s impacts across individual, societal, and planetary scales. Participants critically examine social, economic, environmental, and more-than-human dimensions of AI impacts to identify inequities, and develop anticipatory, responsible and intersectional reasoning around current and emerging applications of AI.

3.2
Perspectives on Impacts of AI

In this track we examine social, cultural, and economic impacts of AI across topics such as labor, welfare, culture, and democracy through a guest lecture series with leading experts coming from academia, business, arts, policy and tech.

3.3
Skills Lab

In this track we equip participants with critical and technical methods to assess and mitigate ethical, social, and environmental risks of AI. Participants design interventions and accountability tools to enhance transparency, fairness, and sustainability, applying anticipatory and systemic thinking to develop responsible innovation strategies for AI deployments.

4 Navigating AI Governance, Policy & Safety

In this module we develop a critical understanding of the legal, ethical, and social foundations of Responsible AI. Through lectures, case studies, debates, and workshops, students analyze global AI policy landscapes, AI ethics principles, and governance frameworks, with a focus on the EU AI Act, preparing them to assess and guide responsible AI practices.

4.1
Foundations of Responsible AI

In this course we explore what responsibility means in research and innovation around digital technologies. Participants reflect on and assess diverse socio-technical systems worldwide, applying frameworks for responsible design. Through discussion and analysis, they examine AI research and projects where responsibility is actively integrated as a guiding principle.

4.2
AI Policy and Regulation

In this course we study global and European AI policy initiatives, including the EU AI Act and Trustworthy AI guidelines. Participants analyze regulatory frameworks, debates, and case studies, developing critical insight into how policies shape ethical AI adoption, corporate practices, and societal outcomes, preparing them to navigate and influence AI policy landscapes responsibly.

4.3
AI Governance and Safety

In this course we focus on the design and implementation of governance, compliance, and safety strategies for AI systems. Through workshops, case studies, and collaborative exercises, participants develop tools to ensure transparency, accountability, and ethical alignment, translating legal and ethical principles into actionable practices for responsible and safe AI deployment in public and private contexts.

5 Designing for Responsible AI Implementation

In this module participants integrate responsible AI practices across the entire AI pipeline, from data collection to deployment and monitoring. Using diverse design methodologies, participants analyze and address ethical, social, and systemic risks, developing strategies for fairness, transparency, accountability, and inclusion in real-world AI applications.

5.1
Applied AI Ethics: Philosophy of technology in practice

In this course we introduce the philosophy and ethics of technology, exploring perspectives on technological neutrality, determinism, and normative design. Participants engage with ethical theories, from deontology to care ethics and Buen Vivir, and apply them through mediation analysis, impact assessments, and professional ethics exercises, developing skills to critically evaluate and responsibly enact technology in practice.

5.2
Applied AI Ethics: AI Ethicist’s Toolbox

In this course we guide participants to explore personal situatedness, organizational roles, and design-oriented strategies for scaling ethical practices. Through reflection, role-play, and hands-on prototyping, participants identify blind spots, leverage strengths, and develop practical approaches to embed responsible AI practices and inclusive, impactful ethics in real-world contexts.

5.3
Responsible AI and Design Business Lab

In this course we demystify AI definitions, frameworks, and practices, guiding participants through regulations and AI system lifecycles in corporate landscapes. Using exercises, case studies, and critical making, participants explore responsible AI, governance, evaluation, and sociotechnical imaginaries, developing practical skills to audit, design, and operationalize AI responsibly while navigating organizational dynamics.

5.4
Designing AI Governance and Strategy: Embedding Responsibility in Enterprise AI

In this course we equip participants with key skills to embed responsible practices into AI strategy and its implementation. Through case studies and hands-on exercises, participants learn to align AI deployment with ethical principles, build governance frameworks, manage risks, and integrate transparency, fairness, and accountability across the AI lifecycle.

5.5
Designing for Trustworthy AI

In this course we explore how design can advance Trustworthy AI by translating values into technology and material culture. Participants use imaginative, prototype-based ethical thinking to resolve value conflicts, operationalize principles like fairness, privacy, and transparency, and develop actionable strategies for AI decision-makers and development teams, moving beyond risk mitigation toward proactive value-centered design.

6 Critical Design & Media Lab

In this module we develop participants’ ability to synthesize and share knowledge critically, creatively and collaboratively. Through discussions, workshops, and editorial challenges, participants practice critical design and storytelling, engage with diverse perspectives, and cultivate awareness of societal, cultural, and ecological contexts, fostering an open, distributed, and interactive learning environment.

6.1
Experimental Media & Critical Practices

In this track we explore media as a site for experimentation and critical inquiry of AI. Participants use iterative prototyping, speculative design, and creative coding to interrogate cultural, social, and technological systems, developing outputs that challenge their assumptions of Responsible AI.

6.2
Collaborative Knowledge Ecosystems

In this track we co-create a distributed and open learning environment to process insights from other modules. Through group workshops, peer-to-peer mentorship, and cross-disciplinary projects, participants develop strategies for knowledge synthesis, collaborative storytelling, and reflective practice, fostering engagement with communities while connecting societal, ecological, and cultural contexts.

6.3
Media, Technology, and Societal Imagination

In this track we challenge how media and technology shape perceptions, values, and social futures. Participants critically analyze and prototype interventions that explore emerging socio-technical imaginaries, connecting creative practice to ethics, policy, and governance, while experimenting with formats that communicate insights to diverse audiences.

7 Final Project

In this module we guide participants in developing and communicating critical design tools for Responsible AI that contribute to the common good. Combining research, design, and critical reflection, participants develop transdisciplinary projects using diverse media and methodologies, receive iterative feedback, and present their work publicly, to demonstrate the skills, knowledge, and ethical practices gained throughout the program.

7.1
Phase 1: Research & Conceptualization

In this phase participants conduct research, explore contexts, and define project objectives, using iterative feedback to refine questions, frameworks, and methodologies that will guide the design and development of their final project.

7.2
Phase 2: Design & Prototyping

In this phase, participants translate research into actionable proposals through design fictions, tools, narratives, and media experiments. Participants apply transdisciplinary methods and creative techniques to develop prototypes, iteratively testing ideas while integrating ethical principles throughout the project.

7.3
Phase 3: Communication & Public Engagement

In this phase, we develop strategies to share projects with wider audiences. Participants craft compelling narratives, document processes, and present findings using diverse formats, from reports to video essays. Emphasis is placed on accessibility, public impact, and reflective dialogue, culminating in internal and public presentations with feedback from peers, tutors, and external experts.

Because we continually improve our programme and adapt to the professional realities of our teachers, we reserve the right to make changes to the content and the professors of the course.

Featured projects

Empirical Lab for Critical AI Literacy

Final project

  • Tatiana Pilnik
  • MDRAI 24

Winner of

GRAILS 2025

DAIRE

Final project

  • Patricia Amaro
  • Mela Gómez Mogollón
  • Netta Tzin
  • MDRAI 24

Alumni highlights 2025

Patricia Amaro ↗

Patricia is a global leader in eCommerce and digital transformation, known for redefining how Unilever, Reckitt, and Mars drive growth through technology. A WeQual EMEA Award winner, she has pioneered eB2B ecosystems, scaled platforms to hundreds of millions in revenue, and delivered major gains in sales and profitability. A Stanford and SingularityU alumna, she blends strategic foresight, people-centered leadership, and deep expertise in tech-enabled growth. She has joined the MDRAI faculty, bringing business perspectives on Responsible AI adoption strategies.



Tatiana Pilnik

Tatiana is a sociotechnologist, service designer, and researcher based in Brazil, working at the intersection of technology, social transformation, and business innovation. As former Head of UX & AI Strategy at Silverguard, she designed products, AI governance models, and stakeholder ecosystems that make invisible systems, such as cybercrime and gender gaps, visible and actionable. With experience spanning UX leadership, consumer intelligence, and large-scale research across Latin America and Europe, her work blends human insight, cultural fluency, and systemic thinking. After graduating, she was hired by Google and has published in outlets such as MIT Sloan Review Brasil.



Netta Tzin ↗

Netta is a product and business strategist specializing in AI-driven, data-centric products, with over a decade of experience bridging music, arts, and emerging technologies. Co-Founder and Head of Product at MySphera, she develops machine-learning solutions for artists, creators, and music ecosystems. Her multidisciplinary background spans industrial engineering, international management, and responsible AI design, alongside extensive work in startups, cultural innovation, and creative AI. A lifelong social activist, Netta’s work is driven by systems thinking, artistic curiosity, and a commitment to equitable technological futures.



Jennifer Simonds

Jennifer is a UX and Responsible AI design leader based in Berlin, integrating accessibility, ethics, and technology to build public-interest digital services. With over a decade of experience, she supports institutions including the Bundesdruckerei, KBA, and EU bodies in creating inclusive, compliant, human-centered systems. She is developing her final project, Trust by Design, into an independent consulting practice that helps organizations transform cybersecurity and AI governance into actionable, human-centered design.



Alumni highlights 2024

Gustavo Nogueira de Menezes

Gustavo is a temporality researcher from the Amazon rainforest and founder of Temporality Lab. Specialising in temporalities and decolonial perspectives, Gustavo helps individuals and societies reflect on change using ancestral and contemporary methods. Based in Amsterdam, he has collaborated with companies like Globo, Google, Natura, Netflix, Nubank, and Spotify. As an AI researcher he explores AI’s impact on time perception and Ancestral AI, and has joined the MDRAI faculty as a creative research & learning fellow.

Diane Ortiz-Macleod

Diane is a data scientist and artist working as a Responsible AI strategy consultant based in Canada, drawing on a technical and creative background across AI development, design, and project management. She has led global AI transformation initiatives and advises businesses, nonprofits, and arts organizations, including serving as a fractional COO for an AI-as-a-Service platform. Since completing the master, she has taught over 100 executive leaders in AI foundations, developing tailored curricula on risk mitigation, data privacy, model bias, transparency, and sustainability.

Marta Gierszewska

Marta is a UX leader, designer, and researcher based in Poland with over eight years of experience in digital product development. Currently Head of UX at Profitroom, she specializes in the intersection of UX/UI, Responsible AI, and booking technologies. She has a background in cognitive science, team leadership, and software testing, has led design teams on global projects for clients such as FIFA, and founded Poland’s first meetup at the intersection of art, science, and technology.

Monique Lemos

Monique is the founder and CEO of topofutures, a plural futures lab and consultancy focused on research, strategy, and new narratives across creative media. A Brazilian anthropologist, cultural strategist, and Responsible AI specialist, she works nomadically to stay attuned to global trends. With a background in production and cultural strategy and degrees in Cinema, Sociology and Politics, and Digital Anthropology, her research intersects generative AI, ancestral memory, Black imaginaries, social perceptions of technology, and data literacy.