Designing for AI and Closing the Innovation Gap

Discover why so many AI projects fail, how designers can close the innovation gap, and where real, buildable value quietly emerges when AI is treated as a practical, imperfect design material rather than technological magic.

During his talk at Domus Academy within the Disrupting Patterns Talk series, John Zimmerman challenged one of the most persistent myths surrounding artificial intelligence: that its impact depends mainly on ever-more powerful algorithms. For Zimmerman, a UX and service designer and Professor of AI and Human–Computer Interaction, the real bottleneck is not the technology itself but the way we design for AI.

He describes a deep “innovation gap” between what AI systems could do and what organisations actually build. AI, he argues, often fails not because models are weak, but because teams choose the wrong problems, ideate poorly, misunderstand costs and performance, and treat AI as a kind of superhuman magic rather than a flawed, narrow, but incredibly fast material.

The Talk

Zimmerman’s own trajectory is a sequence of early, concrete explorations of AI in everyday life: tools for family coordination, adaptive environments that respond to visitors and family members, tools to detect depression, and agents that help elderly citizens with cognitive decline stay in their homes for longer.

These examples underline a key point: AI only becomes meaningful when it is grounded in real contexts and real data, and when it is carefully situated in human workflows. That alignment is never automatic. It is the result of deliberate design choices. In his words:

“Possibly the biggest problem is that non-AI people tend to think of AI as a superhuman intelligence. And I think this is largely driven by the media who love clickbait stories like Target knows teens pregnant before parents, AI outperforms doctors in diagnosing cancer, Google engineer says chatbots sentient. None of these are true. There’s a thin aspect of truth, but they are all tremendously misleading on what AI can realistically do. And the consequence of that, when non-AI people get together and try to think about what to make, is they tend to only think about really difficult tasks where you need great expertise, and you need nearly perfect model performance to create value. And so the majority of your opportunity space is simply never considered. You’re just basically focusing on the unbuildable things.”

Despite the abundance of AI talent in large companies, Zimmerman sees a surprising amount of “missed low-hanging fruit.” Starbucks does not use a simple inference to land habitual app-payers directly on the pay screen when they enter a store. Instagram does not automatically suggest the tags influencers repeatedly use. Car companies invest enormous resources in fully self-parking cars for a tiny niche of drivers, while ignoring simpler, universally useful features like reliably detecting whether a parking spot is large enough.

These are not failures of technology, but failures of priorities and imagination. Teams chase technically glamorous features instead of simple, high-value ones. That gap between hype and practicality is where Zimmerman sees the biggest opportunity for designers.
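
To make the “low-hanging fruit” framing concrete, here is a minimal sketch of the kind of simple inference Zimmerman has in mind for the coffee-shop example: flagging habitual app-payers so the app can open directly on the pay screen. The data shape, threshold and field names are illustrative assumptions, not anything Starbucks or Zimmerman specified.

```python
from collections import Counter

def is_habitual_app_payer(recent_orders, min_orders=10, min_app_share=0.8):
    """Illustrative heuristic: treat a customer as a habitual app-payer if
    most of their recent orders were paid through the app."""
    if len(recent_orders) < min_orders:
        return False  # not enough history to infer a habit
    payments = Counter(order["payment_method"] for order in recent_orders)
    return payments["app"] / len(recent_orders) >= min_app_share

# A customer whose last twelve orders were almost all paid in-app
orders = [{"payment_method": "app"}] * 11 + [{"payment_method": "card"}]
if is_habitual_app_payer(orders):
    print("Open the app on the pay screen")  # a wrong guess costs one extra tap
```

Even when the inference is wrong, the cost is a single extra tap, which is exactly the low-risk, moderate-performance territory Zimmerman argues teams overlook.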

The paradox is that AI success stories are everywhere—Amazon’s warehouse robots, Airbnb’s pricing predictions, predictive analytics in agriculture—yet statistics show shockingly high failure rates for AI projects, especially generative ones. For Zimmerman, the explanation lies in the broken relationship between three key groups: data scientists, designers and managers.

Data scientists tend to propose technically interesting but user-irrelevant solutions. Managers, influenced by sensationalist media narratives, overestimate AI’s intelligence and approve unrealistic projects. Designers are often brought in too late, after fundamental decisions have already been made. All of this is amplified by a widespread misunderstanding of model performance and of what a “90% accuracy rate” means in different contexts.

“We can get model performance to 90%. Is that good enough? Well, 90% in the US is like, that’s an A minus. Is that good enough? Seems pretty good. If you think about automatic speech recognition, most sentences have about 14 words. So at 90%, roughly one or a little more than one word in every sentence will be wrong. Is that good enough? It depends on the application. If I’m transcribing a court case where words matter, it is nowhere near good enough.”
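
A back-of-the-envelope calculation makes the point concrete. The sketch below, assuming a 14-word sentence and treating “90% accuracy” as per-word accuracy, estimates how many words come out wrong at different performance levels.

```python
def expected_word_errors(accuracy, words_per_sentence=14):
    """Expected number of wrong words per sentence, assuming each word is
    transcribed correctly with independent probability `accuracy`."""
    return words_per_sentence * (1 - accuracy)

for accuracy in (0.90, 0.95, 0.99):
    errors = expected_word_errors(accuracy)
    print(f"{accuracy:.0%} accuracy -> ~{errors:.1f} wrong words per sentence")
```

At 90%, that is roughly 1.4 wrong words in every sentence: tolerable for dictating a shopping list, unacceptable for a court transcript. The same number means very different things in different contexts.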

One of the most provocative claims in Zimmerman’s talk is that classic user-centred design does not work for AI if applied in the usual way. Normally, designers start by identifying user pain points, then search for possible solutions. But AI projects begin from a different starting point: the constraints of what can realistically be built with available data and models.

If teams first lock onto a user problem and then insist that AI must solve it, they often create solutions that can never work. Instead, Zimmerman argues, teams need to co-discover the overlap between user needs and AI capabilities, exploring both dimensions in parallel. That requires designers to work closely with data scientists from the very beginning, not as decorators at the end.

“If you do user-centered design, what you typically do? You go out and you pre-decide who the customer is going to be. You study them, you notice some pain points, some opportunities, you draw focus on, oh, if we could work on these, this would be transformative. This would be a huge opportunity. The problem with that approach is it almost never aligns with where AI can actually deliver value. So like you’ll find real problems, they’re just not AI opportunities. So the question really is, how do you co-find the intersection of an AI opportunity and a user need?”

Zimmerman and his research group tried to help designers by building a taxonomy of AI capabilities, hoping that more knowledge would lead to better ideas. It did not. Designers generated more “AI-flavoured” concepts, but most remained unbuildable. The real breakthrough came when they stopped focusing on how inferences are made and started focusing on what level of performance creates value.

 “Mechanisms is how AI makes an inference. Capabilities are what can I actually do with an inference. Just to pull in a little design theory, if you’re familiar with Donald Schön’s concept of reflection in action, which is that creative moment where the idea sort of comes out of you. In order to do this, you need a lot of tacit knowledge. And Schön used the metaphor of the jazz musician who knows their instruments so well they can create music while playing music, right? So it’s like we have to understand the materials.”

Borrowing a metaphor from Cassie Kozyrkov, Google’s former Chief Decision Scientist, Zimmerman describes AI as “an island full of drunk people”: exceptionally fast and capable of processing vast amounts of data, yet not particularly intelligent. This reframing prompts a crucial question: in which tasks do we genuinely need speed and scale, but not deep intelligence or perfect accuracy?

Instead of aspiring to perfection in impossible domains, teams should hunt for contexts where imperfect inference still produces clear benefits.

His work with ICU clinicians illustrates this. Rather than tackling extremely hard problems like optimising patient sedation—which would demand perfect data and near-perfect models—the team focused on something much simpler: predicting which medications will likely be needed the next day. This does not increase clinical risk, but it significantly improves operational efficiency.
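
A rough expected-value check, a sketch rather than anything from the talk, shows why this trade-off works: when a wrong prediction is cheap, modest accuracy already creates value, whereas a high-stakes task can remain a bad bet even with near-perfect models. All the numbers below are invented for illustration.

```python
def expected_value_per_prediction(accuracy, benefit_if_right, cost_if_wrong):
    """Rough expected value of acting on an imperfect inference."""
    return accuracy * benefit_if_right - (1 - accuracy) * cost_if_wrong

# Low-stakes: pre-stocking a medication that might be needed tomorrow
print(expected_value_per_prediction(0.80, benefit_if_right=5, cost_if_wrong=1))    # 3.8 -> worth building
# High-stakes: automating a sedation decision where errors are very costly
print(expected_value_per_prediction(0.98, benefit_if_right=5, cost_if_wrong=500))  # -5.1 -> still a bad bet
```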

Across multiple industries—from healthcare to insurance, software security, news and accounting—Zimmerman’s brainstorming framework showed that when teams are sensitised to simple AI capabilities and moderate performance, they consistently generate more valuable, buildable concepts.

The key ingredients are:

  • recognising AI as a design material with strengths and limits,
  • bringing technical, business and domain experts into the same room early,
  • and evaluating ideas not only on technical novelty, but on business value, user acceptance and ethical impact.

For designers, this implies a shift: from focusing on consumer desire alone to embracing service design and value co-creation, where usefulness, revenue and responsibility are considered together from the start.

Students’ Questions

Asked whether designers should fight for a seat at the decision-making table, Zimmerman is refreshingly direct: designers never truly had decision-making power in the first place. Strategic decisions live with executives who control budgets and can green-light or stop projects. Rather than chasing the fantasy of “owning the table,” designers should focus on increasing their influence.

That influence comes from speaking the language of business and technology—being able to talk credibly about cost, risk, value and feasibility. If designers can shape the criteria by which AI concepts are selected, they can have a profound impact on which projects move forward, including filtering out ethically problematic ones. Designers may not be in charge, but they can quietly steer which ideas survive.

On the question of whether AI will replace designers, Zimmerman is clear: designers are and will remain essential, but their core superpower is evolving. Where design once centred on envisioning new objects and interfaces, in the AI era the crucial skill becomes systemic thinking and facilitation.

Designers are the ones who can orchestrate conversations between engineers, product managers and domain experts, ensuring that ideation actually happens and that it explores the right territory. This is a social, relational role: designers bring structure to chaos, keep teams honest during the creative process, and champion quality ideas in a context that often favours speed over reflection. AI is unlikely to replace that kind of facilitation.

On how AI should fit into design research, Zimmerman suggests a layered relationship. AI has a broad but shallow grasp of human activities; designers, through research, build a deep understanding of specific contexts. The opportunity is to use AI to absorb large amounts of generic, obvious knowledge quickly, so designers can spend their limited research time exploring the subtle, nuanced areas where design has the greatest impact.

He compares this to the shift from analogue to digital video editing. Tools like Avid did not just make editing more efficient; they made it more creative, by freeing time and energy for experimentation. Similarly, once we move past the current messy phase, AI can act as a support that enables designers to be more exploratory, not less.

Asked what it means to treat AI as a design material, Zimmerman frames “material” in broad terms. Materials are not only wood, metal or plastic; they can also be music or film—mediums with characteristic behaviours, constraints and expressive possibilities. Wood expands, contracts, burns; it is perfect for some applications and poor for others.

AI, he says, is similar. The material is data and the inferences drawn from it. Algorithms and models are the tools that shape this material, like saws and chisels. The craft lies in developing a felt understanding of what certain kinds of data can do, where they are strong, where they are brittle. AI engineers are like highly skilled craftspeople working close to the raw material. Designers, in turn, assemble components and construct narratives of use. For students, the challenge is to find their place in this ecosystem and to think in terms of capabilities and constraints, not magic.

AI is evolving at an “inhuman” pace, with tens of thousands of research papers published every year. Yet every time a technical domain has become widely accessible, design has moved to a new layer of complexity and integration.

The only real constant, he says, is that what designers do will keep changing. For those who enjoy continuous learning and reinvention, this is an exciting prospect. For those seeking a stable, craft-like routine, it may be challenging. AI will keep disrupting tools and workflows, but the core question is whether designers can remain fluid and open, working skilfully with where we are now rather than clinging to fixed visions of the future.

Zimmerman’s talk at Domus Academy ultimately reframes AI not as a distant, sentient intelligence, but as a demanding, imperfect material that designers must learn to handle. Its promise lies less in spectacular breakthroughs and more in the quiet, often overlooked opportunities where AI, applied with care and systemic thinking, can genuinely improve lives.

Watch the full video to relive the talk.

FAQ – Frequently Asked Questions

1. Who is John Zimmerman?
John Zimmerman is a UX and service designer and Professor of AI and Human–Computer Interaction, known for his research on designing meaningful, buildable and ethical AI systems.

2. Will AI replace designers?
According to Zimmerman, the answer is no. AI automates tasks, but designers remain essential for systemic thinking, creative facilitation and aligning AI with real human needs and contexts.

3. Why do so many AI projects fail?
As Zimmerman explained during his lecture, many projects fail because teams choose unrealistic problems, misunderstand model performance, or lack collaboration between designers, data scientists and managers.

4. How is AI changing the design process?
AI shifts design toward finding the overlap between user needs and AI capabilities, making designers key facilitators who turn data, ethics and feasibility into practical solutions.

5. What does Domus Academy offer in the field of AI?
Domus Academy offers specialised training at the intersection of design and artificial intelligence through its Design x AI Master’s programme, the 2-Year Master of Arts in Design Innovation, and a range of AI-integrated workshops and applied projects. Across these programmes, students learn how to design AI-enhanced products, services and strategies, combining creativity with technological and ethical awareness. AI is embedded across design, business and innovation curricula, supported by industry collaborations.
