
What design leaders need to know about model behavior and UX risk

For all the value it adds, AI introduces a new kind of design constraint. 

Companies are rushing to add AI to their products, but most of them are hitting a wall. In 2024, 17% of companies walked away from most of their AI initiatives. By 2025, that number had hit 42%.

Central to this trend is the fact that traditional design rules don’t apply to a product that is probabilistic rather than deterministic. It’s much harder to design for a product that can change its mind. In traditional applications, buttons and UI elements do what they’re told. In an AI app, the model offers a best guess.

As design leaders, our challenge is to manage the gap between that guess and the user’s reality. If you’re looking to build or add AI to your products, here are five risks you need to design for:

1. The “certainty” trap

AI models are built to be confident, even when their output is completely wrong. If your UI displays an AI-generated answer with the same visual emphasis and weight as a hard data point, you might be setting your users up for failure.

Users will soon realize that the AI was confidently inaccurate or plainly wrong. They won’t just lose faith in that one answer; they’ll lose faith in the whole product. One in three Americans now uses AI every week, but only one in twenty actually trusts it. Nearly 70% won’t let AI take action without their explicit sign-off.

Design for degrees of confidence. If a model is only 70% sure of an output, the UI should reflect that, perhaps by presenting it as a suggestion or a starting point rather than the final result.
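
To make that concrete, confidence can be treated as an input to the presentation layer. Here is a minimal TypeScript sketch; it assumes your inference layer exposes a 0–1 confidence score, and the thresholds and copy are illustrative rather than a standard.

```typescript
type Presentation = "fact" | "suggestion" | "draft";

interface ModelOutput {
  text: string;
  confidence: number; // assumed: a 0-1 score from your inference layer
}

// Map a confidence score to a presentation mode; thresholds are illustrative.
function presentationFor(output: ModelOutput): Presentation {
  if (output.confidence >= 0.9) return "fact";       // safe to show like hard data
  if (output.confidence >= 0.7) return "suggestion"; // frame as a likely answer
  return "draft";                                     // frame as a starting point
}

// Produce copy the UI can render with matching visual weight.
function renderCopy(output: ModelOutput): string {
  switch (presentationFor(output)) {
    case "fact":
      return output.text;
    case "suggestion":
      return `Suggested answer: ${output.text}`;
    case "draft":
      return `Starting point to refine: ${output.text}`;
  }
}
```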

2. Designing for the “wrong” answer

In the past, an error was a bug. In the new AI world, a wrong answer is often just the model doing its job based on the data it has. The risk here isn’t the error itself so much as the dead end it creates for the user.

If a user gets an unhelpful AI response and their only option is to ‘try again’, you’ve failed them.

We should focus on iterative user experiences. This means giving users the tools to steer the AI. How can you add features to shorten, change tone, or add more detail to the AI’s output without forcing the user to start over from scratch?
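
As a rough illustration, steering can be modelled as refinement actions that append an instruction to the existing exchange instead of replacing it. In the TypeScript sketch below, `callModel` and the refinement wording are placeholders for your own chat client and product copy.

```typescript
type Message = { role: "user" | "assistant"; content: string };

// Illustrative steering actions; each one builds on the previous answer.
const refinements = {
  shorten: "Shorten the previous answer to about half its length.",
  changeTone: (tone: string) => `Rewrite the previous answer in a ${tone} tone.`,
  addDetail: "Expand the previous answer with more concrete detail.",
};

// Append the instruction to the history and get a revised reply,
// so the user never has to start over from scratch.
async function refine(
  history: Message[],
  instruction: string,
  callModel: (messages: Message[]) => Promise<string>
): Promise<Message[]> {
  const next: Message[] = [...history, { role: "user", content: instruction }];
  const reply = await callModel(next);
  return [...next, { role: "assistant", content: reply }];
}

// Usage: refine(history, refinements.changeTone("friendlier"), callModel);
```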

3. The human-in-the-loop problem

2025 was the year we moved towards ‘agents’ that take actions, like drafting an email or updating records. With that shift, the stakes get higher. Enterprises are rushing to deploy AI that can act on its own, yet Gartner expects more than four in ten of those projects to be shelved before 2028.

The biggest UX risk is the loss of agency. Users feel anxious when an AI agent does work on their behalf without a clear way of auditing it.

We should aim to design more transparent handoffs, where the AI does 90% of the work but the human retains 100% of the control.
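
One minimal pattern for that handoff is an approval gate: the agent proposes, the person decides. The TypeScript sketch below assumes a `requestApproval` hook wired to your own confirmation UI; the shape and names are illustrative.

```typescript
// A proposed action carries enough detail for the user to audit it
// before anything runs.
interface ProposedAction {
  summary: string;              // e.g. a one-line description of the email to send
  details: string;              // the full draft or diff, shown for review
  execute: () => Promise<void>; // the side effect, only run after approval
}

// Nothing executes until the user explicitly signs off.
async function runWithApproval(
  action: ProposedAction,
  requestApproval: (action: ProposedAction) => Promise<boolean>
): Promise<"executed" | "declined"> {
  const approved = await requestApproval(action); // blocks on the user's decision
  if (!approved) return "declined";
  await action.execute();
  return "executed";
}
```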

Building an AI-native product is a balance of speed and safety. You want the wow factor of automation, but you can’t afford the oops factor of a model that goes off the rails.

4. The “latency gap”

In traditional UX, we usually aim for sub-100ms response times. With large AI models, you’re often looking at 2 to 10 seconds of thinking time. That gap creates a cognitive disconnect.

If a user stares at a static loading spinner for 5 seconds, they lose their train of thought and their sense of momentum.

The fundamental risk here is the black box feel of the wait. If the user doesn’t know what’s happening during that “silence”, they assume that the system is broken.

To overcome this, we use progressive disclosure. Don’t wait for the final answer to show something. Stream the text as it’s generated, or show thought traces with brief updates like “Analyzing your Q4 spreadsheet” or “Cross-referencing market trends”.
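
Here is a small TypeScript sketch of that idea. A fake stream stands in for a real streaming API so the example is self-contained; the status copy is purely illustrative.

```typescript
// Stand-in for a real streaming model client.
async function* fakeModelStream(): AsyncGenerator<string> {
  const chunks = ["Here is ", "a summary of ", "your Q4 results..."];
  for (const chunk of chunks) {
    await new Promise((resolve) => setTimeout(resolve, 500)); // simulated latency
    yield chunk;
  }
}

// Acknowledge immediately, then let the answer build up in front of the user.
async function renderProgressively(
  onStatus: (status: string) => void,
  onText: (partial: string) => void
): Promise<void> {
  onStatus("Working on it..."); // no dead air while the model thinks
  let text = "";
  for await (const chunk of fakeModelStream()) {
    text += chunk;
    onText(text); // the UI updates with each partial result
  }
  onStatus("Done");
}

// Usage: wire the callbacks to a status line and the output area.
renderProgressively(console.log, console.log);
```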

5. Context drift in long-term use

AI models have a context window: a limit on how much information they can hold in a single session. As a user interacts with a tool over time, the model can start “forgetting” the original constraints or the specific style the user requested.

The risk here is inconsistency. A tool that starts off brilliantly but becomes forgetful or erratic 20 minutes into a session is a tool that gets abandoned.

To mitigate this, we implement persistent context bars: a UI element that shows the user exactly what the AI is currently remembering (e.g., Style: Professional; Data source: 2025 Reports). If the AI starts to drift, the user can see why and reset the memory without losing their work.
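
In data terms, the idea is simply to pin those constraints outside the chat history and re-send them with every request so they can’t scroll out of the context window. A minimal TypeScript sketch, with illustrative field names:

```typescript
// Pinned constraints live outside the chat history and are shown in the bar.
type PinnedContext = Record<string, string>;

const pinned: PinnedContext = {
  Style: "Professional",
  "Data source": "2025 Reports",
};

// What the context bar renders, so the user can always see what is pinned.
function contextBarText(ctx: PinnedContext): string {
  return Object.entries(ctx)
    .map(([key, value]) => `${key}: ${value}`)
    .join("  |  ");
}

// Re-inject the pinned constraints on every turn so they are never "forgotten".
function buildPrompt(ctx: PinnedContext, userMessage: string): string {
  return `Constraints:\n${contextBarText(ctx)}\n\nUser request:\n${userMessage}`;
}

// Usage: buildPrompt(pinned, "Summarize the Q4 numbers for the board deck.");
```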

Make design your AI differentiator, not just the model

AI is moving fast. But trust is built slowly, one well-designed interaction at a time.

These risks, from false confidence to dead-end errors, loss of agency, latency fog, and context drift, share a common thread: they’re not technical failures. They’re experience failures. And they’re entirely preventable with the right design decisions upfront.

The gap between a flashy demo and a product that scales is measured in edge cases. When we collaborate with clients, we don't just optimize for the happy path. We design for the hallucination, the 5-second silence, and the user who's one bad answer away from abandoning the tool entirely.

Contact us to learn more.
