Why Most AI Projects Feel Like Talking to a Very Smart Wall (And How AX Fixes It)
Published: Jul 29, 2025
Topic: Thoughts
I've been into AI for three years, but only started building full AI systems in the last few months. Multi-agent orchestration, custom fine-tuning, RAG implementation: the whole technical stack. But here's what nobody talks about in the AI builder community: most AI agents only achieve a 75.3% task completion rate, and the gap isn't technical capability. It's how users actually interact with those capabilities.
Last month, I watched a Fortune 500 client demo their new "AI-powered customer service agent." The AI could process natural language, access their entire knowledge base, and even integrate with their CRM. Technically impressive. But watching real customers use it? Painful. People would start a conversation, get confused about what the AI could actually do, repeat themselves three times, and eventually give up to call human support.
The AI wasn't broken. The experience was.
Everyone's Building AI Wrong
61% of U.S. adults have used AI in the past six months, but most of those interactions suck. We're in this weird phase where AI capabilities are advancing faster than our understanding of how to make them usable.
Most organizations aren't agent-ready, according to recent industry analysis. Companies are rushing to deploy AI without thinking about the human side of the equation. They build sophisticated multi-agent systems that can research, analyze, and generate—then slap a generic chat interface on top and wonder why adoption is low.
I see this constantly in the AI builder community. Someone will demo their agent that can "analyze market trends, generate investment reports, and execute trades autonomously." Then you try using it and realize you have no idea how to actually get it to do any of those things effectively.
The traditional evolution went:
First wave: Command-line interfaces (you had to memorize specific commands)
Second wave: Graphical interfaces (buttons, menus, visual workflows)
Third wave: Touch interfaces (direct manipulation, gestures)
Now: Conversational interfaces (natural language interaction)
But here's the problem: conversation isn't always the right interface for complex AI capabilities. Conversational interfaces are cheap to build, so they're a logical starting point, but you get what you pay for. Just because you can talk to an AI doesn't mean the conversation flow makes sense for what you're trying to accomplish.
Enter Agent Experience (AX)
This is where Agent Experience comes in. AX applies UX principles to AI agent design—but it's not just "UX for chatbots." It's a complete rethinking of how humans and AI systems work together.
After spending 6+ years designing institutional-grade interfaces (handling $4B+ in annual transactions), then switching to AI systems architecture, I've realized something: the same principles that make financial platforms intuitive for investment banks can make AI agents actually usable.
Traditional UX focuses on: buttons, workflows, visual hierarchy
AX focuses on: conversations, memory, learning, capability discovery
Same core principles, completely different application.
The Four Pillars of AX
1. Conversation Design
Not just "make it talk like a human." Design structured dialogues that help users discover capabilities progressively. Most AI agents today are like walking into a library where the librarian knows everything but you have to guess what questions to ask.
2. Context Preservation
AI agents need memory systems that work like human memory: not just storing everything, but knowing what's relevant when. 57% of AI interactions are classified as augmentative (enhancing human capabilities) rather than pure automation, which means the AI needs to understand the user's ongoing work context.
3. Progressive Complexity
Start simple, reveal advanced features gradually. Don't overwhelm users with every possible capability upfront. 54% of consumers don't care how they interact with a company as long as their problems are fixed fast, but they still need to understand what's possible.
4. Capability Scaffolding
Help users build mental models of what the AI can do. This is where most conversational interfaces fail: they assume users magically know how to prompt for complex tasks.
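Here's a rough sketch of pillar four in code (Python, with hypothetical capability names): rather than opening with a blank "How can I help?", the agent advertises a short menu built from what it can actually do.

```python
# Minimal sketch of capability scaffolding. The capability names and the
# project context are hypothetical; a real system would pull both from
# the agent's tool registry and the user's workspace.

CAPABILITIES = {
    "market_research": "research competitors and market trends",
    "competitive_analysis": "compare your product against rivals",
    "financial_projections": "build revenue and cost forecasts",
}

def opening_message(project_context: str) -> str:
    """Build a greeting that scaffolds what the agent can do."""
    menu = ", ".join(CAPABILITIES.values())
    return (
        f"I can see you're working on {project_context}. "
        f"I can help with: {menu}. Which area should we start with?"
    )

print(opening_message("Q4 planning"))
```

The point isn't the string formatting; it's that the opening turn teaches the user the agent's mental model instead of making them guess.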
Real-World AX in Action
Let me show you what good AX looks like versus the typical approach:
Typical AI Agent Interaction:
User: "Help me with my project"
AI: "Sure! What would you like help with?"
User: "Uh... analyze the data?"
AI: "What kind of analysis would you like?"
[User gives up after 3 exchanges]
AX-Optimized Interaction:
User: "Help me with my project"
AI: "I can see you're working on Q4 planning. I can help with: market research, competitive analysis, or financial projections. Which area should we start with?"
User: "Market research"
AI: "Got it. I'll research your top 3 competitors and current market trends. This usually takes 2-3 minutes. While I work, what specific metrics matter most for your Q4 strategy?"
The difference? The AI agent demonstrates knowledge of context, explains its capabilities clearly, sets expectations about timing, and keeps the user engaged during processing.
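One way to sketch that expectation-setting behavior (hypothetical function and step names, with simulated work): have the agent report progress at each step instead of going silent while it processes.

```python
# Sketch: keep the user informed during a long-running agent task.
# The step labels and callables are stand-ins for real tool calls.

def run_with_progress(steps, notify):
    """Run (label, fn) steps, reporting progress as the agent works."""
    total = len(steps)
    results = []
    for i, (label, fn) in enumerate(steps, start=1):
        notify(f"[{i}/{total}] {label}...")
        results.append(fn())
    notify("Done.")
    return results

# Example: two research steps, progress sent to the chat window.
steps = [
    ("Researching top 3 competitors", lambda: ["A", "B", "C"]),
    ("Summarizing market trends", lambda: "trend summary"),
]
run_with_progress(steps, print)
```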
The Numbers Don't Lie
Companies that get AX right see dramatic improvements. 75% of organizations have seen improvements in satisfaction scores post-AI agent deployment when they focus on experience design, not just technical capabilities.
Research indicates that the length of tasks AI agents can autonomously complete with a 50% success rate has been doubling roughly every seven months. But here's the kicker: agent success rates drop off sharply on tasks that would take a human more than about 35 minutes.
This isn't a technical limitation—it's an experience design problem. Users lose confidence in agents that take too long or don't communicate their progress clearly.
Meanwhile, 80% of consumers feel more valued when autonomous assistants deliver hyper-personalized interactions. The agents that succeed focus on understanding user context and adapting their behavior accordingly.
What Goes Wrong Without AX
I've analyzed dozens of failed AI implementations. The pattern is consistent:
McDonald's AI Drive-Thru Disaster: McDonald's ended their IBM partnership after viral videos showed the AI adding 260 Chicken McNuggets to orders while customers pleaded for it to stop. The AI could process speech and understand orders, but had no conversational awareness or error recovery patterns.
Air Canada's Chatbot: Air Canada was ordered to pay damages after its virtual assistant gave incorrect information to a passenger. The bot could answer questions but had no understanding of context or consequence.
Character.AI Safety Issues: Character.AI faced lawsuits from families claiming its bots delivered explicit content to minors and promoted self-harm. Technical capability without behavioral design principles.
The common thread? These systems had impressive AI capabilities but terrible Agent Experience. They could process language but couldn't navigate human interaction patterns.
How to Build AX Into Your AI Projects
Here's my framework for implementing AX, learned from building production AI systems:
Start Simple, Scale Smart
Don't launch with "I can do anything." Start with 3-5 core capabilities and nail the experience for those. 39% of consumers are already comfortable with AI agents scheduling appointments, but only because that's a well-defined, limited-scope interaction.
Example progression:
Week 1: "I can help you book meetings"
Week 4: "I can book meetings and suggest optimal times based on your preferences"
Week 12: "I can manage your entire calendar, coordinate with multiple attendees, and reschedule automatically based on priorities"
Design Conversation Flows, Not Just Responses
Map out the complete interaction journey. Where do users typically get stuck? What questions do they ask repeatedly? 87% of customer service interactions involve at least one transfer—your AI agent should eliminate that friction, not replicate it.
Implement Progressive Disclosure
Show capabilities gradually based on user behavior. If someone successfully completes basic tasks, introduce advanced features. 32% of Gen Z consumers are comfortable with AI agents shopping for them—but they didn't start there. They built trust through simpler interactions first.
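A minimal sketch of that gating logic, with made-up capability names and thresholds:

```python
# Sketch of progressive disclosure: advanced capabilities unlock only
# after the user succeeds with basic ones. Thresholds are hypothetical;
# a real system would tune them from usage data.

def available_capabilities(completed_basic_tasks: int) -> list:
    caps = ["book_meeting"]
    if completed_basic_tasks >= 3:
        caps.append("suggest_optimal_times")
    if completed_basic_tasks >= 10:
        caps.append("autonomous_rescheduling")
    return caps
```

New users see one clear thing the agent can do; power users earn the autonomous features, which mirrors the Week 1 / Week 4 / Week 12 progression above.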
Build Memory That Matters
Not just conversation history—contextual memory. When I return to my AI assistant after a week, it should remember my current projects, preferred communication style, and ongoing goals. 65% of B2B companies report stronger client engagement rates since implementing contextually-aware AI solutions.
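Here's a toy sketch of contextual memory (all names hypothetical): memories carry tags, and recall ranks by relevance to the current task plus recency, instead of replaying the full conversation history.

```python
# Sketch of "memory that matters": tag-based relevance recall rather
# than raw history replay. Scoring here is deliberately simple.

from dataclasses import dataclass, field
import time

@dataclass
class Memory:
    text: str
    tags: set
    timestamp: float = field(default_factory=time.time)

class ContextualMemory:
    def __init__(self):
        self.items = []

    def remember(self, text, tags):
        self.items.append(Memory(text, set(tags)))

    def recall(self, current_tags, limit=3):
        """Rank by tag overlap with the current task, ties by recency."""
        scored = [
            (len(m.tags & set(current_tags)), m.timestamp, m)
            for m in self.items
        ]
        scored.sort(key=lambda t: (t[0], t[1]), reverse=True)
        return [m.text for score, _, m in scored[:limit] if score > 0]
```

So when I come back after a week and ask about the budget, the agent surfaces the budget memory and leaves the rest out of context.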
The Business Impact
This isn't just about user satisfaction. AX directly impacts revenue.
83% of sales teams with AI saw revenue growth in the past year—versus 66% of teams without AI. But here's what's interesting: the highest-performing implementations aren't just using more sophisticated AI models. They're designing better human-AI collaboration patterns.
Businesses have reported an average 6.7% boost in customer satisfaction scores in areas where artificial intelligence has been tested or implemented. That's not from better AI—it's from better AI experiences.
Companies that nail AX see:
35% higher task completion rates (verified across multiple studies)
Reduced support escalation (AI handles more complex requests successfully)
Higher user adoption (people actually want to use the AI tools)
Better business outcomes (AI drives measurable value, not just efficiency)
The Future is Multi-Modal AX
We're moving beyond text-based conversations. Voice- and gesture-based interactions and immersive AR/VR experiences are becoming standard in AX design. The agents I'm building now integrate voice, text, data visualization, and action execution in seamless workflows.
Example: Instead of describing data trends in text, the AI generates a visual dashboard while explaining insights verbally, then asks clarifying questions based on what the user focuses on visually. That's multi-modal AX.
By 2027, AI agents are expected to automate 15% to 50% of routine business tasks—but only if we solve the experience design challenge. The technical capabilities already exist. The user experience gap is what's holding back mass adoption.
Bottom Line
AX isn't optional anymore. It's what separates AI projects that get funding from AI projects that get results.
The global AI agents market is projected to reach $7.6 billion in 2025, up from $5.4 billion in 2024—that's a 41% jump in one year. But most organizations aren't agent-ready from an experience perspective.
The opportunity is massive for anyone who gets this right. While everyone else is focused on building more powerful AI models, you can create dramatically better experiences with existing technology.
Three things to do this week:
Audit your current AI interactions: Where do users get confused or frustrated? Those friction points are AX design opportunities.
Map your user's AI journey: Don't just think about individual prompts. Design the complete workflow from first interaction to expert usage.
Start small and measure: Pick one AI interaction pattern and redesign it with AX principles. Track completion rates, user satisfaction, and task success.
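If you go the start-small-and-measure route, even a tiny script can track the basics. A sketch, assuming a hypothetical log schema where each interaction records completion and a 1-5 satisfaction rating:

```python
# Sketch: summarize the two AX metrics worth watching first.
# The interaction schema is a hypothetical example.

def summarize(interactions):
    """Return completion rate and mean satisfaction for logged runs."""
    total = len(interactions)
    completed = sum(1 for i in interactions if i["completed"])
    mean_sat = sum(i["satisfaction"] for i in interactions) / total
    return {
        "completion_rate": completed / total,
        "mean_satisfaction": mean_sat,
    }
```

Run it before and after the redesign; if completion rate doesn't move, the AX change didn't land.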
The future belongs to whoever makes AI feel intuitive instead of impressive.
Want to see AX in action? I've built several AI systems that demonstrate these principles. DM me @deeflectcom if you want to explore how AX could transform your AI projects.