UX Fixes

Focused iteration day — fix what user testing revealed.


Evidence-based fixes applied within a single day of user testing, before the next workshop or the final presentation.


Why This Step Exists

User testing surfaces friction. Some of it is fundamental (product direction changes). Some of it is fixable in a day (copy, layout, interaction patterns). The UX fixes day converts testing insights into shipped improvements — fast enough to present "before and after" evidence in the next workshop or final presentation.

On the first engagement, the UX fixes day produced 16 implemented fixes, 4 major overhauls, and a complete evidence trail linking every change to a specific user quote. The client saw working code with issues fixed — not a list of recommendations they'd need to act on later.


When It Happens

UX fixes sit between user testing and the next decision point:

USER TESTING
    │   5 sessions, evidence collected
    ↓
UX FIXES (1 day)
    │   Analyse findings, prioritise, fix
    ↓
NEXT STEP
    ├── Workshop (if mid-project — present fixes, set new sprint scope)
    └── Presentation (if end-of-project — present validated outcomes)

In a multi-sprint project, UX fixes happen after each round of testing. The fixes inform the next workshop and demonstrate that the testing-iteration loop works.


How to Do It

1. Analyse Testing Results (Morning)

Before fixing anything, understand what the data shows (a scoring and triage sketch follows the list):

  1. Score each sprint question — Pass/fail across all testers, weighted by ICP fit
  2. Extract key findings — What's unanimous? What's split? What contradicts?
  3. Separate fix types:
    • Quick fixes — Copy, labels, loading states, visual polish (< 30 min each)
    • Major fixes — Restructured flows, new components, significant UX changes (hours each)
    • Strategic decisions — Product direction changes that need client input (don't build, present)
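
A minimal sketch of this morning triage as data, assuming the findings are logged in a small TypeScript script. The field names, the doubled ICP weight, and the 30-minute threshold are illustrative assumptions, not part of the method.

  // One tester's result on one sprint question (hypothetical shape).
  interface SessionResult {
    tester: string;
    icpFit: boolean;   // does this tester match the ideal customer profile?
    passed: boolean;   // pass/fail on the sprint question
  }

  // Pass rate with ICP-fit testers counted more heavily (assumed weight of 2).
  function weightedPassRate(results: SessionResult[], icpWeight = 2): number {
    let passed = 0;
    let total = 0;
    for (const r of results) {
      const w = r.icpFit ? icpWeight : 1;
      total += w;
      if (r.passed) passed += w;
    }
    return total === 0 ? 0 : passed / total;
  }

  // The three fix types from step 3.
  type FixType = "quick" | "major" | "strategic";

  // One finding extracted from the sessions (hypothetical shape).
  interface Finding {
    quote: string;             // the user's words, verbatim
    sprintQuestion: string;    // which sprint question it relates to
    estimateMinutes: number;   // rough effort to fix
    needsClientInput: boolean; // product-direction call?
  }

  function classify(f: Finding): FixType {
    if (f.needsClientInput) return "strategic";         // present, don't build
    return f.estimateMinutes <= 30 ? "quick" : "major"; // 30 minutes or less counts as quick
  }

Scoring first and classifying second keeps the morning honest: the pass rates show which sprint questions are failing, and the classification shows what can realistically change by the end of the day.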

2. Prioritise Ruthlessly

Not everything from testing gets fixed. One day means choosing what matters most.

Fix if:

  • It's unanimous feedback (all testers hit the same friction)
  • It's from an ICP-fit user with specific, actionable feedback
  • It's achievable in the time available
  • It directly addresses a failing sprint question

Don't fix if:

  • It's a strategic product decision (present it, don't build it)
  • Only one non-ICP user mentioned it
  • It requires backend changes beyond your scope
  • It would take longer than the remaining time allows

The priority stack (a planning sketch follows the table):

Priority  Type                       Time           Example
1         Sprint question failures   Hours          Restructure report categories, fix trust mechanism
2         Unanimous friction points  Hours          Remove chat intermediary from source access
3         Quick copy/UI fixes        < 30 min each  Rename confusing labels, add loading states
4         Data consistency           < 30 min each  Align numbers across views
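
A sketch of how the stack could drive the day's plan, under the same hedges as above: the shapes and the greedy approach are assumptions for illustration, not a prescribed tool.

  // One candidate fix after triage (hypothetical shape).
  interface PlannedFix {
    title: string;
    priority: 1 | 2 | 3 | 4;  // position in the priority stack above
    estimateMinutes: number;
  }

  // Greedy plan: highest priority first, skip anything that no longer fits the day.
  function planDay(fixes: PlannedFix[], budgetMinutes: number): PlannedFix[] {
    const ordered = [...fixes].sort(
      (a, b) => a.priority - b.priority || a.estimateMinutes - b.estimateMinutes
    );
    const plan: PlannedFix[] = [];
    let used = 0;
    for (const fix of ordered) {
      if (used + fix.estimateMinutes > budgetMinutes) continue; // won't fit today
      plan.push(fix);
      used += fix.estimateMinutes;
    }
    return plan;
  }

The point is not the code but the discipline it encodes: work is ordered by priority, not by how interesting a fix is, and anything that blows the time budget is dropped rather than squeezed in.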

3. Execute Fixes

Major fixes first. Start with the highest-impact changes that take the most time. These are the ones that will show the biggest improvement in the presentation.

Quick fixes in parallel. If using AI coding agents, run quick fixes in parallel while working on major changes. These stack up — 15 small fixes collectively transform the feel of the product.

Document everything. For each fix:

  • What user said (the quote)
  • What was changed (before/after)
  • Which sprint question it addresses

This documentation becomes the "What We Changed" section of the presentation.
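
A sketch of one evidence-trail entry, assuming the log is kept as structured data. The field names and the example values are hypothetical, not taken from a real session.

  // One entry in the evidence trail (hypothetical shape and values).
  interface FixRecord {
    quote: string;           // what the user said, verbatim
    change: string;          // what was changed (before/after in words)
    sprintQuestion: string;  // which sprint question it addresses
    beforeShot: string;      // path to the "before" screenshot
    afterShot: string;       // path to the "after" screenshot
  }

  const record: FixRecord = {
    quote: "I just want to see the source, not chat about it.",        // hypothetical quote
    change: "Removed the chat intermediary; sources shown directly",
    sprintQuestion: "Do users trust the evidence behind each insight?", // hypothetical question
    beforeShot: "evidence/before/evidence-drawer.png",
    afterShot: "evidence/after/evidence-drawer.png",
  };

Keeping the quote, the change, and the sprint question in one record means the "What We Changed" section can be assembled directly from the log rather than reconstructed from memory.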

4. Prepare the Before/After Evidence

The UX fixes day isn't just about making changes — it's about building the evidence package:

  • Before screenshots (captured during testing or from the previous build)
  • After screenshots (the fixed version)
  • User quote that drove the change
  • Sprint question it addresses

This evidence is what makes the next workshop or final presentation compelling. The client sees what users said, what was changed, and why.
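
A sketch of turning one such entry into presentation-ready text, under the same assumptions; the layout of the output is illustrative, not a prescribed template.

  // Render one evidence entry as a "What We Changed" bullet (plain text).
  function evidenceBullet(entry: {
    quote: string;
    change: string;
    sprintQuestion: string;
    beforeShot: string;
    afterShot: string;
  }): string {
    return [
      `Change: ${entry.change}`,
      `User said: "${entry.quote}"`,
      `Sprint question: ${entry.sprintQuestion}`,
      `Before: ${entry.beforeShot}  ->  After: ${entry.afterShot}`,
    ].join("\n");
  }

However the package is produced, each item should pair the user's words with the visual change; that pairing is what lets the client see what users said, what was changed, and why.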


What to Fix vs. What to Present

Category                   Action                     Example
Copy/messaging issues      Fix                        "No web scraping" → "Primary source verification"
Missing loading states     Fix                        Add spinner during graph initialisation
Navigation gaps            Fix                        Add back navigation from report to share cover
Label confusion            Fix                        "Custom Research" → "Request Dedicated Research"
Data inconsistencies       Fix                        Align dashboard metrics with report data
Report restructure         Fix (if achievable)        Signal-type → domain-based categories
Trust mechanism changes    Fix                        Remove chat intermediary, show sources directly
New product features       Present as direction       "Living document" concept from 2 users
Enterprise-only features   Present as direction       Project management, real-time collaboration
Backend-dependent changes  Present as recommendation  Real confidence scoring, email notifications

Locked Decisions

Decision                                  Why
One day for fixes, not two                Prevents scope creep into a second build sprint. Forces prioritisation.
Evidence trail for every fix              Every change linked to a user quote — the presentation argues itself.
Strategic decisions presented, not built  Building half-baked features dilutes the validated story.
Quick fixes run in parallel               AI agents handle 15+ small fixes while major work happens. Collectively transformative.
Fix before present                        The presentation shows "tested, fixed, improved" — not "tested, here's a list of things to do."

Research Tech Example

Sprint 2 UX Fixes (20 January 2026):

  • One day of fixes between user testing (19 Jan) and the Sprint 3 workshop presentation (21 Jan)
  • Addressed critical usability issues found in 5 user testing sessions
  • Quick iterations based on real user feedback, feeding into the next sprint's scope

Sprint 3 UX Fixes (29 January 2026):

The day between user testing (28 Jan) and the final presentation (30 Jan). Four user testing sessions produced clear, actionable feedback.

Major fixes completed:

  1. Direct source access — Removed chat intermediary from evidence drawer. 5/9 users across both sprints preferred direct source access. Sources-first evidence panel with confidence levels, rejected sources section, and chat as secondary "Ask in chat" action.
  2. Report categories restructured — Changed from signal-type categories (Risks, Conflicts, Opportunities) to domain categories (Team & Organisation, Market & Competition, Product & Technology, Financials & Traction). 3 ICP users explicitly requested this. Both demo reports updated.
  3. Confidence indicators added — Confidence level badges on insight cards (High/Medium/Low/Review). Confidence narrative in evidence drawer explaining model agreement. Rejected sources section showing methodology transparency.
  4. DAG interactive navigation — Priority sort within columns, category filtering (click category node to filter), hover preview, simplified colour coding (9 edge colours → 3 states). Full-bleed canvas mode for research nodes page.

Quick fixes completed (16 total):

  • Loading state on review step
  • Back navigation from report to share cover
  • Descriptive DAG column headers
  • "No web scraping" reframed to positive messaging
  • "Custom Research" renamed to "Request Dedicated Research"
  • Processing time context added
  • Review step label clarified
  • Multi-model verification messaging improved
  • Multiline chat input
  • Stage count verified
  • Toggle label renamed
  • "Assess" → "Research" copy fix
  • Email notification copy added
  • Date removed from share cover
  • Clickable risk badges on dashboard
  • Data consistency audit (5 mismatches fixed)

What was NOT fixed (presented as strategic directions):

  • Living document / project management features (2 users independently identified — presented as next product milestone)
  • DAG as interactive navigation tool (presented as power-user feature for future iteration)
  • Real-time notifications (backend infrastructure — copy fix set correct expectations)
  • Review step optional (team strategic decision — deferred to client)

Result: The final presentation showed validated sprint questions (90%/80%/100% for ICP), plus "here's what we changed based on evidence" with before/after comparisons for each major fix. Client reaction: "I'm amazed. Unbelievable." Every change was backed by a user quote and a visual comparison.


Previous: User Testing | Next: Presentations