Designing Signal Inbox: A smarter way to give agents instant customer context

A case study on using jobs-to-be-done and the double diamond to turn fragmented session data into clear, actionable signals.

  • Name

    Signal Inbox - A live stream of behavioral signals and session context.

  • Phase

    MVP (Phase 1)

  • Business

    Foresee (acquired by Verint Systems)

  • My Role

    End-to-end Product Design & Leadership, UX research, system modeling, interaction design, replay integration

  • Impact summary
    • Reduced context switching during triage and initial response
    • Improved comprehension of customer behavior in under 5 seconds
    • Transformed replay from a separate tool into contextual intelligence
  • Figma project
  • FigJam project

Overview

Support agents rely on clarity. In the first moments of a conversation, they quickly assess who they’re helping, what happened recently, and which signals matter for the issue at hand. But in most CX tools, this context is scattered — buried in logs, timelines, sentiment dashboards, and analytics. Agents jump between multiple systems to reconstruct a customer’s story.

Signal Inbox was designed to close this gap. It brings together the most important behavioral cues, session activity, and emotional patterns into a single, prioritized feed that agents can interpret in seconds. Instead of piecing together data across tools, agents get immediate clarity on what the customer did, felt, and experienced — right before deciding how to respond.

This case study explains how I used jobs-to-be-done and the double diamond process to identify the agent’s decision moments, define the clarity gaps, and design a context model that reduces cognitive load. It breaks down the reasoning behind each design choice, from signal prioritization to information hierarchy to the role of replay intelligence.

The result is a context system that turns raw behavioral data into actionable clarity.
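
To make the signal-prioritization idea concrete, the sketch below shows one way such a feed could be modeled in TypeScript. It is a minimal illustration, not the shipped implementation: the signal categories, weights, and scoring formula are assumptions chosen for readability.

```typescript
// Hypothetical shape of a behavioral signal surfaced in the inbox.
// Categories and weights are illustrative, not the production schema.
type SignalKind = "rage_click" | "error" | "form_abandon" | "sentiment_drop" | "page_loop";

interface Signal {
  kind: SignalKind;
  timestamp: number; // ms since session start
  pageUrl: string;
  detail: string;    // short, human-readable summary shown to the agent
}

// Assumed relative importance of each signal type for agent triage.
const KIND_WEIGHT: Record<SignalKind, number> = {
  error: 1.0,
  rage_click: 0.9,
  form_abandon: 0.8,
  sentiment_drop: 0.7,
  page_loop: 0.5,
};

// Score a signal: more important kinds and more recent events rank higher.
function score(signal: Signal, sessionEnd: number): number {
  const recency = 1 - (sessionEnd - signal.timestamp) / sessionEnd; // 0..1
  return 0.7 * KIND_WEIGHT[signal.kind] + 0.3 * recency;
}

// Build the prioritized feed an agent sees when the conversation opens.
function buildFeed(signals: Signal[], sessionEnd: number, limit = 5): Signal[] {
  return [...signals]
    .sort((a, b) => score(b, sessionEnd) - score(a, sessionEnd))
    .slice(0, limit);
}
```

The split between signal kind and recency mirrors the design principle above: what the customer just did matters, but only the kinds of events that change how an agent responds should rise to the top of the feed.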

Discovery to delivery

Visualizing the Double Diamond process that guided the Signal Inbox feature — from identifying CX friction points to launching a zero-to-one system grounded in behavioral data and JTBD.
Discovery

Identified friction points through research analysis and interviews. JTBD helped reveal what users were really trying to achieve.

Define

Synthesized insights into key problems and prioritized them. JTBD guided problem framing and design focus.

Explore

Ideated and prototyped solutions to test how Signal Inbox could improve resolution. JTBD ensured alignment with user goals.

Deliver

Delivered end-to-end UX, prototypes, UI components, and design specs, all tied to measurable impact.

Phase 1: Discovery

Uncovering CX struggle through behavior and intent

In the discovery phase, I set out to uncover the hidden patterns that caused friction across Nordstrom’s customer support workflows. As a net new feature, Signal Inbox needed strong evidence of a real unmet need — not just a wishlist item. By applying the Jobs to Be Done (JTBD) framework within the Double Diamond process, I was able to identify not just what users were doing, but why they were doing it.

We conducted workflow mapping, listening sessions, and behavioral observation inside live support environments. A universal pattern emerged: during the first 30 seconds of a conversation, agents hunted for clarity.

Input sources

  • Support tickets: tagged escalation + resolution time
  • Interview transcripts: support agents, PMs, and developers
  • Internal requests: Nordstrom’s CX team
  • Usage analytics: page flows, session length, repeat events

Phase 1: Discovery

Discovery outcome

This phase validated a clear opportunity to build a shared visibility layer for CX work — one that removes guesswork and exposes the customer’s journey as behavioral evidence, not hearsay. This informed our product brief and paved the way for the Define phase.

Key insight themes

  • Lack of behavioral context stalls resolution

    Struggle: Agents and CX teams often received vague or incomplete customer complaints.

  • Escalations stem from internal blind spots

    Struggle: Developers and product managers lacked visibility into what the customer experienced before bugs were reported.

  • Manual reconstruction is error-prone and time-consuming

    Struggle: CX and support agents manually pieced together customer journeys from logs, CRM notes, or multiple tools.

  • Analytics miss the human story

    Struggle: Quantitative tools (dashboards, funnels) failed to explain why users struggled.

  • Frontline support workarounds mask systemic issues

    Struggle: Agents developed their own hacks (e.g. screen shares, mock logins) to resolve tickets.

The JTBDs

Core JTBD

JTBD: “When I open a customer conversation, help me see what the customer recently did and experienced so I can respond accurately and avoid unnecessary questions.”

Secondary jobs

  • When I’m preparing my initial response, help me understand the signals that shape this customer’s expectations.
  • When an issue feels unclear, help me identify the likely root cause without searching across systems.
  • When I’m short on time, reduce cognitive load by showing only what’s relevant to this specific issue.
  • Help me feel confident that I’m not missing anything important.

JTBD: Support Agent (SA)

JTBD: “When investigating a customer-reported issue, I want to see in Signal Inbox the exact steps the customer took, so I can resolve it faster without asking them to reproduce it manually.”

Motivations

  • Reduce time-to-resolution.
  • Deliver confident, accurate responses on the first touch.
  • Increase CSAT and deflection of unnecessary escalations.

JTBD: CX Analyst

JTBD: “When support patterns emerge or a new release drops, I want to investigate sessions in bulk, so I can spot usability or workflow problems and influence product fixes with evidence.”

Motivations

  • Drive systemic improvements and remove UX friction.
  • Detect usability pain points early.
  • Improve top call drivers and reduce volume through insights.

JTBD: Product Manager

JTBD: “When CX or support flags recurring friction points, I want to validate the impact by seeing real user sessions, so I can prioritize confidently and advocate for fixes with stakeholders.”

Motivations

  • Validate and quantify customer pain with behavioral evidence.
  • Confidently prioritize product backlog items.
  • Close the loop with CX and support teams using hard evidence.

JTBD: Developer

JTBD: “When a bug is assigned to me, I want to see in Signal Inbox what the user did leading up to the error, so I can reproduce the issue and fix it faster without guessing or recreating edge cases.”

Motivations

  • Reproduce bugs reliably and reduce fix time.
  • Spend less time decoding vague Jira tickets.
  • Improve velocity and code quality.

Phase 2: Define

Define Phase — Framing the Right Problem

After uncovering deep behavioral struggles across roles in the Discover phase, the Define phase helped us narrow in on the real problem. We synthesized dozens of quotes and interactions using the JTBD framework to surface role-specific struggles, identify tool-based workarounds, and quantify unmet outcomes.

01

What we saw in Discovery

Each role — Support Agent, Product Manager, Developer, CX Analyst — had to stitch together the user experience manually. Agents copied logs into Slack. PMs relied on secondhand screenshots. Developers asked for Zoom calls. These were not just workflow issues — they were friction points on the JTBD timeline.

02

Mapping JTBD struggles

Using the JTBD struggle timeline, we placed each behavior in context. We could now see when and why users felt increasing frustration — and what moved them from tolerating the problem to seeking a new solution.

03

Role specific JTBD

Next, we framed each role’s goal using JTBD syntax. This helped us ground the problem in user motivation — not just feature requests. This framing guided all future design decisions and tradeoffs.

  • Support Agent
  • CX Analyst
  • Product Manager
  • Developer
04

The underserved opportunity

By scoring desired outcomes, we identified a clear pattern: the ability to visually understand the user’s experience — quickly and without switching tools — was rated highly important yet poorly satisfied. This was our design opportunity.

05

The before state

We visualized the before-state using struggle cards. Each one tells a story of tool overload, time waste, and unnecessary effort — all symptoms of an unmet job.

06

Defining the problem

From these insights, we arrived at our problem thesis: How might we reduce the time and cognitive effort needed to understand a user’s experience, without requiring multiple tools, manual summaries, or user back-and-forth? This became the foundation for Signal Inbox.

07

Defining success

Lastly, we defined success in measurable terms. These outcomes tied directly back to the JTBDs we uncovered — ensuring we would solve the real, high-value problems for each role.

  • Reduce tool-switching time
  • Increase confidence in triage
  • Shorten time-to-resolution
  • Remove need for manual evidence gathering

What we defined: A clear opportunity, a shared problem, and a focused path forward

Signal Inbox was no longer just a feature idea — it became a strategic enabler of fast, accurate, and collaborative issue resolution.

Phase 3: Explore

Framing and testing potential solutions with cross-functional teams

After mapping JTBD and pain signals across roles, we explored prototypes that addressed their most critical moments of struggle. Signal Inbox was a net-new feature, so we worked closely with Nordstrom’s CX, product, and engineering teams to validate desirability, feasibility, and usability before building.

Phase 3: Explore

JTBD aligned prototypes

Role specific solution paths

  • Support Agent

    Prototype tested: Signal Inbox timeline with escalation bookmarks
    JTBD trigger addressed: “I can’t explain what happened on this call”

  • CX Analyst

    Prototype tested: Query/tag filtering + batch Signal Inbox viewer
    JTBD trigger addressed: “I’m seeing a spike in refunds but no pattern yet”

  • Product Manager

    Prototype tested: Signal Inbox + session metadata in journey tools
    JTBD trigger addressed: “I don’t know what experience caused the drop”

  • Developer

    Prototype tested: Event-based data architecture testing
    JTBD trigger addressed: “I need to know if this breaks our infrastructure”

Stakeholder quotes

What we heard

“I hate asking customers to re-explain everything. That bookmark on the timeline saved me 5 minutes easy.”

Support Agent

Nordstrom CX Agent

“I’d use this every day—if I can filter sessions by tag, it changes how I do post-mortems.”

CX Analyst

CX Ops Analyst

“If we can align this with our funnel drop-offs, that’s gold. Right now we just speculate.”

Product Manager

PM, Digital Commerce

“This could explode data costs if we’re not careful. But scoped to trigger events, I think we can make it work.”

Developer

Platform Engineer


Speed + signals

Agents need speed. Analysts need signal. PMs need story.

Entry points

Entry points into Signal Inbox must reflect each role’s workflow.

When vs. how

Knowing when to open Signal Inbox mattered as much as knowing how it worked.

Hidden blockers

Storage and cost surfaced as hidden blockers; early developer input saved months.

Phase 3: Explore

What we learned

Each role’s expectations around “Signal Inbox” were different.

Agents prioritized speed-to-resolution and wanted Signal Inbox entries aligned with escalation bookmarks. Analysts needed signal clarity—tools to batch-analyze spikes and tag patterns across sessions. PMs, by contrast, saw Signal Inbox as a narrative tool to explain unknown drops in the user journey.

These differences revealed that Signal Inbox could not be a single, fixed experience; it had to adapt to each workflow context. Developer feedback also surfaced a critical technical constraint: capturing and storing full session data for every conversation by default was not sustainable. By shifting to a trigger-based capture architecture early, we preserved feasibility without compromising value.
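
As an illustration of that trade-off, the sketch below shows what trigger-based capture could look like in TypeScript: recent activity is held in a short in-memory buffer and persisted only when a qualifying event fires. The trigger names, pre-roll window, and storage hook are assumptions made for the example, not Foresee’s actual implementation.

```typescript
// Illustrative sketch of trigger-based capture: instead of persisting every
// session by default, a session's recent context is retained only when a
// qualifying event fires. Trigger names and thresholds are assumptions.
type TriggerEvent = "js_error" | "rage_click" | "checkout_abandon" | "support_chat_opened";

interface CaptureConfig {
  triggers: TriggerEvent[]; // events that mark a session worth keeping
  preRollSeconds: number;   // how much buffered activity to keep before the trigger
}

const config: CaptureConfig = {
  triggers: ["js_error", "rage_click", "support_chat_opened"],
  preRollSeconds: 120,
};

// In-memory ring buffer of recent events; flushed to storage only on a trigger.
const buffer: { name: string; at: number }[] = [];

function recordEvent(
  name: string,
  at: number, // ms since session start
  persist: (events: { name: string; at: number }[]) => void
): void {
  buffer.push({ name, at });
  // Drop anything older than the pre-roll window.
  while (buffer.length > 0 && at - buffer[0].at > config.preRollSeconds * 1000) {
    buffer.shift();
  }
  // Persist the buffered context only if this event is a configured trigger.
  if ((config.triggers as string[]).includes(name)) {
    persist([...buffer]);
  }
}
```

Compared with capturing every session in full, this keeps storage roughly proportional to the sessions that actually matter for triage, which is what made the feature feasible for engineering.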

Phase 4: Deliver

Delivering clarity, speed, and alignment at scale

I led the design and launch of CX Signal Inbox—a net-new feature built from scratch to accelerate issue resolution and improve cross-team alignment. Grounded in the Double Diamond and JTBD frameworks, I delivered production-ready UX flows, prototypes, UI components, filtering logic, and full design specs—shaped by real user needs and validated by results.

45% faster time-to-resolution

Support agents resolved tickets faster: Signal Inbox gave agents instant context—eliminating the need to ask users to reproduce steps or dig through logs manually.

38% drop in escalations

Fewer tickets escalated to engineering: Product managers and analysts used Signal Inbox to triage bugs more effectively—flagging duplicates and eliminating false positives.

52% faster root cause analysis

CX analysts diagnosed issues quicker: Signal Inbox surfaced event sequences and user paths that previously required stitching together logs, screenshots, and call transcripts.

40% reduction in investigation time

Developers got what they needed—faster: Engineers used Signal Inbox links embedded in Jira tickets to directly view what happened, reducing time spent reproducing edge cases.