
Re-imagining a trustable AI evidence package for enterprise CRM

In the competitive landscape of enterprise CRM platforms, our sales teams expressed a growing concern: a lack of trust in the AI-generated results. This hurdle was impacting their decision-making process and overall performance. The need was clear - a redesign of the AI evidence package that not only improved trust but also aligned with the sales team's workflow.

  • My role: lead designer

  • Collaborators: 1 PM, 1 researcher, 5 data scientists, 5 engineers

  • Design Time Frame: Nov 2022 - Jan 2023

  • Design Success Metrics: Scalability, Explainability, Actionability, and Implementation Viability

  • Tools: Figma, ReGraph

Screen-Recording-2023-11-17-at-1.gif

Introduction

WHAT IS CRM

Context & Pain Point
34.png

CURRENT USER GROUP FOR C3 CRM: INTERNAL SALESPEOPLE

Build Trust_3.png

Let's meet Jackie. He is a sales representative whose goal is to reach his quarterly sales target by managing his deals and forecasting which deals he can close.

44.png

On a daily basis, Jackie comes to the opportunity detail page to manage and track his deals. On the top panel, while he estimates the likelihood of closing the deal this quarter in the "Probability" field, the "C3 AI Probability" field also provides an AI prediction as a reference.

396.png

On the bottom right side, the "C3 AI Insights" section provides a collection of data and insights that explain how an AI prediction or recommendation was made.

394.png

However, Jackie lacks trust in the CRM AI Insights: he is not sure why there is such a big gap, he cannot tell what contributed to the changes in the AI probability over time, and he finds the presented evidence insufficient to justify the score change.

408.png

Unfortunately, Jackie was not alone. A researcher and I interviewed 7 sales representatives, and we concluded that they all shared a similar experience.

399.png

See quotes from different users

402.png

Therefore, we initiated a project to redesign a trustable CRM AI Evidence Package for Salespeople. I collaborated with 1 researcher, 1 PM, 5 data scientists, and 5 engineers on this project.

409.png

It is essential to build trust in AI: before we ask users to rely on AI, we first need to build trust between humans and AI. Only with strong user trust can we achieve a higher usage rate and better user satisfaction, and only then can our salespeople sell to external customers more confidently.

Build Trust_5.png

Solution Approach

To address the lack of user trust in AI, I approached the challenge in 3 steps: evaluation, investigation, and action.

400.png
Solution Approach

EVALUATION - Why is the current AI evidence package not trustable?

In the first step, I wanted to understand why the current AI evidence package was not trustable by digging deeper into users' pain points.

1. Evaluate Issue

To break down the lack-of-trust issue, I summarized 3 main pain points from the user research.

476.png

Firstly, the presented evidence is insufficient to justify the score change. For instance, sales reps usually know very well how long their deal has been in a certain sales stage; seeing information they already know presented as the cause of a score change does not help them believe in the power of AI. They'd like to see supporting evidence that is novel to them.

Secondly, while we show an AI probability score and its history, users do not understand which pieces of evidence below contribute to a specific score change on the chart. It would be more helpful if the evidence were closely associated with the score changes over time.

Thirdly, users cannot drill into insight details such as raw data. This lack of actionability also contributed to the lack of user trust in the AI evidence package.

To address those issues, I ran a number of brainstorming sessions with PMs and data scientists, and we eventually landed on three major solutions - improving the variety of data, providing a time component for the package, and enhancing the interpretability and actionability of the AI evidence.

464.png

INVESTIGATION - What additional data can make it more trustable?

Now that we understood the user pain points and had the solutions in mind, I moved on to discuss the available data and its viability with the data scientists and the PM.

2. Investigate Data
405.png

After a number of brainstorming sessions on the helpful factors for different types of salespeople, we summarized three data types that could be integrated into the evidence package - continuous data, static data, and discrete data.

420.png
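As a rough illustration of how these three data types might live side by side in one package, here is a minimal TypeScript sketch. This is an assumption on my part, not the actual C3 AI data model; all names and fields are hypothetical.

```typescript
// Hypothetical sketch only - not the production data model.

// Continuous data: a value tracked over time (e.g., an engagement score history).
interface ContinuousEvidence {
  kind: "continuous";
  label: string;
  series: { timestamp: string; value: number }[];
}

// Static data: a fixed attribute of the opportunity (e.g., industry or region).
interface StaticEvidence {
  kind: "static";
  label: string;
  value: string;
}

// Discrete data: individual events at a point in time (e.g., an email or a meeting).
interface DiscreteEvidence {
  kind: "discrete";
  label: string;
  timestamp: string;
  description: string;
}

// Any item in the evidence package is one of the three,
// which keeps the package open to new data sources later.
type EvidenceItem = ContinuousEvidence | StaticEvidence | DiscreteEvidence;
```

A discriminated union like this is one way a front end could pick the right visualization for each item while still treating the package as a single, scalable list.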

With the 3 types of data in mind, the challenge for me as a designer became: how might we design a scalable pattern that can integrate any kind of data?

414.png

ACTION - How can design improve it?

Finally, we reach the ACTION step: using design to solve the issue. There were 4 major design decisions during the process - data categorization, enriching data visualization, simplifying navigation, and enhancing actionability.

3. Design Decisions
90.png

1. DATA CATEGORIZATION

Now that we know the 3 types of data, what do they look like if we translate them directly into visuals?

465.png

They look very complex and hard to understand, which is not suitable for salespeople who are non-technical.

Frame 4337.png
Frame 4338.png
Frame 4336.png

To solve this issue, my task was to translate DATA language into USER language.

451.png

Instead of using data types that are complex to understand, I found it much more appropriate to use data categories that are already familiar to users, such as score movers, activities, external events, and opportunity changes.

446.png
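To make that translation concrete, the sketch below (again hypothetical, continuing the assumed TypeScript model from the earlier sketch) shows how user-facing categories could sit on top of the underlying data types:

```typescript
// Hypothetical sketch only - user-facing categories layered over the data types.
type EvidenceCategory =
  | "scoreMovers"          // factors that moved the AI probability
  | "activities"           // emails, calls, meetings
  | "externalEvents"       // news and other outside signals
  | "opportunityChanges";  // edits to the opportunity record

interface CategorizedEvidence {
  category: EvidenceCategory; // what the salesperson sees
  item: unknown;              // the underlying continuous, static, or discrete item
}

// Group items by the familiar categories for display,
// regardless of which technical data type each item carries.
function groupByCategory(items: CategorizedEvidence[]) {
  const groups: Partial<Record<EvidenceCategory, CategorizedEvidence[]>> = {};
  for (const entry of items) {
    (groups[entry.category] ??= []).push(entry);
  }
  return groups;
}
```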

2. DO ADDITION - ENRICH DATA VISUALIZATION

To enrich data visualization, there are two main challenges - how might we design an easy-to-understand evidence package for salespeople who are non-technical? How might we design a scalable pattern that can integrate any kind of data?

92.png

After exploring 50+ potential designs, I still felt that something was missing in each one and could hardly make a design decision...

467.png

Therefore, I developed a set of success metrics for my design to help me streamline my design process, including scalability, explainability, and implementation viability.

97.png

Then, I tested a few designs with 3 salespeople using the success metrics, which helped me narrow down my design direction.

468.png

Once I had a relatively satisfying design, I brought it to the engineers to discuss implementation viability. Besides giving a rating, the engineers also shared with me a few additional features supported by Kronograph. The PM and I found that using icons and a horizontal layout of visualization and evidence could be very helpful for our evidence package.

452.png

At last, we had a winner that is scalable, easy to understand, and viable to implement.

453.png

3. DO SUBTRACTION - SIMPLIFY AND IMPROVE NAVIGATION

After having an enriched data visualization, if we put the evidence package back onto the opportunity page, we can spot a few issues:
1. The probabilities on the top panel and in the evidence package are repetitive;
2. The association between the probabilities on the top and the evidence package is weak because there is an opportunity details section in the middle;
3. While the forecast movers should be closely associated with the probability line chart, they are now separated.

473.png

Therefore, I decided to:
1. Remove the repetitive probabilities from the evidence package and enhance the probabilities on the top panel;
2. Remove the middle section to strengthen the association between the AI probability and the evidence package;
3. Integrate the forecast movers into the probability line chart.

474.png

4. ENHANCE ACTIONABILITY

Lastly, I wanted to design a scalable pattern that enhances AI interpretability and user actionability.

441.png

I explored several options. The first was to make feeds expandable/collapsible. However, the expandable section has very limited space, which is not scalable for all kinds of data.

460.png

The second exploration was adding a side panel for double-click actions. However, it would squeeze the chart component considerably and lead to a messy information hierarchy, and it is also hard to implement because we do not have a section-level version of this component.

458.png

In the end, my final decision was to use a modal for users to drill into insight details. A modal is scalable for all kinds of data, provides a focused view of the raw data, and is easy to implement as an existing component.

469.png
472.png

New Experience

REDESIGNED VERSION OF C3 AI CRM EVIDENCE PACKAGE

Final Design
Screen-Recording-2023-11-17-at-1.gif

Solution Summary

In summary, we redesigned the full opportunity detail page to enhance user trust in the AI evidence package.

421.png
Solution Summary

The new experience addresses all of the user pain points by improving navigation, increasing the variety of data, and adding double-click actions.

475.png

After seeing the design of the new experience, our salespeople were very excited about the change, especially about seeing external information without leaving the application.

439.png

Reflection

141.png
Reflection
143.png

