
This is Esure
The intelligent bot assistant
My responsibilities at Synthetix included exploring ways to iteratively improve the user experience of our B2C products, such as live chat systems, intelligent forms, and web-based self-service FAQs. This specific project focused on developing a more scalable and effective way to integrate these products into a single, modular tool, simplifying deployment for our clients and better serving their customers' needs.
Esure
Client
3 months, London 2019
Timeline
1 Designer
1 PO
1 Developer
Team
Lead UX/UI Designer
My role
Our initial assumption was that users would find value in an intelligent online tool that let them quickly and easily find the information they sought, while also providing multiple routes for escalating to a contact center agent if necessary.


From a business perspective, we knew our clients needed to efficiently support their customers 24/7, minimizing effort and reducing inbound communication channels.
Main challenges:
The core challenge was to design an AI-powered product that subtly guides user decisions to enhance conversion and satisfaction rates, while reducing the potential for errors or uncertainty, without compromising user control over their experience.
To test our assumptions, I used the insurance industry as a case study. My prior experience with Esure provided valuable insights into insurance products and client needs, informing the development of our persona, which proved highly useful for this exercise.

Our initial hypothesis:
- We believed a bot system with a simple decision tree would effectively guide users.
- We believed users should retain independent access to all other tools, channels, and escalation routes throughout their interaction with the bot.
- We believed the bot itself could offer escalation routes when needed (e.g., user struggles, user wants to skip).
- We believed open question fields should be used sparingly to expedite the experience, minimize typing friction, and reduce errors.

Based on our working assumptions, I designed a basic user flow that we envisioned could be further customized and enhanced with more tailored conversational interactions to personalize the user experience and better meet individual needs.
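The kind of flow described above can be modeled as a simple decision tree with escalation routes available at every step. A minimal sketch follows; the node prompts, choice labels, and escalation options are all hypothetical illustrations, not the actual Esure flow:

```python
from dataclasses import dataclass, field


@dataclass
class BotNode:
    """One step in the bot's decision tree."""
    prompt: str
    # Maps a user's choice label to the next node in the tree.
    choices: dict = field(default_factory=dict)


# Per our hypothesis, escalation routes stay reachable at every step.
ESCALATIONS = ["Browse FAQs", "Start live chat"]

# A hypothetical insurance flow -- names are illustrative only.
claim = BotNode("What would you like to claim for?",
                {"Car": BotNode("Was anyone injured?"),
                 "Home": BotNode("Is this for buildings or contents?")})
root = BotNode("How can I help you today?",
               {"Make a claim": claim,
                "Get a quote": BotNode("Which product are you interested in?")})


def run(node, scripted_answers):
    """Walk the tree with a scripted list of answers; return prompts shown."""
    shown = []
    for answer in scripted_answers:
        shown.append(node.prompt)
        node = node.choices[answer]
    shown.append(node.prompt)
    return shown


path = run(root, ["Make a claim", "Car"])
```

Keeping each step as a node with named choices is what makes the flow easy to customize per client: conversational copy can be swapped out without touching the traversal logic.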
How others do it:
Competitive research informed our bot redesign by providing insights into market trends and interaction design. Through heuristic analysis and testing, we identified potential solutions, user pain points, and key differentiators between various user experiences. This process led us to two viable options, both of which could be tailored to our needs.

Paper prototyping:
Following the selection of Option 2, I designed a mobile-first experience, beginning with sketches and wireframes. The design aimed to create a conversational interface distinct from a typical chat window, ensuring users could easily differentiate the bot experience from the integrated live chat one.

Recognizing the need to guide users through the bot journey while also providing the flexibility to type questions or initiate a live chat, I chose to implement a burger menu for navigation and positioned readily accessible escalation options at the bottom right of the screen, within easy reach of the user's thumb.
Mid-fidelity prototype:
At this stage, my primary focus was to ensure users could easily follow the bot's conversational flow while also being able to readily navigate back to earlier steps in the journey to change their path as needed. Our assumption was that this would empower users with a strong sense of control over their experience, without detracting from the bot's guided assistance.

First round of testing:
To validate my design decisions early in the process, I conducted guerrilla usability testing. Five users participated in the testing: three internal participants from teams outside of development and design, and two external users. Participants were observed as they completed specific tasks using the prototype.

The results of the usability testing were generally encouraging, though not without areas for improvement:
- The average task success rate was 82%.
- The average user satisfaction rating was 4 out of 5.
- The primary issue identified was that users did not immediately understand how to navigate back to previous steps within the bot interaction.
High-fidelity prototype:
Based on initial test results, I revised the design. I was confident the high-fidelity prototype would address some of the previous issues. Brighter colors, updated iconography, and refreshed branding enhanced the product's visual appeal. To clarify the "Go back" action, I added a downward arrow icon next to the previous choice at the top of the screen, indicating that clicking or dragging down would navigate back one step.

Second round of testing:
Subsequent testing iterations demonstrated significant improvements in the prototype. User feedback on ease of use, clarity, and aesthetics was extremely positive. Previous issues were resolved, and the complete user experience, including integrated escalation options to FAQs and live chat, was validated.

Further testing:
Prior to project delivery, I determined that further testing was necessary to validate the new live chat features. Quick A/B testing proved invaluable in resolving uncertainties surrounding the new chat interface design, particularly regarding the optimal placement of the "End Chat" button and how best to enable users to move in and out of the chat room during an active session for multitasking purposes.
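A quick A/B test of this kind needs each participant assigned consistently to one variant. One common way to do that is a deterministic, seeded assignment; a minimal sketch, assuming two variants labeled "A" and "B" (the labels and seed are illustrative, not the study's actual setup):

```python
import random

def assign_variant(user_id, seed=42):
    """Deterministically assign a user to variant 'A' or 'B'.

    Seeding on (seed, user_id) means the same user always sees the
    same variant across sessions, which keeps observations comparable.
    """
    rng = random.Random(f"{seed}:{user_id}")
    return rng.choice(["A", "B"])
```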

A/B testing results:
Based on test results, option B proved far more successful than option A:
- Average success rate was 100%.
- Average satisfaction rating was 5 out of 5.
- Average time per task was 35 seconds.
The new intelligent bot assistant concept was well-received by the business, the team, and our clients, who expressed strong interest in adoption.
