Intelligent Virtual Assistant (IVA)
This project introduces the first version of a customer-facing AI virtual assistant for a large-scale B2C platform. It simplifies everyday requests, improves user efficiency, and lowers support costs by reducing inquiries and streamlining access to information. Built with scalability, reliability, and enterprise requirements in mind, the assistant leverages conversational design, natural language processing, and sentiment handling to deliver accurate responses and smooth human-AI handoff. The work was done in close collaboration with product managers, engineers, and stakeholders, demonstrating systems thinking, cross-functional leadership, and customer-centric design while establishing a foundation for future intelligent support experiences.
Role
Senior Product Designer
Duration
4 Months
Tools
Figma
Mural
Jira
Deliverables
User Scenarios, AI Tool Comparison Chart, Conceptual Framework, Response Decision Logic Tree, Contextual Layer, Sentiment Flow Chart, Wireframes, and Prototype.
Overview
This project explores the strategy and design for integrating an AI virtual chat assistant into the TreeRing platform, detailing the design specifications, business impact, and high-level technical considerations to create a unified and user-friendly solution.
Framing The Problem
One of the most recurring problems uncovered in our research was the overwhelming number of user inquiries about information that was available but difficult to locate within the platform. This created frustration for users and placed a constant burden on customer representatives, who were fielding repetitive questions and calls. Our findings highlighted the need for an AI-powered assistant to act as a middle layer, helping users quickly resolve common, non-technical issues. From the business perspective, this solution not only enhances the user experience but also reduces support overhead, lowers staffing costs, and streamlines editor-related tasks.
Recreating The Current Situation, Addressing The Benefits
To set the foundation for the project, I began by mapping the current user flow and comparing it to a future state supported by an AI virtual assistant. It was important that all teams understood not only the assistant’s functionality but also the value it could bring to both users and the customer service team.
In the current state, users often face guesswork when trying to complete tasks such as changing a theme, understanding how a button works, or locating information scattered across multiple sites. With unclear entry points and inconsistent resources, many turn directly to customer service. This creates user frustration and forces agents to spend time on repetitive, non-technical inquiries, a workload that continues to increase. Since the business wanted to avoid adding more customer service overhead, we needed to determine what was going wrong and how an AI assistant could fill the gaps.
When presenting to stakeholders, I emphasized how the virtual assistant could remove guesswork, act as a single entry point for information, and serve as a middle layer that connects users to customer service only when escalation is necessary. The assistant would reduce frustration, scale support without increasing costs, improve user confidence, personalize experiences, streamline workflows, and drive feature adoption. Over time, it would continue to evolve, making it a crucial step in modernizing TreeRing into a more user-friendly, future-ready, and scalable platform.

Determining The Value
As I moved through the discovery phase, our research and user scenarios revealed a clear story: the platform was creating unnecessary strain on both users and the business. Users struggled to find information, and customer service agents were left handling repetitive requests that pulled them away from higher-value work. The business faced a choice: continue to grow support overhead, or find a smarter, more scalable solution.
This became the turning point. The introduction of an AI virtual assistant offered a way forward, reducing the need for additional customer service hires while allowing TreeRing to scale efficiently. For the business, it meant cutting costs, streamlining operations, and modernizing the product in ways users had been asking for over the years. For customer service, it meant shifting from acting as the middleman for routine inquiries to focusing on complex, high-impact tasks, stepping in only when escalation was necessary.
Strategy
As the project moved forward, the focus shifted from research into strategy. The challenge was clear: we had a tight timeline, limited resources, and a long list of user pain points to address while still meeting business expectations. To move ahead with confidence, I immersed myself in studying the AI tools available in the market and examined how they could realistically fit within our product. The strategy became about leveraging what we already had, including APIs, knowledge base articles, and existing infrastructure, to reduce costs and team effort while still building a foundation that could scale with future versions and product launches. This stage was critical in bridging user needs with business priorities and setting a clear path for execution.
Comparing Model Approaches for AI Assistants
With all the requirements gathered, the next step was to evaluate our technical options. I compared building a fully custom AI model against using APIs connected to existing models. A custom model offered the greatest flexibility and control, allowing us to tailor the assistant to specific needs and scale it for future versions. However, it required significant time and resources to train, making it unrealistic within our tight timeline. On the other hand, APIs to existing models such as GPT provided a fast, scalable solution. While less customizable, they aligned with our timeline, worked within budget, and supported knowledge base integrations through Salesforce Cloud.
I also explored smaller existing models, like remove.bg and Pine, which could provide immediate value by addressing targeted user pain points in the short term. To guide decision-making, I created a comparison chart in Mural outlining the trade-offs and benefits of each option. From this analysis, it became clear that APIs to existing models were the right short-term strategy, meeting deadlines and addressing current concerns, while custom models remained a strong long-term opportunity for future versions and more complex yearbook tasks.

Why Hybrid Won: Custom and API Model Comparison
After evaluating the strengths and limitations of each option, I recommended a hybrid approach. By starting with APIs to existing models, we could move quickly and immediately deliver value by answering user questions, escalating to customer service when needed, integrating NLP, generating captions, and even handling tasks like background removal. This approach allowed us to leverage what was already available while focusing our efforts on designing and coding the infrastructure to connect and manage the models.
The hybrid model also balanced the needs of every team. Design could trust the assistant to deliver consistent, brand-aligned responses while still tapping into state-of-the-art generative capabilities. Development gained flexibility and accelerated prototyping. Support benefited from a more reliable assistant that worked effectively in both standard and specialized cases. The business reduced vendor lock-in, lowered long-term costs, and created space for defensible differentiation through custom models.
Most importantly, the hybrid strategy gave us a clear path for scalability. In the short term, it provided fast wins. In the long term, it opened the door to deeper customization for TreeRing-specific tasks such as generating layouts or changing entire book themes. This approach allowed us to meet immediate goals without sacrificing the future vision.
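To make the hybrid idea concrete, here is a minimal routing sketch. The task names and adapters are hypothetical placeholders, not part of TreeRing's actual codebase: conversational and generative requests go to a hosted API model, a narrow task like background removal goes to a specialized service, and the router itself is the seam where a custom TreeRing model could be slotted in during a later version.

```typescript
// Hypothetical sketch of the hybrid routing idea: hosted API models handle
// conversational and generative tasks, a specialized service handles a narrow
// task like background removal, and the router is the seam where a custom
// TreeRing model could be added in a later version.
type TaskType = "answer_question" | "generate_caption" | "remove_background";

interface ModelAdapter {
  name: string;
  handle(input: string): Promise<string>;
}

// Placeholder adapters; real implementations would call the vendor APIs.
const hostedChatModel: ModelAdapter = {
  name: "hosted-llm-api",
  handle: async (input) => `Generated response for: ${input}`,
};

const backgroundRemovalService: ModelAdapter = {
  name: "background-removal-api",
  handle: async (input) => `Processed image: ${input}`,
};

// The routing layer is what the team owns and maintains. Swapping in a
// custom model later means changing one branch here, not the whole system.
function routeTask(task: TaskType): ModelAdapter {
  switch (task) {
    case "remove_background":
      return backgroundRemovalService;
    default:
      return hostedChatModel;
  }
}
```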
Aligning the Right Tools with the Right Stage
The next part of the strategy phase was about making informed choices on which AI models to integrate and where customization would deliver the most impact. It was not just a technical exercise, but a way to shape how the assistant would function both now and in the future. I explored which tools could give us immediate wins, such as natural language processing and knowledge base integrations, while also considering where building custom solutions would provide TreeRing with long-term differentiation. This balance between off-the-shelf models and custom development became a key strategic decision point.
Equally important was evaluating the business side of these choices. I researched the costs of each model and documented them so leadership could see how different options aligned with budget and scalability goals. This transparency helped stakeholders weigh trade-offs between speed, customization, and investment. By combining technical feasibility with business strategy, this phase ensured we were not only selecting the right tools but also setting a clear direction for sustainable growth and future product evolution.
Below is a snapshot of my evaluation of AI models for a core part of the virtual assistant focused on reasoning, natural conversation, and flexible text generation. I compared closed-source APIs, enterprise hosts, and open-source options, factoring in cost and benefits. This step was key in shaping strategy, helping the team align on what would work best for immediate needs while setting the stage for long-term scalability.

Technical
Before moving into design, I focused on the technical side of the project from a design perspective. This meant understanding limitations, aligning on strategy, and defining a vision that would optimize efficiency, reduce team effort, lower costs, and support future scalability. Creating technical specifications and artifacts early gave development a strong foundation and kept teams unified, while also giving the business clarity to make smarter cost-saving decisions. This step ultimately shaped how the assistant would function, look, and evolve, making technical strategy a key driver of both design and business impact.
Defining Capabilities and Limitations
The first meeting with development proved to be a turning point, as it allowed me to capture the right requirements to build accurate diagrams, wireframes, and user flows. By asking focused questions, I was able to shape the overall strategy for the virtual assistant while also identifying what was already available through APIs, knowledge bases, and existing code. Gathering these insights early brought tremendous value: it kept both teams aligned, eliminated wasted effort, and gave the business confidence that goals, costs, and expectations were being met. This alignment ensured that every design decision moving forward was grounded in both technical feasibility and strategic value.
Laying Out The Conceptual Framework
Early in the project, it became clear that while the vision for the AI assistant was exciting, each team was viewing it through a different lens. Development needed clarity on system interactions, the business wanted to understand scope and costs, and design needed to map the user experience from start to finish. To bring everyone together, I created the conceptual framework.
This framework became the turning point. It mapped out the assistant as a complete system, showing what it is, what parts it includes, and how those parts connect. More importantly, it gave all teams a shared understanding of the scope and boundaries of the project. It also showed where intelligence and automation would fit into TreeRing’s experience and how they would directly address the pain points uncovered in research.
I structured the framework into five swimlanes that followed the journey step by step: beginning with user input, flowing through natural language processing and the contextual layer, moving into the response engine logic, and finally returning to the user through output and interaction. This high-level view helped everyone see how the front end and back end would align seamlessly, and it gave the business and development teams the confidence to move forward knowing both feasibility and value were clearly defined.
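Read as code, the five swimlanes map onto a simple pipeline. The sketch below is illustrative only; the interfaces, stage names, and stub return values are assumptions used to show the flow, not the actual implementation.

```typescript
// Illustrative skeleton of the five swimlanes as one processing pipeline.
// Interfaces, names, and stub values are assumptions, not the real system.
interface Intent { label: string; confidence: number }
interface AssistantContext { page: string; role: string }
interface AssistantReply { text: string; links: string[] }

// 1. User input arrives as the raw message.
async function handleUserMessage(message: string): Promise<AssistantReply> {
  const intent = await parseIntent(message);            // 2. natural language processing
  const context = await buildContext();                 // 3. contextual layer
  const reply = await decideResponse(intent, context);  // 4. response engine logic
  return reply;                                         // 5. output and interaction
}

// Stage stubs so the sketch stays self-contained.
async function parseIntent(message: string): Promise<Intent> {
  return { label: "change_theme", confidence: 0.9 };
}
async function buildContext(): Promise<AssistantContext> {
  return { page: "editor/themes", role: "yearbook_editor" };
}
async function decideResponse(intent: Intent, ctx: AssistantContext): Promise<AssistantReply> {
  return { text: `Guidance for ${intent.label} on ${ctx.page}`, links: [] };
}
```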

Structuring the Response Logic System
As we moved deeper into the project, I realized that clarity around how the AI would make decisions was essential for shaping the user experience. I created the response decision logic tree to map out every path a user request could take, from simple templates to knowledge base retrieval to escalation when needed. This artifact gave us more than just technical alignment; it showed the bigger picture of how intelligence and automation would fit into the TreeRing experience. By laying out edge cases, confidence thresholds, and sentiment handling rules, the tree allowed stakeholders to see where the assistant could add the most value, where limitations existed, and how scalability could be achieved over time. It became both a blueprint for implementation and a strategic guide for how the assistant should grow with the product.
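A compressed version of the tree's core branching is sketched below. The thresholds, the template list, and the escalation rule are placeholder values chosen for illustration, not the numbers used in the actual logic tree.

```typescript
// Compressed sketch of the decision tree's core branches. Thresholds, the
// template list, and the escalation rule are placeholders for illustration.
const CONFIDENCE_THRESHOLD = 0.7;   // below this, ask the user to clarify
const NEGATIVE_SENTIMENT = -0.5;    // below this, hand off to a human

type Decision =
  | { kind: "template"; templateId: string }
  | { kind: "knowledge_base"; query: string }
  | { kind: "clarify" }
  | { kind: "escalate_to_human" };

const TEMPLATED_INTENTS = new Set(["reset_password", "order_status"]);

function decide(intentLabel: string, confidence: number, sentiment: number): Decision {
  if (sentiment < NEGATIVE_SENTIMENT) {
    return { kind: "escalate_to_human" };                   // frustrated user: skip self-service
  }
  if (confidence < CONFIDENCE_THRESHOLD) {
    return { kind: "clarify" };                             // low confidence: ask a follow-up question
  }
  if (TEMPLATED_INTENTS.has(intentLabel)) {
    return { kind: "template", templateId: intentLabel };   // simple, known request
  }
  return { kind: "knowledge_base", query: intentLabel };    // otherwise retrieve an article
}
```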

Building the Contextual Layer and Sentiment Flow
As the project moved into technical discovery, I focused on making the assistant both emotionally aware and contextually smart. I created a sentiment handling flow chart to define how the assistant adapts its tone, prevents repetitive loops, and knows when to escalate to a human. This helped build user trust, improved satisfaction, and reduced strain on the support team.
I also created a contextual layer map to give the assistant awareness of where the user is, what they are doing, and what has already happened. This ensured the assistant could provide precise guidance instead of generic answers and hand off key information to support teams when escalation was required. For developers, the map served as a clear blueprint of the data to fetch and integrate.
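To suggest what these two artifacts describe in data terms, here is a minimal sketch of the conversation state, assuming hypothetical field names rather than the shipped schema. The context captures who the user is, where they are in the editor, and what has already happened, while the failed-attempt counter and sentiment score are what break repetitive loops and trigger a warm handoff with that context attached.

```typescript
// Hypothetical shape of the conversation state behind the contextual layer
// and sentiment flow; field names are illustrative, not the shipped schema.
interface ConversationState {
  userRole: "editor" | "parent" | "school_admin";
  editorLocation: string;        // e.g. "editor/page-layout"
  recentActions: string[];       // what the user has already tried
  sentimentScore: number;        // running sentiment, -1 (negative) to 1 (positive)
  failedAttempts: number;        // unresolved turns on the same issue
}

// Loop prevention: after repeated failures or clearly negative sentiment,
// stop retrying and hand off to support with the context attached.
function shouldEscalate(state: ConversationState): boolean {
  return state.failedAttempts >= 3 || state.sentimentScore < -0.5;
}

// The handoff summary is what lets an agent pick up without re-asking questions.
function buildHandoffSummary(state: ConversationState): string {
  return [
    `Role: ${state.userRole}`,
    `Location: ${state.editorLocation}`,
    `Already tried: ${state.recentActions.join(", ") || "nothing yet"}`,
  ].join("\n");
}
```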
Together, these artifacts transformed the assistant from a simple bot into a more empathetic and context-driven product that aligned with user needs and gave all teams a shared framework for building and supporting the experience.


Design
In the last phase, I moved from research and planning into hands-on design execution. My goal was to provide development with clear, build-ready artifacts in Figma. I explored several chatbot interface ideas, assembled a component library to establish consistent styles, prototyped interactions, and concluded by writing comprehensive design requirements to ensure smooth handoff.
Defining How It Looks and Functions
I began the design process by testing different ways to interact with users, with a strong intention to move beyond static, option-only virtual assistants. These rigid systems often frustrate people and push them toward customer service faster than necessary. Instead, I leaned into AI capabilities that allowed users to type naturally, while the system interpreted intent, analyzed tone, applied confidence scores, and tapped into knowledge bases. The result was a smoother, more empathetic, and human-feeling interaction.
In this stage, I explored different design directions and determined that the only clickable options users truly needed were direct links to relevant resources, such as the TreeRing Salesforce knowledge base, which houses the foundational information about the editor tool. The focus remained on preserving a free-form prompt experience that felt conversational and human, while also enabling the system to gather contextual details about the user’s role and location within the editor. With this intelligence, the assistant could surface tailored guidance drawn directly from the knowledge base, balancing self-service efficiency with a more intuitive, human-centered interaction.
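As a small illustration of how the free-form prompt and contextual awareness could come together, the sketch below pairs a conversational answer with knowledge base links chosen from the user's current location in the editor. The mapping and URLs are placeholders, not TreeRing's real knowledge base structure.

```typescript
// Illustrative only: pair a conversational answer with knowledge base links
// chosen from the user's current location in the editor. The mapping and the
// URLs are placeholders, not TreeRing's real knowledge base structure.
const KB_LINKS_BY_LOCATION: Record<string, string[]> = {
  "editor/themes": ["https://support.example.com/articles/changing-your-theme"],
  "editor/page-layout": ["https://support.example.com/articles/arranging-pages"],
};

function buildReply(answerText: string, editorLocation: string) {
  return {
    text: answerText,                                           // free-form, conversational answer
    suggestedLinks: KB_LINKS_BY_LOCATION[editorLocation] ?? [], // clickable resource links
  };
}
```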


Scalability
A central focus of this project was asking, “How might we scale this product for future launches while improving both usability and customer satisfaction?” This question guided my thinking from beginning to end, as the impact extended beyond users and support agents to include sales outcomes as well. My goal was to ensure the solution was not a short-term fix but a scalable foundation, one that could grow with the product, adapt to new needs, and deliver lasting value across the organization.
Building The Foundation For A Scalable AI Virtual Assistant
Every AI virtual assistant is shaped by the unique needs of the product it serves. For TreeRing’s assistant, there were many factors to balance, including timing, resources, development expertise, user research, sales expectations, and customer service demands. The roadmap needed to address these priorities at different stages, but the very first version had to launch quickly, by mid-fall, before the yearbook season reached its peak.
When I asked myself, “How would I define TreeRing’s scalable AI virtual assistant?” I described it as an intelligent support system built directly into the yearbook creation platform, or Editor. Its purpose is to help users answer questions, complete tasks more efficiently, and work with greater confidence, reducing the guesswork and frustration common today. The assistant will surface relevant resources, guide users through complex workflows, anticipate needs, and even complete tasks on their behalf. More than a simple support tool, it delivers a conversational, human-like experience that scales knowledge across editors, parents, and schools, ultimately driving higher satisfaction and stronger adoption of the platform.
Roadmapping The Future
As part of the scalability phase, I worked closely with stakeholders and management to align on what TreeRing’s AI virtual assistant should deliver in the short and long term. We framed this vision as a strategic journey across three versions, each building on the last.
Version one established the foundation: a familiar assistant that could answer questions, surface knowledge base articles, and connect users to support. While simple, it set the stage by leveraging existing AI tools to create a more conversational and human-centered experience.
Version two expanded the assistant’s role, shifting toward a hands-off experience where it could complete tasks like changing themes or intelligently sourcing graphics. This required a hybrid approach, blending market-ready AI with TreeRing-specific algorithms tailored to the editor tool.
Version three looked further ahead, positioning the assistant as a proactive partner. By incorporating user metrics and behavioral data, it could anticipate needs, guide progress, and even make decisions within defined parameters. This long-term vision ensured scalability and framed the assistant as a core driver of user satisfaction and product growth.
