Overview
This project focuses on integrating an AI tool that helps users showcase their memories by automatically selecting their best photos and curating layouts for their yearbook pages.
Through my earlier research into AI opportunities, I discovered that parents in particular struggled with identifying, selecting, and placing photos. They described the process as time-consuming and frustrating, with frequent tool lag and grids that wouldn’t snap into place. On top of that, images often printed blurry, distorted, or cropped in ways that cut off faces, leading to even more frustration and an increase in support calls for costly reprints.
Framing The Problem
The first step in my process was to frame the problem before jumping into solutions. It was important to ensure everyone had a clear understanding of the scope and nature of the challenge, why it mattered, what we were solving, and who would be impacted. I translated the earlier research into actionable statements that set expectations and aligned the team. This documentation became a critical reference point throughout the project, especially during moments of disagreement, shifting scope, or changing timelines.

Strategizing and Finding A Solution
The solution needed to balance several priorities: Sales required AI to support the fall launch, users wanted a more streamlined and modern experience, and management aimed to improve efficiency while reducing pressure on customer support. AI addressed all three goals. I focused on designing a tool that could automatically select the best photos from a user’s album and generate multiple layout options, making the process faster, easier, and less error-prone.
Creating Alignment
After identifying a solution that met the needs of all parties, my next step was to align the teams under a shared vision. I framed the tool not only in terms of user experience but also in the value it would bring to the company. Establishing a consistent design language and clearly explaining the rationale to developers ensured buy-in and helped justify resource allocation.
Role
Senior Product Designer
Duration
8 Months
Tools
Figma
Mural
Jira
Deliverables
Vision Statement, Problem Framing Documentation, User Scenarios, Service Blueprints, Annotated Wireframes, Prototype, Reason Codes Directory, Empty/Fallback State Diagrams, API Contracts, JSON Specs, & Interaction Rules Documentation.
Designing AI Photo Curation and Dynamic Layouts
After laying the groundwork through preliminary research and uncovering opportunities for AI within the TreeRing yearbook tool, I moved into the next chapter of the project: designing an AI-powered photo curation and layout experience. This tool, set to launch in fall 2025, became the bridge between research insights and a tangible product vision.
Service Design
The next step was to map the full end-to-end experience, capturing both the frontstage and backstage interactions. I illustrated how AI curation, layout generation, and guardrails connected with the UI at each stage, broken down into seven swimlanes. This visualization revealed critical dependencies, such as AI responses, image scoring, and template rendering, which allowed the development team to plan sequencing more effectively.
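To make that sequencing concrete, the backstage flow can be sketched as a simple dependency map. The stage names below are hypothetical stand-ins that mirror the blueprint's swimlanes, not the actual system names defined by engineering.

```typescript
// A minimal sketch of the curation pipeline's backstage dependencies,
// using hypothetical stage names that mirror the blueprint's swimlanes.
type Stage =
  | "uploadIngest"
  | "imageScoring"
  | "photoSelection"
  | "layoutGeneration"
  | "templateRendering"
  | "guardrailReview"
  | "uiPresentation";

// Each stage lists the stages that must complete before it can run,
// one possible ordering the development team could sequence against.
const dependencies: Record<Stage, Stage[]> = {
  uploadIngest: [],
  imageScoring: ["uploadIngest"],
  photoSelection: ["imageScoring"],
  layoutGeneration: ["photoSelection"],
  templateRendering: ["layoutGeneration"],
  guardrailReview: ["templateRendering"],
  uiPresentation: ["guardrailReview"],
};
```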
Creating The Blueprint
First, I designed the service blueprint to show how the user experience connects with the backend processes triggered by each action. It gave management a clear view of the coordination and effort required, while also helping less technical teams understand the shared goals and the steps needed to reach them. Swimlanes were divided into customer actions, frontstage UI, backstage systems, and supporting processes, creating a common map that aligned everyone around both the vision and the execution.
Predefining The Reason Codes Directory
Next, I developed the reason codes directory to translate AI decision-making into human-readable explanations for each photo the AI selects or rejects, defining the rationale behind every outcome. This ensured the UI could communicate decisions with clarity and consistency, building user trust and supporting content adjustments when necessary. For developers, the directory established a structured vocabulary of outputs to expect from the AI, which simplified mapping to the appropriate UI components and reduced ambiguity during implementation. Incorporated into the end-to-end flow, the reason code touchpoints documented exactly how AI logic interacts with the interface and backend, creating a precise operational map that guided both design and development planning.
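As an illustration of the directory's shape, each entry can be modeled as a stable machine code paired with the user-facing copy the UI shows for it. The codes and messages below are hypothetical stand-ins, not the shipped entries.

```typescript
// Hypothetical reason codes illustrating the directory's structure:
// a stable identifier the AI returns, the outcome it explains, and
// the human-readable message the UI surfaces for it.
interface ReasonCode {
  code: string;
  outcome: "selected" | "rejected";
  userMessage: string;
}

const reasonCodes: ReasonCode[] = [
  {
    code: "FACE_CLARITY_HIGH",
    outcome: "selected",
    userMessage: "Chosen because faces are sharp and well lit.",
  },
  {
    code: "DUPLICATE_SHOT",
    outcome: "rejected",
    userMessage: "Skipped because a very similar photo was already picked.",
  },
  {
    code: "LOW_RESOLUTION",
    outcome: "rejected",
    userMessage: "Skipped because this photo may print blurry at page size.",
  },
];
```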

Design
The next phase of the project focused on design execution, where I produced annotated wireframes, high fidelity mockups, and a prototype to define the product vision in detail. These deliverables established a single source of truth for all teams, eliminated ambiguity, and streamlined the development handoff. By anticipating edge cases early, they also provided a solid foundation for technical documentation and ensured the product could be built with consistency and efficiency.
Annotating Wireframes
The annotated wireframes served as a communication tool to define the structure and functionality of the interface at a mid-fidelity level. Detailed notes explained key features such as behaviors, constraints, and interaction logic, bridging the gap between conceptual design and high fidelity visuals. This ensured development had a clear understanding of how each element should behave, reduced rework, and provided a structured framework for more effective cross-team reviews.

Designing High Fidelity Mockups & Prototype
The high fidelity mockups became a turning point in the project, transforming abstract ideas into a polished, tangible vision of the product. By locking in the colors, typography, spacing, and UI components, they defined the look and feel that users would ultimately experience. These mockups not only conveyed the visual identity but also built confidence with stakeholders by showing how the final product would come to life. For the team, they served as a single source of truth, eliminating ambiguity and ensuring consistency across every detail. I also designed the mockups to anticipate real-world scenarios, including guardrails, fallbacks, and low-data states, so the product would feel seamless and intentional in any context. In this way, the mockups bridged vision and execution, aligning everyone on both the aesthetics and the functionality of the final build.

Technical
In the final phase, I moved from design execution into technical documentation. My goal was to provide development with clear, build-ready artifacts in Figma. I explored several interface concepts, assembled a component library to establish consistent styles, prototyped interactions, and concluded by writing comprehensive design requirements to ensure a smooth handoff.
Drafting The Empty/Fallback State Diagrams
I created the empty and fallback state diagrams to define how the interface should respond when the AI could not return an ideal result. Their purpose was to maintain transparency with users by clearly explaining why a result was unavailable, which built trust and let them adjust content when needed. For developers, the diagrams provided a structured vocabulary and clear mappings between AI outputs and UI components, ensuring consistency with the design language. By establishing these patterns up front, we reduced rework, preserved a consistent tone in AI explanations, and created scalable frameworks that could extend to future features. This also enabled development to handle API failures and guardrail rejections gracefully without blocking the user experience.
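A sketch of how those mappings might look in code, assuming hypothetical failure reasons and UI copy rather than the documented set:

```typescript
// Hypothetical mapping from failure reasons to the fallback UI state
// the diagrams prescribe, so API errors and guardrail rejections
// degrade gracefully instead of blocking the user.
type FailureReason = "API_TIMEOUT" | "GUARDRAIL_REJECTED" | "NO_USABLE_PHOTOS";

interface FallbackState {
  headline: string; // explains why a result is unavailable
  action: string;   // what the user can do next
}

const fallbackStates: Record<FailureReason, FallbackState> = {
  API_TIMEOUT: {
    headline: "We couldn't reach the AI just now.",
    action: "Retry, or keep editing your page manually.",
  },
  GUARDRAIL_REJECTED: {
    headline: "This layout didn't meet our print-quality checks.",
    action: "Try another layout or swap in different photos.",
  },
  NO_USABLE_PHOTOS: {
    headline: "We couldn't find enough photos that print well.",
    action: "Add more photos to this album and try again.",
  },
};
```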

Outlining API Contracts
I constructed the API contracts documentation to define the structure, inputs, and outputs of the endpoints powering AI curation and layout. This ensured that design requirements were built into backend planning from the start by specifying exactly what data the UI needed to send and receive. The contracts established a shared language between design and engineering, reducing ambiguity and preventing misalignment during implementation. In parallel, guardrails were defined to protect the intended user experience by blocking low quality or broken layouts before they reached the interface. These also provided developers with clear, testable rules for rejecting or adjusting AI outputs, ensuring reliability and consistency in the final product.
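For illustration, a contract for the curation endpoint might pair typed request and response shapes with a testable guardrail check. The field names and threshold below are hypothetical; the real contracts were defined with engineering.

```typescript
// Hypothetical shape of the photo-curation contract.
interface CurateRequest {
  albumId: string;
  pageId: string;
  maxPhotos: number; // how many photos the layout needs
}

interface CuratedPhoto {
  photoId: string;
  score: number;      // 0..1 quality score from image scoring
  reasonCode: string; // maps into the reason codes directory
}

interface CurateResponse {
  photos: CuratedPhoto[];
  layoutTemplateId: string;
}

// Guardrail: a clear, testable rule for rejecting low-quality
// results before they reach the interface.
const MIN_SCORE = 0.6; // assumed threshold, for illustration only

function passesGuardrails(res: CurateResponse, req: CurateRequest): boolean {
  return (
    res.photos.length === req.maxPhotos &&
    res.photos.every((p) => p.score >= MIN_SCORE)
  );
}
```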

Creating The Interaction Rules Documentation
When the AI began placing photos, I realized it was critical to define exactly how users could interact with those placements. Without clear rules, interactions risked becoming inconsistent across devices, leading to confusion and frustration. To address this, I created the interaction rules documentation, which outlined what users could and could not do once photos were placed. This not only gave developers a precise framework to follow but also ensured the final build aligned with usability standards and the brand’s design language. By documenting these rules early, I helped safeguard a smooth, predictable experience for users while keeping the product scalable and maintainable for the team.
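One way to encode such rules is as a small capability matrix per placement context. The permissions below are illustrative, not the shipped rule set, which was documented per device and component in Figma.

```typescript
// Illustrative interaction rules for an AI-placed photo.
interface PlacementRules {
  canSwapPhoto: boolean;       // replace the photo, keep the frame
  canResizeFrame: boolean;     // change the frame's dimensions
  canDragReorder: boolean;     // move the photo to another slot
  canCropWithinFrame: boolean; // adjust the crop inside the frame
}

// Rules can differ by context, e.g. locked templates vs. free-form pages.
const rulesByContext: Record<"lockedTemplate" | "freeForm", PlacementRules> = {
  lockedTemplate: {
    canSwapPhoto: true,
    canResizeFrame: false, // preserves the generated layout
    canDragReorder: true,
    canCropWithinFrame: true,
  },
  freeForm: {
    canSwapPhoto: true,
    canResizeFrame: true,
    canDragReorder: true,
    canCropWithinFrame: true,
  },
};
```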
Drafting The Template JSON Specifications
Finally, I drafted the template JSON specifications, which defined each layout template as structured data for the systems generating and rendering pages. By specifying templates in a machine-readable format, the documentation gave developers an unambiguous reference for how generated layouts should be assembled, keeping AI-produced pages consistent with the design language and with the guardrails established in the API contracts.
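As a sketch of the kind of structure these specs defined, a template can be expressed as typed data that serializes directly to JSON. The field names and sample values here are hypothetical.

```typescript
// Hypothetical template spec illustrating the structure the JSON
// documents defined; the real field names and values may differ.
interface PhotoSlot {
  id: string;
  x: number;           // position as a fraction of page width (0..1)
  y: number;           // position as a fraction of page height (0..1)
  width: number;
  height: number;
  aiFillable: boolean; // whether the AI may place a photo here
}

interface LayoutTemplate {
  templateId: string;
  minPhotos: number;
  maxPhotos: number;
  slots: PhotoSlot[];
}

// Serialized to JSON, a two-photo template might look like this:
const sampleTemplate: LayoutTemplate = {
  templateId: "two-up-portrait",
  minPhotos: 2,
  maxPhotos: 2,
  slots: [
    { id: "left", x: 0.05, y: 0.1, width: 0.42, height: 0.8, aiFillable: true },
    { id: "right", x: 0.53, y: 0.1, width: 0.42, height: 0.8, aiFillable: true },
  ],
};
```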
Thoughts
Looking back, much of this project's impact came from documentation as much as design. Framing the problem early, defining reason codes, specifying API contracts, and setting interaction rules shaped the product just as much as the mockups, and gave every team a shared source of truth. That combination carried the work from research insights to a build-ready vision for the fall 2025 launch.


