
Service Design (A Home Depot Company)
The leader in custom window treatments online

OVERVIEW

The company is preparing to scale a program called Measure and Install (M&I) that serves both DIY and DIFM customers. Under this program, a licensed professional goes to a customer’s home to measure and install their window treatments.

To comply with company policy, I have omitted and obfuscated confidential information in this case study.


Team

  • Product Manager
  • UX Designer + Researcher
  • Lead Engineer
  • Team of Developers


Timeline

8 sprints


Improve processes to prepare to scale a program nationwide

Our goal for the project was to automate parts of the process to prepare for the nationwide rollout of the program. The existing process is manual, time-consuming, and error-prone.

Our high level goals were to:

  • Connect the two systems used in the program to allow for data transfer
  • Use that data to automate specific steps in the process
  • Track orders that failed automation and the reason why

I led the design and research of the experience while collaborating closely with two Product Managers, two Senior Software Engineers, one Quality Analyst, and five Software Engineers.

In addition, I worked with another UXer on research, the stakeholder of the project (the Services Team Project Coordinator), and reported directly to company leadership about progress and impact along the way.

The program is set to launch nationwide in Q2 of 2022, and close to $1 million in orders has already run through automation in the first 6 months.


Analyze the current state to identify opportunities

How did I get to this? A Service Blueprint that shows the interaction between people, technology, and processes. Due to confidential information, I can’t show that here.

“Early findings led us to believe we could automate so much of the process that we projected 80% of orders to be fully automated after launch.”


Internal Users: Two teams serving two different customer types

Both teams report to one Project Coordinator but manage two separate flows within the program: DIY vs. DIFM. I spoke with the Project Coordinator to get a better understanding of each team’s day-to-day. From his responses I created an empathy map and found that both teams shared overlapping pain points and concerns around similar topics.

DIY TEAM

DIFM TEAM

External Users: Technicians from multiple Service Providers

The actions of the technicians during the Measure and Install service directly impact this feature. It was paramount to get a better understanding of their behavior and current training, so I partnered with another UXer to conduct moderated interviews.

We interviewed Technicians and the Service Providers that hire and train them.


Management, analysis, and resolution

To our surprise, every Technician, regardless of the Service Provider they worked for, completed each job and used our mobile app the same way. This helped us define specific triggers for failing orders that needed human resolution before they were released to production, and to pinpoint interesting customer behavior that could greatly impact the DIY side of the program. See accuracy vs. clarity.

We also defined a user-friendly way to track and manage failed orders automatically. This drastically reduced the time the Project Coordinator spent building and analyzing an Excel sheet.

Other findings…


The page the DIY Services Team uses is missing from the navigation. The only way to access it is to bookmark the link given to you during training.

Other findings…


Creating a bridge between Systems A and B will aid in matching jobs to orders one-to-one from an accuracy standpoint. But, due to customer behavior, we don’t have 100% clarity on whether customers and technicians are measuring the same thing.

Scope Creep


Orders that needed human intervention would fail at the order level, meaning that if even one line item needed human intervention, no automated steps would take place and the entire order would be assessed by the Services Team. This allowed us to:

  • Stay on schedule
  • Test the feature (collect data)
  • Find any opportunities and/or bugs
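The order-level failure rule described above can be sketched in a few lines. This is a hypothetical illustration, not the production code; the field and function names (`needs_intervention`, `route_order`) are assumptions for the sake of the example.

```python
# Sketch of the order-level failure rule: if any line item needs
# human intervention, the whole order is routed to the Services Team
# instead of proceeding through the automation pipeline.

def route_order(line_items):
    """Return 'manual' if any line item is flagged, else 'automated'."""
    if any(item["needs_intervention"] for item in line_items):
        return "manual"
    return "automated"

order = [
    {"sku": "BLIND-01", "needs_intervention": False},
    {"sku": "BLIND-02", "needs_intervention": True},
]
print(route_order(order))  # "manual": one flagged item fails the whole order
```

Failing at this coarse granularity kept the schedule intact, but, as the data later showed, it also pulled many automatable line items into the manual queue.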


Navigation update and order management screen

After multiple rounds of revisions and prioritizing features based on our agile approach, we were able to create an accessible path to the current tracker for the DIFM team and a new, real-time tracker for the DIY team.


Fixing the user experience of the current tracker that is used by this team was deemed out of scope since it’s preexisting. However, we were able to add a dedicated menu item for both teams’ Order Tracking pages.


This new screen collects orders that failed automation, shows users the failure reason, provides navigation buttons for easy access to both systems to resolve the issues, and then automatically removes resolved orders from the screen.
How did I come to this solution? Multiple rounds of revisions and sign-off from stakeholders, senior leadership, and lead engineers. Due to confidential information, I can’t show that here.
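The behavior of that screen can be modeled as a simple queue. The sketch below is hypothetical (the class and field names are mine, and the real screen is backed by internal systems), but it captures the three behaviors described above: collecting failed orders with a reason, surfacing the oldest first, and dropping resolved orders automatically.

```python
# Minimal model of the failed-order management screen:
# failed orders arrive with a reason, the oldest are shown first,
# and resolving an order removes it from the view.

from dataclasses import dataclass

@dataclass
class FailedOrder:
    order_id: str
    failure_reason: str  # shown to the user on the screen

class FailedOrderQueue:
    def __init__(self):
        self._orders = []  # oldest first, so stale orders surface at the top

    def add(self, order: FailedOrder):
        self._orders.append(order)

    def resolve(self, order_id: str):
        # Resolved orders disappear from the screen automatically.
        self._orders = [o for o in self._orders if o.order_id != order_id]

    def view(self):
        return list(self._orders)

queue = FailedOrderQueue()
queue.add(FailedOrder("1001", "Measurement mismatch"))
queue.add(FailedOrder("1002", "Missing technician data"))
queue.resolve("1001")
print([o.order_id for o in queue.view()])  # ["1002"]
```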


Celebrate the wins


The new navigation provided easier access and reduced friction for current and new members of the DIFM team at the very beginning of their day-to-day process.


This screen removes the step where the Project Coordinator had to export a daily Excel sheet of orders, manually filter for the ones that needed human intervention, and divide and assign orders to his team.

The list is sorted from oldest to newest to ensure orders don’t stay in the queue too long, which would impact the customer experience.

The export feature allows the Project Coordinator to send specific orders to different departments across the org when necessary (e.g., when the sales team could capture more product purchases than originally ordered).

Users are also provided with important information, such as the “Failure Reason,” reducing time spent on the previous process of manually investigating each order to discover the issue.

Learn from the data

Though we launched a successful feature, there was still room for improvement. Data showed us that only 47% of orders were successfully flowing through the automation process, much less than we originally projected.

After some investigation, we realized that failing at the order level was still leaving the Services Team with a great deal of manual work for orders where only one line item required manual intervention.

On average, only 1 line item is flagged per order
On average, there are 13 line items per order

This means that 364 line items (28 flagged orders × 13 line items each) must be manually resolved using the current feature as is, when only the 28 flagged line items actually require it; the rest could be automated.

Updating the feature to fail only the line items that actually require manual intervention, rather than the entire order, would eliminate 92% of the manual work required to resolve those 28 orders.
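The 92% figure follows directly from the averages above. A quick back-of-envelope check, using only the numbers stated in the case study (28 flagged orders, 13 line items per order, 1 flagged line item per order):

```python
# Order-level vs. line-item-level failure: how much manual work each implies.

flagged_orders = 28
avg_line_items_per_order = 13
flagged_items_per_order = 1

# Order-level failure: every line item in a flagged order is handled manually.
manual_order_level = flagged_orders * avg_line_items_per_order  # 364

# Line-item-level failure: only the flagged items need manual work.
manual_item_level = flagged_orders * flagged_items_per_order  # 28

reduction = 1 - manual_item_level / manual_order_level
print(f"{manual_order_level} vs {manual_item_level} items; "
      f"{reduction:.0%} less manual work")  # 364 vs 28 items; 92% less manual work
```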


The program is set to launch nationwide in Q2 of 2022, and close to $1 million in orders has already run through automation in the first 6 months.

The new team assigned to this project will continue to incrementally release new enhancements as we collect more data.

