
UX Case Study

Network Airing Log Ingestion Web App

Various screens from the airing logs manager app

Role

UX Engineer

Software

Figma, Notion, PHP, JS, React, HTML, CSS

UX Skills

Interview Guide, User Interviews, Heuristic Evaluation, Service Blueprint, Interactive Prototypes, User Testing, Visual Design

Year

2023

Overview

This case study is an overview of a 9-month project that I completed with my team at ProMedia, a media buying agency based in Miami. The existing developer-designed product that ProMedia had was difficult to use with a high learning curve. I was hired and tasked to design a better product.

Background

This project was to design and develop a system for airing logs from 2,000+ networks to be ingested into an in-house ERP. The files received are all separate formats and many have missing data or incorrect airings that need to be corrected.
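To illustrate the ingestion problem, each network format can be handled by its own parser that normalizes files into one record shape and flags missing data for a coordinator. This is a minimal sketch only; the field names, CSV layout, and registry design are hypothetical, not ProMedia's actual schema.

```typescript
// Sketch: normalize heterogeneous airing log formats into one record shape.
// Field names and the CSV layout are illustrative, not the real schema.

interface AiringRecord {
  network: string;
  spotId: string;
  airedAt: string;          // ISO 8601 timestamp
  durationSeconds: number;
  issues: string[];         // missing/suspect fields flagged for review
}

type Parser = (raw: string, network: string) => AiringRecord[];

// A registry routes each file to the parser for its format, so new network
// formats can be added without touching the rest of the pipeline.
const parsers: Record<string, Parser> = {
  csv: (raw, network) =>
    raw.trim().split("\n").slice(1).map((line) => {
      const [spotId, airedAt, duration] = line.split(",");
      const issues: string[] = [];
      if (!spotId) issues.push("missing spot ID");
      const durationSeconds = Number(duration);
      if (!Number.isFinite(durationSeconds)) issues.push("bad duration");
      return { network, spotId, airedAt, durationSeconds, issues };
    }),
};

function ingest(format: string, raw: string, network: string): AiringRecord[] {
  const parse = parsers[format];
  if (!parse) throw new Error(`No parser registered for format: ${format}`);
  return parse(raw, network);
}
```

Records with a non-empty `issues` list would be the ones routed to a Media Coordinator instead of completing automatically.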

Airing logs are an important way that television networks communicate the advertisements they run on air with media buying agencies.

Getting airing logs ingested quickly is crucial to having up-to-date information for the Media Buyers on the team. ProMedia's ERP allows buyers to adjust their bids in real-time based on the airing log data from the networks.

Problem statement

Media Coordinators are spending too much time getting airing logs into the system and verifying the information. This prevents them from helping with other tasks on the team. It is also a problem for Media Buyers, because the data isn't available in the system when they're determining the weekly ad spend, potentially resulting in bids that are too low or too high.

Target audience

Media Coordinators were the target persona for this project. Other stakeholders included Media Buyers and Support Agents.

Strategy

I created a project plan to understand the problem more deeply before jumping to any solutions. I conducted user research with the team, used the findings to inform the design, and refined the design with further user testing.

  • Background
    • Interview the manager to gain background knowledge
    • Complete heuristic evaluation on the current system
  • Research
    • Write a research plan
    • Perform user interviews and analyze the results
    • Complete a customer journey map and service blueprint
  • Design and user testing
    • Identify user stories and create interactive prototypes
    • Perform usability tests
    • Adjust prototypes and test again
  • Develop, QA, UAT, deploy

Heuristic evaluation

I completed a heuristic evaluation to get stakeholder buy-in, as well as to identify areas of focus for user interviews.

User research

User interviews

I wrote an interview guide, conducted five interviews with Media Coordinators, and collected their responses in Notion to stay organized.

A few questions that I asked:

  • Can you tell me about how managing the logs impacts your job?
  • Can you explain to me when and why you manage logs? How often?
  • How do you know if you have logs you need to address, if any?
FigJam board with affinity mapped stickies
Answers to the interview questions, added to FigJam as stickies and affinity mapped by question to identify patterns

Interview analysis

I transferred the interview answers to stickies on a FigJam board and affinity mapped the responses by question. These are some patterns that I discovered.

Top user pain points

  • Parsing the airing log – All users had complaints about the log parsing process.
    • Logs that used to parse no longer parse, and users don't know why.
    • When file formats change, the system seems to break.
    • They were not sure if some features were turned off, such as the automatic analysis system that handles logs that don't need help.
    • The system doesn't learn from their actions and improve over time, as they expected.
  • System is slow – The system isn't working as quickly as users want, and they don't know why. They either wait, upload the log themselves, or submit a ticket.
  • System is confusing – A couple of users mentioned that the system is very confusing, and when something goes wrong they don't know what's happening.

When users are work-stopped

All users said that when the system isn't working, they can't do their jobs. Managing logs is their primary responsibility, even though they sometimes pick up the slack for other members of the media team.

When users contact support

Most users really appreciate support and think the support team is consistent, does a great job, and is generally quick.

Time-consuming actions

Most users thought that troubleshooting errors was the most time-consuming part of the logs process. A few other time-consuming tasks included sending out emails, uploading files, and double-checking the system's work.

Where trust is limited

Users thought the least trustworthy part of the process is when the log goes through the parser. Some said that it has gotten better over time, but they still don't fully trust it and need to verify the parser's work – especially for bigger logs involving A/B tests, multiple creatives, and different links and performance data.

A screenshot from the old system that was confusing to users

Key insights

The two top concerns I heard from nearly every user were that the system is too slow and can't be trusted.

Overall, users liked the logs feed over manual entry, but they still had many frustrations. As one user put it, "It's hard to understand the vision of how we want it to look, and it's difficult to portray to the dev team what we actually want."

The users primarily wanted:

  • A better understanding of how the system works
  • To spend less time troubleshooting and fixing issues

Understanding the user and process

Personas

After I understood the goals of the users and their main frustrations, I came up with personas for my team to have a better understanding of our users.

  • Media Coordinator
  • Media Buyer

Service blueprint

I mapped out the user journey of the Media Coordinator persona and the system functions as a service blueprint. This diagram was very useful for working cross-functionally with development and design to understand the process.

A service blueprint of the new system
A service blueprint diagram of the proposed system following the Media Coordinator's journey, including each of the people and systems involved

Prototypes and iterative testing

Object planning

I mapped out in FigJam how the data would behave as objects and planned out the statuses. This was helpful for the development team, and it also ensured we stayed consistent, so our users could develop a strong mental model of how the system worked.
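Planned statuses like these can be expressed as an explicit state machine so the UI and backend agree on what transitions are legal. The status names and transitions below are illustrative, not the exact set we shipped.

```typescript
// Sketch: log statuses as an explicit state machine. Names and
// transitions are illustrative, not the shipped set.

type LogStatus =
  | "received"
  | "parsing"
  | "needs_review"   // parser flagged missing or suspect airings
  | "analyzing"
  | "sent";          // analysis delivered to the Media Buyer

// Allowed transitions; anything outside this map is a bug. Keeping one
// source of truth supports a consistent mental model across the product.
const transitions: Record<LogStatus, LogStatus[]> = {
  received: ["parsing"],
  parsing: ["needs_review", "analyzing"],
  needs_review: ["analyzing"],
  analyzing: ["sent"],
  sent: [],
};

function canTransition(from: LogStatus, to: LogStatus): boolean {
  return transitions[from].includes(to);
}
```

A log that parses cleanly skips straight from "parsing" to "analyzing"; one with flagged issues detours through "needs_review", where a Media Coordinator steps in.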

User stories

I came up with user stories for common scenarios Media Coordinators might encounter, based on what I learned during user interviews.

"As a Media Coordinator, I want a personalized dashboard, so that I know what logs are mine to work on at any moment."

"As a Media Coordinator, I want a summary of what went wrong when a log didn't parse, so that I can quickly troubleshoot the issue or contact support."

"As a Media Coordinator, I want to more intuitively know who to send the log analysis to, so that I can get the job done more quickly with less stress."

Mid-fidelity prototypes

I first designed mid-fidelity prototypes to focus on foundational components and how users interact with the new system. Using what I learned from background interviews, object planning, and user stories, I created a path for users to simulate completing the task at hand with a Figma prototype.

A screenshot of the mid-fidelity prototype
A screenshot of part of a mid-fidelity prototype I created to learn about user behavior

Design system

The current ERP had an outdated and developer-focused design system, so this project was an opportunity to start building a new system for the future. I created a color palette and shared components in a Figma Library to use on all future projects.

A screenshot of some components in the new design system

Usability testing

I completed nine guided scenario usability tests with Media Coordinators where I tested each user story. Every couple of tests, I made adjustments if the issues were universal and continued to iterate on the prototypes. This process allowed me to pivot quickly when concepts were confusing or not intuitive and greatly improve the interface.

Halfway through the testing, I upgraded the mid-fi prototypes to high fidelity, adding colors and more detail using the new design system. I conducted the second half of the tests on the more refined prototypes.

Adjustments

One point of contention was determining what to name or title the airing logs. I wanted to provide a consistent name as soon as the database record was created; however, this was difficult because errors during parsing meant we might not yet have the relevant data. Consistent naming was also important to me so that the development and media teams had a shorthand and a dedicated link to a particular log when issues arose.

Ultimately, we compromised and kept the consistent name, but when the relevant data was available, we provided a title that was easy for the media team to identify.

Adjustments made to the cards based on user feedback, providing more relevant information to Media Coordinators when available

Reflections

My team and I conducted a retrospective after launching the product and reflected on our successes and failures.

  • The user research was invaluable and shortened development and revisions time because there were clear goals
  • Testing feature changes in a prototype was much quicker and let us be nimble as we iterated on the design
  • Build a smaller minimal viable product next time and get it in front of users sooner
  • On the technical side, avoid having the new UI reuse backend code and tables from the old system, because it will lead to a messy codebase that's harder to maintain

Quantitative successes:

  • Reduced the total time before an airing log analysis is sent to the buyer from 41 minutes to 30 minutes
  • Reduced the analysis time from 20 minutes to 10 minutes

Next steps

As of February 2024, one year after beginning the project, my team continues to learn from our users and refine the product. The Airing Logs Manager processes around 200 logs per day in tandem with the Media Coordinators.

About 20% of logs now ingest and complete the analysis automatically, and we aim to further automate this process, involving users only when absolutely necessary.

I've recently identified the following user stories that we intend to tackle next.

"As a Media Coordinator, I want to know when another person is working on a log, so that we don't accidentally duplicate work by both working on it."

Potential solution:
Provide an indicator if someone else has the page open.

"As a Media Coordinator, I want to know if I've opened a log that is not assigned to me, so that I remember to update the assignment for others on the team."

Potential solution:
When a user opens a log that isn't assigned to them, show a popup with the option to assign it to themselves.

"As a Media Coordinator, I want to change assignments from the main dashboard, so that I can update the assignment without opening a log."

Potential solution:
Add an "Assign to me" button to each card to quickly take a log assignment.
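The assignment stories above boil down to two small pieces of logic: deciding when to prompt a user who opens someone else's log, and taking an assignment from a dashboard card. A minimal sketch, assuming a hypothetical record shape and helper names (not the shipped API):

```typescript
// Sketch of the assignment interactions from the user stories above.
// The record shape and helper names are hypothetical.

interface AiringLog {
  logId: string;
  assigneeId: string | null; // null = unassigned
}

// Prompt when a Media Coordinator opens a log that isn't theirs
// (unassigned, or assigned to someone else).
function shouldPromptToAssign(log: AiringLog, currentUserId: string): boolean {
  return log.assigneeId !== currentUserId;
}

// The "Assign to me" button on a dashboard card takes the assignment
// without opening the log.
function assignToMe(log: AiringLog, currentUserId: string): AiringLog {
  return { ...log, assigneeId: currentUserId };
}
```

In a real implementation these would sit behind an API call with a server-side check, since two coordinators could press "Assign to me" on the same card at once.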