Fast, Frequent, and Focused: Building Better User Testing at Warby Parker

As a company that’s committed to delivering exceptional customer experience, it’s no surprise that research plays an integral role in the evaluation and lifecycle of Warby Parker’s products and services. Across departments, we’re consistently A/B testing, sending out customer surveys, prototyping designs, and gathering qualitative and quantitative data to paint a three-dimensional view of our customers.

All the questions, all the time

On the Product Management team, there is a mix of UX and Product designers responsible for maintaining and improving our digital experiences. As a group, they strive to understand a variety of things about our users before approaching a new feature or iterating on an existing web experience:


  • What are users trying to accomplish?
  • What steps are they taking in order to reach their goal?
  • Where, and with what mindset, are their journeys taking place?


When evaluating design solutions, the same sort of inquiry applies: Does this address a user’s needs? Are they getting tripped up? Are they oriented? Are they able to accomplish their tasks successfully? Do they understand how to navigate the interface? Is the experience enjoyable? The list of questions goes on.
In a nutshell, being responsive to customer needs and feedback is an integral part of the design process; it enables us to intelligently craft or validate interactions, features, and flows, and to develop a deeper understanding of our online visitors.


So, how do we gain this understanding?

A combination of analytics, consumer insights, and (you guessed it!) a whole lot of user testing. That last piece is where we’ll be focusing today, as it’s the area of research at Warby Parker that the Design team leverages most, and the one with the most room for improvement.

Although our designers were familiar with conducting onsite and internal sessions to help us answer these questions, a few challenges and pitfalls began to take shape:


  • Speedy deadlines. It’s easier said than done to pad agile timelines with adequate room for user research—particularly when that research involves recruiting, facilitating, analyzing, and applying feedback from outside users. It’s quick enough to walk up and share a design-in-progress with someone in the company, but reaching out to real customers and getting them to come in? That takes plenty of time and planning. Due to this, we were testing later in the process than ideal—and needed to find a way to be more nimble about gathering and applying insights from external users.
  • Small team. There just aren’t very many designers, to put it frankly. We’re a tiny but mighty squad that is often responsible for facilitating research and designing digital products simultaneously. It’s quite an undertaking, and we hadn’t taken advantage of any tools to help streamline our user testing sessions or conduct them more efficiently. We needed a way to get maximum impact and insight from minimal resources.
  • More devices. With a responsive site rolling out and an uptick in mobile traffic, quality mobile experiences are of the utmost importance to both the business and the customer. Having devoted a lot of love and attention to desktop experiences in the past, we weren’t as well equipped for or familiar with testing mobile designs. We had to start getting richer insights for small screens.


Getting fast, frequent, and focused

To effectively address the aforementioned challenges, we decided to band together and develop ways to improve our speed, test more often, and focus our resources.



In order to get feedback more quickly, we first expanded our software and testing methodology. Most of our designs are turned into quick-and-dirty prototypes before going live to our customers, and we began to leverage InVision’s Lookback integration to record mobile screens and user reactions as testers play with our products. Additionally, remote testing platforms enable us to share work-in-progress with users anywhere and get meaningful feedback in less than fifteen minutes, worlds faster than a three-day back-and-forth just to get a tester into the office.

We also developed script templates, making it easier to onboard new team members and empowering more people to facilitate user testing. Now anyone can quickly generate a testing script for something they’re working on, which equips a larger part of the organization to take part in gathering insights.




Testing prototypes and recording them with Lookback, as you can see with the InVision clickthroughs above, allows us to home in on areas of friction before experiences are built out in full and go live to customers.



Without a dedicated time for testing, we were less accountable for conducting it regularly. Thus, our first step was to create a consistent time and place for running sessions. We’ve recently moved from an as-needed basis to a recurring slot every other week, one that can ideally scale to weekly in the future.


The next move? Gather a steady participant pool. For starters, we began building an internal testing group, asking people from the company who don’t work with digital products on a regular basis to sign up. This ensures we constantly have people in our direct vicinity to play with our experiences, break them, and help us make them better. Additionally, we can use this internal group of testers to bolster existing tests with external users, or to sub in when an external user is a no-show (a fairly common occurrence).


Getting the entire Warby Parker team involved allows us to foster a strong testing culture and get more ideas out there. A good number of people at the company don’t have the opportunity to use the site that often, which makes them fantastic fresh eyes for lending feedback on existing pages and new features. They’re also a great first step toward growing the database, which will eventually include outside testers we can reach out to consistently.


The final area of improvement: enlisting more targeted participants. In most cases, recruiting for user testing means reaching out to the general masses with a Craigslist blast. We needed to shift away from this and find a way to talk to the specific customers we were designing for, e.g., inviting people who have done a Home Try-On to provide feedback on returning to the site to buy a pair of glasses. We’re now seeking ways to focus our outreach on these smaller segments by querying the customer database and being more strategic about our screening before bringing someone in to test. We hope this will lead us to more relevant, impactful insights.
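To make that concrete, here’s a minimal sketch of what such a recruiting query might look like, written in Python against SQLite. The schema (customers, home_try_ons, and orders tables) is entirely hypothetical and purely for illustration, not our actual database:

    import sqlite3

    # Hypothetical schema: `customers`, `home_try_ons`, and `orders` tables.
    # Goal: find people who completed a Home Try-On but never purchased,
    # so we can invite them to test the post-Try-On purchase flow.
    conn = sqlite3.connect("customers.db")

    RECRUITING_QUERY = """
        SELECT c.email, c.first_name
        FROM customers AS c
        JOIN home_try_ons AS h ON h.customer_id = c.id
        WHERE h.returned_at IS NOT NULL                     -- kit came back...
          AND c.id NOT IN (SELECT customer_id FROM orders)  -- ...but no order yet
        LIMIT 25                                            -- small, focused outreach list
    """

    for email, first_name in conn.execute(RECRUITING_QUERY):
        print(f"Invite {first_name} <{email}> to an upcoming session")

Even a simple filter like this beats a blast to the general public: every invitee has actually lived the exact experience being tested.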


Additionally, we’re striving to test designs mobile first: starting with small-screen feedback, then scaling up and out to more devices. This helps us prioritize content for mobile before diving headfirst into the larger canvas or committing to a sweeping solution too early.


Informed makers, better makers

In essence, the question user testing helps us answer (and continually ask) is this: What can we do to deliver the best possible experience to our customers? Our hope is that by strengthening and broadening our methodology, we’ll have the means to increase empathy, become more sensitive to customer needs at a company level, and keep pace with our user base as we scale and encounter new challenges. Because at the end of the day, we all have a commitment to build things people need, not things we assume they need. And we’re certainly excited to keep finding ways to be better at that.
