
Conducting User Research for Federal Agencies

A successful experiment from approval to outcomes

[Photo: A group of people on the National Mall in Washington, D.C.]

Federal agencies that seek to deliver modern web experiences exist between a rock and a hard place. The “rock” is a law that mandates a user-centered approach. The “hard place” is another law that makes it quite challenging to deliver one. Drawing on Palantir’s culture of experimentation, I found a way to squeeze through the gap. My hypothesis: we can gain meaningful insights without contacting more than 9 members of the public. Read on to learn why 9 is the magic number and how it all went down.

Situation: Bringing a federal agency’s website into the 21st Century

I’m the UX strategy lead on a team modernizing a prominent federal agency’s website. We’re doing important work, rebuilding their legacy site from the ground up. The modernized website will feature a unified, user-centered information architecture and will be built with Drupal and the U.S. Web Design System, among other improvements. Our work will impact the agency’s millions of annual users.

The federal agency’s top priority is compliance with user-centered laws like 21st Century IDEA and Section 508. However, they also want their modernized website to help visually unify the agency and support their critical work.

Problem: Federal agencies are bound by laws that seriously impede user research

We had a problem. The Paperwork Reduction Act (PRA) restricted our ability to conduct user research. The word “paperwork” hints at the era of this law: 1995, well before UX was a discipline. Its lengthy approval process kicks in whenever a federal agency asks the same questions of ten or more members of the public within a year. Its intention is good: to minimize the paperwork burden the Federal Government creates for the public. However, it now hampers the government’s ability to create digital tools that didn’t exist in 1995 but are critical to the public today. To be user-centered, we need to conduct research over a long period of time, with groups as small as 3-5 and as large as 100+.

When I joined the project, our team, wanting to stay in compliance, assumed we couldn’t do any research without the law’s arduous process. Wanting the best possible outcomes for our client and their users, I couldn’t accept that constraint without exploring our options. Our project centered on 21st Century IDEA’s mandate to serve users, which requires ongoing user research, and the multi-month approval process for PRA-compliant research posed a real challenge to that. Undeterred, I researched the law and proposed a creative solution.

Solution: Let’s see what we learn from a qualitative user interview experiment

We experiment all the time at Palantir. This creates a culture of testing hypotheses rather than striving to avoid failure. I had a hypothesis to test with this client: we can gain meaningful insights without contacting more than 9 members of the public, and therefore without the multi-month approval process. I couldn’t conduct any user research without client approval, so I wrote a research plan for the federal agency to review.

My research plan outlined a lightweight approach to contextual inquiry. I planned to observe people using the site over Microsoft Teams, gaining more robust insights than I could by simply asking them about the site. I also constrained my request to a single one-hour session per participant; I could neither compensate participants nor recruit widely.

In each session, I would first ask questions like where and how often they use the site. Then, I’d ask them to share their screen and complete their most common tasks on the site. Last, I’d ask what they think works well and what could be improved. My goals included documenting users’ process for achieving their top tasks, the challenges they encounter, and what they value about the site.

Beyond the plan itself, I provided context about the value of qualitative user research to the federal agency. Rather than reinventing the wheel, I sourced existing resources. Here’s an excerpt of what I shared:

Nielsen Norman Group is a world leader in research-based user experience. They produce easy-to-understand, bite-size videos, like this one: 5 Qualitative Research Methods. It’s a great five-minute introduction to both the invaluable nature of qualitative user research and its common methods (like “contextual inquiry,” which is what my plan utilizes).

Even small groups of users generate meaningful insights. Here’s a resource with more on that: Why You Only Need to Test with 5 Users. Another quick resource I recommend comes from thought leader Teresa Torres: How To Talk To Your Customers. She recommends speaking directly with users daily. While that’s not something we can reasonably achieve, regularly speaking directly with users is invaluable.
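If you’re curious about the math behind that five-user claim: the NN/g article draws on Nielsen and Landauer’s model, which estimates the proportion of usability problems found by n test users as 1 - (1 - L)^n, where L is the share of problems a single user surfaces (about 31% on average in their studies). Here’s a quick sketch of how fast that curve saturates (the 31% figure is theirs; the code itself is just an illustration):

L = 0.31  # average share of problems one user surfaces (Nielsen & Landauer)
for n in range(1, 10):
    found = 1 - (1 - L) ** n  # proportion of problems found by n users
    print(f"{n} users -> ~{found:.0%} of problems found")
# 5 users -> ~84%; beyond that, each added user mostly re-finds known problems.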

To my delight, the federal agency approved my research plan without edits. Recruitment became the next hurdle. Of the 9 people we could email, I aimed for 3-5 to agree to participate. I asked the federal agency for a list of users who represented key user types and were already in contact with the agency. This would maximize both the value of each session and the number of sessions I could conduct.

We honed our list of 9 collaboratively, ensuring we included members of marginalized communities. Once we had our list, an agency employee sent the first email to legitimize the request. When people responded, I followed up to schedule. I also sent a single nudge to those who didn’t respond to the first email. I documented everyone we contacted, so we had a clear record showing we stayed within our magic number.
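That record-keeping is simple enough to automate if a shared document ever feels too loose. Here’s a minimal, hypothetical sketch of the rule our record had to enforce; any simple log works, the code just makes the constraint explicit:

MAX_CONTACTS = 9  # emailing a 10th member of the public would trigger PRA approval

contacted: set[str] = set()

def log_contact(email: str) -> None:
    """Record outreach to one person, refusing to cross the threshold."""
    if email in contacted:
        return  # a nudge to someone already contacted doesn't add a new person
    if len(contacted) >= MAX_CONTACTS:
        raise RuntimeError("Cap reached: contacting 10+ people requires PRA review")
    contacted.add(email)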

Results: The federal agency provides invaluable content. Their users need it to be easier to find.

Of the 9 people we emailed, 1 email bounced, 5 people scheduled interviews with me, and everyone showed up. Informed by insights from the 2023 Government UX Summit, I didn’t record the sessions. Recordings introduce arduous record-keeping requirements for the government, and they risk capturing personally identifiable information (which adds another big headache). Instead, I asked 1-2 members of my team to capture notes for each session.

The five sessions provided a remarkable amount of information and context about the site’s users. Participants included staff of small and international nonprofits as well as a Tribal Government representative. Some were power users of the site, while others interacted with it only through email links sent for a specific purpose. They broadened my understanding of the variety of things people do on the site and showed how robust and complex each person’s interaction with the federal agency can be.

Everyone valued the agency’s transparency and the wealth of information it provides. However, they also all struggled to find things. Almost everyone entered the site through a Google search or a bookmark. One participant wasn’t able to reconcile their mental model with the org-chart-driven site structure. A second, after an unsuccessful attempt to use the site’s search, found a page via another agency’s website. A third, describing the landing page of a program important to both them and the federal agency, said, “It’s busy, you have to get lucky to find something useful here.”

These findings, along with insights from Google Analytics, a Top Tasks survey, a Card Sort, and a Tree Test, are guiding our transformation of the site’s information architecture. We also have this invaluable tool in our toolbox: qualitative user research. This successful experiment paves the way to further interviews, usability testing, and more.

I’m so pleased with how this experiment went and the results it generated. If you want to talk more about conducting research for federal clients, please get in touch with me via our website contact form. I have more learning to share and much more still to do.
