One thing I love about my job is the privilege of working with other design/user experience researchers and learning from the variety of approaches they take in their work. This diversity is rewarding because it means I’m constantly picking up new ideas and iterating on my own approach through experimentation and creativity.
Right now I am part of a team of about 20 researchers who work across the company. Each week we get together for a one-hour “methods” meeting to discuss topics related to the art and science of research, including tools, methodologies, process, outputs and outcomes. This week’s discussion, suggested by an intern who wanted to know what works best for communicating research results, was about creating research reports. Three people from the team walked through an example of a report they had created, explaining how they chose to structure it, why they included certain information, and other aspects they felt make for a good deliverable for stakeholders. They also talked about the efficacy of their approach as a form of information sharing and as a tool for impact on the product.
The three reports had a lot in common with regard to content – basically the information you would expect to find in a written report, like research observations, insights and recommendations for product design. These elements reflect a shared understanding of best practices and of what kind of information is most important to include.
The reports also contrasted in various ways. One was around 70 slides long, chock-full of detailed findings, competitor experience comparisons, conceptual frameworks and other information the researcher found important to share with stakeholders. The second was around 20 slides and focused only on the most important information that needed to be shared quickly with the team, with fewer “nice-to-have” details. Unlike these two, the third report had no executive summary of key findings and no information on the research design (method, research questions, participant background, etc.).
The third report differed not only in some of the content but also in format – it was a 5-page Word doc featuring a simple table with columns for observations, recommendations, and a severity/priority rating for each user experience issue. For this researcher, it wasn’t important to create a deliverable that looked good or explained things visually – it was simply about conveying the results in an efficient and easy-to-parse way.
These differences in approach led to a discussion of how the deliverables we create depend on a number of factors, including who our stakeholders are and how much they value/understand research, the time frame and urgency of the project, the research questions at hand, the methodology, etc. For example, if it’s a straightforward usability study that’s part of an iterative, agile design process, you may not need a robust deliverable when you can just summarize what’s working and what’s not, state your recommendations for improvement, and add a priority/severity rating. If you work closely with your team and your colleagues value and understand the role of research in the design process, then you may not need to put as much effort into convincing them of the validity of the information by using lots of visuals and explaining what qualitative research is on a slide.
However, if you are doing research for a team that has little or no experience incorporating research into their decision-making, you may have to take a different approach with your deliverable (and your overall research process, but that’s a topic for another post). You may have to include a blurb in your report about why it’s OK to do a study with just six people, and explain more about the research methodology and process. You may choose to include lots of imagery, photos of participants, screenshots of the interface, or other things that help tell a story and aid in convincing the team that they should seriously consider this information when moving forward. You may feel the need to share every piece of useful information possible in order to get the team to understand just how much can be learned from qualitative studies.
A researcher’s role on a team and relationship with her or his teammates will also affect the approach, process, and output. For example, I have been doing research across the iOS and Android platforms rather than for any particular team or domain, unlike most of my research teammates, who are embedded on teams focused on a very specific part of the product. For the most part, I have worked with different teams for each project, whereas my colleagues work primarily with the same people each time and have a different kind of relationship with teammates, who potentially understand research better and value it more within the design process. Because of this, I have ended up spending more time on my deliverables to communicate what was learned and to convince teams of the value of the study. It reminds me of working in a design consultancy, where the deliverable is just as important as the content.
I look forward to the weekly methods meeting because I find it valuable to step away from our projects and other responsibilities and come together as a team to learn from each other and broaden our perspectives. Talking about the why and not just the what of our work allows us to practice articulating the rationale for our approach and brainstorm ideas for improvement, and it reinforces how much diversity helps us become better research practitioners.