“The key to making a good forecast,” [Nate] Silver writes [in The Signal and the Noise], “is not in limiting yourself to quantitative information. Rather, it’s having a good process for weighing the information appropriately.” Ditto for good product research.
Any researcher who wishes to become proficient at doing qualitative analysis must learn to code well and easily. The excellence of the research rests in large part on the excellence of the coding.
Notes taken during or before an interview are filled with inaccuracies. It’s just beyond human capacity to fully capture everything. You need an audio or video record. Whether you later transcribe those (my preference) or just watch them again is up to you, but notes are not the same as the definitive recording of the interview.
So here’s the process for heuristic evaluation. And I’m going to be using an example from Android to illustrate this. A few months ago, another company released a tablet, and our leadership wanted to know how long their setup processes took. This is a great question to evaluate using a heuristic method because it’s a quantitative question. […]
So we set up some speed-based usability heuristics — number of screens, number of clicks, number of times to enter text, minutes of loading time, confusing language that would slow somebody down. Then we set up a room and time to meet up with our key stakeholders — our researcher, our project manager, and our setup flow designer. We then ran through the setup process together at a measured pace, keeping track of time, but also taking detailed notes on each step of the process and how well all the elements on the screen helped us, the user, get through setup as quickly as possible. […]
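The speed-based heuristics above (screens, clicks, text entries, loading time, confusing language) can be captured in a simple scorecard. A minimal sketch, assuming made-up numbers and an illustrative weighting — this is not the team's actual rubric:

```python
from dataclasses import dataclass, field

@dataclass
class SetupFlowScore:
    """One product's tally for the speed-based setup heuristics."""
    product: str
    screens: int = 0
    clicks: int = 0
    text_entries: int = 0
    loading_minutes: float = 0.0
    confusing_terms: list = field(default_factory=list)

    def friction_score(self) -> float:
        """Lower is better: a rough weighted tally of setup friction."""
        return (self.screens
                + self.clicks
                + 2 * self.text_entries           # typing is slower than tapping
                + 4 * self.loading_minutes        # waiting dominates everything
                + 3 * len(self.confusing_terms))  # rereading costs time

# Illustrative numbers only.
ours = SetupFlowScore("our tablet", screens=9, clicks=14, text_entries=3,
                      loading_minutes=2.5, confusing_terms=["provisioning"])
theirs = SetupFlowScore("competitor", screens=6, clicks=10, text_entries=2,
                        loading_minutes=1.0, confusing_terms=[])
fastest = sorted([ours, theirs], key=SetupFlowScore.friction_score)[0]
print(fastest.product)  # prints "competitor"
```

The weights are the judgment call here; what matters for the competitive analysis is that both products are scored against the same rubric.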
So normally action items, when you’re running against your own products, are usability action items, like we have to move this button, or we have to change this wording. But action items from running a heuristic evaluation as a competitive analysis usually have to do with keeping up with industry baselines.
Web usability has traditionally been focused on increasing ease of learning for the novice users. …
the pendulum will soon start swinging a little bit in the other direction …
Some websites engender sufficient loyalty that users return frequently and begin using them on a daily basis. …
Increased attention to expert performance has [these] implications: …
• The need to stop using Web browsers as the platform for Internet-enabled applications… Frequent users will need an optimized interface that takes full advantage of the device they are using.
• Usability tests must study users over time as they develop expertise in using the site or service.
Marco helped the Home team understand the expectations people might have for the product and identify potential interaction issues. He collected and analyzed feedback through interviews, diary studies, surveys and discussion groups. […]
“Anything that changes the deep relationship people have with their device is really challenging to design for. You can’t predict what people will expect and how they’ll react.” […]
“We acted quickly and used quite a few different research methods and approaches to mitigate not only the fact that the phone is so personal, but also that we wanted to cover different contexts and situations of use. Doing this also allowed us to gather data and feedback at different paces, and to have a solid sense of patterns of behavior from an early stage.”
People aren’t very good at predicting what they want. Especially if you ask them out of context, like with a survey.
At Polyvore we use a method we call “Fake Doors”: You put a fake door in front of someone and then you see if they try to open it. In a web product what this means is you pretend it exists and then you see if anybody clicks on it.
Don’t listen to users
I don’t listen to users because of the psychology of attitude & behavior
Many studies found no relationship between attitude and behavior
Don’t ask what people need
Instead observe what they do
Don’t ask for feedback
Instead watch them use it
You can quickly see whether customers will engage with a new feature by launching just the first part of it. We did this with CustomMade, a startup that lets people order custom-built products. Our idea was to let visitors save others’ projects for inspiration. But instead of laboriously building the whole feature, we just launched the first button. When we observed a huge number of visitors clicking the button to access that function, we knew we were onto something and built the rest of the feature.
A glimpse at one of Google’s usability labs (begins 2:50)
In this case study we describe a four-step process for eliciting and analyzing user behavior with products over an extended period of time. We used this methodology for conducting a comparative study of two mobile applications over a period of seven months with 17 participants.
The participants’ ultimate impressions of the applications differed markedly from their first impressions, lending further evidence that longitudinal study … is essential in evaluating product usability and usefulness.
Mr. Zuckerberg takes a decidedly deliberate approach to product development.
That’s evident in how the 28-year-old CEO led the creation of Timeline […] The new feature was a culmination of an 18-month process that included dozens of test versions and multiple focus groups.
The group created at least 100 versions of the product, according to employees on the team.
As valuable as user prototypes and user testing may be, often you need live data in order to determine if an idea actually works.
Some of my favorite examples of this are when applying game dynamics to e-commerce, search result relevance, many social features, and of course funnel work.
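Funnel work of this kind usually reduces to comparing step-to-step conversion between live variants. A minimal sketch with invented step names and counts, just to show the shape of the analysis:

```python
# Compare where users drop off between a control and an experiment arm.
# Step names and counts are illustrative only.
funnel_steps = ["landing", "signup", "first_purchase"]

counts = {
    "control":    {"landing": 10_000, "signup": 1_200, "first_purchase": 300},
    "experiment": {"landing": 10_000, "signup": 1_500, "first_purchase": 330},
}

def step_conversion(arm: dict) -> list[float]:
    """Conversion rate from each funnel step to the next."""
    return [arm[b] / arm[a] for a, b in zip(funnel_steps, funnel_steps[1:])]

for name, arm in counts.items():
    rates = [f"{r:.1%}" for r in step_conversion(arm)]
    print(name, rates)
# control    ['12.0%', '25.0%']
# experiment ['15.0%', '22.0%']
```

Here the experiment arm converts better into signup but worse from signup to purchase, which is exactly the kind of trade-off prototypes and lab tests rarely surface; only live data does.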
Generative research is a collection of knowledge about people who might potentially use your services or products … generative research focuses on internal reasoning while a person does something of particular interest to your organization. This knowledge helps your team produce better ideas, more on track with people’s real-life situations. The knowledge gives you empathy.
Generative research works hand-in-hand with evaluative research as a part of a cycle that keeps risk, concealed opportunities, and wasted investment at bay.
As product manager, you know your job is to gain a deep understanding of your target customer, the problems to be solved, and whether you can come up with a product that meets these needs. You know you need to work closely and directly with customers in order to come up with a product that will meet the needs of hundreds of customers (and thousands or even millions of users), but you also know there aren’t enough hours in the day to work directly with this many customers.
My favorite technique for addressing both of these problems – getting deep insight into my target customers and having great reference customers at launch – is to use a charter customer program (also known as a “Customer Advisory Board” or “Customer Council” or by similar names).