Product Manager's User Research - you have to dig

I sat in on a call with a Product Manager on my team.

I knew then that she probably wouldn't work out; I was cringing through the entire User interview.

Not only did she speak 80% of the time.

She argued with the User about why her design was better than what the User wanted!

I talked to her afterwards, but it didn't change much of anything.

For whatever reason, she needed her 35-page spec to become reality, despite feedback from me and from others. Even when people said on a spec review call that what she was doing was wrong, she told me later, when I synced with her, that they had been telling her they liked what she was building!

The opposite extreme isn't the answer either: you don't have all day, and you aren't someone's therapist!

I've been on calls where the Product Manager (not this same one, obviously) would ask all about someone's feelings about the product, which features excite them, and which features are missing.

That's not going to get you what you need to know in a short amount of time either.

To me, especially when building an understanding of a Persona, it's important to have a repeatable set of simple questions so you can find signal.

If there's no signal, there's no Persona and, in all likelihood, no Product.

Here's the bare minimum I try to assess with my questions in an interview. Obviously one can go far deeper, chase rabbit holes, and pull on more threads.

However, for me, I want to understand these four things to assess a new product or feature.

Situation

These types of questions help me understand what is going on in the User's world. Depending on how much I already know, I may need to spend more or less time here.

But it's important to validate that the things they see around them, and the beliefs they hold, align with what I think is going on.

How 0-1 the product is also determines whether to fly at 30,000 feet or much closer to earth.

"Tell me about what's happening in your life or day-to-day that you feel you need to do something."

This is vague, but it gives them an opening to talk about an action or change they feel they need to make. It's not a product-centric question.

It's about the situation. It could be an increase in noisy errors, rising fraud or bots, a growing number of systems under attack, or pressure from managers to resolve incidents faster... something is a "situation" for them.

Now, the vaguer it is, the more you have to drill down. If the situation doesn't lead to a crisp answer to the next question, then some recalibration is needed.

For something like a social product, I may need to pose a more directed question. Social products, and consumer products in general, often need a specific context.

"How do you feel your experience has been meeting with and connecting with people when you go to events?"

"How active and helpful do you find your network on LinkedIn?"

This still lets me understand their situation and carve out space for them as a Persona.

Problem

IF the situation creates some kind of a problem, even one they haven't thought much about, there's space to explore.

"When that happens, what do you think about it? How does it affect you, if at all?"

"Do you care about it? What is wrong with the things you have tried in the past or is it fine?"

I pay close attention to how the User thinks about the problem and how it affects them. What is the level of care they actually have about the problem?

I have had interviews where I start to drill down and ask them, "So do you really care about this?" and they'll laugh and say that they don't.

Other times, when I talk to them, they tell me why it's a problem, but only after a lot of digging.

For example, if they are getting a lot of fraudulent attacks, they will say that it results in a lot of bots buying their inventory. Don't stop there.

I'll ask, "But if your inventory is still being bought, why do you care? You still make money, right?"

Then they will drill down further into what the underlying issue is.

Make sure, as part of this, that you understand what existing solutions or other approaches they have tried in the past. This helps sharpen the design space.

Needs

I try not to focus on the exact features, but yes, sometimes you'll get suggestions. Take them, think about them (they could have good ideas), but get back to what they need.

Do they want the attacks to just stop and never hear about them?

Do they need a way to know what happened, and then make a manual decision?

Do they want alerts? How far do they want the remediation to go?

This is less about the feature and more about what the end result would be like for them.

These can be super high level, too.

A possible starting response could be, "I want to stop waking up in the middle of the night for false positives during my on-call."

It could even be future-oriented or about risk avoidance: "I want our institution to not miss out on crypto adoption if it becomes a real thing, without any of the downside exposure."

This is the digging that really begins to work through what the actual design space for the product is.

Pareto Option

Even though the ultimate Pareto Proposal -- what is the most impactful set of features -- is up to the Product Manager, getting input from Users is not a bad idea.

This imposes a constraint in the discussion and begins to make things real for both the User and the Product Manager.

Although it is true that Users don't know the Product or the Features, they do have a good sense of the Job to be Done, that core end state they want to get to that's different from their current world right now.

The question I use here forces a choice: of everything we've talked about, what would be the single most important thing to have solved, and why?

For emphasis: this is NOT how I would make the final determination on the product specification.

That same argumentative Product Manager would point to a passing comment someone made about something they wanted and say, "See, this is why it's one of the 20 features and graphs I have! They asked for it!"

The purpose is to ask the question, then look at the proposed list of features and ask yourself a very different question.

"How can I fulfill that while reducing or eliminating these other important capabilities?"

The second key point: no matter what the User says, you still have to dig.

Why was that solution the most important one, and are they telling the truth about how important it is to them?

User Research is Archeology

Good User Research, ultimately, is archeology.

It's digging through the initial conversations with an openness to having your own biases and assumptions overturned, while at the same time having enough direction and structure to begin to detect signal and form a hypothesis as quickly as possible.

Unlike most academic research, where there's some implicit "truth" that can be uncovered, typically with enough conviction to publish for all your peers to see, User Research needs to be both right and flexible.

Because new information keeps coming and, in many ways, user observability is quantum (observing users changes what they do), that initial conviction may need to be modified based on what gets released. It's a rapid learning environment.

But get that initial conviction wrong, and you could be stuck in a local maximum or be far enough off that course-correction won't bring you back on target.

So I like this approach of understanding the Situation, the Problems, the Needs, and the Pareto Option to get these as fast as possible and to reduce directional risk.

How would you do consistent and rapid User Research in a dynamic product environment?