
It was early afternoon a few days after I'd started as Chief Product Officer, and I was rubbing my eyes blearily.

We had a list of product ideas as long as my arm - some ideas promised to really make a difference but required major engineering effort, while others looked like quick and easy wins. Their advocates were pointing to various possible benefits. Obviously we couldn’t do them all. We’d been talking for 90 minutes, and we were thrashing around in the face of so many competing possibilities.

I felt that I should somehow be able to offer some clarity, or some insight, but I was new and I didn’t trust my own judgment. In a way, I’m glad about this in retrospect - because the best answer is rarely the HiPPO’s (highest paid person’s opinion). It usually comes out of multiple perspectives.

So we took a break for 10 minutes. Every time we take a break in a long meeting, you can feel the group’s IQ bobbing back up afterwards.

When we came back, we ran an ease/value ranking:

  1. Start with your longlist of ideas

  2. Give each idea a low/medium/high score (1-3, higher is better) for how valuable it would be

  3. Give each idea a low/medium/high score (1-3, higher is better) for how *easy* it would be.

  4. Multiply the two scores together, and rank them.
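The four steps above can be sketched in a few lines of Python. The ideas and scores here are made up for illustration:

```python
# Hypothetical longlist: (idea, value 1-3, ease 1-3). Higher is better.
ideas = [
    ("Redesign onboarding", 3, 1),
    ("Fix signup typo", 1, 3),
    ("Add CSV export", 2, 3),
]

# Multiply value by ease, then rank with the highest combined score first.
ranked = sorted(ideas, key=lambda i: i[1] * i[2], reverse=True)

for name, value, ease in ranked:
    print(f"{value * ease:>2}  {name}")
```

A spreadsheet does exactly the same thing with a product column and a sort, which is usually the right tool in the room.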

We discussed and fine-tuned the prioritisation of the top handful, and ignored the rest. That made for a focused, efficient discussion, everyone felt heard, and we felt confident that our prioritisation was more than good enough. We then revisited the remaining items at a later date, when updating our prioritised backlog.

It was one of those moments where I realised that my job as a leader wasn't to be the smartest person in the room with all the answers. Instead, as a group, we were smarter than any one of us as individuals. And better still, the process felt inclusive.

Here's a Google Sheets template.

(More detail follows for the high-need-for-cognition folks in the room.)

FAQ

How should I handle things if there are more than a couple of people involved?

This process scales up well with larger groups. To benefit from the wisdom of the crowd, we need to aggregate independent, diverse, informed perspectives.

Use one of the more complicated template tabs with separate columns for multiple raters.

Apply the principles from the Idea Stampede, i.e. ask people to work quietly on their own for a while, scoring within their own column of the spreadsheet without looking at other people's, then aggregate and discuss.

It can help to ask people to write a comment for each score as they're scoring. This forces them to think things through, and helps a lot in understanding the sources of disagreement - "ah, Person A is confident because they think the programming side will be easy, but Person B is worried about the information security risks...". Those comments can be really useful as a reference and reminder later. But use your judgment - if there are a *lot* of ideas, it may be better to do a quick-and-dirty first pass, and save the fine-grained head-scratching for the medium-length list.
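As a sketch, aggregating the separate rater columns might look like this. The data is invented, and the choice to average value and ease separately before multiplying is one reasonable assumption, not a prescription:

```python
from statistics import mean

# Hypothetical per-rater scores: {idea: [(value, ease), ...]},
# one (value, ease) pair per rater, each scored independently.
scores = {
    "Add CSV export": [(2, 3), (3, 3)],
    "Redesign onboarding": [(3, 1), (3, 2)],
}

def aggregate(ratings):
    # Average value and ease across raters separately, then multiply.
    return mean(v for v, _ in ratings) * mean(e for _, e in ratings)

ranked = sorted(scores, key=lambda idea: aggregate(scores[idea]), reverse=True)
print(ranked)
```

Large spreads between raters on the same idea are exactly the disagreements worth discussing before trusting the aggregate.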

Why does this technique work so well?

Ease-value ranking is such a simple, obvious idea, but it works remarkably well. I think there are a few reasons:

How do you generate the longlist in the first place?

You can do this beforehand asynchronously, or in the room as part of the ease-value ranking.

Apply the principles from the Idea Stampede, i.e. ask people to write down ideas silently on their own, encouraging them to write clearly and provide enough context to be understood. It can help to add one or two obvious ideas at the top as exemplars, to model the style you're looking for.

Once people are starting to run dry, ask the group to scan over the list, merging duplicate ideas together and tidying things up.

Then take a break before starting the ease-value scoring.

Does this go by other names?

Yes, lots of people have suggested this idea before, e.g. cost/benefit, difficulty/value, pain/gain matrix, and ICE (impact, confidence, ease).

I mostly prefer ease-value because:

But what do 'ease' and 'value' actually mean?

Ah, I was hoping you wouldn't ask.

Value

Ease

In practice, it's helpful to have a very brief up-front discussion (or provide a rubric) with some examples about what makes for low/high ease and value, but it doesn't help to sweat the details (especially if you're only scoring 1-3).

What if you have a high-stakes decision, or a lot of ideas?

Ease-value is a great first step because it's quick. It frees up time, so you can focus a more fine-grained discussion on a narrow range of topics.

Not every decision boils down to 'ease' and 'value'.

If it's really high-stakes, or nuanced, or you have a long list of ideas, or you want the prioritisation to be better, sometimes it makes sense to use a 1-5 scoring instead of 1-3. This might lead to a more fine-grained ranking. For example, I usually use a 1-5 score for Risk Assessments, because they're higher-stakes, and there's less discretion to re-rank based on discussion.

But be aware that it is much more effortful to score 1-5 than 1-3 - it just requires a lot more internal deliberation to decide with finer-grained categories. So in practice, I find it's often better to score 1-3, and leave more time for discussion.
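One way to see the difference in granularity: a 1-3 scale can only ever produce six distinct combined scores, while a 1-5 scale produces fourteen. A quick check:

```python
# Count the distinct products each scoring scale can produce,
# i.e. how much finer-grained a 1-5 scale really is than 1-3.
def distinct_products(top):
    return sorted({v * e for v in range(1, top + 1) for e in range(1, top + 1)})

print(len(distinct_products(3)), distinct_products(3))  # 1-3 scale
print(len(distinct_products(5)), distinct_products(5))  # 1-5 scale
```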

Further analysis

For high-stakes decisions with multiple diverse perspectives, it can help to break down the aggregated scoring in a few ways. Use the 'Richer breakdowns' template.

How to run the post-ranking discussion?

I don't have such neat, prescriptive advice for how to run the discussion after the ranking.

If it's low-stakes, you could:

If it's high-stakes, maybe the discussion focus should be on what further information would inform the decision, then reconvene after gathering that information.

Notion vs Google Sheets

As a rule, I've come to prefer Notion to Google Workspace. But for ease/value rankings, I favour Google Sheets, for a few reasons:

I'm sure Microsoft Excel or similar would be fine too, as long as it supports multi-user collaborative editing.

I have a better way of doing things!

If you've found a way to improve things, I would love to hear from you.

In practice, experiment, and use your judgment. Context matters when picking the best approach for a given decision, team, or moment.