The ICE Scoring Model is a quick way to assign a numerical value to potential projects or ideas so they can be prioritized by relative value, using three parameters: Impact, Confidence, and Ease.
What is the ICE Scoring Model?
ICE Scoring is one of many prioritization strategies available for choosing the next features to build for a product. The ICE Scoring Model helps prioritize features and ideas by multiplying three numerical values assigned to each project: Impact, Confidence, and Ease. Each item being evaluated gets a rating from one to ten for each of the three values; those three numbers are multiplied, and the product is that item’s ICE Score.
Impact looks at how much the project will move the needle on the key metric being targeted. Confidence is the certainty that the project will actually have the predicted Impact. Ease reflects how easily the project can be completed: the less effort required, the higher the Ease rating.
For example, Item One has an Impact of seven, a Confidence of six, and an Ease of five, while Item Two has an Impact of nine, a Confidence of seven, and an Ease of two. The ICE Scores would be 210 for Item One (7 × 6 × 5) and 126 for Item Two (9 × 7 × 2).
Those scores can then be compared at a glance, and the item with the highest score gets the top slot in the prioritization hierarchy (Item One in our example). Item Two’s relatively low Ease value dragged its ICE Score way down, despite the fact that it would have a greater Impact at a higher level of Confidence than Item One. This is because all three elements of the equation are treated equally, unlike in a weighted scoring model.
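To make the arithmetic concrete, here is a minimal sketch of the calculation in Python, using the two items and ratings from the example above and the one-to-ten scale described earlier:

```python
def ice_score(impact: int, confidence: int, ease: int) -> int:
    """Return the ICE Score: Impact x Confidence x Ease."""
    for value in (impact, confidence, ease):
        if not 1 <= value <= 10:
            raise ValueError("ICE ratings must be between 1 and 10")
    return impact * confidence * ease

# The two items from the example above, as (Impact, Confidence, Ease).
items = {
    "Item One": (7, 6, 5),  # 7 * 6 * 5 = 210
    "Item Two": (9, 7, 2),  # 9 * 7 * 2 = 126
}

# Rank items from highest to lowest ICE Score.
ranked = sorted(items.items(), key=lambda kv: ice_score(*kv[1]), reverse=True)
for name, (impact, confidence, ease) in ranked:
    print(f"{name}: {ice_score(impact, confidence, ease)}")
# Item One: 210
# Item Two: 126
```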
Why is ICE Scoring Useful and Who Created it?
There are many different scoring models out there, but ICE primarily separates itself from the pack by being simpler and easier than most of the alternatives. Because ICE only requires three inputs (Impact, Confidence, and Ease) for each idea under consideration, teams can rapidly calculate the ICE score for everything and make prioritization decisions accordingly.
It’s an even simpler calculation than the RICE model, which adds Reach as a fourth element in the equation (it also swaps Ease for Effort, so it uses Reach * Impact * Confidence / Effort for the formula).
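For contrast, here is a sketch of the RICE variant. The units are an assumption for illustration, not part of the article’s definition: Reach is treated as a count of users affected, Effort as person-months, and the example numbers are hypothetical.

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE: (Reach * Impact * Confidence) / Effort.

    Conventions for the input scales vary by team; the units here
    (Reach as a user count, Effort in person-months) are assumptions.
    """
    return (reach * impact * confidence) / effort

# Hypothetical example: 500 users reached, Impact 2, Confidence 0.8,
# and 4 person-months of effort.
print(rice_score(500, 2, 0.8, 4))  # 200.0
```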
The fact that ICE is so speedy is in no small part due to the person who created it, Sean Ellis. Ellis is most famous for coining the term “growth hacking” and helping companies quickly ramp up experimentation. Growth hacking experiments are supposed to be quick and iterative, so it makes sense that the scoring model used to determine which experiments to prioritize would also be quick and easy to use.
ICE Scoring is essentially a “good enough” estimation and far less rigorous than other scoring models product teams typically rely upon. There is also a high level of variability in any item’s ICE Score depending on who is doing the scoring. Since the ratings are almost completely subjective, two people could assign very different values to the same idea and end up with contrasting priorities.
The fact that a low Ease rating can so easily drag down an item’s ICE Score also highlights the model’s “experimental” origin; in the land of growth hacking, “failing fast” is extremely valuable because of the lessons learned, and teams typically don’t want to invest a ton of time in any single project. However, some things that would have a bigger impact require a larger resource and time commitment. Relying solely on ICE Scores could lead a team to keep chasing low-hanging fruit instead of making a larger investment in a project that could make a much bigger difference in the long run.
ICE Scoring is best used for relative prioritization: if you are considering a few contenders, it’s a great way to pick a winner. Similarly, if you apply it to an entire backlog, it will help bubble up a top tier of options for the goal being targeted at that moment.
A major drawback of ICE Scoring is that relatively few people in an organization will have enough information to accurately predict all three elements of the equation. Impact and Confidence are business considerations while Ease falls into the technical domain.
Engaging product development to provide an Ease rating for every item being considered is one way to limit the subjectivity to areas where the scorers have a stronger body of knowledge, and it gets decision-makers out of the business of guesstimating development timelines. However, the fast-and-cheap nature of ICE Scoring may run counter to asking developers to estimate the level of effort for dozens or hundreds of possible projects.
It’s also important to have a consistent definition of the one-to-ten scale used to rate each of the ICE elements. If there isn’t agreement on what a Confidence of seven means, different team members can produce some very inconsistent assessments.
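One way to anchor the scale is to write the definitions down and have everyone score against them. The rubric below is purely illustrative wording, not a standard definition:

```python
# A hypothetical shared rubric for the Confidence rating; the exact
# wording and tier boundaries are an illustration, not a standard.
CONFIDENCE_RUBRIC = {
    10: "Backed by experiment results or strong quantitative data",
    7: "Supported by user research or comparable past projects",
    4: "Informed opinion with some anecdotal evidence",
    1: "Pure guess",
}

print(CONFIDENCE_RUBRIC[7])
# Supported by user research or comparable past projects
```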
While ICE Scoring definitely has its merits, it’s likely not the best method for prioritizing an entire product roadmap; it’s better suited to preliminary narrowing or to taking advantage of a particular opportunity.
Conclusion
Speed and simplicity are ICE Scoring’s biggest selling points and can help product teams narrow things down. Its strength is also one of its weaknesses, however, since it is only assessing an item’s impact relative to a single goal—in an organization with multiple, concurrent goals it falls short of other scoring models’ capabilities.
Despite its lack of nuance and complexity, ICE Scoring can offer a slick way to trim things down and provide some relative comparison points for decision-makers. And when you’re trying to reach a consensus, sometimes ruling things out is just as helpful as figuring out which item is the cream of the crop.