Priority Matrix Canvas for RATs

Why to use?

Use it to prioritize riskiest assumption tests (RATs) effectively.

When to use?

In the final phase of a workshop, use the Priority Matrix to evaluate the critical assumptions and open questions marked with white sticky notes. This step is essential before you proceed with the execution of your data & AI strategy and the implementation of your data/AI product(s). The matrix helps you assess both the likelihood of failure (the "Fail likelihood" axis) and the potential impact of such a failure (the "Fail impact" axis) for each assumption or question. Prioritize testing assumptions or questions that are both likely to fail and would have significant negative consequences if proven incorrect.

How to use?

I. Preparation

1. Fill the canvas header:

a) Label the Focus on field in the canvas header with a white sticky note reading "Assumptions & Questions".

2. Label the axes and quadrants with white sticky notes (see the sketch after this list):

a) X axis: "Fail likelihood" - How likely is the assumption to be false or the answer to the question to be negative?

b) Y axis: "Fail impact" - To what negative extent does a falsified hypothesis or negative answer impact the realisation of the data & AI strategy or the implementation of the data/AI product? A false assumption with a high fail impact could kill your strategy/product; a false assumption with a low fail impact might simply lead you to a new strategy/product design.

c) I. Quadrant: "III. Important" - You should test those assumptions with a low fail likelihood and high fail impact third.

d) II. Quadrant: "I. Riskiest" - You should test those assumptions with a high fail likelihood and high fail impact first.

e) III. Quadrant: "IV. Safest" - You should test those assumptions with a low fail likelihood and low fail impact last.

f) IV. Quadrant: "II. Improbable" - You should test those assumptions with a high fail likelihood and low fail impact second.
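
To make the quadrant logic concrete, here is a minimal sketch in Python. The Assumption class, the 0 to 1 score scale, and the 0.5 thresholds are illustrative assumptions; on the canvas itself the placement is relative, not numeric.

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    """One sticky note: a critical assumption or open question."""
    text: str
    fail_likelihood: float  # 0.0 = very unlikely to fail, 1.0 = very likely to fail
    fail_impact: float      # 0.0 = negligible damage, 1.0 = would kill the strategy/product

def quadrant(note: Assumption) -> str:
    """Map a note to its quadrant; the Roman numeral is its test priority."""
    high_likelihood = note.fail_likelihood >= 0.5  # threshold is an illustrative assumption
    high_impact = note.fail_impact >= 0.5
    if high_likelihood and high_impact:
        return "I. Riskiest"      # test first
    if high_likelihood:
        return "II. Improbable"   # test second
    if high_impact:
        return "III. Important"   # test third
    return "IV. Safest"           # test last

def test_order(notes: list[Assumption]) -> list[Assumption]:
    """Sort notes into the order in which they should be tested."""
    rank = {"I. Riskiest": 0, "II. Improbable": 1, "III. Important": 2, "IV. Safest": 3}
    return sorted(notes, key=lambda n: rank[quadrant(n)])
```

With this convention, a note placed in the upper right of the canvas sorts to the front of the test order, matching the quadrant labels above.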

II. Storytelling

If time allows, review all canvases and ask yourselves: is each one complete, correct, and consistent? If uncertainties arise, add white sticky notes with the corresponding assumptions or questions. One way to do this is to play "storyteller & devil's advocate": the facilitator narrates the workshop's progression while the participants check it for plausibility. Whenever they discover a gap, an error, or an inconsistency, they add another white sticky note.

III. Collection

Gather all white sticky notes bearing a critical assumption or open question from the canvases, and place them on the Sort in field at the left edge of the Priority Matrix canvas. Duplicate these original sticky notes, change the color to blue and add contextual information to make each note self-explanatory.

IV. Guesstimation

3. Anchor Element: Choose a sticky note from the Sort in field with medium fail likelihood and medium fail impact and position it in the center of the canvas. This is your anchor element, against which all other assumptions or questions will be compared, serving as the benchmark for "average" fail likelihood and impact.

4. Quadrants: Instruct participants to assess the fail likelihood and impact of each new assumption or question relative to those already positioned on the matrix, particularly in relation to the anchor element. This evaluation should be iterative, allowing for adjustments as new information potentially alters the relative scale:

  • Fail likelihood: Move the sticky note more to the right if its fail likelihood is higher than those already placed, and to the left if it's lower.

  • Fail impact: Position the sticky note higher if its fail impact is greater than the existing ones, and lower if it's less.

This adjustment should continue dynamically, with participants encouraged to reposition previously placed notes as necessary to maintain accurate relative assessments throughout the session.
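The relative placement can be pictured as a percentile ranking against the notes already on the matrix. The sketch below reuses the Assumption class from the earlier example; the coordinate convention is an assumption for illustration, and the repositioning of previously placed notes remains a manual, iterative step that the sketch leaves out.

```python
def place(note: Assumption, placed: list[Assumption]) -> tuple[float, float]:
    """Suggest canvas coordinates for a new note relative to already placed notes.

    Returns (x, y) with (0, 0) at the bottom left and (1, 1) at the top right.
    Reuses the Assumption class from the sketch under "Preparation".
    """
    if not placed:
        return 0.5, 0.5  # the first note is the anchor element in the centre
    # Fraction of placed notes whose fail likelihood / fail impact the new note exceeds.
    x = sum(note.fail_likelihood > p.fail_likelihood for p in placed) / len(placed)
    y = sum(note.fail_impact > p.fail_impact for p in placed) / len(placed)
    return x, y
```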

Tip: If one assumption or question relies on another, illustrate this relationship by connecting them with arrows. Position the dependent assumption or question higher and to the right of the one it depends on, to reflect its increased fail likelihood and impact. This adjustment acknowledges that a dependent assumption or question carries higher risks, as its validity hinges on another assumption that is not yet validated, thereby also amplifying its potential impact.
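One way to encode this dependency rule, again reusing the Assumption class from above; the 10% nudge is an arbitrary choice for illustration:

```python
def adjust_for_dependency(dependent: Assumption, prerequisite: Assumption) -> Assumption:
    """Place a dependent note above and to the right of its unvalidated prerequisite."""
    nudge = 1.1  # arbitrary 10% nudge; on the canvas you simply move the note up and right
    return Assumption(
        text=dependent.text,
        fail_likelihood=min(1.0, max(dependent.fail_likelihood, prerequisite.fail_likelihood * nudge)),
        fail_impact=min(1.0, max(dependent.fail_impact, prerequisite.fail_impact * nudge)),
    )
```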

Optional: If there's a lot of debate or uncertainty about the fail likelihood or impact, use colors on the sticky notes to show how certain you are:

  • Green: Absolutely sure

  • Yellow: Moderately sure

  • Red: Not sure at all

V. Riskiest Assumption Tests (RATs)

Focus on testing the assumptions/questions in the II. Quadrant, labeled "I. Riskiest", as they are the most likely to fail and to negatively impact your strategy/product. For each one, consider designing experiments, conducting expert interviews, or undertaking research studies to verify or falsify it. Record these tasks in your management system, such as a Kanban board or a HELD (Hypotheses, Experiments, and Learnings Database).

Important: Remember that the anchor element is also considered a critical assumption/question to test!
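
As a sketch of how the riskiest quadrant could be turned into backlog entries for a Kanban board or a HELD (the column names are assumptions, and the code reuses Assumption and quadrant() from the sketch under "Preparation"):

```python
def rat_backlog(notes: list[Assumption]) -> list[dict]:
    """Turn the "I. Riskiest" quadrant into hypothesis/experiment/learning entries."""
    riskiest = [n for n in notes if quadrant(n) == "I. Riskiest"]
    # With the >= 0.5 convention used above, the anchor element (medium/medium)
    # also lands in this quadrant and is therefore included, as required.
    riskiest.sort(key=lambda n: n.fail_likelihood * n.fail_impact, reverse=True)
    return [
        {"hypothesis": n.text, "experiment": None, "learning": None}
        for n in riskiest
    ]
```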

--

Copyright: All rights reserved by Datentreiber GmbH. For more workshop templates, transformation tools, and canvas tutorials, visit our Data & AI Business Design Bench. The Data & AI Business Design Bench requires a commercial licence per user per year. For more information, please contact Georg Arens via Email or make an appointment via Calendly.

Martin Szugat
Data & AI Business Designer@Datentreiber
To help companies design and transform into data-driven and AI-powered businesses, I invented the Data & AI Business Design Method, and our company Datentreiber developed the Data & AI Business Design Kit - a collection of open source canvases - as well as the Data & AI Business Design Bench - a commercial collection of Miro templates & tools.
