Outcome Estimates (OE) - “How Much Trouble Are We Actually In?”


1. What Was Going On

By the time the POM was approved, the team finally had something solid to argue about.

Not opinions. Not tools. Not Jim.

They had:

  • defined workflow steps
  • clearly named outcomes
  • an explicit scope boundary

Now came the uncomfortable part.

Someone had to say how much work this actually was - and which parts mattered right now.


2. The Conversation That Triggered This Step

The first instinct was predictable:

“This doesn’t look that bad.”

Which was immediately followed by:

“Let’s just do all of it.”

That suggestion was shut down quickly.

Not because it was wrong - but because it avoided the real decision:

What do we fund first, and what can wait without killing the business?

That question triggered the OE.


3. The Artifacts

Below are the Outcome Estimates as they were captured.

No precision theatre. Just relative sizing, risks, and sequencing.

## Outcome 1: Authorized scoring execution with auditability

### 1. Workflow Steps Included
1. Authorized users initiate season scoring
2. Scoring requests are logged for audit purposes

---

### 2. Work Slices
| Slice | Description | Dependencies | Risks | Estimate |
|------|-------------|--------------|-------|----------|
| Authorization rules | Define who may initiate scoring | Role definitions | Overly broad access | M |
| Request audit logging | Capture scoring requests and metadata | None | Missing traceability | M |

---

### 3. Outcome Delivery Summary
- **Total Slices:** 2
- **Complexity Profile:** S:0, M:2, L:0, XL:0
- **Delivery Window:** 4–6 weeks

---

### 4. Recommendation
Fund

---

## Outcome 2: Pre-publish review confidence

### 1. Workflow Steps Included
3. Staged results are available for review
4. Staged results can be compared to production results

---

### 2. Work Slices
| Slice | Description | Dependencies | Risks | Estimate |
|------|-------------|--------------|-------|----------|
| Staged results visibility | Expose staged scoring results | Outcome 1 | Misinterpretation of data | S |
| Result comparison | Compare staged vs production results | Historical data availability | False positives | S |

---

### 3. Outcome Delivery Summary
- **Total Slices:** 2
- **Complexity Profile:** S:2, M:0, L:0, XL:0
- **Delivery Window:** 2–3 weeks

---

### 4. Recommendation
Defer if required

---

## Outcome 3: Controlled publishing with audit trail

### 1. Workflow Steps Included
5. Selected results are published
6. Publishing operations are logged for audit purposes

---

### 2. Work Slices
| Slice | Description | Dependencies | Risks | Estimate |
|------|-------------|--------------|-------|----------|
| Publish controls | Define publish actions and safeguards | Outcome 1 | Accidental publication | S |
| Publish audit logging | Record publish operations | None | Incomplete audit trail | S |

---

### 3. Outcome Delivery Summary
- **Total Slices:** 2
- **Complexity Profile:** S:2, M:0, L:0, XL:0
- **Delivery Window:** 2–3 weeks

---

### 4. Recommendation
Fund

---

## Outcome 4: Stakeholder notification and confirmation

### 1. Workflow Steps Included
7. Published results are verified
8. Stakeholders are notified of published results

---

### 2. Work Slices
| Slice | Description | Dependencies | Risks | Estimate |
|------|-------------|--------------|-------|----------|
| Result verification | Confirm published results are complete | Outcome 3 | False confidence | S |
| Notification delivery | Notify stakeholders of publication | Contact data accuracy | Missed communication | S |

---

### 3. Outcome Delivery Summary
- **Total Slices:** 2
- **Complexity Profile:** S:2, M:0, L:0, XL:0
- **Delivery Window:** 1–2 weeks

---

### 4. Recommendation
Fund

---
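The four artifacts above all follow one repeating shape: work slices carry T-shirt estimates, which roll up into a complexity profile and a recommendation. A minimal sketch of that shape (hypothetical names, not the team's actual tooling) shows how the summary rows could be derived from the slice data instead of being maintained by hand:

```python
# Sketch of the OE artifact structure, assuming the data above.
# Class and field names are illustrative, not from any real tool.
from collections import Counter
from dataclasses import dataclass, field


@dataclass
class Slice:
    name: str
    estimate: str  # one of "S", "M", "L", "XL"


@dataclass
class Outcome:
    title: str
    slices: list = field(default_factory=list)
    recommendation: str = "Fund"

    def complexity_profile(self) -> dict:
        # Derive the "S:n, M:n, L:n, XL:n" summary from the slices.
        counts = Counter(s.estimate for s in self.slices)
        return {size: counts.get(size, 0) for size in ("S", "M", "L", "XL")}


outcomes = [
    Outcome("Authorized scoring execution with auditability",
            [Slice("Authorization rules", "M"),
             Slice("Request audit logging", "M")], "Fund"),
    Outcome("Pre-publish review confidence",
            [Slice("Staged results visibility", "S"),
             Slice("Result comparison", "S")], "Defer if required"),
    Outcome("Controlled publishing with audit trail",
            [Slice("Publish controls", "S"),
             Slice("Publish audit logging", "S")], "Fund"),
    Outcome("Stakeholder notification and confirmation",
            [Slice("Result verification", "S"),
             Slice("Notification delivery", "S")], "Fund"),
]

# The funding decision falls out of the data, not the other way round.
funded = [o.title for o in outcomes if o.recommendation == "Fund"]
```

Deriving the profile keeps the summary honest: if a slice's estimate changes, the "Complexity Profile" line changes with it, and nobody can quietly leave a stale total in the document.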


4. What Almost Went Wrong

Several people attempted to turn these estimates into commitments.

  • “So this means six weeks total?”
  • “Can we promise this by mid-season?”
  • “What if we just squeeze Outcome 2 in?”

All of those questions were politely ignored.

The OE does not answer when things will be done. It answers what it will take and what matters most.


5. The Decision

Decision Question

Do these estimates provide enough clarity to make funding and sequencing decisions?

Decision Made

Proceed.

Why This Was Good Enough

The OE:

  • made trade-offs explicit
  • avoided false precision
  • separated mission-critical from deferrable work

No one loved it. Everyone understood it.


6. What This Unlocked (And What It Didn’t)

Now Allowed

  • Prioritize outcomes
  • Decide sequencing
  • Bundle approved work into an initiative

Still Not Allowed

  • Writing requirements
  • Designing solutions
  • Starting implementation

That comes next.


7. Why This Step Matters

This is where realism enters the room.

Without OE, teams either:

  • underfund critical work
  • overcommit to non-essential features
  • or pretend everything is equally important

None of those end well.


8. Sarcastic Footnote

If everything here had been marked “MVP,” this document would be useless.

Fortunately, it wasn’t.