The Right Side of Wrong

Is Karen Read Guilty in John O'Keefe's Death?

This is an analysis of the Karen Read case using Bayes' Theorem.

⚠️ Limitations & Caveats

This site is intended to help us explore how reasonable people, starting from different assumptions or weighing evidence differently, can reach very different conclusions about the same case. By making our reasoning explicit, we can better understand how and why viewpoints diverge—even when everyone is acting in good faith. This is an educational exploration of Bayesian reasoning, not a verdict predictor.

Bayesian Analysis of the Karen Read Case

Post-Verdict Analysis

Case Outcome: Not Guilty of Murder, Convicted of OUI

Verdict Date: June 18, 2025

Verdict:

  • Not Guilty: Second-degree murder, Manslaughter, Leaving the scene of an accident
  • Guilty: Operating Under the Influence (OUI)

Sentence: 1 year probation, 24D alcohol education program

Jury Deliberation: 5 days (21+ hours)

Key Factors in the Verdict: The jury found reasonable doubt regarding the fatal incident itself, leading to acquittal on the more serious charges. The OUI conviction rested on the evidence of her drinking that night presented at trial. The defense's arguments about investigative flaws and evidentiary issues were influential in the outcome.

This page explains the Karen Read case using Bayesian logic—a way of updating our beliefs as new evidence comes in. Think of it like detective thinking: start with what you believe, and change your mind a little or a lot depending on the clues.

🤔 What Are We Trying to Figure Out?

Who (or what) caused John O'Keefe's death?

We consider three main possibilities:

  • H1: Karen did it
  • H2: Someone else did it
  • H3: A mixed scenario that blends parts of both

🎲 What Is Bayesian Thinking?

Imagine you're guessing who stole a cookie. You might first guess your little brother (because he does it often), but then you see your dad has chocolate on his shirt! That's a clue. So you update your guess.

Bayesian thinking works like this:

  1. Start with what you think is likely (prior belief)
  2. Get a new clue (evidence)
  3. Update your belief based on how well each explanation fits the clue
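Here is a minimal sketch of those three steps in code, using the cookie example above. All the numbers are assumptions chosen only for illustration:

```python
# Toy Bayesian update for the cookie example above.
# All numbers here are illustrative assumptions, not data.
priors = {"little brother": 0.7, "dad": 0.3}   # step 1: prior beliefs

# Step 2: a new clue arrives (chocolate on dad's shirt).
# How likely would that clue be under each explanation? (assumed values)
likelihoods = {"little brother": 0.1, "dad": 0.8}

# Step 3: update each belief in proportion to how well it explains the clue.
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: p / total for h, p in unnormalized.items()}

print(posteriors)  # dad jumps from 30% to roughly 77%
```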

1. Start with a Prior

We looked at how often people are hurt by their partners. That's not super common when there's no history of violence.

Default prior probabilities:

  • H1 (Karen did it): 10%
  • H2 (Someone else did it): 60%
  • H3 (Mixed scenario): 30%

Adjust the sliders to set your priors:

Total must equal 100%. If it doesn't, values will be scaled automatically.
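A small sketch of how that automatic scaling could work, using hypothetical slider values:

```python
# Rescale hypothetical slider values so the three priors sum to 100%.
sliders = {"H1": 15, "H2": 55, "H3": 40}   # sums to 110, so it gets scaled

total = sum(sliders.values())
priors = {h: value / total for h, value in sliders.items()}

for h, p in priors.items():
    print(f"{h}: {p:.0%}")   # H1: 14%, H2: 50%, H3: 36%
```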

2. Evaluate the Evidence

Here's how each piece of evidence fits with each story:

| Clue | Fits H1 (Karen)? | Fits H2 (Others)? | Fits H3 (Mixed)? | Comments |
|---|---|---|---|---|
| Body found on lawn | Maybe | Likely | Likely | Could have been moved |
| No blood in her car | Unlikely | Very likely | Likely | No crash signs inside car |
| Injuries don't match car hit | Unlikely | Likely | Possible | Injuries match falling or being hit, not run over |
| Tail light pieces near body | Medium | Low | Medium | Some think they were planted |
| Phone tracked inside house | Low | High | Medium | His phone was still in the house after Karen left |
| Police video gaps | Low | Medium | High | Raises suspicion |
| Police story changes | Low | High | High | Defense says there's a cover-up |
| Karen was emotional | Medium | Medium | Medium | Could mean guilt or confusion |
| New timeline in 2nd trial | Low | High | High | Supports idea that he was hurt inside, not outside |
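One way to turn the table's qualitative ratings into numbers and combine several clues is sketched below. The mapping from words like "Unlikely" or "Very likely" to probabilities is an assumption made purely for illustration, and the sketch treats the clues as independent; change the mapping and the result changes, which is exactly the point of the exercise.

```python
# Combine several clues under Bayes' rule, multiplying and renormalizing per clue.
# The word-to-number mapping below is an illustrative assumption, not part of the case,
# and clues are treated as independent for simplicity.
rating_to_prob = {
    "Very likely": 0.9, "Likely": 0.7, "High": 0.7, "Medium": 0.5,
    "Maybe": 0.5, "Possible": 0.4, "Low": 0.2, "Unlikely": 0.2,
}

priors = {"H1": 0.10, "H2": 0.60, "H3": 0.30}   # the default priors used above

# A few rows from the table: clue -> (fit with H1, fit with H2, fit with H3).
evidence = {
    "Body found on lawn":            ("Maybe", "Likely", "Likely"),
    "No blood in her car":           ("Unlikely", "Very likely", "Likely"),
    "Injuries don't match car hit":  ("Unlikely", "Likely", "Possible"),
    "Tail light pieces near body":   ("Medium", "Low", "Medium"),
    "Phone tracked inside house":    ("Low", "High", "Medium"),
}

posterior = dict(priors)
for fits in evidence.values():
    # Multiply each hypothesis by how well it explains this clue, then renormalize.
    for h, rating in zip(("H1", "H2", "H3"), fits):
        posterior[h] *= rating_to_prob[rating]
    total = sum(posterior.values())
    posterior = {h: p / total for h, p in posterior.items()}

print({h: round(p, 3) for h, p in posterior.items()})
```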

3. What If It's Partially True? (H3)

Real life isn't always clear-cut. What if parts of both stories are true? Ideas like that blend elements of H1 and H2. That's H3, the "middle-ground" theory.

4. What Happens When We Add It All Up?

Adjust how strongly the evidence supports each scenario. The sliders let you weigh different pieces of evidence:

How to interpret the weights:

  • 0–1×: Weak evidence
  • 5–7×: Strong evidence
  • 10×: Very strong/conclusive
  • Values in between represent varying degrees of support
💡 Tip: Play with the sliders and priors to see how your beliefs change! Bayesian reasoning is about making your assumptions explicit and seeing how conclusions shift.
How Bayesian Updating Works:

Prior → Evidence → Posterior: start with a prior belief, update it with evidence, and get a new belief (the posterior).

Numeric Walkthrough
Example: Suppose your prior for H1 is 20%, and you rate a piece of evidence as 80% likely if H1 is true, and 40% likely if H2 is true.
  • Prior odds for H1:H2 = 20:80 = 0.25
  • Likelihood ratio = 80/40 = 2
  • Posterior odds = 0.25 × 2 = 0.5
  • Posterior probability for H1 = 0.5 / (0.5 + 1) = 33%
This is how a single piece of evidence updates your belief!
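The same walkthrough in code, to check the arithmetic (the 80% and 40% likelihoods are the ones given in the example above):

```python
# Reproduce the numeric walkthrough in odds form.
prior_h1 = 0.20
prior_h2 = 0.80

prior_odds = prior_h1 / prior_h2                  # 0.25
likelihood_ratio = 0.80 / 0.40                    # 2, i.e. a "2x" evidence weight
posterior_odds = prior_odds * likelihood_ratio    # 0.5

posterior_h1 = posterior_odds / (posterior_odds + 1)
print(f"{posterior_h1:.0%}")                      # 33%
```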


As you adjust the sliders, the chart below will update to show how different weightings of the evidence affect the probabilities of each scenario, given your initial belief set above (your priors).

  • H1 (Karen did it): 10% likelihood given the evidence
  • H2 (Someone else did it): 60% likelihood given the evidence
  • H3 (Mixed scenario): 30% likelihood given the evidence

🎯 What Should the Jury Think?

The jury isn't being asked, "Did Karen do it?" They're being asked: "Are you sure—beyond a reasonable doubt—that she did?"

Bayesian answer: No. Even if H1 is possible, it's the least supported explanation here. Under the priors above, H2 and H3 together account for 90% of the probability.

✅ Final Thoughts

Bayesian logic helps us:

  • Make our starting assumptions (priors) explicit
  • Weigh each clue by how well it fits each explanation
  • See how and why reasonable people can reach different conclusions

In this case, H1 is the least supported explanation under the priors and weights used here, while H2 and H3 together carry most of the probability.

Unless much stronger proof appears, Bayesian thinking says: reasonable doubt remains.

🎛️ Try It Yourself: Adjust the Priors

Use the sliders above to set your own starting assumptions. The chart below shows how the outcome changes when the same evidence weights are applied to different priors.
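A sketch of that idea in code: sweep over a range of priors for H1 while holding the evidence weight fixed, and watch how the posterior moves. The fixed weight of 2 on the evidence is an assumed value chosen only for illustration:

```python
# Sensitivity check: vary the prior for H1, keep the evidence weight constant.
EVIDENCE_WEIGHT = 2.0   # assumed fixed likelihood ratio favoring H1

for prior_h1 in (0.05, 0.10, 0.20, 0.40, 0.60):
    prior_odds = prior_h1 / (1 - prior_h1)
    posterior_odds = prior_odds * EVIDENCE_WEIGHT
    posterior_h1 = posterior_odds / (1 + posterior_odds)
    print(f"prior {prior_h1:.0%} -> posterior {posterior_h1:.0%}")
```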


Last updated: June 17, 2025