Oracle
How Results Work

When the Oracle responds, it returns a structured set of results designed to give you multiple angles on your situation. This page explains the result format, the three role categories, and how to interpret the Oracle's output.


Result Structure

Each Oracle response contains two parts:

1. Synthesis

The synthesis is a 3-5 sentence paragraph at the top of the results. It weaves the most relevant models into an integrated thinking framework for your specific situation. The synthesis is not a recommendation or a prediction -- it is a structured way to think about the problem.

A good synthesis reads like advice from someone who has studied your situation through the lens of multiple disciplines simultaneously. It connects models that you might not have thought to connect.

2. Model Cards (15 Results)

Below the synthesis, 15 individual model cards are displayed, grouped by role. Each card contains:

  • Model name: The name of the mental model.
  • Discipline: Which field the model comes from.
  • Stance: What this model argues in the context of your situation. This is not a generic description of the model -- it is a specific application to your problem. For example, if you asked about a career move, the stance for "Opportunity Cost" might be: "Staying in your current role is not free -- you are paying the difference between what you earn now and what the equity could be worth."
  • Question: A pointed question derived from this model that you should ask yourself. Continuing the example: "What specifically are you giving up by not taking this offer, and have you priced that accurately?"
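The two-part structure above can be sketched as a TypeScript shape. This is a hypothetical illustration of the data described on this page -- the field names and types are assumptions, not the actual response format:

```typescript
// Hypothetical shape of an Oracle response. Field names are
// illustrative, chosen to match the card fields described above.
type Role = "supporting" | "challenging" | "process";

interface ModelCard {
  name: string;       // the mental model's name, e.g. "Opportunity Cost"
  discipline: string; // which field the model comes from
  stance: string;     // what the model argues in your specific situation
  question: string;   // a pointed question derived from the model
  role: Role;         // one of the three role categories
}

interface OracleResponse {
  synthesis: string;  // the 3-5 sentence integrated framework
  cards: ModelCard[]; // 15 cards, grouped by role for display
}
```

Grouping the 15 cards by `role` before rendering reproduces the display order described above.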

The Three Roles

Every model recommendation is classified into one of three roles. This classification is perhaps the most valuable aspect of the Oracle's output, because it prevents you from cherry-picking only the models that confirm what you already want to do.

Supporting

Models that argue for a direction or validate an approach. These are the frameworks that say "yes, this makes sense because..." or "this aligns with the principle of..."

Supporting models are not cheerleaders. They provide intellectual grounding for why a particular direction has structural merit. The stance explains the argument; the question tests whether the conditions for that argument actually hold in your case.

Reading supporting results: Look for which disciplines are represented. If your supporting models come from a single discipline, the support might be narrow. If they span multiple fields, the argument is more robust.
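The breadth check above can be made concrete with a small helper -- a hypothetical sketch, not part of the product:

```typescript
// Count distinct disciplines among a set of model cards.
// If the supporting models all come from one discipline, the
// support may be narrow; several disciplines suggest a more
// robust argument.
interface DisciplineCard {
  discipline: string;
}

function disciplineBreadth(cards: DisciplineCard[]): number {
  return new Set(cards.map((c) => c.discipline)).size;
}
```

A result of 1 for your supporting cards is the "narrow support" signal described above.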

Challenging

Models that present counterpoints, risks, or warnings. These are the frameworks that say "be careful because..." or "you might be falling into..."

Challenging models are not pessimistic by default. They represent genuine risks and cognitive traps that could undermine your thinking. Many of the most valuable Oracle results are challenging models -- they surface the blind spots you did not know you had.

Common challenging models include cognitive biases (Confirmation Bias, Overconfidence, Anchoring), risk frameworks (Tail Risk, Black Swan), and structural warnings (Winner's Curse, Survivorship Bias).

Reading challenging results: Pay special attention to the questions. A challenging model's question is designed to puncture a potential illusion. If you cannot answer it confidently, you have found a gap in your analysis.

Process

Models about timing, method, and how to think about the problem itself. These do not argue for or against a direction -- they guide the process of decision-making.

Process models answer questions like: "When should I decide?" (Optionality, Real Options), "How should I structure my analysis?" (First Principles, Inversion), "What information do I need?" (Bayesian Updating, Base Rate).

Reading process results: These are your methodological toolkit. Even if you ignore the supporting and challenging models, the process models tell you how to approach the problem with more rigor.


Graph Integration

When results arrive, the recommended models fire on the 3D graph simultaneously. This creates a visual constellation -- 15 nodes lighting up across the graph space. The camera automatically frames this constellation so you can see the spatial distribution.

What to observe in the constellation:

  • Cluster: If most recommended models are in one area of the graph, your situation is primarily relevant to a specific discipline or set of closely related ideas.
  • Spread: If recommended models are scattered across the graph, your situation touches multiple disciplines. This is common for complex, multi-dimensional decisions.
  • Bridges: Look for recommended models that sit between clusters. These are the models connecting different aspects of your situation.
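The cluster-versus-spread distinction can be estimated numerically. The sketch below is an assumption about how one might measure it from node positions -- the graph's actual coordinates and any thresholds are not specified here:

```typescript
// Rough cluster-vs-spread heuristic: the mean pairwise distance
// between the recommended nodes' 3D positions. A small value
// relative to the overall graph suggests a cluster; a large value
// suggests the recommendations are spread across disciplines.
type Vec3 = [number, number, number];

function meanPairwiseDistance(points: Vec3[]): number {
  let total = 0;
  let pairs = 0;
  for (let i = 0; i < points.length; i++) {
    for (let j = i + 1; j < points.length; j++) {
      const dx = points[i][0] - points[j][0];
      const dy = points[i][1] - points[j][1];
      const dz = points[i][2] - points[j][2];
      total += Math.sqrt(dx * dx + dy * dy + dz * dz);
      pairs++;
    }
  }
  return pairs === 0 ? 0 : total / pairs;
}
```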

Clicking Results

Click any model card in the Oracle results to:

  1. Pan the camera to that model on the graph.
  2. Fire the model with the activation sequence.
  3. Show its connections to other models (including other recommended models).

This lets you explore how your recommended models connect to each other and to the broader graph.

Arrow Key Navigation

Use the arrow keys to cycle through results while in Oracle Mode. Each press highlights the next model card and pans the camera to the corresponding node on the graph. This is a quick way to tour all 15 recommendations.
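The wrap-around behavior of this cycling can be sketched as a few lines of index arithmetic (a hypothetical implementation, shown only to illustrate the navigation described above):

```typescript
// Cycle through the 15 result cards with wrap-around:
// pressing forward on the last card returns to the first,
// and pressing backward on the first card jumps to the last.
function nextIndex(current: number, count: number, direction: 1 | -1): number {
  return (current + direction + count) % count;
}
```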


Interpreting Results

Look for Patterns

Do multiple models point in the same direction? That direction has broad intellectual support. Do supporting and challenging models contradict each other sharply? The tension between them is where the real decision lives.

Weight the Questions

The questions are often more valuable than the stances. A good Oracle question is one you have not asked yourself. If a question makes you uncomfortable or uncertain, that is a signal to dig deeper.

Use the Roles as Structure

The three-role classification gives you a natural decision-making process:

  1. Read the synthesis for the overall framework.
  2. Understand the supporting models: what structural logic argues for this?
  3. Engage with the challenging models: what could go wrong, and what are you missing?
  4. Apply the process models: how should you gather information and structure your analysis?

This sequence forces balanced thinking. Most people skip step 3 naturally (it is uncomfortable to seek out challenges to your preferred direction). The Oracle makes it unavoidable.


Limitations

The Oracle is a tool, not an authority. Its limitations include:

  • No external knowledge: The Oracle reasons only from the 700 models in the Lattice dataset. It does not search the web, access current news, or know about your specific industry.
  • Summary-based reasoning: The model descriptions are brief summaries. Claude reasons about their applicability based on these, not full textbook chapters.
  • Two-pass selection: The 15 models are selected through a two-pass process (shortlist 25 from the 700, then deep-analyze those 25 to pick the final 15). While this is more thorough than single-shot selection, there is no iterative refinement beyond these two passes.
  • No personalization: The Oracle does not learn from your previous queries or adapt to your decision-making style.
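The two-pass selection described above can be sketched in outline. This is illustrative only: the two scoring functions are placeholders for the actual Claude-based reasoning, which this page does not specify:

```typescript
// Two-pass selection sketch: shortlist 25 of the 700 models by a
// cheap relevance score, then pick the final 15 with a more
// expensive analysis. Both scoring functions are stand-ins for
// the real reasoning passes.
function selectModels<T>(
  all: T[],                     // all 700 models
  cheapScore: (m: T) => number, // pass 1: fast relevance estimate
  deepScore: (m: T) => number,  // pass 2: thorough applicability analysis
): T[] {
  const shortlist = [...all]
    .sort((a, b) => cheapScore(b) - cheapScore(a))
    .slice(0, 25);
  return shortlist
    .sort((a, b) => deepScore(b) - deepScore(a))
    .slice(0, 15);
}
```

Note that the second pass can only reorder what the first pass surfaced -- a model the cheap pass misses never reaches the deep analysis, which is the limitation the bullet above describes.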

These limitations are by design. The Oracle is meant to broaden your thinking, not replace it. It surfaces frameworks you might not have considered, structures them by role, and gives you pointed questions to drive your own analysis.