
Why Our AI Says "Hold" 70% of the Time

Most analysis cycles end with no action. That's not a failure — it's the system's most important feature.

The myth of constant action

There's a common misconception that a good trading system should always be doing something — buying, selling, adjusting. In reality, the opposite is true. The best traders, human or algorithmic, spend most of their time waiting.

Warren Buffett calls it "sitting on your hands." In quantitative finance, it's called having a high conviction threshold. In our experiment, it means the AI needs multiple independent data sources to agree before it acts.

What "Hold" means in our system

When Cortex outputs a Hold decision, it doesn't mean the AI found nothing interesting. It means the evidence wasn't strong enough across enough dimensions to justify risk.

Each cycle, the system analyzes multiple data layers: technical price action, market sentiment, on-chain activity, and news flow. A simulated trade requires convergence — signals pointing in the same direction from at least two independent sources. One bullish indicator alone isn't enough.
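The convergence rule can be sketched in a few lines. This is an illustrative reconstruction, not Cortex's actual code — the signal names and the +1/0/-1 encoding are assumptions for the example:

```python
# Sketch of the multi-source convergence rule: act only when at least
# two independent sources agree AND none of them actively disagrees.
# Signal encoding (assumed for this example): +1 bullish, -1 bearish, 0 neutral.

def convergence_decision(signals: dict, min_agree: int = 2) -> str:
    bullish = sum(1 for v in signals.values() if v > 0)
    bearish = sum(1 for v in signals.values() if v < 0)
    if bullish >= min_agree and bearish == 0:
        return "BUY"
    if bearish >= min_agree and bullish == 0:
        return "SELL"
    return "HOLD"

# A breakout plus bullish news, but with negative sentiment: conflicting
# evidence, so the system steps aside rather than trade the trap.
print(convergence_decision(
    {"technical": 1, "sentiment": -1, "on_chain": 0, "news": 1}
))  # → HOLD
```

Note that a single conflicting source is enough to force a Hold, which is exactly why most cycles end in inaction.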

This is deliberate. Single-factor decisions are among the most common sources of trading losses. A price breakout accompanied by negative sentiment is a trap. Bullish news paired with deteriorating on-chain metrics is a sell-the-news event. The AI is designed to recognize these conflicts and step aside.

AI proposes, code enforces

Even when the AI decides to act, a separate layer of hard-coded checks can veto the decision. These rules are entirely non-AI — deterministic code that enforces position sizing limits, exposure caps, and loss thresholds.
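A minimal sketch of such a veto layer might look like this. All thresholds here are invented for illustration — the article does not publish Cortex's actual limits:

```python
# Deterministic risk-veto sketch: pure rules, no AI involved.
# The limits below are hypothetical examples, not real system parameters.

MAX_POSITION_FRACTION = 0.10   # no single position above 10% of equity
MAX_TOTAL_EXPOSURE = 0.50      # total open exposure capped at 50%
MAX_DAILY_LOSS = 0.03          # halt trading after a 3% daily drawdown

def risk_veto(proposed_size: float, current_exposure: float,
              daily_loss: float):
    """Runs AFTER the AI proposes a trade. Returns (approved, reason)."""
    if daily_loss >= MAX_DAILY_LOSS:
        return False, "daily loss limit reached"
    if proposed_size > MAX_POSITION_FRACTION:
        return False, "position size over limit"
    if current_exposure + proposed_size > MAX_TOTAL_EXPOSURE:
        return False, "exposure cap exceeded"
    return True, "approved"

# The AI proposes an 8% position, but exposure would rise to 53% — vetoed.
print(risk_veto(0.08, 0.45, 0.01))
```

The key property is that this layer is boringly predictable: the same inputs always produce the same veto, regardless of how confident the AI's reasoning sounds.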

This two-layer design — AI judgment filtered through mechanical rules — is one of the core hypotheses of the experiment. Can you combine the flexibility of large language models with the discipline of rules-based risk management to produce better outcomes than either approach alone? That's what we're testing.

Why inaction is interesting

Most AI trading demos show impressive backtests with frequent trades and smooth equity curves. Real markets are different. They spend most of their time in conditions that don't offer clear edges — sideways ranges, conflicting signals, and ambiguous setups.

An AI that can recognize "I don't have enough information to act" is arguably more interesting than one that always finds a reason to trade. Overtrading is one of the most common and costly mistakes in both human and algorithmic trading.

We log every Hold decision — not just the trades. Every cycle is recorded on the dashboard whether the AI acted or not. This creates a complete picture of the system's behavior that you can analyze yourself: when does it act? When does it wait? How does the ratio shift across different market conditions?
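Because inaction is logged alongside action, the hold rate is a simple aggregate over cycle records. A sketch, with an invented record format (the real dashboard schema may differ):

```python
# Hypothetical log analysis: compute the hold rate from cycle records.
# The list below is fabricated sample data, not real experiment output.
from collections import Counter

cycles = [
    {"decision": "HOLD"}, {"decision": "BUY"},  {"decision": "HOLD"},
    {"decision": "HOLD"}, {"decision": "SELL"}, {"decision": "HOLD"},
    {"decision": "HOLD"}, {"decision": "HOLD"}, {"decision": "HOLD"},
    {"decision": "BUY"},
]

counts = Counter(c["decision"] for c in cycles)
hold_rate = counts["HOLD"] / len(cycles)
print(f"hold rate: {hold_rate:.0%}")  # → hold rate: 70%
```

Segmenting the same computation by market regime (trending vs. ranging) is how you'd answer whether the ratio adapts to conditions.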

The open questions

A high hold rate raises questions that only time and data can answer:

  • Is the system too cautious? Does waiting for multi-source convergence cause it to miss valid opportunities? Or does the selectivity protect capital?
  • How does the hold rate change with the market? Does it naturally adapt — trading more in trending markets, less in choppy ones? Or is it static regardless of conditions?
  • What happens to the trades the risk manager rejects? Would they have been winners or losers? This is verifiable data — and the answer may evolve over time.

These aren't rhetorical questions. They're empirical ones, and the experiment is building the dataset to answer them. The answers will change as market conditions change — which is exactly why following the experiment over time is more valuable than any single snapshot.

See the full picture

Every cycle — action and inaction — is logged on the dashboard. Premium observers can follow the AI's reasoning in real time — including the counter-arguments it weighed before deciding to wait. The answer to "why did it hold?" is often more revealing than "why did it trade?"

Cortex is an independent AI research experiment — not a financial advisory service.

Paper trading only — no real money involved. Past simulated results do not indicate future performance.