AI Signals — Week 16, Apr 13–17, 2026
- All five AI models are net bearish this week, a synchronized sentiment collapse this dataset has not seen before.
- Grok made the sharpest pivot, swinging from +2.1% bullish bias to -2.6%, a shift of nearly 5 percentage points in one week.
- Technology lost half its model-assigned upside in seven days, falling from +14.6% to +7.2%, yet still leads all sectors — which tells you how bad everything else looks.
- DeepSeek remains the cost anomaly of the AI analyst world: 18x cheaper than Claude per thousand valuations, with comparable output validity.
The Big Picture
Something unusual happened this week: all five models moved in the same direction at the same time. Every model that was net bullish last week — Claude (+2.7%), Gemini (+2.2%), GPT (+1.3%), Grok (+2.1%) — flipped negative. DeepSeek, already bearish at -1.6%, dug deeper to -4.2%. The aggregate consensus upside across 23 companies is now a rounding error below zero for the most optimistic models, and meaningfully negative for the rest.
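The bias-shift figures quoted here and in the scorecard below are simple week-on-week differences in each model's average implied upside. A minimal sketch using the rounded one-decimal figures from this report; note that Grok comes out at -4.7pp on these rounded inputs, versus the scorecard's -4.8pp, which presumably reflects rounding in the underlying unrounded data:

```python
# Week-on-week bias shift per model: this week's average implied
# upside minus last week's. Values are the rounded figures quoted
# in this report, expressed as fractions.
last_week = {"claude": 0.027, "gemini": 0.022, "gpt": 0.013,
             "grok": 0.021, "deepseek": -0.016}
this_week = {"claude": -0.005, "gemini": -0.013, "gpt": -0.007,
             "grok": -0.026, "deepseek": -0.042}

# Shift expressed in percentage points, rounded to one decimal
shifts_pp = {m: round((this_week[m] - last_week[m]) * 100, 1)
             for m in last_week}
print(shifts_pp)
```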
This kind of synchronized reversal is worth pausing on. Models don't coordinate. They don't read each other's outputs. When they all move the same way, it means the underlying data — prices, multiples, macro inputs — shifted hard enough that independent reasoning systems reached the same conclusion. That's a signal, not noise. The market, as reflected in current spot prices, has apparently run ahead of what these models consider justifiable by fundamentals.
The average consensus target price across the universe implies roughly flat-to-negative upside from current levels. When your AI analyst panel collectively shrugs at the market, that's worth taking seriously.
Trends
The most interesting trend signal this week is the divergence in conviction between rising and falling names. META saw its consensus target price rise on 3 of 4 days, with a 12.9% intra-week range — the widest of any trending name. That's not confidence; that's models wrestling with a volatile input set and landing in different places each day. The trend direction is up, but the range suggests the models are uncertain about the magnitude.
NOKIA is the cleaner story: 3 up-days, 0 down-days, a tighter 4.8% range. When models consistently nudge a target higher without reversing, that's genuine directional conviction rather than noise averaging out. Given that Nokia sits at the bottom of the upside table, with consensus targets about 26% below spot, even a consistent upward trend in targets hasn't closed the gap — the models simply think the stock has run too far.
UPM-Kymmene went the other way: 3 consecutive down-days in target prices, a -10.2% week-on-week target cut, and a -23% implied downside from spot. The models are not warming to Finnish paper. MSFT, despite a rising trend in targets, absorbed the largest single target-price cut in the dataset at -12.3% — a reminder that trend direction and magnitude can tell contradictory stories.
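The per-name statistics quoted above (up-days, intra-week range, week-on-week change) can all be derived from a daily series of consensus targets. A minimal sketch under one plausible definition of each statistic; the five-day series is hypothetical, not actual data from this report:

```python
def trend_stats(targets: list[float]) -> dict[str, float]:
    """Summarize a week's daily consensus target prices.

    Assumed definitions: range is (max - min) / min; week-on-week
    is last / first - 1. The report's exact definitions may differ.
    """
    ups = sum(1 for a, b in zip(targets, targets[1:]) if b > a)
    downs = sum(1 for a, b in zip(targets, targets[1:]) if b < a)
    return {
        "up_days": ups,
        "down_days": downs,
        "range": (max(targets) - min(targets)) / min(targets),
        "wow": targets[-1] / targets[0] - 1.0,
    }

# Hypothetical series: trend is up (3 up-days), but the wide
# intra-week range signals low conviction about magnitude.
print(trend_stats([100.0, 104.0, 98.0, 103.0, 106.0]))
```

This is why META's 12.9% range undercuts its 3-of-4 up-days: direction and dispersion are separate measurements, and only the combination reads as conviction.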
Sector Signals
Technology's -7.4 point week-on-week shift is the headline. The sector still carries the highest consensus upside at +7.2%, but that number was +14.6% last week. The models didn't suddenly hate tech — they recalibrated against prices that moved faster than their fundamental views. This is a classic pattern: price appreciation compresses model-implied upside, and the AI analysts dutifully mark it down.
Financials fell 4.8 points to -6.9% implied upside, driven in part by a -10.5% target cut on SAMPO and persistent skepticism about JPMorgan at current levels. The models see JPM at $254.85 fair value against a $309.95 spot, an implied downside of roughly 18%. That's a strong statement about bank valuations in the current rate environment.
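The JPM gap is straightforward arithmetic: consensus fair value over spot, minus one. A quick check, with the function name being illustrative rather than the platform's actual code:

```python
def implied_upside(target: float, spot: float) -> float:
    """Upside (or downside, if negative) implied by a target price."""
    return target / spot - 1.0

# JPM: $254.85 consensus fair value vs $309.95 spot
print(f"{implied_upside(254.85, 309.95):+.1%}")  # -17.8%, the ~18% gap cited above
```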
Energy was the week's only meaningful gainer in sector sentiment, improving +4.8 points to -21.8% — still deeply negative, but less so. With XOM receiving a modest +2.0% target revision, the models are cautiously less pessimistic. Note that both energy and materials are single- or two-company sectors here; treat those shifts as stock-specific signals, not broad sector reads.
Healthcare held almost perfectly flat at +14.0% implied upside, the second-best sector reading. ORNBV (Orion) and JNJ together anchor this, with JNJ receiving a +2.6% target upgrade. Stability in healthcare consensus during a week of broad bearish rotation is itself a signal about where the models see relative safety.
What the Models Reveal About Themselves
Grok's -4.8 point bias shift is the behavioral story of the week. Last week it was the most bullish model; this week it's the second most bearish. That kind of volatility in model-level bias — not stock-level, but the aggregate posture across 23 names — suggests Grok's valuation framework is more sensitive to market price inputs than its peers. It's not wrong to update on prices; that's rational. But the speed of the swing raises a question about whether Grok is anchoring too heavily on recent price momentum when constructing its DCF assumptions.
Claude remains the most expensive model at $40.51 per 1,000 valuations and the slowest at 28 seconds average latency, yet it's the only model with a non-zero terminal growth rate standard deviation (0.43%). Every other model locks terminal growth at exactly 2.0%; Claude varies it by company. Whether that sophistication justifies an 18x cost premium over DeepSeek ($2.27/1K) is a legitimate question for platform operators.
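The 18x figure is simply the ratio of the two cost-per-1,000 numbers from the scorecard:

```python
# Cost multiple: Claude vs DeepSeek, per 1,000 valuations
claude_cost_per_1k = 40.51
deepseek_cost_per_1k = 2.27
print(f"{claude_cost_per_1k / deepseek_cost_per_1k:.1f}x")  # 17.8x, i.e. the ~18x premium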
Gemini continues to be the reliability outlier: 89.6% valid output rate against 100% for the other four. Twelve failed valuations in a week of 115 attempts is not catastrophic, but it's consistent enough to be a structural characteristic rather than random error.
Where the Framework Breaks
The uncapped terminal value deviation figures are quietly alarming. Grok's DCF models produce terminal values that deviate 68.8% from the capped benchmark on average. Claude is at 60.7%, Gemini 58.3%. These aren't small rounding differences — they suggest the models' long-run growth assumptions, when unconstrained, diverge wildly from what the cap mechanism considers reasonable.
The cap rate itself tells the story: Grok caps 39.1% of its valuations, meaning in nearly two of every five cases, its raw DCF output had to be pulled back to prevent an absurd result. DeepSeek caps 47.0% of valuations — nearly half. A model that requires mechanical intervention in half its outputs to produce sensible numbers is not doing fundamental analysis; it's generating starting points that need to be corrected. This is the framework's honest limitation: the models are capable of producing internally consistent but economically nonsensical long-run projections, and the cap is the only thing standing between the output and fiction.
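To make the cap mechanism concrete, here is a minimal sketch under assumed mechanics: a Gordon-growth terminal value clamped to a multiple of a benchmark computed at the locked 2.0% growth rate. The 3.0x multiple, the deviation definition, and all inputs are illustrative assumptions, not the platform's documented rule:

```python
def gordon_terminal_value(fcf: float, wacc: float, g: float) -> float:
    """Standard Gordon-growth terminal value: FCF * (1 + g) / (WACC - g)."""
    return fcf * (1.0 + g) / (wacc - g)

def capped_terminal_value(raw_tv: float, benchmark_tv: float,
                          max_multiple: float = 3.0) -> tuple[float, bool]:
    """Clamp a raw terminal value; report whether the cap fired."""
    cap = benchmark_tv * max_multiple
    return (min(raw_tv, cap), raw_tv > cap)

# An aggressive long-run growth assumption inflates the raw TV:
raw = gordon_terminal_value(fcf=100.0, wacc=0.08, g=0.06)       # ~5300
benchmark = gordon_terminal_value(fcf=100.0, wacc=0.08, g=0.02) # ~1700
tv, was_capped = capped_terminal_value(raw, benchmark)
deviation = raw / tv - 1.0  # one possible "uncapped deviation" statistic
```

The sketch makes the text's point mechanically: because the denominator is WACC minus growth, a few extra points of assumed long-run growth inflate the terminal value non-linearly, so a clamp like this ends up doing real analytical work.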
The Model Scorecard
| Model | Avg Upside | Bias Shift (WoW) | Cap Rate | Valid % | Cost / 1K Valuations |
|---|---|---|---|---|---|
| Claude | -0.5% | -3.2pp | 34.8% | 100% | $40.51 |
| DeepSeek | -4.2% | -2.6pp | 47.0% | 100% | $2.27 |
| Gemini | -1.3% | -3.5pp | 36.9% | 89.6% | $9.48 |
| GPT | -0.7% | -2.0pp | 35.7% | 100% | $16.87 |
| Grok | -2.6% | -4.8pp | 39.1% | 100% | $15.63 |