AI agents are now embedded in real enterprise workflows, and they’re still failing roughly one in three attempts on structured benchmarks. That gap between capability and reliability is the defining operational challenge for IT leaders in 2026, according to Stanford HAI’s ninth annual AI Index report.
This uneven, unpredictable performance is what the AI Index calls the “jagged frontier,” a term coined by AI researcher Ethan Mollick to describe the boundary between tasks AI handles with ease and tasks where it abruptly fails.
“AI models can win a gold medal at the International Mathematical Olympiad,” Stanford HAI researchers point out, “but still can’t reliably tell time.”
How models advanced in 2025
Enterprise AI adoption has reached 88%. Notable accomplishments in 2025 and early 2026:
Frontier models improved 30% in just one year on Humanity’s Last Exam (HLE), which includes 2,500 questions across math, natural sciences, ancient languages, and other specialized subfields. HLE was built to be difficult for AI and favorable to human experts.
Leading models scored above 87% on MMLU-Pro, which tests multi-step reasoning based on 12,000 human-reviewed questions across more than a dozen disciplines. This illustrates “how competitive the frontier has become on broad knowledge tasks,” the Stanford HAI researchers note.
Top models including Claude Opus 4.5, GPT-5.2, and Qwen3.5 scored between 62.9% and 70.2% on τ-bench. The benchmark tests agents on real-world tasks in realistic domains that involve chatting with a user and calling external tools or APIs.
Model accuracy on GAIA, which benchmarks general AI assistants, rose from about 20% to 74.5%.
Agent performance on SWE-bench Verified rose from 60% to near 100% in just one year. The benchmark evaluates models on their ability to resolve real-world software issues.
Success rates on WebArena increased from 15% in 2023 to 74.3% in early 2026. This benchmark presents a realistic web environment for evaluating autonomous AI agents, tasking them with information retrieval, site navigation, and content configuration.
Agent performance progressed from 17% in 2024 to roughly 65% in early 2026 on MLE-bench, which evaluates machine learning (ML) engineering capabilities.
AI agents are showing capability gains in cybersecurity. For instance, frontier models solved 93% of problems on Cybench, a benchmark that includes 40 professional-level tasks across six capture-the-flag categories, including cryptography, web security, reverse engineering, forensics, and exploitation.
That is up from 15% in 2024, which the report calls the “steepest improvement rate,” indicating that cybersecurity tasks are a “good fit for current agent capabilities.”
Video generation has also evolved significantly over the last year; models can now capture how objects behave. For instance, Google DeepMind’s Veo 3 was tested across more than 18,000 generated videos and demonstrated the ability to simulate buoyancy and solve mazes without having been trained on those tasks.
“Video generation models are no longer just producing realistic-looking content,” the researchers write. “Some are beginning to learn how the physical world actually works.”
Overall, AI is being used across a range of enterprise functions (knowledge management, software engineering and IT, marketing and sales) and is expanding into specialized domains like tax, mortgage processing, corporate finance, and legal reasoning, where accuracy ranges from 60% to 90%.
“AI capability is not plateauing,” Stanford HAI says. “It is accelerating and reaching more people than ever.”
AI capability surges, but reliability lags
Multimodal models now meet or exceed human baselines on PhD-level science questions, multimodal reasoning, and competition mathematics. For example, Gemini Deep Think earned a gold medal at the 2025 International Mathematical Olympiad (IMO), solving five of six problems end-to-end in natural language within the 4.5-hour time limit — a notable improvement from a silver-level score in 2024.
Yet these same AI systems still fail roughly one in three attempts and struggle with basic perception tasks, according to Stanford HAI. On ClockBench, a test covering 180 clock designs and 720 questions, Gemini Deep Think achieved only 50.1% accuracy, compared to roughly 90% for humans. GPT-4.5 High reached an almost identical score of 50.6%.
“Many multimodal models still struggle with something most humans find routine: Telling the time,” the Stanford HAI report points out. The seemingly simple task combines visual perception (identifying the clock hands and their positions) with the simple arithmetic of converting those positions into a time value. Errors at any of these steps can cascade into an incorrect result, according to the researchers.
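To see why small perception errors compound, consider the arithmetic involved once the hands have been located. The sketch below is purely illustrative and not from the report; it assumes the hand angles have already been extracted correctly, and misreading either angle by even a hand’s width shifts the output time.

```python
# Purely illustrative sketch (not from the report): the arithmetic a model must
# get right once it has correctly located the hands on an analog clock face.
def hands_to_time(hour_angle_deg: float, minute_angle_deg: float) -> str:
    """Convert hand angles, measured clockwise from 12, into a time string."""
    minutes = round(minute_angle_deg / 6) % 60     # minute hand sweeps 6 degrees per minute
    hours = int(hour_angle_deg // 30) % 12 or 12   # hour hand sweeps 30 degrees per hour
    return f"{hours}:{minutes:02d}"

# At 3:10 the hour hand sits at 95 degrees and the minute hand at 60 degrees.
print(hands_to_time(95, 60))  # -> 3:10
```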
In the analysis, models were shown a range of clock styles: standard analog clocks, clocks without a second hand, clocks with arrows as hands, and others with black dials or Roman numerals. But even after fine-tuning on 5,000 synthetic images, models improved only on familiar formats and failed to generalize to real-world variations like distorted dials or thinner hands.
Researchers also observed that when models confused the hour and minute hands, their ability to interpret hand direction deteriorated, suggesting that the challenge lies not just in data, but in integrating multiple visual cues.
“Even as models close the gap with human experts on knowledge-intensive tasks, this kind of visual reasoning remains a persistent challenge,” Stanford HAI notes.
Hallucination and multi-step reasoning remain major gaps
Even as models’ reasoning capabilities continue to advance, hallucinations remain a major concern.
In one benchmark, for instance, hallucination rates across 26 leading models ranged from 22% to 94%. Accuracy for some models dropped sharply under scrutiny: GPT-4o’s accuracy slid from 98.2% to 64.4%, for example, and DeepSeek R1 plummeted from more than 90% to 14.4%.
Grok 4.20 Beta, Claude 4.5 Haiku, and MiMo-V2-Pro, by contrast, showed the lowest hallucination rates.
Further, models continue to struggle with multi-step workflows, even as they are tasked with more of them. For example, on the τ-bench benchmark — which evaluates tool use and multi-turn reasoning — no model exceeded 71%, suggesting that “managing multiturn conversations while correctly using tools and following policy constraints remains difficult even for frontier models,” according to the Stanford HAI report.
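To make concrete what “managing multiturn conversations while correctly using tools” involves, the sketch below shows the bare skeleton of such an agent loop. Everything in it (the call_model stub, the lookup_order tool, the turn limit) is a hypothetical illustration rather than τ-bench’s actual harness or any vendor’s API; the point is that a single wrong tool call, skipped policy check, or lost piece of state anywhere in this loop sinks the whole task.

```python
# Minimal sketch of the multi-turn, tool-calling loop that benchmarks like
# τ-bench exercise. Every name here (call_model, lookup_order) is a
# hypothetical stand-in, not a real benchmark harness or vendor API.

TOOLS = {
    "lookup_order": lambda order_id: {"order_id": order_id, "status": "shipped"},
}

def call_model(conversation):
    # Stand-in for a real LLM call: once a tool result is present, answer the
    # user; otherwise request the lookup. A real agent would send the whole
    # conversation plus tool schemas to a model and parse its reply.
    if any(turn["role"] == "tool" for turn in conversation):
        return {"type": "final", "content": "Your order A123 has shipped."}
    return {"type": "tool_call", "tool": "lookup_order", "args": {"order_id": "A123"}}

def run_agent(user_message: str, max_turns: int = 5) -> str:
    conversation = [{"role": "user", "content": user_message}]
    for _ in range(max_turns):
        action = call_model(conversation)
        if action["type"] == "tool_call":
            # Policy constraints (e.g., "never promise a refund without
            # confirmation") would be checked here; agents that skip them fail
            # the task even when the final answer looks plausible.
            result = TOOLS[action["tool"]](**action["args"])
            conversation.append({"role": "tool", "content": str(result)})
        else:
            return action["content"]
    return "No answer within the turn limit."

print(run_agent("Where is my order A123?"))
```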
Models are becoming opaque
Leading models are now “nearly indistinguishable” from each other when it comes to performance, the Stanford HAI report notes. Open-weight models are more competitive than ever, and performance across the field is converging.
As capability is no longer a “clear differentiator,” competitive pressure is shifting toward cost, reliability, and real-world usefulness.
Frontier labs are disclosing less information about their models, evaluation methods are quickly losing relevance, and independent testing can’t always corroborate developer-reported metrics.
As Stanford HAI points out: “The most capable systems are now the least transparent.”
Training code, parameter counts, dataset sizes, and training durations are often withheld by firms including OpenAI, Anthropic, and Google. And transparency is declining more broadly: In 2025, 80 out of 95 models were released without corresponding training code, and only four were released with fully open-source code.
Further, after rising between 2023 and 2024, scores on the Foundation Model Transparency Index, which rates major foundation model developers on 100 transparency indicators, have since dropped. The average score is now 40, a 17-point decrease.
“Major gaps persist in disclosure around training data, compute resources, and post-deployment impact,” according to the report.
Benchmarking AI is getting harder — and less reliable
The benchmarks used to measure AI progress are facing growing reliability issues, with error rates reaching as high as 42% on widely used evaluations. “AI is being tested more ambitiously across reasoning, safety, and real-world task execution,” the Stanford report notes, yet “those measurements are increasingly difficult to rely on.”
Key challenges include:
“Sparse and declining” reporting on bias from developers
Benchmark contamination, where models are exposed to test data during training, leading to “falsely inflated” scores (a simplified check is sketched after this list)
Discrepancies between developer-reported results and independent testing
“Poorly constructed” evals lacking documentation, details on statistical significance, and reproducible scripts
“Growing opacity and non-standard prompting” that make model-to-model comparisons unreliable
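Contamination, the second item above, is often screened for by looking for long overlapping word sequences between training data and test items. The sketch below is a simplified illustration of that idea only; the function names and the eight-word window are assumptions, not a method prescribed by the report.

```python
# Hypothetical sketch of one common contamination check: flag benchmark items
# whose long word n-grams also appear in a training corpus. Real audits add
# normalization, fuzzy matching, and far larger scale; this only shows the idea.

def ngrams(text: str, n: int = 8) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def flag_contaminated(benchmark_items: list, training_docs: list, n: int = 8) -> list:
    corpus_ngrams = set()
    for doc in training_docs:
        corpus_ngrams |= ngrams(doc, n)
    # Any test item that shares an n-gram with the corpus is suspect: the model
    # may have memorized the answer rather than reasoned its way to it.
    return [item for item in benchmark_items if ngrams(item, n) & corpus_ngrams]
```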
“Even when benchmark scores are technically valid, strong benchmark performance does not always translate to real-world utility,” according to the report. Further, “AI capability is outpacing the benchmarks designed to measure it.”
This is leading to “benchmark saturation,” where models achieve scores so high that tests can no longer differentiate between them. More complex, interactive forms of intelligence are becoming increasingly difficult to benchmark. Some are calling for evals that measure human-AI collaboration rather than AI performance in isolation, but such approaches are still in early development.
“Evaluations intended to be challenging for years are saturated in months, compressing the window in which benchmarks remain useful for tracking progress,” according to Stanford HAI.
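A quick way to see the saturation problem is to look at the statistical margin of error near the top of the scale. The sketch below uses an assumed 500-question benchmark (an illustration, not a figure from the report): once two models both score in the mid-90s, the gap between them is smaller than the uncertainty in either measurement.

```python
# Back-of-the-envelope illustration (not a figure from the report) of why
# saturated benchmarks stop discriminating: near the ceiling, score gaps
# shrink below the statistical noise of a finite test set.
import math

def margin_of_error(accuracy: float, n_questions: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a benchmark accuracy estimate."""
    return z * math.sqrt(accuracy * (1 - accuracy) / n_questions)

# On a hypothetical 500-question benchmark, models scoring 96% and 97% are
# statistically indistinguishable: each estimate carries a margin of roughly
# 1.5 to 1.7 points, larger than the 1-point gap between them.
for acc in (0.96, 0.97):
    print(f"{acc:.0%} accuracy, margin of error {margin_of_error(acc, 500):.1%}")
```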
Are we at “peak data”?
As builders move into more data-intensive inference, there is growing concern about data bottlenecks and scaling sustainability. Leading researchers are warning that the available pool of high-quality human text and web data has been “exhausted” — a state referred to as “peak data.”
Hybrid approaches combining real and synthetic data can “significantly accelerate training” — sometimes by a factor of 5 to 10 — and smaller models trained on purely synthetic data have shown promise for narrowly defined tasks like classification or code generation, according to Stanford HAI.
Synthetically generated data can be effective for improving model performance in post-training settings, including fine-tuning, alignment, instruction tuning, and reinforcement learning (RL), the report notes. However, “these gains have not generalized to large, general-purpose language models.”
Rather than scaling data “indiscriminately,” researchers are turning to pruning, curating, and refining inputs, improving performance by cleaning labels, deduplicating samples, and constructing higher-quality datasets overall.
“Discussions on data availability often overlook an important shift in recent AI research,” according to the report. “Performance gains are increasingly driven by improving the quality of existing datasets, not by acquiring more.”
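In practice, cleaning labels, deduplicating samples, and filtering low-quality inputs begins with pipelines along the lines of the simplified sketch below. The filters and thresholds shown are assumptions for illustration; production curation also involves fuzzy deduplication, quality and toxicity classifiers, and label auditing.

```python
# Simplified sketch of the kind of curation the report describes: exact
# deduplication plus a crude length filter. The thresholds and heuristics are
# illustrative assumptions, not recommendations from the AI Index.
import hashlib

def curate(samples: list, min_words: int = 20) -> list:
    seen_hashes = set()
    curated = []
    for text in samples:
        digest = hashlib.sha256(text.strip().lower().encode()).hexdigest()
        if digest in seen_hashes:
            continue                            # drop exact duplicates
        if len(text.split()) < min_words:
            continue                            # drop fragments too short to train on
        seen_hashes.add(digest)
        curated.append(text)
    return curated
```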
Responsible AI is falling behind
While the infrastructure for responsible AI is growing, progress has been “uneven” and is unable to keep pace with rapid capability gains, according to Stanford HAI.
Almost all leading frontier AI model developers report results on capability benchmarks, but corresponding reporting on safety and responsibility is inconsistent and “spotty.”
Documented AI incidents rose significantly year over year — 362 in 2025 compared to 233 in 2024. And, while several frontier models received “Very Good” or “Good” safety ratings under standard use (per the AILuminate benchmark, which assesses generative AI across 12 “hazard” categories), safety performance dropped across all models when tested against jailbreak attempts using adversarial prompts.
“AI models perform well on safety tests under normal conditions, but their defenses weaken under deliberate attack,” Stanford HAI notes.
Adding to this challenge, builders have reported that improving one dimension, such as safety, can degrade another, like accuracy. “The infrastructure for responsible AI is growing, but progress has been uneven, and it is not keeping pace with the speed of AI deployment,” according to Stanford researchers.
The Stanford data makes one thing clear: the gap that matters in 2026 isn’t between AI and human performance. It’s between what AI can do in a demo and what it does reliably in production. Right now — with less transparency from the labs and benchmarks that saturate before they’re useful — that gap is harder to measure than ever.