
The Best AI Models for Summarization in 2025

We tested 6 models on academic papers, legal documents, news articles, and business reports. The results reveal significant differences in compression quality, hallucination rate, and key-point retention.

Travis Johnson

Founder, Deepest

July 11, 2025 · 10 min read

AI summarization quality varies significantly across models — not just in length, but in what information gets preserved and what gets dropped. Claude 3.5 Sonnet produces the most accurate, nuanced summaries; Gemini 2.0 Pro handles the longest documents; GPT-4o strikes the best balance of quality and speed.

The Summarization Test Methodology

We tested six models across four document types: academic research papers, legal contracts, news articles, and business reports. For each document type, we evaluated summaries on compression quality (retaining key information), hallucination rate (introducing false information), structural accuracy (correctly representing the original's organization), and key-point retention (capturing the most important claims).
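As a rough illustration of how two of these metrics can be scored, here is a minimal sketch in Python. It assumes key points and claims have been manually annotated per document; the function names and annotation format are our own, not part of any standard evaluation framework.

```python
def key_point_retention(source_points: set[str], summary_points: set[str]) -> float:
    """Fraction of annotated source key points that survive into the summary."""
    if not source_points:
        return 1.0
    return len(source_points & summary_points) / len(source_points)

def hallucination_rate(claims_grounded: list[bool]) -> float:
    """Fraction of summary claims NOT supported by the source document.

    Each entry is True if an annotator judged the claim grounded in the source.
    """
    if not claims_grounded:
        return 0.0
    return claims_grounded.count(False) / len(claims_grounded)
```

For example, a summary that keeps 3 of 4 annotated key points scores 75% retention; a summary with 1 unsupported claim out of 4 has a 25% hallucination rate.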

Overall Summarization Rankings

| Model | Compression Quality | Hallucination Rate | Key-Point Retention | Overall Score |
|---|---|---|---|---|
| Claude 3.5 Sonnet | 9.2/10 | 3.1% | 91% | 9.0/10 |
| Gemini 2.0 Pro | 8.8/10 | 4.2% | 89% | 8.6/10 |
| GPT-4o | 8.9/10 | 5.8% | 86% | 8.4/10 |
| Mistral Large | 8.1/10 | 7.2% | 81% | 7.8/10 |
| Gemini 1.5 Flash | 7.7/10 | 8.5% | 78% | 7.4/10 |
| GPT-4o mini | 7.4/10 | 11.2% | 74% | 7.0/10 |

Key Finding: Claude 3.5 Sonnet has a 3.1% hallucination rate on summarization tasks — roughly half GPT-4o's 5.8% rate. For documents where accuracy is critical (legal, medical, financial), this difference is meaningful.

Academic Papers: Claude Leads on Technical Accuracy

Summarizing academic papers requires maintaining the precise relationships between methodology, findings, and conclusions. A summary that reverses causality, omits limitations, or overstates certainty can be actively harmful.

Claude 3.5 Sonnet performed best on academic papers. It consistently preserved the hedging language of the original studies (correctly representing "may suggest" rather than "proves"), accurately described methodology, and retained important caveats. GPT-4o occasionally overstated the confidence of findings; Gemini sometimes sacrificed methodological nuance for readability.

Legal Documents: Gemini's Long-Context Advantage

Legal contracts and regulatory documents often run to hundreds of pages. Gemini 2.0 Pro's 1-million-token context window means it can process entire contracts without chunking, which significantly improves summarization coherence for very long documents.

For legal documents under ~80 pages, Claude 3.5 Sonnet produces the best summaries — capturing key obligations, definitions, and risk provisions accurately. For documents exceeding 100 pages, Gemini 2.0 Pro's ability to maintain cross-document context across the full text gives it a practical advantage.
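For models whose context window cannot hold a full contract, the standard workaround is to split the document into overlapping chunks, summarize each, then summarize the summaries. A minimal chunking sketch (character-based for simplicity; real pipelines typically split on tokens or section boundaries, and the parameter values here are illustrative):

```python
def chunk_document(text: str, chunk_chars: int = 40_000, overlap: int = 2_000) -> list[str]:
    """Split a long document into overlapping chunks so each fits a
    limited context window. Overlap preserves context across boundaries,
    which is where chunked summarization typically loses coherence."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + chunk_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap  # step back so adjacent chunks share context
    return chunks
```

Even with overlap, chunked summarization can miss cross-references between distant sections (for example, a definition on page 3 that governs an obligation on page 180), which is exactly where a full-context model like Gemini 2.0 Pro has its advantage.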

Legal Warning: AI summaries of legal documents should never replace review by qualified legal counsel. Use AI summaries to get oriented before legal review — not to replace it.

News Articles: GPT-4o Is Fast and Good Enough

For news articles and shorter informational content (under 3,000 words), GPT-4o is the most practical choice. It produces accurate, well-structured summaries at the fastest speed. The quality difference between GPT-4o and Claude for short-form news summarization is marginal.

GPT-4o can falter on news when summarizing events with complex political context, where slight framing changes can introduce bias. Claude handles politically nuanced topics with more careful framing.

Business Reports and Presentations

For business documents — market reports, financial analyses, strategy decks — Claude 3.5 Sonnet again leads. Its summaries preserve the narrative logic of business arguments and correctly identify which data points are evidence for which conclusions.

GPT-4o is a close second for business documents. Both models correctly identify the key takeaways from business reports; Claude's edge is in preserving the reasoning structure rather than just the conclusions.

Hallucination Patterns in Summarization

Summarization hallucinations are different from knowledge hallucinations. Instead of inventing facts, models hallucinate by:

  • Overstating certainty: "The study proves X" when the original says "suggests X"
  • Reversing causality: Getting the direction of a relationship backwards
  • Conflating statistics: Applying one study's numbers to a different claim
  • Omitting limitations: Summarizing conclusions without the caveats
  • Interpolating gaps: Adding plausible-sounding detail not in the original

Claude 3.5 Sonnet makes the fewest errors of all five types. GPT-4o mini makes the most, particularly on overstating certainty and interpolating gaps.

Prompting for Better Summaries

Model choice is important, but prompting significantly affects summary quality across all models:

  • Specify the audience: "Summarize this for a non-technical executive" vs. "Summarize for a PhD researcher"
  • Request structure explicitly: "Give me: 1) the main argument, 2) the key evidence, 3) the limitations, 4) the implications"
  • Ask for hedging preservation: "Preserve all hedging language from the original — do not increase certainty"
  • Set length constraints: "Summarize in 150 words or fewer" — models default to longer than necessary
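The four tips above can be combined into a single reusable prompt template. A sketch (the function name and defaults are our own; adjust the audience and word limit per document):

```python
def build_summary_prompt(document: str,
                         audience: str = "a non-technical executive",
                         max_words: int = 150) -> str:
    """Compose a summarization prompt applying all four tips:
    audience, explicit structure, hedging preservation, length limit."""
    return (
        f"Summarize the document below for {audience}.\n"
        "Give me: 1) the main argument, 2) the key evidence, "
        "3) the limitations, 4) the implications.\n"
        "Preserve all hedging language from the original; "
        "do not increase certainty.\n"
        f"Summarize in {max_words} words or fewer.\n\n"
        f"Document:\n{document}"
    )
```

The same template works across all six models tested here, which makes side-by-side comparisons fairer: differences in output reflect the model, not the prompt.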

Document Type Recommendations

| Document Type | Best Model | Why |
|---|---|---|
| Academic papers | Claude 3.5 Sonnet | Preserves hedging, methodology accuracy |
| Legal contracts (short) | Claude 3.5 Sonnet | Accurate key-term extraction |
| Legal contracts (100+ pages) | Gemini 2.0 Pro | 1M-token context, full-document coherence |
| News articles | GPT-4o | Fast, accurate, good-enough quality |
| Business reports | Claude 3.5 Sonnet | Preserves argument structure |
| High volume, cost-sensitive | GPT-4o mini | 11x cheaper than GPT-4o with acceptable quality |

Frequently Asked Questions

Which AI model summarizes PDFs best?

Both Claude 3.5 Sonnet (via Claude.ai) and Gemini 2.0 Pro (via Google AI Studio) allow direct PDF uploads and produce high-quality summaries. Claude produces better quality for most document types; Gemini handles longer PDFs better.

How accurate are AI summaries?

The best models (Claude 3.5 Sonnet, Gemini 2.0 Pro) achieve 88–91% key-point retention and 3–4% hallucination rates on summarization tasks. For important documents, always read the original or have an expert review the summary.

Can AI summarize audio or video recordings?

Gemini 2.0 Pro can process audio and video directly. GPT-4o with Whisper API can transcribe audio for summarization. Claude is text-only — you'd need to transcribe first.

What's the maximum document length I can summarize?

Gemini 2.0 Pro handles up to 1 million tokens (~750,000 words). Claude 3.5 Sonnet handles 200,000 tokens (~150,000 words). GPT-4o handles 128,000 tokens (~96,000 words). For most single documents, all three are sufficient; for multi-document research collections, Gemini's context window matters.
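These token-to-word figures use the common rule of thumb of roughly 0.75 English words per token. A quick sketch for estimating whether a document fits a given model's window (the limits table mirrors the figures above; the ratio is an approximation, not an exact conversion):

```python
# Context limits in tokens, per the figures above.
CONTEXT_LIMITS = {
    "Gemini 2.0 Pro": 1_000_000,
    "Claude 3.5 Sonnet": 200_000,
    "GPT-4o": 128_000,
}

def approx_words(tokens: int, words_per_token: float = 0.75) -> int:
    """Rough English word capacity for a token budget (~0.75 words/token)."""
    return int(tokens * words_per_token)
```

By this estimate, a 120,000-word contract fits comfortably in Claude's window but exceeds GPT-4o's, so it would need chunking there.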

Tags: AI summarization, Gemini, Claude, comparison, long context
