Anthropic
Claude Haiku 4.5
Anthropic’s faster, lower-cost Claude tier for lightweight production tasks.
Overall score
87
fast · budget · classification
Context window
200K tokens
Speed
Fast
Input pricing
$1 / 1M input tokens
Output pricing
$5 / 1M output tokens
Score breakdown
Capability 84
Use-case fit 87
Cost efficiency 90
Speed 93
Reliability 89
Agent readiness 87
Ecosystem 90
Scores combine benchmark signals, product experience, and editorial weighting. Use them as a practical guide, not an absolute truth claim.
Best for
Agent automation · Research
Works with
Anthropic API · automation workers · classification flows
Modalities
text · image · file
Sources & trust
Officially verified core fields
Official link · Summary · Description · Modalities · Context window · Max output · Tool support · Pricing · Pricing page
Editorial fields, such as shortlist guidance, strengths, caveats, and scoring, remain clearly separated from official provider data.
Anthropic official
Official site · Tier 5 · Apr 9, 2026
Official link
Claude models overview
Official docs · Tier 5 · Apr 9, 2026
Summary · Description · Modalities · Context window · Max output · Tool support
Anthropic API pricing
Pricing page · Tier 5 · Apr 9, 2026
Pricing · Pricing page
Claude Haiku 4.5 VerdictLens review
Manual review · Tier 3 · Apr 9, 2026
Best-fit guidance · Works-with guidance · Strengths · Caveats · Overall score · Score breakdown
Last verified: Apr 9, 2026
Strengths
- A fast, inexpensive option for production paths already built on Claude.
- A good step-down tier under Sonnet in routed stacks.
Things to watch
- Not the best option for deep reasoning or difficult coding tasks.
- Many teams will still want Sonnet as their main interactive tier.
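The routed-stack pattern described above can be sketched as a minimal model router. The model IDs and the word-count/keyword complexity heuristic below are illustrative placeholders, not official Anthropic identifiers or a recommended routing policy:

```python
# Minimal sketch of a Haiku/Sonnet routing tier. Model IDs and the
# complexity heuristic are assumptions for illustration only.

HAIKU = "claude-haiku-4-5"    # assumed ID; confirm against provider docs
SONNET = "claude-sonnet-4-5"  # assumed ID; confirm against provider docs

def route_model(task: str, *, reasoning_keywords=("prove", "refactor", "debug")) -> str:
    """Send lightweight tasks to the Haiku tier; escalate reasoning-heavy
    or long prompts to the Sonnet tier."""
    text = task.lower()
    needs_reasoning = any(kw in text for kw in reasoning_keywords)
    is_long = len(text.split()) > 200
    return SONNET if (needs_reasoning or is_long) else HAIKU

# Classification-style prompts stay on the cheaper tier:
print(route_model("Label this support ticket as billing or technical."))
# → claude-haiku-4-5
# Hard coding tasks escalate:
print(route_model("Debug this race condition in the scheduler."))
# → claude-sonnet-4-5
```

In a real deployment the heuristic would typically be replaced by a classifier or by explicit per-route configuration, but the shape stays the same: default to the cheap tier, escalate on signal.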
Best for
Research synthesis & analyst workflows
Prioritize source grounding, multilingual reading, long-context reasoning, and a retrieval stack that stays inspectable.
Agent automation & operations
Prioritize tool reliability, composability, secret handling, and robust state management across long-running flows.