Anthropic
Claude 3.7 Sonnet
Highly trusted reasoning and coding model with exceptional writing quality and calm, consistent outputs.
Overall score
91
reasoning · coding · writing
Context window
200K
Speed
Balanced
Input pricing
$3 / 1M tok
Output pricing
$15 / 1M tok
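The per-million-token rates above translate into a simple cost formula. A minimal sketch (the example token counts are hypothetical):

```python
# Published Claude 3.7 Sonnet rates, USD per million tokens.
INPUT_PER_M = 3.00
OUTPUT_PER_M = 15.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request from its token counts."""
    return (input_tokens / 1_000_000 * INPUT_PER_M
            + output_tokens / 1_000_000 * OUTPUT_PER_M)

# Example: a 12K-token prompt producing a 2K-token answer.
print(f"${request_cost(12_000, 2_000):.3f}")  # → $0.066
```

Output tokens dominate at these rates: each generated token costs five times an input token, so long answers drive the bill more than long prompts.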
Score breakdown
Capability: 94
Use-case fit: 92
Cost efficiency: 82
Speed: 84
Reliability: 95
Agent readiness: 91
Ecosystem: 90
Scores combine benchmark signals, product experience, and editorial weighting. Use them as a practical guide, not an absolute truth claim.
Best for
coding · research
Works with
Claude Code · LangGraph · Playwright · MCP servers
Modalities
text · image · tools
Strengths
- Excellent code edits and technical explanations
- Strong output consistency for editorial workflows
Things to watch
- Tool ecosystem is strong but narrower than the broadest platform stacks
- Latency can rise noticeably on very long, deliberative prompts
Best for
Coding copilots & repo execution
Prioritize reliability, diff quality, tool-calling control, and the ability to maintain focus across multi-file edits.
Research synthesis & analyst workflows
Prioritize source grounding, multilingual reading, long-context reasoning, and a retrieval stack that stays inspectable.