Anthropic, an AI company started by former OpenAI employees, claims its new Claude 3 family of AI models performs as well as or better than leading models from Google and OpenAI. Unlike previous versions, Claude 3 can understand both text and images.
Anthropic says Claude 3 will answer more questions accurately, understand longer instructions, and provide better context by processing more information. There are three versions – Claude 3 Haiku, Claude 3 Sonnet, and Claude 3 Opus, with Opus being the largest and most capable.
The company also says the new models are less likely to refuse prompts that merely approach the limits of their safety guardrails, similar to rumors about Meta's upcoming Llama 3 model. This addresses a past limitation in which earlier Claude versions declined harmless prompts, suggesting a lack of contextual understanding.
The company claims that Claude 3 models can provide near-instant results, even when parsing dense material like research papers. According to a blog post, Haiku, the smallest Claude 3 version, is touted as the fastest and most cost-effective model in the market, capable of reading complex research papers, including charts and graphs, in under three seconds.
In benchmarking tests, Anthropic claims Opus outperformed most models, including showing better graduate-level reasoning than OpenAI’s GPT-4. The new models also significantly improve over the previous Claude 2.1.
Claude 3 was trained on a mix of Anthropic’s internal data, third-party datasets, and publicly available data up to August 2023, using cloud computing resources from Amazon and Google, which have invested in Anthropic.
The models will be available through Anthropic's API at claude.ai, AWS's Bedrock model library, and Google's Vertex AI, for applications like chatbots, autocomplete, and data extraction.
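For developers curious what a chatbot call against the API might involve, here is a minimal sketch that builds a request body for a single-turn prompt. The model identifier, endpoint URL, and field names are assumptions for illustration, not confirmed details from the announcement:

```python
import json

# Hypothetical endpoint, shown only for context (no request is sent here).
API_URL = "https://api.anthropic.com/v1/messages"

def build_request(prompt: str, model: str = "claude-3-opus-20240229") -> dict:
    """Build the JSON body for a simple single-turn chatbot request.

    The model name above is an assumed identifier for the Opus tier.
    """
    return {
        "model": model,
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Summarize this research paper's key findings.")
print(json.dumps(payload, indent=2))
```

In practice this body would be POSTed to the endpoint with an API key header; swapping the `model` string between the Haiku, Sonnet, and Opus tiers is how a caller would trade speed and cost against capability.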