Claude 2 released

Anthropic is pleased to announce Claude 2, its newest model, which can be accessed via API as well as through a new public-facing beta website at claude.ai.
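The API mentioned above is a plain HTTP interface. Below is a minimal sketch using only the Python standard library, assuming the `v1/complete` text-completions endpoint, the Claude 2-era `Human:`/`Assistant:` prompt format, and an `ANTHROPIC_API_KEY` environment variable; treat it as an illustration rather than official client code:

```python
import json
import os
import urllib.request

API_URL = "https://api.anthropic.com/v1/complete"  # text-completions endpoint

def build_request(user_message: str, max_tokens: int = 300) -> urllib.request.Request:
    """Build a Claude 2 completion request. The completions API expects the
    prompt wrapped in alternating Human:/Assistant: turns."""
    body = {
        "model": "claude-2",
        "prompt": f"\n\nHuman: {user_message}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
    }
    headers = {
        "x-api-key": os.environ.get("ANTHROPIC_API_KEY", ""),
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    }
    return urllib.request.Request(API_URL, data=json.dumps(body).encode(), headers=headers)

# To actually send the request (requires a valid API key):
# req = build_request("Summarize this paragraph: ...")
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["completion"])
```

The network call is left commented out so the request-building step can be inspected on its own.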


Anthropic has been iterating to improve the underlying safety of Claude 2, so that it is more harmless and harder to prompt into producing offensive or dangerous output. The team has an internal red-teaming evaluation that scores its models on a large, representative set of harmful prompts, using an automated test whose results are also regularly checked by hand. In this evaluation, Claude 2 was 2x better at giving harmless responses than Claude 1.3. Although no model is immune to jailbreaks, a variety of safety techniques (which you can read about here and here) have been employed, as well as extensive red-teaming, to improve its outputs.
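The shape of such an automated evaluation can be pictured as a simple harness: run each harmful prompt through the model and score the reply with an automated judge, spot-checking results by hand. Everything below (`model`, `judge`, the toy prompts) is a hypothetical stand-in, not Anthropic's actual evaluation:

```python
from typing import Callable, Iterable

def harmlessness_rate(model: Callable[[str], str],
                      judge: Callable[[str], bool],
                      harmful_prompts: Iterable[str]) -> float:
    """Fraction of harmful prompts for which the model's reply is judged
    harmless. `judge` stands in for an automated classifier whose verdicts
    would be spot-checked manually."""
    prompts = list(harmful_prompts)
    harmless = sum(1 for p in prompts if judge(model(p)))
    return harmless / len(prompts)

# Toy stand-ins just to show the shape of the evaluation:
refusing_model = lambda prompt: "I can't help with that."
judge = lambda reply: "can't help" in reply
rate = harmlessness_rate(refusing_model, judge, ["harmful prompt A", "harmful prompt B"])
```

Comparing this rate across model versions is what a claim like "2x better at giving harmless responses" summarizes.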


Claude 2 is generally available in the US and UK, and Anthropic is working to make Claude more globally available in the coming months. Interested users can now create an account and start talking to Claude in natural language, asking it for help with any task. Talking to an AI assistant can take some trial and error, so read up on Anthropic's tips to get the most out of Claude.


Anthropic is also currently working with thousands of businesses that are using the Claude API. One such partner is Jasper, a generative AI platform that enables individuals and teams to scale their content strategies. Jasper found that Claude 2 goes head to head with other state-of-the-art models across a wide variety of use cases, with particular strength in long-form, low-latency uses. “We are really happy to be among the first to offer Claude 2 to our customers, bringing enhanced semantics, up-to-date knowledge training, improved reasoning for complex prompts, and the ability to effortlessly remix existing content with a 3X larger context window,” said Greg Larson, VP of Engineering at Jasper. “We are proud to help our customers stay ahead of the curve through partnerships like this one with Anthropic.”


Sourcegraph is a code AI platform that helps customers write, fix, and maintain code. Its coding assistant, Cody, uses Claude 2’s improved reasoning ability to give even more accurate answers to user queries while also passing along more codebase context through a context window of up to 100K tokens. In addition, Claude 2 was trained on more recent data, meaning it has knowledge of newer frameworks and libraries for Cody to pull from. “When it comes to AI coding, devs need fast and reliable access to context about their unique codebase and a powerful LLM with a large context window and strong general reasoning capabilities,” says Quinn Slack, CEO & Co-founder of Sourcegraph. “The slowest and most frustrating parts of the dev workflow are becoming faster and more enjoyable. Thanks to Claude 2, Cody’s helping more devs build more software that pushes the world forward.”
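Fitting "more codebase context" into a 100K-token window is essentially a packing problem: add code snippets to the prompt until the budget is spent. The sketch below uses a hypothetical `pack_context` helper and a rough 4-characters-per-token heuristic; it is not Cody's or Anthropic's actual logic or tokenizer:

```python
def pack_context(snippets, budget_tokens=100_000, chars_per_token=4):
    """Greedily pack code snippets into a prompt until a rough token budget
    is reached. The chars-per-token ratio is a coarse heuristic, not a real
    tokenizer; a production system would count tokens exactly."""
    budget_chars = budget_tokens * chars_per_token
    packed, used = [], 0
    for snippet in snippets:
        if used + len(snippet) > budget_chars:
            break  # budget exhausted; drop the remaining snippets
        packed.append(snippet)
        used += len(snippet)
    return "\n\n".join(packed)
```

A larger window simply raises `budget_tokens`, letting more of the codebase ride along with each query.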


Anthropic welcomes user feedback as it works to responsibly deploy Claude more broadly. The chat experience is an open beta launch, and users should be aware that Claude, like all current models, can generate inappropriate responses. AI assistants are most useful in everyday situations, like summarizing or organizing information, and should not be used where physical or mental health and well-being are involved. Please let Anthropic know if you would like to talk to Claude in a currently unsupported area, or if you are a business that would like to start working with Claude.


After working for the past few months with key partners like Notion, Quora, and DuckDuckGo in a closed alpha, Anthropic has been able to carefully test its systems in the wild, and is now ready to offer Claude more broadly so it can power crucial, cutting-edge use cases at scale.


Claude is a next-generation AI assistant based on Anthropic’s research into training helpful, honest, and harmless AI systems. Accessible through a chat interface and an API in a developer console, Claude is capable of a wide variety of conversational and text-processing tasks while maintaining a high degree of reliability and predictability.

Read the full article at: www.anthropic.com
