Anthropic

LiteLLM supports:

  • claude-3 (claude-3-opus-20240229, claude-3-sonnet-20240229)
  • claude-2
  • claude-2.1
  • claude-instant-1.2

API Keys

import os

os.environ["ANTHROPIC_API_KEY"] = "your-api-key"

Usage

import os
from litellm import completion

# set env - [OPTIONAL] replace with your anthropic key
os.environ["ANTHROPIC_API_KEY"] = "your-api-key"

messages = [{"role": "user", "content": "Hey! how's it going?"}]
response = completion(model="claude-3-opus-20240229", messages=messages)
print(response)
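
Responses follow the OpenAI format, so you can pull just the completion text out of the response object; a minimal sketch, continuing from the example above:

# response objects use the OpenAI format
print(response["choices"][0]["message"]["content"])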

Usage - Streaming

Just set stream=True when calling completion.

import os
from litellm import completion

# set env
os.environ["ANTHROPIC_API_KEY"] = "your-api-key"

messages = [{"role": "user", "content": "Hey! how's it going?"}]
response = completion(model="claude-3-opus-20240229", messages=messages, stream=True)
for chunk in response:
    print(chunk["choices"][0]["delta"]["content"])  # same as the OpenAI format
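
If you also want the complete text once the stream ends, you can collect the deltas as they arrive. A minimal sketch; the truthiness guard is an assumption to skip an empty or None delta on the final chunk, which this page doesn't document explicitly:

import os
from litellm import completion

os.environ["ANTHROPIC_API_KEY"] = "your-api-key"

messages = [{"role": "user", "content": "Hey! how's it going?"}]
response = completion(model="claude-3-opus-20240229", messages=messages, stream=True)

# collect streamed deltas into the full completion text
chunks = []
for chunk in response:
    delta = chunk["choices"][0]["delta"]["content"]
    if delta:  # assumption: the final chunk's delta content may be empty or None
        chunks.append(delta)
print("".join(chunks))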

OpenAI Proxy Usage

Here's how to call Anthropic with the LiteLLM Proxy Server.

1. Save key in your environment

export ANTHROPIC_API_KEY="your-api-key"

2. Start the proxy

$ litellm --model claude-3-opus-20240229

# Server running on http://0.0.0.0:8000

3. Test it

curl --location 'http://0.0.0.0:8000/chat/completions' \
    --header 'Content-Type: application/json' \
    --data '{
        "model": "gpt-3.5-turbo",
        "messages": [
            {
                "role": "user",
                "content": "what llm are you"
            }
        ]
    }'

The proxy routes any incoming model name to the model it was started with, so the model field can stay "gpt-3.5-turbo" here.
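
Because the proxy exposes an OpenAI-compatible endpoint, you can also call it with the official openai Python client. A minimal sketch, assuming openai>=1.0 and the proxy from step 2 running on port 8000; the api_key value is a placeholder, since the proxy already holds your Anthropic key:

from openai import OpenAI

# point the OpenAI client at the local LiteLLM proxy
client = OpenAI(api_key="anything", base_url="http://0.0.0.0:8000")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # routed to the model the proxy was started with
    messages=[{"role": "user", "content": "what llm are you"}],
)
print(response.choices[0].message.content)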

Supported Models

Model Name            Function Call
claude-3-opus         completion('claude-3-opus-20240229', messages)
claude-3-sonnet       completion('claude-3-sonnet-20240229', messages)
claude-2.1            completion('claude-2.1', messages)
claude-2              completion('claude-2', messages)
claude-instant-1.2    completion('claude-instant-1.2', messages)
claude-instant-1      completion('claude-instant-1', messages)

Advanced

Usage - "Assistant Pre-fill"

You can "put words in Claude's mouth" by including an assistant role message as the last item in the messages array.

[!IMPORTANT] The returned completion will not include your "pre-fill" text, since it is part of the prompt itself. Make sure to prefix Claude's completion with your pre-fill; the sketch at the end of this section shows one way to do this.

import os
from litellm import completion

# set env - [OPTIONAL] replace with your anthropic key
os.environ["ANTHROPIC_API_KEY"] = "your-api-key"

messages = [
    {"role": "user", "content": "How do you say 'Hello' in German? Return your answer as a JSON object, like this:\n\n{ \"Hello\": \"Hallo\" }"},
    {"role": "assistant", "content": "{"},
]
response = completion(model="claude-2.1", messages=messages)
print(response)

Example prompt sent to Claude


Human: How do you say 'Hello' in German? Return your answer as a JSON object, like this:

{ "Hello": "Hallo" }

Assistant: {
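
To reconstruct the full JSON, prefix the pre-fill string to the returned completion yourself. A minimal sketch of that stitching step, reusing the example above:

import os
from litellm import completion

os.environ["ANTHROPIC_API_KEY"] = "your-api-key"

prefill = "{"
messages = [
    {"role": "user", "content": "How do you say 'Hello' in German? Return your answer as a JSON object, like this:\n\n{ \"Hello\": \"Hallo\" }"},
    {"role": "assistant", "content": prefill},
]
response = completion(model="claude-2.1", messages=messages)

# the pre-fill is part of the prompt, so stitch it back onto the completion
full_text = prefill + response["choices"][0]["message"]["content"]
print(full_text)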

Usage - "System" messages

If you're using Anthropic's Claude 2.1, system role messages are properly formatted for you.

import os
from litellm import completion

# set env - [OPTIONAL] replace with your anthropic key
os.environ["ANTHROPIC_API_KEY"] = "your-api-key"

messages = [
    {"role": "system", "content": "You are a snarky assistant."},
    {"role": "user", "content": "How do I boil water?"},
]
response = completion(model="claude-2.1", messages=messages)

Example prompt sent to Claude

You are a snarky assistant.

Human: How do I boil water?

Assistant: