Serverless Inference

DigitalOcean Gradient™ AI Agentic Cloud provides access to serverless inference models. You authenticate requests by providing a model access key.

You can generate a new model access key for serverless inference in the console.
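The SDK examples below read the key from the GRADIENT_MODEL_ACCESS_KEY environment variable by default. One way to make the key available is to export it in your shell before running the examples (the key value shown is a placeholder; use the key you generated in the console):

```shell
# Store the model access key in the environment variable the SDK reads by default.
# Replace the placeholder with your actual key from the console.
export GRADIENT_MODEL_ACCESS_KEY="your-model-access-key"
```

Keeping the key in an environment variable avoids hard-coding credentials in source files.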

For example, to access serverless inference using the Python SDK:

Python
import os

from gradient import Gradient

inference_client = Gradient(
    model_access_key=os.environ.get("GRADIENT_MODEL_ACCESS_KEY"),  # default
)

inference_response = inference_client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?",
        }
    ],
    model="llama3.3-70b-instruct",
)

print(inference_response.choices[0].message.content)

The async client uses the same interface, except that you await the request from within a coroutine.

Python
import asyncio
import os

from gradient import AsyncGradient

inference_client = AsyncGradient(
    model_access_key=os.environ.get("GRADIENT_MODEL_ACCESS_KEY"),  # default
)

async def main() -> None:
    inference_response = await inference_client.chat.completions.create(
        messages=[
            {
                "role": "user",
                "content": "What is the capital of France?",
            }
        ],
        model="llama3.3-70b-instruct",
    )
    print(inference_response.choices[0].message.content)

asyncio.run(main())