SDKs Overview
The Assisters API is OpenAI-compatible, so you can use the OpenAI SDK for your language of choice. Just change the base URL and API key.
Quick Start
```python
from openai import OpenAI

client = OpenAI(
    api_key="ask_your_api_key",
    base_url="https://api.assisters.dev/v1"
)

response = client.chat.completions.create(
    model="llama-3.1-8b",
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)
```
Supported Languages
SDKs that implement the OpenAI API specification exist for many languages:
| Language | Package | Status |
|---|---|---|
| Python | openai | ✅ Official |
| Node.js | openai | ✅ Official |
| Go | sashabaranov/go-openai | Community |
| Ruby | ruby-openai | Community |
| PHP | openai-php/client | Community |
| Java | openai-java | Community |
| Rust | async-openai | Community |
| C# | OpenAI-DotNet | Community |
Community SDKs work with Assisters by changing the base URL, but they’re not officially tested by us.
Configuration
Base URL
All requests should use:
```
https://api.assisters.dev/v1
```
Authentication
Use Bearer token authentication:
```
Authorization: Bearer ask_your_api_key
```
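If you are not using an SDK, the same header can be attached to a raw HTTP request. A minimal stdlib sketch (the key value is a placeholder, and `build_request` is an illustrative helper, not part of any SDK):

```python
import json
import urllib.request

API_KEY = "ask_your_api_key"  # placeholder -- substitute your real key

def build_request(prompt):
    """Build a chat-completion request with Bearer authentication."""
    payload = {
        "model": "llama-3.1-8b",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://api.assisters.dev/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("Hello!")
# urllib.request.urlopen(req) would send it
```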
Environment Variables
Recommended setup for any language:
```bash
export ASSISTERS_API_KEY="ask_your_api_key"
export ASSISTERS_BASE_URL="https://api.assisters.dev/v1"
```
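In Python, these variables can be resolved at client-construction time so keys never live in source code. A small sketch (`load_config` is our own helper name; the fallback values are placeholders):

```python
import os

def load_config(env=None):
    """Resolve API settings from the environment, with placeholder fallbacks."""
    env = os.environ if env is None else env
    return {
        "api_key": env.get("ASSISTERS_API_KEY", "ask_your_api_key"),
        "base_url": env.get("ASSISTERS_BASE_URL", "https://api.assisters.dev/v1"),
    }

config = load_config()
# client = OpenAI(**config)  # matches the constructor shown in Quick Start
```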
SDK Features
Streaming
All SDKs support streaming responses:
```python
stream = client.chat.completions.create(
    model="llama-3.1-8b",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)

for chunk in stream:
    # the final chunk carries no content, so guard against None
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```
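When you want the full text rather than incremental printing, the deltas can be accumulated. A sketch (`collect_stream` is an illustrative helper of our own, not part of the SDK):

```python
def collect_stream(stream):
    """Join the text deltas of a streaming chat completion into one string."""
    parts = []
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta is not None:  # final chunk has no content
            parts.append(delta)
    return "".join(parts)
```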
Async Support
The Python and JavaScript SDKs support async/await:
```python
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI(
    api_key="ask_...",
    base_url="https://api.assisters.dev/v1"
)

async def main():
    response = await client.chat.completions.create(
        model="llama-3.1-8b",
        messages=[{"role": "user", "content": "Hello!"}]
    )
    print(response.choices[0].message.content)

asyncio.run(main())
```
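Async really pays off when you fan out many requests at once. A sketch assuming the `AsyncOpenAI` client above (`ask` and `ask_many` are our own helper names):

```python
import asyncio

async def ask(client, prompt):
    """Send a single chat-completion request."""
    response = await client.chat.completions.create(
        model="llama-3.1-8b",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

async def ask_many(client, prompts):
    """Run all prompts concurrently instead of one after another."""
    return await asyncio.gather(*(ask(client, p) for p in prompts))
```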
Automatic Retries
The official SDKs include built-in retry logic:
```python
from openai import OpenAI

client = OpenAI(
    api_key="ask_...",
    base_url="https://api.assisters.dev/v1",
    max_retries=3  # retry failed requests up to 3 times
)
```
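Under the hood this is ordinary exponential backoff. A rough stdlib sketch of the idea (`with_retries` is illustrative, not the SDK's actual implementation):

```python
import time

def with_retries(fn, max_retries=3, base_delay=0.5):
    """Call fn, retrying failed calls with exponential backoff."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise  # out of retries: surface the original error
            time.sleep(base_delay * 2 ** attempt)  # 0.5s, 1s, 2s, ...
```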
Timeouts
Configure request timeouts:

```python
client = OpenAI(
    api_key="ask_...",
    base_url="https://api.assisters.dev/v1",
    timeout=30.0  # 30-second timeout for every request
)
```

You can also override the timeout for a single request with `client.with_options(timeout=5.0)`.
What’s Not Supported
Some OpenAI-specific features aren’t available:
| Feature | Status |
|---|---|
| Function Calling | Coming Soon |
| Vision (Images) | Coming Soon |
| Audio (Whisper) | Not Planned |
| Assistants API | Not Planned |
| Fine-tuning | Contact Us |
Framework Integrations
LangChain
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="llama-3.1-8b",
    openai_api_key="ask_...",
    openai_api_base="https://api.assisters.dev/v1"
)

response = llm.invoke("Hello!")
```
LlamaIndex
```python
from llama_index.llms.openai import OpenAI

llm = OpenAI(
    model="llama-3.1-8b",
    api_key="ask_...",
    api_base="https://api.assisters.dev/v1"
)
```
Vercel AI SDK
```typescript
import { createOpenAI } from '@ai-sdk/openai';
import { generateText } from 'ai';

const assisters = createOpenAI({
  apiKey: 'ask_...',
  baseURL: 'https://api.assisters.dev/v1'
});

const result = await generateText({
  model: assisters('llama-3.1-8b'),
  prompt: 'Hello!'
});
```
Getting Help