Documentation
1. Quickstart
Get up and running with the iSpeed AI Platform in less than 5 minutes.
Step 1: Create an API Key
Head over to the Console and generate a new API key under the "API Keys" section.
Code Examples
cURL

curl https://api.ispeedhost.net/v1/chat/completions \
  -H "Authorization: Bearer $ISPEED_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-3-70b",
    "messages": [{"role": "user", "content": "How do I use this API?"}]
  }'
Python

from openai import OpenAI

# The gateway is OpenAI-compatible: the standard OpenAI SDK works once
# base_url points at iSpeed and api_key holds your iSpeed key.
client = OpenAI(
    base_url="https://api.ispeedhost.net/v1",
    api_key="your_ispeed_key",
)

response = client.chat.completions.create(
    model="llama-3-70b",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
JavaScript (Node)

import OpenAI from "openai";

// Same OpenAI-compatible endpoint, via the official Node SDK.
const client = new OpenAI({
  baseURL: "https://api.ispeedhost.net/v1",
  apiKey: "your_ispeed_key",
});

async function main() {
  const completion = await client.chat.completions.create({
    messages: [{ role: "user", content: "Hello!" }],
    model: "llama-3-70b",
  });
  console.log(completion.choices[0]);
}

main();
File Storage (MinIO Flow)
To process large files (images, audio), upload them to our transient storage first.
1. Request a signed URL via the /v1/storage/upload endpoint.
2. PUT your file to the provided URL.
3. Use the returned file_id in your inference request.
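A minimal end-to-end sketch of this flow in Python with the requests library is shown below. The request and response field names (filename, upload_url) and the way file_id is attached to the inference payload are assumptions for illustration, not confirmed by this page; check the API Reference for the authoritative schema.

import os
import requests

API_BASE = "https://api.ispeedhost.net/v1"
HEADERS = {"Authorization": f"Bearer {os.environ['ISPEED_API_KEY']}"}

# 1. Request a signed URL. The "filename" request field and the
#    "upload_url"/"file_id" response fields are assumed names.
resp = requests.post(f"{API_BASE}/storage/upload",
                     headers=HEADERS,
                     json={"filename": "sample.wav"})
resp.raise_for_status()
upload = resp.json()

# 2. PUT the file bytes to the signed URL (a signed URL needs no auth header).
with open("sample.wav", "rb") as f:
    requests.put(upload["upload_url"], data=f).raise_for_status()

# 3. Reference the returned file_id in an inference request. Attaching it
#    as a top-level "file_id" field is an assumption for illustration.
completion = requests.post(f"{API_BASE}/chat/completions",
                           headers=HEADERS,
                           json={
                               "model": "llama-3-70b",
                               "messages": [{"role": "user",
                                             "content": "Describe this file."}],
                               "file_id": upload["file_id"],
                           })
print(completion.json())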
2. API Reference
Our gateway is fully OpenAPI 3.0 compliant. You can download the spec or explore it interactively.
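For instance, you can pull the spec and list its documented paths with a few lines of Python. The spec URL used here (/v1/openapi.json) is an assumed location, not confirmed by this page; substitute the actual download link.

import requests

# Assumed spec location; replace with the real download URL for the spec.
SPEC_URL = "https://api.ispeedhost.net/v1/openapi.json"

spec = requests.get(SPEC_URL).json()
print(spec["info"]["title"], spec["info"]["version"])
for path in sorted(spec.get("paths", {})):
    print(path)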
An interactive OpenAPI explorer (SwaggerUI or Redoc) will be embedded here.