Given a list of messages, the model will return a response.
Create chat completion
POST https://api.forefront.ai/v1/chat/completions
Creates a model response for the given chat conversation.
Request Body

| Name | Type | Description |
| --- | --- | --- |
| model* | string | ID of the model to use for the completion. |
| messages* | array | List of message objects that make up the conversation so far; each has a `role` and `content`. |
| max_tokens | integer | Maximum number of tokens to generate in the response. |
| temperature | number | Sampling temperature; higher values produce more random output. |
| stop | array | Sequences at which the model will stop generating further tokens. |
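A request body exercising every field in the table above might look like the following (the model string and stop sequence are illustrative placeholders, not values from this document):

```json
{
  "model": "MODEL_STRING",
  "messages": [
    {"role": "system", "content": "You are a gourmet chef"},
    {"role": "user", "content": "Write a recipe for an Italian dinner"}
  ],
  "max_tokens": 128,
  "temperature": 0.1,
  "stop": ["\n\n"]
}
```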
Example response

{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Of course! I'm here to help."
      }
    }
  ],
  "usage": {
    "input_tokens": 28,
    "output_tokens": 10,
    "total_tokens": 38
  },
  "message": {
    "content": "Of course! I'm here to help."
  }
}
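A minimal sketch of reading this payload in Python, using the field names from the example above (the SDK's response object may also expose these as attributes rather than dict keys):

```python
# Sample response payload, copied from the example above
response = {
    "choices": [
        {
            "message": {
                "role": "assistant",
                "content": "Of course! I'm here to help.",
            }
        }
    ],
    "usage": {"input_tokens": 28, "output_tokens": 10, "total_tokens": 38},
    "message": {"content": "Of course! I'm here to help."},
}

# The assistant's reply lives under choices[0].message.content
reply = response["choices"][0]["message"]["content"]

# Token accounting comes from the usage object
tokens_used = response["usage"]["total_tokens"]

print(reply)        # Of course! I'm here to help.
print(tokens_used)  # 38
```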
Example request
from forefront import ForefrontClient

ff = ForefrontClient(api_key="YOUR_API_KEY")

completion = ff.chat.completions.create(
    messages=[
        {"role": "system", "content": "You are a gourmet chef"},
        {"role": "user", "content": "Write a recipe for an Italian dinner"},
    ],
    model="MODEL_STRING",  # replace with the name of the model
    temperature=0,
    max_tokens=10,
)
import Forefront from "forefront";

const client = new Forefront("YOUR_API_KEY");

const completion = await client.chat.completions.create({
  model: "MODEL_STRING",
  messages: [
    {
      role: "user",
      content: "Write a recipe for an Italian dinner",
    },
  ],
  max_tokens: 256,
  stream: false,
});
curl https://api.forefront.ai/v1/chat/completions \
  --header 'content-type: application/json' \
  --header "authorization: Bearer $FOREFRONT_API_KEY" \
  --data '{
    "model": "REPLACE_WITH_MODEL_STRING",
    "messages": [
      {
        "role": "user",
        "content": "Write a recipe for an Italian dinner"
      }
    ],
    "temperature": 0.1,
    "max_tokens": 128
  }'