Using Language Models (LLMs)
Learn how to leverage Synexa AI’s powerful language models for text generation and processing.
Basic Usage
Our API provides easy access to state-of-the-art language models for various text generation tasks.
Python Example
import synexa

# Generate text using an LLM
output = synexa.run(
    "meta/meta-llama-3-8b-instruct",
    input={
        "prompt": "What is the meaning of life?"
    }
)

# Get the generated text
response = output[0]["text"]
print(response)
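To authenticate, the client needs your API token. A minimal sketch, assuming the Python client reads the SYNEXA_API_TOKEN environment variable (the same variable the Node.js example below passes explicitly as auth):

import os
import synexa

# Assumption: the Python client picks up the token from the
# SYNEXA_API_TOKEN environment variable, mirroring the auth option
# the Node.js client receives explicitly.
os.environ.setdefault("SYNEXA_API_TOKEN", "your-api-token")

output = synexa.run(
    "meta/meta-llama-3-8b-instruct",
    input={"prompt": "Say hello in one sentence."}
)
print(output[0]["text"])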
Node.js Example
import Synexa from 'synexa';

const synexa = new Synexa.default({
  auth: process.env.SYNEXA_API_TOKEN
});

// Generate text using an LLM
const [output] = await synexa.run("meta/meta-llama-3-8b-instruct", {
  input: {
    prompt: "What is the meaning of life?"
  }
});

// Get the generated text
console.log(output.text);
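In both clients, run() returns a list of outputs: the Python example reads the generated text from output[0]["text"], while the Node.js example destructures the first element and reads its text field.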
Advanced Options
You can customize text generation with additional parameters:
# Python example with advanced options
output = synexa.run(
    "meta/meta-llama-3-8b-instruct",
    input={
        "top_k": 0,
        "top_p": 0.95,
        "prompt": "Johnny has 8 billion parameters. His friend Tommy has 70 billion parameters. What does this mean when it comes to speed?",
        "max_tokens": 512,
        "temperature": 0.7,
        "system_prompt": "You are a helpful assistant",
        "length_penalty": 1,
        "max_new_tokens": 512,
        "stop_sequences": "<|end_of_text|>,<|eot_id|>",
        "prompt_template": "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n",
        "presence_penalty": 0,
        "log_performance_metrics": False
    }
)
// Node.js example with advanced options
const [output] = await synexa.run("meta/meta-llama-3-8b-instruct", {
  input: {
    top_k: 0,
    top_p: 0.95,
    prompt: "Johnny has 8 billion parameters. His friend Tommy has 70 billion parameters. What does this mean when it comes to speed?",
    max_tokens: 512,
    temperature: 0.7,
    system_prompt: "You are a helpful assistant",
    length_penalty: 1,
    max_new_tokens: 512,
    stop_sequences: "<|end_of_text|>,<|eot_id|>",
    prompt_template: "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n",
    presence_penalty: 0,
    log_performance_metrics: false
  }
});
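Briefly: temperature and top_p control how much randomness the sampler allows, top_k restricts sampling to the k most likely tokens (0 typically disables the filter), max_tokens and max_new_tokens cap the response length, stop_sequences lists strings that end generation, prompt_template controls how the system and user prompts are laid out for the model, and presence_penalty and length_penalty discourage repetition and bias output length. As a sketch, turning the sampling knobs down makes results more repeatable for extraction-style tasks (same parameters as above; exact behavior depends on the model):

# A sketch: near-deterministic settings for structured answers.
# Lowering temperature and top_p narrows the sampling distribution,
# so repeated calls are far more likely to agree.
output = synexa.run(
    "meta/meta-llama-3-8b-instruct",
    input={
        "prompt": "List the three primary colors, comma-separated.",
        "temperature": 0.1,
        "top_p": 0.9,
        "max_tokens": 64
    }
)
print(output[0]["text"])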
System Messages
You can use system messages to control the model’s behavior:
# Python example with system message
output = synexa.run(
    "meta/meta-llama-3-8b-instruct",
    input={
        "system_prompt": "You are a helpful AI assistant that speaks like Shakespeare",
        "prompt": "Tell me about artificial intelligence",
        "max_tokens": 200
    }
)
// Node.js example with system message
const [output] = await synexa.run("meta/meta-llama-3-8b-instruct", {
  input: {
    system_prompt: "You are a helpful AI assistant that speaks like Shakespeare",
    prompt: "Tell me about artificial intelligence",
    max_tokens: 200
  }
});
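If you reuse the same persona across many calls, a small wrapper keeps the call sites tidy. A minimal sketch (the ask_as helper is hypothetical; it simply forwards to synexa.run as in the examples above):

import synexa

def ask_as(persona, prompt, max_tokens=200):
    # Hypothetical helper: bundles a fixed system prompt with each
    # user prompt and forwards the call to synexa.run.
    output = synexa.run(
        "meta/meta-llama-3-8b-instruct",
        input={
            "system_prompt": persona,
            "prompt": prompt,
            "max_tokens": max_tokens
        }
    )
    return output[0]["text"]

print(ask_as(
    "You are a helpful AI assistant that speaks like Shakespeare",
    "Tell me about artificial intelligence"
))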
Error Handling
Always handle potential errors when using language models:
# Python error handling
try:
    output = synexa.run(
        "meta/meta-llama-3-8b-instruct",
        input={"prompt": "Tell me a joke"}
    )
except Exception as e:
    print(f"Error generating text: {e}")
// Node.js error handling
try {
  const [output] = await synexa.run("meta/meta-llama-3-8b-instruct", {
    input: { prompt: "Tell me a joke" }
  });
} catch (error) {
  console.error("Error generating text:", error);
}
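Transient failures such as rate limits or timeouts are often worth retrying. A minimal sketch with exponential backoff, assuming only that synexa.run raises an exception on failure (narrow the except clause to the client's documented error types if available):

import time
import synexa

def run_with_retries(model, payload, attempts=3):
    # Retry with simple exponential backoff: wait 1s, then 2s, ...
    # Catching Exception is deliberately broad for this sketch.
    for attempt in range(attempts):
        try:
            return synexa.run(model, input=payload)
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(2 ** attempt)

output = run_with_retries(
    "meta/meta-llama-3-8b-instruct",
    {"prompt": "Tell me a joke"}
)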