# Providers & Models

## Introduction
PromptQL supports configuring different LLM providers and models to tailor the experience to your application's needs. This gives you the flexibility to choose the most performant and cost-effective solution for your use case, and the freedom to switch between providers and models depending on the task.

For example, you can use one model for conversational tasks and another for advanced AI primitives:
```yaml
kind: PromptQlConfig
version: v2
definition:
  llm:
    provider: openai
    model: o3-mini
    apiKey:
      valueFromEnv: OPENAI_API_KEY
  aiPrimitivesLlm:
    provider: openai
    model: gpt-4o
    apiKey:
      valueFromEnv: OPENAI_API_KEY
  overrideAiPrimitivesLlm:
    summarize:
      provider: anthropic
      model: claude-3-5-sonnet-latest
      apiKey:
        valueFromEnv: ANTHROPIC_API_KEY
```
If you do specify environment variables in your `promptql-config.hml`, don't forget to add them to the `globals` subgraph's `subgraph.yaml` under the `envMapping` section.
## llm

The `llm` configuration is used to define the LLM provider and model for conversational tasks in your application. In the example above, we're using `openai` as the provider with the `o3-mini` model.
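As a minimal sketch, a configuration that only sets the conversational model (reusing the `openai` provider and `OPENAI_API_KEY` variable from the example above) would look like this:

```yaml
kind: PromptQlConfig
version: v2
definition:
  # Conversational LLM; with no aiPrimitivesLlm set, AI primitives use this too
  llm:
    provider: openai
    model: o3-mini
    apiKey:
      valueFromEnv: OPENAI_API_KEY
```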
## aiPrimitivesLlm

The `aiPrimitivesLlm` configuration is used to define the LLM provider and model for AI primitives in your application. This is used for tasks such as program generation and execution. In the example above, we're using `openai` as the provider with the `gpt-4o` model.
If `aiPrimitivesLlm` is not specified, the `llm` configuration is used for AI primitives as well.
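For example, to split the two roles, you could pair models already shown on this page (a sketch; swap in whichever tested models suit your workload):

```yaml
kind: PromptQlConfig
version: v2
definition:
  # Conversational tasks
  llm:
    provider: openai
    model: o3-mini
    apiKey:
      valueFromEnv: OPENAI_API_KEY
  # AI primitives such as program generation and execution
  aiPrimitivesLlm:
    provider: anthropic
    model: claude-3-5-sonnet-latest
    apiKey:
      valueFromEnv: ANTHROPIC_API_KEY
```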
## overrideAiPrimitivesLlm

If you need to use a different LLM for a specific AI primitive, you can use the `overrideAiPrimitivesLlm` configuration. Example primitives are `summarize`, `classify`, and `extract`. In the example above, we're using `anthropic` as the provider with the `claude-3-5-sonnet-latest` model for the `summarize` primitive.
Any specific AI primitive will first use the LLM specified in `overrideAiPrimitivesLlm`, fall back to the `aiPrimitivesLlm` configuration if no override is specified, and finally fall back to the `llm` configuration if neither is specified.
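For instance, to point only the `classify` primitive at a different model, a sketch might look like this (assuming `classify` accepts the same keys as the `summarize` example above):

```yaml
kind: PromptQlConfig
version: v2
definition:
  llm:
    provider: openai
    model: o3-mini
    apiKey:
      valueFromEnv: OPENAI_API_KEY
  overrideAiPrimitivesLlm:
    # classify uses Claude; summarize, extract, and everything else fall back
    # to aiPrimitivesLlm (unset here), and then to llm
    classify:
      provider: anthropic
      model: claude-3-7-sonnet-latest
      apiKey:
        valueFromEnv: ANTHROPIC_API_KEY
```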
## Available providers & models

### Anthropic

To use an Anthropic model, set the `provider` to `anthropic`. The following have been tested with PromptQL:

- `claude-3-5-sonnet-latest`
- `claude-3-7-sonnet-latest`
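As a sketch, an `llm` block using Anthropic would sit under `definition:` in your `promptql-config.hml` (the env var name here is just a convention):

```yaml
llm:
  provider: anthropic
  model: claude-3-7-sonnet-latest
  apiKey:
    valueFromEnv: ANTHROPIC_API_KEY
```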
### AWS Bedrock

To use a Bedrock-wrapped model, set the `provider` to `bedrock`. The following have been tested with PromptQL:

- Claude 3.5 Sonnet
- Claude 3.7 Sonnet

NB: For Bedrock models, you'll need to provide a `model_id` that resembles this string: `arn:aws:bedrock:<AWS region>:<AWS account ID>:inference-profile/us.anthropic.claude-3-5-sonnet-20241022-v2:0`
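A hedged sketch, using the `model_id` key from the note above with the placeholders left for you to fill in (how AWS credentials are supplied is not covered here):

```yaml
llm:
  provider: bedrock
  model_id: "arn:aws:bedrock:<AWS region>:<AWS account ID>:inference-profile/us.anthropic.claude-3-5-sonnet-20241022-v2:0"
```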
### Google Gemini

To use a Google Gemini model, set the `provider` to `gemini`. The following have been tested with PromptQL:

- `gemini-1.5-flash`
- `gemini-2.0-flash`
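A sketch for Gemini, mirroring the shape of the other providers (the `GEMINI_API_KEY` variable name is an assumption):

```yaml
llm:
  provider: gemini
  model: gemini-2.0-flash
  apiKey:
    valueFromEnv: GEMINI_API_KEY
```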
### Hasura

Hasura is the default provider, and no specific model is necessary in your configuration. When using the `hasura` provider, the default model is `claude-3-5-sonnet-latest`; this is the recommended model for PromptQL program generation.

If you plan to interact with PHI using PromptQL, you should not use the Hasura provider, as it is not HIPAA compliant. You will have to bring your own LLM with another supported provider.
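Since no `model` key is supported for this provider (see Considerations below), a minimal explicit block reduces to:

```yaml
llm:
  provider: hasura
```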
### Microsoft Azure

To use an Azure foundational model, set the `provider` to `azure`.
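Azure-specific fields aren't documented here; as a rough sketch assuming the same shape as the other providers (the model name and env var are assumptions, and your Azure deployment may require additional settings not shown):

```yaml
llm:
  provider: azure
  model: gpt-4o
  apiKey:
    valueFromEnv: AZURE_API_KEY
```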
### OpenAI

To use an OpenAI model, set the `provider` to `openai`. The following have been tested with PromptQL:

- `o1`
- `o3-mini`
- `gpt-4o`
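An `llm` block for OpenAI, matching the full example at the top of this page:

```yaml
llm:
  provider: openai
  model: gpt-4o
  apiKey:
    valueFromEnv: OPENAI_API_KEY
```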
## Considerations

- The `model` key is not supported when using the `hasura` provider.
- The value for a `model` key is always in the dialect of the provider's API.
- If `aiPrimitivesLlm` is not defined, it defaults to the provider specified in the `llm` configuration.
- `systemInstructions` are optional but recommended to customize the behavior of your LLM.
If you're utilizing a provider aside from Hasura, you'll need to add your environment variable to the `globals` subgraph so the container running PromptQL has access to it.
```yaml
kind: Subgraph
version: v2
definition:
  name: globals
  generator:
    rootPath: .
  includePaths:
    - metadata
  envMapping:
    ANTHROPIC_API_KEY:
      fromEnv: ANTHROPIC_API_KEY
```
Be sure to also include the key-value pair for your API key (`<PROVIDER>_API_KEY=your-key`) in your project's `.env` files.
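For example, matching the `envMapping` sketch above:

```
ANTHROPIC_API_KEY=your-key
```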