# PromptQL Configuration

## Introduction

Your `PromptQlConfig` is a metadata object that defines how PromptQL is configured for your project. It specifies the LLM to be used, the system instructions, and other settings.
```yaml
kind: PromptQlConfig
version: v2
definition:
  llm:
    provider: openai
    model: o3-mini
    apiKey:
      valueFromEnv: OPENAI_API_KEY
  aiPrimitivesLlm:
    provider: openai
    model: gpt-4o
    apiKey:
      valueFromEnv: OPENAI_API_KEY
  systemInstructions: |
    You are a helpful AI Assistant.
```
## Metadata structure

### PromptQlConfig

Definition of the configuration of PromptQL, v2.

| Key | Value | Required | Description |
|---|---|---|---|
| `kind` | `PromptQlConfig` | true | |
| `version` | `v2` | true | |
| `definition` | PromptQlConfigV2 | true | Definition of the configuration of PromptQL for the project |
### PromptQlConfigV2

Definition of the configuration of PromptQL for the project.

| Key | Value | Required | Description |
|---|---|---|---|
| `systemInstructions` | string / null | false | Custom system instructions provided to every PromptQL thread, allowing behavior to be tailored to the project's specific needs |
| `llm` | LlmConfig | true | Configuration of the LLM to be used for PromptQL |
| `aiPrimitivesLlm` | LlmConfig / null | false | Configuration of the default LLM to be used for AI primitives, such as classification and summarization |
| `overrideAiPrimitivesLlm` | [AiPrimitivesLlmConfig] | false | Configuration of specific LLMs to be used for individual AI primitives, such as classification and summarization |
| `featureFlags` | PromptQlFeatureFlags / null | false | Feature flags used to enable and disable experimental PromptQL features |
### PromptQlFeatureFlags

Feature flags used to enable and disable experimental PromptQL features.

| Key | Value | Required | Description |
|---|---|---|---|
| `enable_automations` | boolean / null | false | Enable the experimental automations feature |
| `enable_visualizations` | boolean / null | false | Enable the experimental visualizations feature |
| `enable_visualizations_v2` | boolean / null | false | Enable the experimental visualizations v2 feature |
| `<customKey>` | | false | |
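As a sketch, feature flags are set under `definition.featureFlags`; the particular flags enabled here are illustrative:

```yaml
kind: PromptQlConfig
version: v2
definition:
  llm:
    provider: hasura
  featureFlags:
    enable_automations: true
    enable_visualizations_v2: true
```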
### AiPrimitivesLlmConfig

Configure PromptQL to use a particular LLM for a specific primitive.

| Key | Value | Required | Description |
|---|---|---|---|
| `primitiveName` | LlmPrimitive | true | The name of the operation to override |
| `llm` | LlmConfig | true | The configuration to use for this operation |
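For example, a cheaper model could be assigned to just the `classify` primitive via `overrideAiPrimitivesLlm`; the model name below is a placeholder:

```yaml
kind: PromptQlConfig
version: v2
definition:
  llm:
    provider: hasura
  overrideAiPrimitivesLlm:
    - primitiveName: classify
      llm:
        provider: openai
        model: gpt-4o-mini
        apiKey:
          valueFromEnv: OPENAI_API_KEY
```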
### LlmPrimitive

The name of an LLM primitive, such as `classify`, `summarize`, `extract`, or `visualize`.

Value: string
### LlmConfig

Configuration of the LLM to be used for PromptQL.

One of the following values:

| Value | Description |
|---|---|
| HasuraLlmConfig | Configuration settings for the Hasura-configured LLM |
| OpenAiLlmConfig | Configuration settings for an OpenAI LLM |
| AnthropicLlmConfig | Configuration settings for an Anthropic LLM |
| AzureLlmConfig | Configuration settings for an Azure-provided LLM |
| GeminiLlmConfig | Configuration settings for a Gemini LLM |
| BedrockLlmConfig | Configuration settings for an AWS Bedrock-provided LLM |
### BedrockLlmConfig

Configuration settings for an AWS Bedrock-provided LLM.

| Key | Value | Required | Description |
|---|---|---|---|
| `provider` | `bedrock` | true | |
| `modelId` | string | true | The specific AWS Bedrock model to use |
| `regionName` | string | true | The specific AWS Bedrock region to use |
| `awsAccessKeyId` | EnvironmentValue | true | The AWS access key ID to use for the AWS Bedrock API |
| `awsSecretAccessKey` | EnvironmentValue | true | The AWS secret access key to use for the AWS Bedrock API |
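A minimal Bedrock configuration might look like the following; the model ID, region, and environment variable names are illustrative:

```yaml
llm:
  provider: bedrock
  modelId: anthropic.claude-3-5-sonnet-20240620-v1:0
  regionName: us-east-1
  awsAccessKeyId:
    valueFromEnv: AWS_ACCESS_KEY_ID
  awsSecretAccessKey:
    valueFromEnv: AWS_SECRET_ACCESS_KEY
```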
### GeminiLlmConfig

Configuration settings for a Gemini LLM.

| Key | Value | Required | Description |
|---|---|---|---|
| `provider` | `gemini` | true | |
| `model` | string / null | false | The specific Gemini model to use. If not specified, the default model will be used. |
| `apiKey` | EnvironmentValue | true | The API key to use for the Gemini API |
| `safetySettings` | GeminiSafetySettings / null | false | Safety settings for the Gemini API |
### GeminiSafetySettings

Configuration to control Gemini's safety settings.

| Key | Value | Required | Description |
|---|---|---|---|
| `harassment` | GeminiBlockType / null | false | Negative or harmful comments targeting identity and/or protected attributes |
| `hateSpeech` | GeminiBlockType / null | false | Content that is rude, disrespectful, or profane |
| `sexuallyExplicit` | GeminiBlockType / null | false | Contains references to sexual acts or other lewd content |
| `dangerous` | GeminiBlockType / null | false | Promotes, facilitates, or encourages harmful acts |
| `civicIntegrity` | GeminiBlockType / null | false | Election-related queries |
### GeminiBlockType

Blocking level used for Gemini safety settings.

One of the following values:

| Value | Description |
|---|---|
| `blockNone` | Always show regardless of probability of unsafe content |
| `blockOnlyHigh` | Block when high probability of unsafe content |
| `blockMediumAndAbove` | Block when medium or high probability of unsafe content |
| `blockLowAndAbove` | Block when low, medium, or high probability of unsafe content |
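Putting the two together, a Gemini configuration with safety settings might look like this; the model name and environment variable are placeholders, and only the categories you want to override need to be set:

```yaml
llm:
  provider: gemini
  model: gemini-1.5-pro
  apiKey:
    valueFromEnv: GEMINI_API_KEY
  safetySettings:
    harassment: blockOnlyHigh
    hateSpeech: blockMediumAndAbove
    civicIntegrity: blockNone
```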
### AzureLlmConfig

Configuration settings for an Azure-provided LLM.

| Key | Value | Required | Description |
|---|---|---|---|
| `provider` | `azure` | true | |
| `apiVersion` | string / null | false | The specific Azure API version to use. If not specified, the default version will be used. |
| `model` | string / null | false | The specific Azure model to use. If not specified, the default model will be used. |
| `endpoint` | string | true | The endpoint to use for the Azure LLM API |
| `apiKey` | EnvironmentValue | true | The API key to use for the Azure API |
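A sketch of an Azure configuration; the endpoint, API version, model, and environment variable name are placeholders for your own deployment's values:

```yaml
llm:
  provider: azure
  endpoint: https://my-resource.openai.azure.com
  apiVersion: 2024-06-01
  model: gpt-4o
  apiKey:
    valueFromEnv: AZURE_OPENAI_API_KEY
```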
### AnthropicLlmConfig

Configuration settings for an Anthropic LLM.

| Key | Value | Required | Description |
|---|---|---|---|
| `provider` | `anthropic` | true | |
| `model` | string / null | false | The specific Anthropic model to use. If not specified, the default model will be used. |
| `baseUrl` | string / null | false | The base URL to use for the Anthropic API. If not specified, the default URL will be used. |
| `apiKey` | EnvironmentValue | true | The API key to use for the Anthropic API |
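A minimal Anthropic configuration; the model name and environment variable are illustrative:

```yaml
llm:
  provider: anthropic
  model: claude-3-5-sonnet-latest
  apiKey:
    valueFromEnv: ANTHROPIC_API_KEY
```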
### OpenAiLlmConfig

Configuration settings for an OpenAI LLM.

| Key | Value | Required | Description |
|---|---|---|---|
| `provider` | `openai` | true | |
| `model` | string / null | false | The specific OpenAI model to use. If not specified, the default model will be used. |
| `baseUrl` | string / null | false | The base URL to use for the OpenAI API. If not specified, the default URL will be used. |
| `apiKey` | EnvironmentValue | true | The API key to use for the OpenAI API |
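The optional `baseUrl` can point the provider at an OpenAI-compatible endpoint; the URL below is a hypothetical example, not a real service:

```yaml
llm:
  provider: openai
  model: gpt-4o
  baseUrl: https://my-proxy.example.com/v1
  apiKey:
    valueFromEnv: OPENAI_API_KEY
```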
### EnvironmentValue

Either a literal string or a reference to a Hasura secret.

Must have exactly one of the following fields:

| Key | Value | Required | Description |
|---|---|---|---|
| `value` | string | false | A literal string value |
| `valueFromEnv` | string | false | The name of an environment variable to read the value from |
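The two forms are mutually exclusive; the variable name and literal below are placeholders:

```yaml
# Reference an environment variable (recommended for secrets):
apiKey:
  valueFromEnv: OPENAI_API_KEY

# Or supply a literal value directly:
apiKey:
  value: my-literal-api-key
```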
### HasuraLlmConfig

Configuration settings for the Hasura-configured LLM.

| Key | Value | Required | Description |
|---|---|---|---|
| `provider` | `hasura` | true | |
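Since the Hasura-configured LLM takes no further settings, the minimal configuration is just the provider name:

```yaml
kind: PromptQlConfig
version: v2
definition:
  llm:
    provider: hasura
```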