
PromptQL Configuration

Introduction

Your PromptQlConfig is a metadata object that defines the configuration of PromptQL for your project. It includes the LLM to be used, the system instructions, and other settings.

Example of globals/metadata/promptql-config.hml:

```yaml
kind: PromptQlConfig
version: v2
definition:
  llm:
    provider: openai
    model: o3-mini
    apiKey:
      valueFromEnv: OPENAI_API_KEY
  aiPrimitivesLlm:
    provider: openai
    model: gpt-4o
    apiKey:
      valueFromEnv: OPENAI_API_KEY
  systemInstructions: |
    You are a helpful AI Assistant.
```

Metadata structure

PromptQlConfig

Definition of the configuration of PromptQL, v2

| Key | Value | Required | Description |
|-----|-------|----------|-------------|
| kind | PromptQlConfig | true | |
| version | v2 | true | |
| definition | PromptQlConfigV2 | true | Definition of the configuration of PromptQL for the project |

PromptQlConfigV2

Definition of the configuration of PromptQL for the project

| Key | Value | Required | Description |
|-----|-------|----------|-------------|
| systemInstructions | string / null | false | Custom system instructions provided to every PromptQL thread, allowing behavior to be tailored to the project's specific needs. |
| llm | LlmConfig | true | Configuration of the LLM to be used for PromptQL |
| aiPrimitivesLlm | LlmConfig / null | false | Configuration of the default LLM to be used for AI primitives, such as classification and summarization |
| overrideAiPrimitivesLlm | [AiPrimitivesLlmConfig] | false | Configuration of specific LLMs to be used for particular AI primitives, such as classification and summarization |
| featureFlags | PromptQlFeatureFlags / null | false | Feature flags used to enable and disable experimental PromptQL features |

PromptQlFeatureFlags

Feature flags to be used for PromptQL to enable and disable experimental features

| Key | Value | Required | Description |
|-----|-------|----------|-------------|
| enable_automations | boolean / null | false | Enable the experimental automations feature |
| enable_visualizations | boolean / null | false | Enable the experimental visualizations feature |
| enable_visualizations_v2 | boolean / null | false | Enable the experimental visualizations v2 feature |
| <customKey> | | false | |
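As a sketch, the flags above sit under featureFlags in the config's definition (the values chosen here are illustrative):

```yaml
kind: PromptQlConfig
version: v2
definition:
  llm:
    provider: openai
    apiKey:
      valueFromEnv: OPENAI_API_KEY
  featureFlags:
    enable_automations: true
    enable_visualizations: true
```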

AiPrimitivesLlmConfig

Configure PromptQL to use a particular LLM for a specific primitive

| Key | Value | Required | Description |
|-----|-------|----------|-------------|
| primitiveName | LlmPrimitive | true | The name of the operation to override |
| llm | LlmConfig | true | The configuration to use for this operation |

LlmPrimitive

The name of an LLM primitive, such as classify, summarize, extract, and visualize.

Value: string
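For example, each overrideAiPrimitivesLlm entry pairs a primitive name with its own LlmConfig. A sketch, assuming summarize is one of the available primitive names and that the referenced environment variables are set:

```yaml
definition:
  llm:
    provider: openai
    model: o3-mini
    apiKey:
      valueFromEnv: OPENAI_API_KEY
  overrideAiPrimitivesLlm:
    - primitiveName: summarize
      llm:
        provider: anthropic
        apiKey:
          valueFromEnv: ANTHROPIC_API_KEY
```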

LlmConfig

Configuration of the LLM to be used for PromptQL

One of the following values:

| Value | Description |
|-------|-------------|
| HasuraLlmConfig | Configuration settings for the Hasura-configured LLM |
| OpenAiLlmConfig | Configuration settings for an OpenAI LLM |
| AnthropicLlmConfig | Configuration settings for an Anthropic LLM |
| AzureLlmConfig | Configuration settings for an Azure-provided LLM |
| GeminiLlmConfig | Configuration settings for a Gemini LLM |
| BedrockLlmConfig | Configuration settings for an AWS Bedrock-provided LLM |

BedrockLlmConfig

Configuration settings for an AWS Bedrock-provided LLM

| Key | Value | Required | Description |
|-----|-------|----------|-------------|
| provider | bedrock | true | |
| modelId | string | true | The specific AWS Bedrock model to use. |
| regionName | string | true | The specific AWS Bedrock region to use. |
| awsAccessKeyId | EnvironmentValue | true | The AWS access key ID to use for the AWS Bedrock API |
| awsSecretAccessKey | EnvironmentValue | true | The AWS secret access key to use for the AWS Bedrock API |
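A minimal sketch of a Bedrock configuration, built from the fields above; the model ID, region, and environment variable names are illustrative assumptions:

```yaml
llm:
  provider: bedrock
  modelId: anthropic.claude-3-5-sonnet-20240620-v1:0  # illustrative model ID
  regionName: us-east-1                               # illustrative region
  awsAccessKeyId:
    valueFromEnv: AWS_ACCESS_KEY_ID
  awsSecretAccessKey:
    valueFromEnv: AWS_SECRET_ACCESS_KEY
```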

GeminiLlmConfig

Configuration settings for a Gemini LLM

| Key | Value | Required | Description |
|-----|-------|----------|-------------|
| provider | gemini | true | |
| model | string / null | false | The specific Gemini model to use. If not specified, the default model will be used. |
| apiKey | EnvironmentValue | true | The API key to use for the Gemini API |
| safetySettings | GeminiSafetySettings / null | false | Safety settings for the Gemini API |

GeminiSafetySettings

Configuration to control Gemini's safety settings

| Key | Value | Required | Description |
|-----|-------|----------|-------------|
| harassment | GeminiBlockType / null | false | Negative or harmful comments targeting identity and/or protected attributes. |
| hateSpeech | GeminiBlockType / null | false | Content that is rude, disrespectful, or profane. |
| sexuallyExplicit | GeminiBlockType / null | false | Contains references to sexual acts or other lewd content. |
| dangerous | GeminiBlockType / null | false | Promotes, facilitates, or encourages harmful acts. |
| civicIntegrity | GeminiBlockType / null | false | Election-related queries. |
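Putting the two tables above together, a Gemini config with safety settings might look like the sketch below; the model name and environment variable are illustrative:

```yaml
llm:
  provider: gemini
  model: gemini-1.5-pro   # illustrative; omit to use the default model
  apiKey:
    valueFromEnv: GEMINI_API_KEY
  safetySettings:
    harassment: blockOnlyHigh
    dangerous: blockMediumAndAbove
```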

GeminiBlockType

Blocking level used for Gemini safety settings

One of the following values:

| Value | Description |
|-------|-------------|
| blockNone | Always show regardless of probability of unsafe content |
| blockOnlyHigh | Block when high probability of unsafe content |
| blockMediumAndAbove | Block when medium or high probability of unsafe content |
| blockLowAndAbove | Block when low, medium or high probability of unsafe content |

AzureLlmConfig

Configuration settings for an Azure-provided LLM

| Key | Value | Required | Description |
|-----|-------|----------|-------------|
| provider | azure | true | |
| apiVersion | string / null | false | The specific Azure API version to use. If not specified, the default version will be used. |
| model | string / null | false | The specific Azure model to use. If not specified, the default model will be used. |
| endpoint | string | true | The endpoint to use for the Azure LLM API |
| apiKey | EnvironmentValue | true | The API key to use for the Azure API |
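A sketch of an Azure configuration using only the required fields; the endpoint URL and environment variable name are illustrative:

```yaml
llm:
  provider: azure
  endpoint: https://my-resource.openai.azure.com  # illustrative endpoint
  apiKey:
    valueFromEnv: AZURE_API_KEY
```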

AnthropicLlmConfig

Configuration settings for an Anthropic LLM

| Key | Value | Required | Description |
|-----|-------|----------|-------------|
| provider | anthropic | true | |
| model | string / null | false | The specific Anthropic model to use. If not specified, the default model will be used. |
| baseUrl | string / null | false | The base URL to use for the Anthropic API. If not specified, the default URL will be used. |
| apiKey | EnvironmentValue | true | The API key to use for the Anthropic API |
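An Anthropic configuration, sketched from the fields above; the model name and environment variable are illustrative:

```yaml
llm:
  provider: anthropic
  model: claude-3-5-sonnet-latest  # illustrative; omit to use the default model
  apiKey:
    valueFromEnv: ANTHROPIC_API_KEY
```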

OpenAiLlmConfig

Configuration settings for an OpenAI LLM

| Key | Value | Required | Description |
|-----|-------|----------|-------------|
| provider | openai | true | |
| model | string / null | false | The specific OpenAI model to use. If not specified, the default model will be used. |
| baseUrl | string / null | false | The base URL to use for the OpenAI API. If not specified, the default URL will be used. |
| apiKey | EnvironmentValue | true | The API key to use for the OpenAI API |
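The introduction's example already uses this provider; the sketch below additionally shows baseUrl, which is useful for OpenAI-compatible endpoints (the URL shown is illustrative):

```yaml
llm:
  provider: openai
  model: gpt-4o
  baseUrl: https://api.openai.com/v1  # illustrative; omit to use the default URL
  apiKey:
    valueFromEnv: OPENAI_API_KEY
```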

EnvironmentValue

Either a literal string or a reference to a Hasura secret

Must have exactly one of the following fields:

| Key | Value | Required | Description |
|-----|-------|----------|-------------|
| value | string | false | A literal string value |
| valueFromEnv | string | false | The name of an environment variable from which to read the value |
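Exactly one of the two fields may be set. A sketch of both forms, with illustrative values:

```yaml
# Reference an environment variable (as in the example above)
apiKey:
  valueFromEnv: OPENAI_API_KEY

# Or provide a literal value directly (not recommended for secrets)
apiKey:
  value: my-literal-api-key
```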

HasuraLlmConfig

Configuration settings for the Hasura-configured LLM

| Key | Value | Required | Description |
|-----|-------|----------|-------------|
| provider | hasura | true | |
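Since provider is its only field, the Hasura-configured LLM needs no further settings:

```yaml
llm:
  provider: hasura
```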