Complete TypeScript SDK documentation for `@promptops/sdk`.

## Installation

```bash
npm install @promptops/sdk
# or
yarn add @promptops/sdk
# or
pnpm add @promptops/sdk
```

**Requirements:** Node.js 18+ (native `fetch`), TypeScript 5+. Zero production dependencies.

## Quick Start

```ts
import { PromptOps } from '@promptops/sdk'

const client = new PromptOps({
  apiKey: process.env.PROMPTOPS_API_KEY!,
  baseUrl: 'https://your-api-url',
})

const prompt = await client.getPrompt('welcome-email')
const rendered = client.render(prompt, { userName: 'Sarah' })
```

## API Reference

### new PromptOps(config)

Creates a new PromptOps client instance.

**Config**

```ts
interface PromptOpsConfig {
  // Required: Your PromptOps API key
  apiKey: string

  // Required: Base URL of your PromptOps API
  baseUrl: string

  // Optional: Default environment to fetch prompts for
  // Default: 'production'
  defaultEnvironment?: string

  // Optional: Cache TTL in milliseconds
  // Default: 60000 (60 seconds)
  cacheTtlMs?: number

  // Optional: Request timeout in milliseconds
  // Default: 5000 (5 seconds)
  timeoutMs?: number
}
```

**Example**

```ts
const client = new PromptOps({
  apiKey: 'po_live_abc1234567890...',
  baseUrl: 'https://api.your-domain.com',
  defaultEnvironment: 'staging',
  cacheTtlMs: 30000, // 30 seconds
  timeoutMs: 3000, // 3 seconds
})
```

### getPrompt(slug, options?)

Fetches the currently active version of a prompt for the specified environment.
**Parameters**

- `slug` (string) — The prompt's unique identifier (e.g., `"welcome-email"`)
- `options.environment` (string, optional) — Override the default environment

**Returns**

```ts
interface ResolvedPrompt {
  slug: string
  versionNumber: number
  systemPrompt: string | null
  userTemplate: string | null
  model: string
  temperature: number
  maxTokens: number | null
  metadata: Record<string, unknown>
}
```

**Example**

```ts
// Default environment (production)
const prompt = await client.getPrompt('welcome-email')

// Specific environment
const devPrompt = await client.getPrompt('welcome-email', {
  environment: 'dev',
})
```

### render(prompt, variables)

Interpolates `{{variable}}` placeholders in the prompt's `userTemplate` with the provided values.
**Parameters**

- `prompt` (ResolvedPrompt) — The prompt object from `getPrompt()`
- `variables` (Record<string, string>) — Key-value pairs to substitute

**Returns**

`string` — The rendered template with all variables replaced.

**Example**

```ts
const prompt = await client.getPrompt('welcome-email')
// If userTemplate is "Hello {{userName}}, welcome to {{plan}}!"
const message = client.render(prompt, {
  userName: 'Sarah',
  plan: 'Pro',
})
// → "Hello Sarah, welcome to Pro!"
```

## Caching

The SDK maintains an in-memory cache keyed by `slug` + `environment`. Entries stay fresh for `cacheTtlMs` (default: 60 seconds); during this window, `getPrompt()` returns instantly without hitting the API. Each `PromptOps` instance has its own cache, so if you need shared caching across your app, use a singleton, as sketched below.
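A minimal singleton sketch, assuming a module-scoped client fits your app; the file name and the `PROMPTOPS_BASE_URL` variable are illustrative, not SDK conventions:

```ts
// promptops.ts: export one shared client so every module reuses the same cache.
import { PromptOps } from '@promptops/sdk'

// PROMPTOPS_BASE_URL is an illustrative env var name, not an SDK convention.
export const promptops = new PromptOps({
  apiKey: process.env.PROMPTOPS_API_KEY!,
  baseUrl: process.env.PROMPTOPS_BASE_URL!,
})
```

Importing `promptops` from this module everywhere yields a single instance, and therefore a single prompt cache, thanks to Node's module caching.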
## Error Handling

```ts
try {
  const prompt = await client.getPrompt('my-prompt')
} catch (error) {
  // Error is thrown only if:
  // 1. The API returns an error, AND
  // 2. There is no cached (even stale) value
  console.error('Failed to fetch prompt:', error instanceof Error ? error.message : error)
}
```

## Using with LLM Providers

PromptOps is LLM-agnostic. Here's how to use it with popular providers (the snippets assume an already-initialized `openai` / `anthropic` client).

### OpenAI

```ts
const prompt = await client.getPrompt('classifier')
const message = client.render(prompt, { content: userInput })
const response = await openai.chat.completions.create({
  model: prompt.model,
  temperature: prompt.temperature,
  max_tokens: prompt.maxTokens ?? undefined,
  messages: [
    { role: 'system', content: prompt.systemPrompt ?? '' },
    { role: 'user', content: message },
  ],
})
```

### Anthropic

```ts
const prompt = await client.getPrompt('assistant')
const message = client.render(prompt, { question: userQuery })
const response = await anthropic.messages.create({
  model: prompt.model, // e.g. "claude-3-sonnet-20240229"
  max_tokens: prompt.maxTokens ?? 1024,
  system: prompt.systemPrompt ?? '',
  messages: [{ role: 'user', content: message }],
})
```