Getting Started

Set up PromptOps and manage your first prompt in under 5 minutes.

1. Create a Project

Every team or application starts with a Project. Creating a project gives you your first API key. Save it now; it's only shown once.

curl -X POST https://your-api-url/api/v1/projects \
  -H "Content-Type: application/json" \
  -d '{"name": "My AI App"}'

Response:

{
  "data": {
    "project": {
      "id": "abc-123",
      "name": "My AI App"
    },
    "apiKey": "po_live_abc1234567890..."
  }
}

Alternatively, open the Dashboard and click "Create New Project" — the API key will be displayed on screen.

2. Install the SDK

npm install @promptops/sdk

The SDK has zero production dependencies and uses native fetch (Node.js 18+, Deno, Bun).

3. Create a Prompt

Prompts are identified by a slug — a human-readable, URL-safe identifier like onboarding-email or support-classifier.

curl -X POST https://your-api-url/api/v1/prompts \
  -H "Authorization: Bearer po_live_abc1234567890..." \
  -H "Content-Type: application/json" \
  -d '{
    "slug": "welcome-email",
    "name": "Welcome Email Generator",
    "description": "Generates personalized welcome emails"
  }'

4. Add a Version

Each prompt has versions. When you create a version, it's automatically deployed to the dev environment. Replace {promptId} below with the id returned when you created the prompt.

curl -X POST https://your-api-url/api/v1/prompts/{promptId}/versions \
  -H "Authorization: Bearer po_live_abc1234567890..." \
  -H "Content-Type: application/json" \
  -d '{
    "systemPrompt": "You are a friendly assistant that writes welcome emails.",
    "userTemplate": "Write a welcome email for {{userName}} who signed up for the {{plan}} plan.",
    "model": "gpt-4",
    "temperature": 0.7
  }'
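
The {{userName}} and {{plan}} placeholders are filled in at render time by the SDK's render helper (see step 5), so you never need to write the substitution yourself. As a rough sketch of what that substitution amounts to, assuming simple mustache-style string replacement rather than the SDK's actual implementation:

// Illustration only: naive {{variable}} substitution. The SDK's render()
// does this for you; this is not its real implementation.
function renderTemplate(template: string, variables: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    // Leave unknown placeholders untouched so missing variables are easy to spot
    key in variables ? variables[key] : match,
  )
}

renderTemplate(
  'Write a welcome email for {{userName}} who signed up for the {{plan}} plan.',
  { userName: 'Sarah', plan: 'Pro' },
)
// => 'Write a welcome email for Sarah who signed up for the Pro plan.'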

5. Use in Your App

import { PromptOps } from '@promptops/sdk'

const promptOps = new PromptOps({
  apiKey: process.env.PROMPTOPS_API_KEY,
  baseUrl: 'https://your-api-url',
})

// Fetch the active prompt for your environment
const prompt = await promptOps.getPrompt('welcome-email')

// Render with variables
const message = promptOps.render(prompt, {
  userName: 'Sarah',
  plan: 'Pro',
})

// Use with your LLM
const response = await openai.chat.completions.create({
  model: prompt.model,
  temperature: prompt.temperature,
  messages: [
    { role: 'system', content: prompt.systemPrompt },
    { role: 'user', content: message },
  ],
})

6. Promote to Production

Once you're happy with a version in dev, promote it to staging or production:

curl -X POST https://your-api-url/api/v1/prompts/{promptId}/promote \
  -H "Authorization: Bearer po_live_abc1234567890..." \
  -H "Content-Type: application/json" \
  -d '{
    "environment": "production",
    "versionId": "version-uuid"
  }'

Your SDK will now automatically resolve the production version when called with environment: "production".
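
For example, a minimal sketch in TypeScript. Exactly where the environment option is passed may vary; the placement on the client constructor below is an assumption.

// Assumption: environment is set on the client; depending on the SDK it
// may instead be passed per getPrompt() call.
const promptOps = new PromptOps({
  apiKey: process.env.PROMPTOPS_API_KEY,
  baseUrl: 'https://your-api-url',
  environment: 'production',
})

// Resolves whichever version is currently promoted to production
const prompt = await promptOps.getPrompt('welcome-email')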

Next Steps