Introduction
Better prompts in one API call
Welcome to Vizpy
Vizpy automatically optimizes your LLM prompts by learning from failures. One API call, dramatically better results.
Why Vizpy?
The Problem
Prompt engineering is tedious:
- Manual trial and error
- Hard to improve systematically
- No insight into why prompts fail
- Fixes for one case break others
The Solution
Vizpy uses contrastive learning to understand failure patterns and generate targeted improvement rules:
- Run your examples through the prompt
- Identify failures and successful retries
- Extract rules from bad→good transitions
- Validate each rule (reject rules that hurt)
- Synthesize into clear, actionable instructions
The result: prompts that work better and improvements you can understand.
Key Features
One API Call
No complex setup. Pass in examples, get back an optimized prompt.
Learns from Failures
Contrastive learning extracts rules from what went wrong.
Validates Rules
Each rule is tested. Harmful rules are rejected.
Interpretable
See exactly what rules improved your prompt.
Works with DSPy
Native integration with DSPy modules and signatures.
Production Ready
Metered billing, usage tracking, enterprise support.
Quick Example
How It Works
Solve
Run examples through your module with retries. Collect failures and successful corrections.
Mine
Extract contrastive pairs (bad→good). Focus on the examples with the largest improvement.
Extract
An LLM analyzes the pairs to generate rules: "When X happens, do Y instead of Z."
Validate
Test each rule on held-out examples. Reject rules that hurt performance.
Synthesize
Combine rules into clear instructions. Inject into your module's prompt.
Iterate
Repeat until there is no further improvement or the iteration limit is reached.
Pricing
Simple, predictable pricing. One run = one optimize() call.
| Plan | Price | Runs/Month | Best For |
|---|---|---|---|
| Free | $0 | 3 | Trying it out |
| Pro | $49 | 50 | Indie devs |
| Team | $199 | 200 | Startups |
| Enterprise | Custom | Unlimited | Scale |