Golden Prompt Generator

Transform vague ideas into brutal 10-point prompts

Advanced Options
Generating a prompt may take a few seconds.

Generated Prompt

The generated prompt appears here once generation completes.
Configuration

📝 Prompt Templates

Phrase template: technical, precise, specific - for generating implementation phrases

Metaphor template: creative, catchy, memorable - for generating metaphors and intro hooks
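As an illustration, the two templates could be stored as named system prompts keyed by output type. This is a hypothetical sketch in Python; the registry name and the template wording are assumptions, not the app's actual templates.

```python
# Hypothetical template registry; the wording below is illustrative,
# not the app's actual templates.
TEMPLATES = {
    "phrase": (
        "You are a precise technical writer. Generate specific, "
        "implementation-ready phrases for this idea:\n{idea}"
    ),
    "metaphor": (
        "You are a creative copywriter. Generate catchy metaphors and "
        "intro hooks for this idea:\n{idea}"
    ),
}

def render(template_name: str, idea: str) -> str:
    """Fill the chosen template with the user's idea."""
    return TEMPLATES[template_name].format(idea=idea)
```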

💡 Use "Configuration Presets" below to save and load your configurations (stored on the server).

🎛️ Generation Parameters

Temperature (Phrase): Provider Default
Temperature (Metaphor): Provider Default
Max Tokens (Phrase): Provider Default
Max Tokens (Metaphor): Provider Default
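"Provider Default" means the parameter is not sent at all, so the provider's own default applies. Below is a minimal sketch of how the orchestrator might forward these overrides; the function shape and model name are assumptions, while `litellm.completion` with `temperature` and `max_tokens` is the library's standard interface.

```python
import litellm

def generate(prompt: str,
             temperature: float | None = None,
             max_tokens: int | None = None) -> str:
    """Call the LLM, passing overrides only when they are set.

    None means "Provider Default": the key is omitted entirely, so the
    provider's own default applies.
    """
    params = {}
    if temperature is not None:
        params["temperature"] = temperature
    if max_tokens is not None:
        params["max_tokens"] = max_tokens
    response = litellm.completion(
        model="cerebras/llama3.1-8b",  # assumed model name, illustrative only
        messages=[{"role": "user", "content": prompt}],
        **params,
    )
    return response.choices[0].message.content
```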

Validation Rules

Each of the three rules defines a Min and a Max bound; values outside the range fail validation (see the sketch below).
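A minimal sketch of the min/max pattern the three rules imply; the rule names and bounds here are hypothetical, chosen only to show the shape of a rule.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    """One min/max validation rule."""
    name: str
    min_value: int
    max_value: int

    def check(self, value: int) -> bool:
        return self.min_value <= value <= self.max_value

# Three illustrative rules matching the three Min/Max rows above;
# the names and bounds are hypothetical.
RULES = [
    Rule("phrase_word_count", 3, 12),
    Rule("metaphor_word_count", 5, 20),
    Rule("prompt_point_count", 10, 10),
]

def violations(values: dict[str, int]) -> list[str]:
    """Return the names of all rules the given values violate."""
    return [r.name for r in RULES if not r.check(values.get(r.name, 0))]
```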

💾 Configuration Presets

☁️ Server Storage: Presets are saved in the database and available on all your devices automatically!

🔧 Advanced: Import/Export JSON (for backup or sharing)

Use Export for backups or Import to restore from a file. For normal use, just use "Save Current As..." above.
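A sketch of what Export and Import could look like, assuming a preset is a flat JSON object; every field name below is an assumption, not the app's actual schema.

```python
import json
from pathlib import Path

def export_preset(preset: dict, path: str) -> None:
    """Write a preset to a JSON file for backup or sharing."""
    Path(path).write_text(json.dumps(preset, indent=2))

def import_preset(path: str) -> dict:
    """Restore a preset from a JSON file."""
    return json.loads(Path(path).read_text())

# Illustrative preset; field names are assumptions, and None maps to
# "Provider Default" in the UI.
preset = {
    "name": "default",
    "temperature_phrase": None,
    "temperature_metaphor": None,
    "max_tokens_phrase": None,
    "max_tokens_metaphor": None,
}
export_preset(preset, "preset-backup.json")
print(import_preset("preset-backup.json")["name"])  # -> default
```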

Debug Panel

When enabled, each generation request captures detailed LLM request/response data.

No debug data is shown until you enable debug mode above and generate a prompt.
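For illustration, one captured debug record might pair the exact outbound request with the exact inbound response; the field names in this sketch are assumptions, not the app's actual schema.

```python
import time
import uuid

def capture(request: dict, response: dict) -> dict:
    """Build one debug record for a generation request.

    Field names are illustrative, not the app's actual schema.
    """
    return {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "request": request,    # exact payload sent to LiteLLM
        "response": response,  # exact payload LiteLLM returned
    }
```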

LLM Request/Response Logs


🔍 Transparency & Integrity Verification

What You're Seeing

RAW, UNTAMPERED logs verifying EVERY step:

  • ✓ What Orchestrator sends to LiteLLM
  • ✓ What LiteLLM sends to API (Cerebras/OpenAI)
  • ✓ What API returns to LiteLLM
  • ✓ What LiteLLM returns to Orchestrator
  • ✓ SHA-256 hashes at each hop verify NO tampering (see the hashing sketch below)
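The hashing idea, sketched under the assumption that each hop hashes a canonical JSON encoding of the exact payload it sends or receives: if the hashes recorded at two adjacent hops differ, something modified the payload in between. `hashlib` and `json` are Python standard-library modules; the payload contents are illustrative.

```python
import hashlib
import json

def payload_hash(payload: dict) -> str:
    """SHA-256 over a canonical JSON encoding of the payload.

    Sorted keys and fixed separators make the encoding deterministic,
    so the same logical payload always yields the same hash.
    """
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Illustrative check between two adjacent hops: what the Orchestrator
# says it sent must hash to what LiteLLM says it received.
sent = {"temperature": 0.7, "prompt": "example"}
received = {"prompt": "example", "temperature": 0.7}
assert payload_hash(sent) == payload_hash(received), "payload modified in transit"
```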

⚠️ The Full Chain (Every Point of Failure)

Tampering can happen at MANY points:

  1. Frontend JS: Could filter input
  2. Orchestrator: Could modify prompt
  3. LiteLLM Proxy: ⚠️ BLACK BOX - could silently change temperature or other parameters
  4. API Provider: Could have filters
  5. LiteLLM Return: Could filter response
  6. Orchestrator: Could validate/hide results

This tab shows EVERYTHING - no gaps, no black boxes.

Transparency logs are loaded from the Docker containers.