As large language models (LLMs) become integral to products across industries, prompt engineering has emerged as a critical discipline. But managing prompts isn't just about crafting clever sentences; it involves iteration, testing, version control, and performance tracking.
In this blog post, we’ll delve into Eden AI’s Prompt Manager, a tool designed to streamline the entire lifecycle of prompts, including creation, A/B testing, and API-based execution.
Whether you're an engineer, data scientist, or prompt enthusiast, this detailed walkthrough will empower you to manage prompts like a pro.
For a hands-on learning experience, you can also watch the tutorial on the Eden AI YouTube channel, where Krishna, our Developer Advocate, walks you through each step so you can follow along as you build and test your own prompt management API.
We’ll walk you through:
- Why prompt management matters
- How Eden AI's Prompt Manager works
- Setting up a FastAPI service to manage and test prompts
- Full API walkthrough: create, list, retrieve, execute, update, and delete prompts
- Version history and A/B testing for optimization
Why Prompt Management Matters
Prompt engineering is both art and science. But even well-written prompts can underperform in real applications. Without a structured way to manage them, teams often:
- Hardcode prompts into backends
- Lose track of which versions are deployed
- Lack performance metrics or version control
- Struggle with collaboration and iteration
Eden AI solves these challenges with a Prompt Manager that acts like GitHub for prompts, providing creation, storage, testing, and analytics tools, all accessible via UI and API.
Overview of Eden AI’s Prompt Manager
Eden AI’s Prompt Manager allows you to:
- Create and version prompts with metadata
- Test prompts in real-time using multiple LLM providers
- Monitor performance across versions
- Integrate via API for use in production apps
You can use it visually (via their dashboard) or programmatically using an API — the latter being ideal for automated testing, CI/CD pipelines, or prompt orchestration engines.
Building a Prompt Management API with FastAPI
The demo code provided in the video sets up a FastAPI proxy service that interfaces with Eden AI’s backend. This is useful for abstracting credentials and routing requests from your application logic.
Step 1: Environment Setup
First, make sure you have a .env file containing your Eden AI API key.
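A minimal example (the variable name `EDEN_AI_API_KEY` is illustrative; use whatever name your code expects):

```shell
# .env — keep this file out of version control
EDEN_AI_API_KEY=your_api_key_here
```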
The code loads it securely using python-dotenv.
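A minimal sketch of the loading logic, assuming the key is stored under the name `EDEN_AI_API_KEY` and that Eden AI expects Bearer authentication (both are assumptions here):

```python
import os

try:
    from dotenv import load_dotenv  # provided by the python-dotenv package
    load_dotenv()  # read variables from .env into the process environment
except ImportError:
    pass  # if python-dotenv is absent, fall back to the plain environment

EDEN_AI_API_KEY = os.getenv("EDEN_AI_API_KEY", "")

def auth_headers(api_key: str = EDEN_AI_API_KEY) -> dict:
    """Headers attached to every proxied request (Bearer auth is assumed)."""
    return {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"}
```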
Core Features and Endpoints
Now, let’s break down the full functionality implemented in custom_prompts.py.
1. List All Prompts
You can retrieve all prompts stored in your Eden AI dashboard, with pagination support.
This endpoint proxies GET requests to Eden AI's prompt store.
Useful for listing available versions or prompts used across different use cases.
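As a sketch, the proxy can build the upstream URL like this (the base URL and query parameter names are assumptions; check the official docs for the exact contract):

```python
from urllib.parse import urlencode

# Assumed upstream base URL for Eden AI's prompt store.
EDEN_PROMPTS_URL = "https://api.edenai.run/v2/prompts"

def list_prompts_url(page: int = 1, page_size: int = 20) -> str:
    """Build the upstream URL for a paginated prompt listing."""
    query = urlencode({"page": page, "page_size": page_size})
    return f"{EDEN_PROMPTS_URL}/?{query}"
```

A FastAPI route would forward GET requests to this URL with the Authorization header and relay the JSON response back to the caller.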
2. Create a New Prompt
To define a new prompt with template variables, send a POST request whose body contains the prompt's name, template text, and metadata.
This allows structured, reusable prompts with placeholders (e.g. {{text}}).
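A hedged sketch of the request body the proxy might forward (field names such as `name`, `text`, `provider`, and `model` are illustrative, not the documented schema):

```python
def build_create_payload(name: str, template: str, provider: str, model: str) -> dict:
    """Assemble an illustrative prompt-creation payload with {{placeholders}}."""
    return {
        "name": name,
        "text": template,      # template with {{variable}} placeholders
        "provider": provider,  # e.g. an LLM provider slug
        "model": model,
    }

payload = build_create_payload(
    name="summarizer",
    template="Summarize the following text in two sentences: {{text}}",
    provider="openai",
    model="gpt-4o",
)
```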
3. Retrieve a Specific Prompt
Fetch any individual prompt by name.
This enables loading the full template, including the version history and metadata.
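A sketch, assuming prompts are addressed by name as a path segment under the same assumed base URL:

```python
from urllib.parse import quote

EDEN_PROMPTS_URL = "https://api.edenai.run/v2/prompts"  # assumed base URL

def prompt_url(name: str) -> str:
    """URL for a single prompt, with the name percent-encoded for safety."""
    return f"{EDEN_PROMPTS_URL}/{quote(name, safe='')}/"
```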
4. Execute (Call) a Prompt
Once a prompt is defined, test it by sending values for its variables in the request body.
Eden AI will substitute the variable and send it to the chosen LLM provider, returning the response.
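The substitution step can be illustrated locally with a simplified stand-in (this is not Eden AI's actual templating engine):

```python
import re

def render_template(template: str, variables: dict) -> str:
    """Replace {{name}} placeholders with the supplied values."""
    def substitute(match: re.Match) -> str:
        key = match.group(1).strip()
        # Leave unknown placeholders untouched rather than failing.
        return str(variables.get(key, match.group(0)))
    return re.sub(r"\{\{\s*([^}]+?)\s*\}\}", substitute, template)

# Example body a caller might POST to the execute endpoint:
body = {"text": "FastAPI is a modern Python web framework."}
rendered = render_template("Summarize: {{text}}", body)
```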
5. Update or Patch a Prompt
Prompts can be updated using either:
- PUT (for full replacement) or
- PATCH (for partial updates)
Both operations are routed to Eden AI's API, keeping your local service and Eden AI's prompt store in sync.
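A sketch of how the proxy might distinguish the two, using the stdlib for illustration (the upstream URL is the assumed one from earlier; the HTTP semantics are standard):

```python
import json
import urllib.request

EDEN_PROMPTS_URL = "https://api.edenai.run/v2/prompts"  # assumed base URL

def build_update_request(name: str, fields: dict, partial: bool) -> urllib.request.Request:
    """PATCH sends only the changed fields; PUT replaces the whole prompt."""
    return urllib.request.Request(
        url=f"{EDEN_PROMPTS_URL}/{name}/",
        data=json.dumps(fields).encode("utf-8"),
        method="PATCH" if partial else "PUT",
        headers={"Content-Type": "application/json"},
    )

req = build_update_request("summarizer", {"text": "New template: {{text}}"}, partial=True)
```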
6. Delete a Prompt
Remove a prompt when it is deprecated or no longer needed.
Returns a 204 No Content response on success.
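A minimal sketch of the outgoing request, under the same assumed base URL:

```python
import urllib.request

EDEN_PROMPTS_URL = "https://api.edenai.run/v2/prompts"  # assumed base URL

def build_delete_request(name: str) -> urllib.request.Request:
    """DELETE request for a prompt; a 204 response signals success."""
    return urllib.request.Request(f"{EDEN_PROMPTS_URL}/{name}/", method="DELETE")
```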
Advanced Feature: Prompt Version History and A/B Testing
Version control is a key advantage of Eden AI's prompt manager. Each time you modify a prompt and want to test performance, you can create a versioned history entry.
This allows you to:
- Track performance per version
- A/B test variants in production
- Roll back underperforming prompts
You can retrieve, update, or delete these history entries through dedicated endpoints, and you can even extract the template variables used in any version.
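The variable-extraction behavior can be approximated locally with a simplified stand-in (not Eden AI's implementation):

```python
import re

def extract_variables(template: str) -> list:
    """Return the unique {{placeholder}} names in order of first appearance."""
    seen = []
    for match in re.finditer(r"\{\{\s*([^}]+?)\s*\}\}", template):
        name = match.group(1).strip()
        if name not in seen:
            seen.append(name)
    return seen
```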
Error Handling and Security
All endpoints include robust error handling to:
- Return appropriate status codes (400, 500, etc.)
- Provide user-friendly error messages
- Catch JSON decoding issues and request failures
For example, errors returned by Eden AI are caught and translated into a matching HTTP response rather than leaking raw tracebacks.
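A sketch of such a mapping layer (the status-code buckets and message wording are illustrative):

```python
def map_upstream_error(status_code: int, detail: str) -> dict:
    """Translate an upstream Eden AI error into a client-friendly response."""
    if 400 <= status_code < 500:
        # Client-side problems (bad payload, unknown prompt) pass through.
        return {"status_code": status_code, "error": f"Request rejected: {detail}"}
    # Anything else is surfaced as a generic server error.
    return {"status_code": 500, "error": "Upstream service error; please retry later."}
```

In the FastAPI layer this dict would typically be raised as an HTTPException so the framework serializes it consistently.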
This ensures the FastAPI layer remains clean and stable.
Running the API
To run the application locally, start the FastAPI app with Uvicorn.
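Assuming the module is named custom_prompts.py and exposes an `app` object (as in the repo), a typical invocation looks like:

```shell
# Install dependencies (illustrative list), then start the dev server with auto-reload
pip install fastapi uvicorn python-dotenv requests
uvicorn custom_prompts:app --reload --port 8000
```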
Once live, you can use Postman or Curl to test all endpoints. Better yet, hook it into your application to dynamically fetch and execute prompts from Eden AI.
Final Thoughts
Eden AI’s Prompt Manager is not just a storage solution; it’s a full lifecycle platform for LLM prompt optimization. By combining a powerful UI with a flexible API, it supports scalable, collaborative, and performance-driven prompt engineering.
With the FastAPI proxy, you can:
- Seamlessly integrate prompt management into your backend
- Track and test prompt variants in production
- Manage updates without service interruptions
Whether you're building a chatbot, summarization tool, or content generator, prompt management will be crucial to quality and consistency, and Eden AI provides one of the most developer-friendly solutions in the space.
Resources
Access the full Prompt Manager API documentation to understand every available feature and parameter: Prompt Management Documentation
Want to see how everything works under the hood? Check out the complete FastAPI implementation in our open-source repo: custom_prompts.py on GitHub
Experience the power of structured, version-controlled, API-driven prompt workflows, no setup required: Try Eden AI Prompt Manager