Miscellaneous
Agenta is an end-to-end LLM developer platform. It provides the tools for prompt engineering and management, evaluation, human annotation, and deployment, all without imposing any restrictions on your choice of framework, library, or model.
Agenta allows developers and product teams to collaborate in building production-grade LLM-powered applications in less time.
Agenta enables prompt engineering and evaluation on any LLM app architecture:
It works with any framework, such as LangChain or LlamaIndex, and any LLM provider (OpenAI, Cohere, Mistral).
| Playground | Evaluation |
| --- | --- |
| Compare and version prompts for any LLM app, from single prompts to agents. | Define test sets, then evaluate your different variants manually or programmatically. |

| Human annotation | Deployment |
| --- | --- |
| Use human annotation to A/B test and score your LLM apps. | When you are ready, deploy your LLM applications as APIs in one click. |
Contact us here for enterprise support and early access to Agenta self-managed enterprise with Kubernetes support.
By default, Agenta automatically reports anonymized basic usage statistics. This helps us understand how Agenta is used and track its overall usage and growth. This data does not include any sensitive information.
To disable anonymized telemetry, follow these steps:

- Set `TELEMETRY_TRACKING_ENABLED` to `false` in your `agenta-web/.env` file.
- Set `telemetry_tracking_enabled` to `false` in your `~/.agenta/config.toml` file.

After making this change, restart Agenta Compose.
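Concretely, the two settings above might look like this (a sketch assuming standard dotenv and TOML syntax; the file paths and key names are those given in the steps):

```
# agenta-web/.env
TELEMETRY_TRACKING_ENABLED=false
```

```toml
# ~/.agenta/config.toml
telemetry_tracking_enabled = false
```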
We warmly welcome contributions to Agenta. Feel free to submit issues, fork the repository, and send pull requests.
We are usually hanging out in our Slack. Feel free to join and ask us anything.
Check out our Contributing Guide for more information.
Thanks goes to these wonderful people (emoji key):
This project follows the all-contributors specification. Contributions of any kind are welcome!
Attribution: Testing icons created by Freepik - Flaticon