Miscellaneous
Agenta is a platform for building production-grade LLM applications. It helps engineering and product teams create reliable LLM apps faster through integrated prompt management, evaluation, and observability.
- Prompt management: collaborate with Subject Matter Experts (SMEs) on prompt engineering and make sure nothing breaks in production.
- Evaluation: evaluate your LLM applications systematically with both human and automated feedback. Explore evaluation frameworks →
- Observability: get visibility into your LLM applications in production.
The easiest way to get started is through Agenta Cloud. Free tier available with no credit card required.
```bash
git clone https://github.com/Agenta-AI/agenta && cd agenta
docker compose -f hosting/docker-compose/oss/docker-compose.gh.yml --env-file hosting/docker-compose/oss/.env.oss.gh --profile with-web up -d
```
Once the services are up, open http://localhost in your browser.

For deploying on a remote host or using different ports, refer to our self-hosting and remote deployment documentation.
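If you want to check that the stack started correctly, the same compose file, env file, and profile flags can be reused with standard Docker Compose subcommands. This is a minimal sketch; the service names and output depend on the compose file shipped in the repository:

```bash
# List the services started by the Agenta compose file and their current status
docker compose -f hosting/docker-compose/oss/docker-compose.gh.yml \
  --env-file hosting/docker-compose/oss/.env.oss.gh \
  --profile with-web ps

# Follow the logs while the stack comes up (Ctrl+C to stop following)
docker compose -f hosting/docker-compose/oss/docker-compose.gh.yml \
  --env-file hosting/docker-compose/oss/.env.oss.gh \
  --profile with-web logs -f

# Tear the stack down again when you are done
docker compose -f hosting/docker-compose/oss/docker-compose.gh.yml \
  --env-file hosting/docker-compose/oss/.env.oss.gh \
  --profile with-web down
```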
Find help, explore resources, or get involved:
We welcome contributions of all kinds, from filing issues and sharing ideas to improving the codebase.
Consider giving us a star! It helps us grow our community and gets Agenta in front of more developers.
Thanks goes to these wonderful people (emoji key):
This project follows the all-contributors specification. Contributions of any kind are welcome!
By default, Agenta automatically reports anonymized basic usage statistics. This helps us understand how Agenta is used and track its overall usage and growth. This data does not include any sensitive information. To disable anonymized telemetry, set `TELEMETRY_ENABLED` to `false` in your `.env` file.
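As a concrete example, assuming the self-hosted quickstart above, the env file in use is the one passed via --env-file; adding this line there (or to whichever .env file your deployment reads) turns the reporting off:

```bash
# Disable anonymized usage reporting
# (for the quickstart above, this would go in hosting/docker-compose/oss/.env.oss.gh)
TELEMETRY_ENABLED=false
```

You will typically need to re-run the `docker compose ... up -d` command so the containers pick up the new value.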