Strongest current signal
Package downloads
2.7M/30d
Tracked on PyPI
LLM evaluation framework with the DeepEval testing library
Best current coverage: 2.7M downloads/30d, 93 dependents, and 1 public repo.
Lead signals
Package pull and public code usage both show up clearly for Confident AI.
- 93 known dependents on PyPI
- 14.8K main repository stars
- 1 public repository importing tracked packages
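The "2.7M/30d" label used throughout this page can be derived from a registry snapshot. A minimal sketch, assuming a payload shaped like the pypistats.org "recent downloads" endpoint (the numbers in `SAMPLE_RESPONSE` are illustrative, not live data):

```python
import json

# Hypothetical payload in the shape returned by the pypistats.org
# "recent" endpoint; values are illustrative, not live data.
SAMPLE_RESPONSE = json.dumps({
    "data": {"last_day": 95_000, "last_week": 640_000, "last_month": 2_700_000},
    "package": "deepeval",
    "type": "recent_downloads",
})

def format_count(n: int) -> str:
    """Render a raw download count as a compact label like '2.7M'."""
    if n >= 1_000_000:
        return f"{n / 1_000_000:.1f}M"
    if n >= 1_000:
        return f"{n / 1_000:.1f}K"
    return str(n)

payload = json.loads(SAMPLE_RESPONSE)
label = f"{format_count(payload['data']['last_month'])}/30d"
print(label)  # → 2.7M/30d
```

In production the payload would come from an HTTP request to the registry stats API; the formatting step is the same either way.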
Research Brief
No recent Research Brief centers Confident AI yet. Start with the latest market reporting, then return here for the stored company signals on this page.
Reading
Package pull and existing public code are the clearest current signals for Confident AI.
- Package pull: 2.7M/30d (tracked package pull on PyPI)
- Existing public code: 1 public repository importing tracked packages
- Downstream usage: 93 known dependents on PyPI
- Engineering activity: 1 GitHub contributor active in the last 30 days
Coverage includes npm and PyPI registries, GitHub, public code import detection, and recent company news where available. As of April 14, 2026.
Sustainability and maintenance signals come from the primary public repository. These signals draw on public code import detection and tracked hiring posts rather than registry totals alone; the import figure counts public repositories and source files importing packages tied to Confident AI.
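Import detection of this kind can be sketched with Python's `ast` module. This is a minimal illustration of the technique, not the site's actual pipeline, and the sample source string is hypothetical:

```python
import ast

# Hypothetical source file from a public repository; the goal is to
# detect whether it imports a tracked package such as "deepeval".
SOURCE = """
import deepeval
from deepeval.metrics import AnswerRelevancyMetric
import os
"""

def imported_packages(source: str) -> set[str]:
    """Return the set of top-level package names a module imports."""
    packages = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            for alias in node.names:
                packages.add(alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom) and node.module:
            packages.add(node.module.split(".")[0])
    return packages

TRACKED = {"deepeval"}
hits = imported_packages(SOURCE) & TRACKED
print(hits)  # → {'deepeval'}
```

Parsing the AST rather than grepping for the string `import deepeval` avoids false positives from comments and docstrings, and catches `from deepeval.metrics import ...` forms as well.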
Background and reference details
Confident AI provides an LLM evaluation framework, primarily through its open-source DeepEval library, to help developers test and evaluate their large language model applications. They also offer the Confident AI Platform, an enterprise solution for advanced LLM evaluation, monitoring, and guardrails, aiming to ensure the reliability and safety of AI systems in production.
Raised $500K total; DAI rank #46 suggests moderate developer adoption relative to funding.
Y Combinator
86K PyPI downloads
An enterprise platform for advanced LLM evaluation, monitoring, and guardrails, built on the DeepEval framework.
An open-source LLM evaluation framework for testing and evaluating large language model applications.
Public pricing snapshots collected for Confident AI
Source: Company pricing page. Updates: Weekly. Note: Extracted via automated page analysis; verify on source.
Historical metrics for Confident AI
Confident AI: GitHub Stars up 5% (14.1K to 14.8K). Contributors down 50% (2 to 1).
| Date | PyPI Downloads | GitHub Stars | Contributors |
|---|---|---|---|
| Mar 16, 2026 | - | 14.1K | 2 |
| Mar 22, 2026 | 555.9K | - | - |
| Mar 23, 2026 | - | 14.1K | 2 |
| Mar 29, 2026 | 757.0K | - | - |
| Mar 30, 2026 | - | 14.4K | 1 |
| Apr 5, 2026 | 655.9K | - | - |
| Apr 6, 2026 | - | 14.6K | 0 |
| Apr 12, 2026 | 455.4K | - | - |
| Apr 13, 2026 | 85.9K | 14.7K | 0 |
| Apr 14, 2026 | - | 14.8K | 1 |
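The headline deltas above can be rechecked from the table's two endpoints (Mar 16: 14.1K stars, 2 contributors; Apr 14: 14.8K stars, 1 contributor); a short sketch:

```python
# Recompute the summary deltas from the endpoints of the table above.
def pct_change(old: float, new: float) -> float:
    """Percent change from old to new."""
    return (new - old) / old * 100

stars = pct_change(14_100, 14_800)
contributors = pct_change(2, 1)
print(f"Stars: {stars:+.0f}%")                # → Stars: +5%
print(f"Contributors: {contributors:+.0f}%")  # → Contributors: -50%
```

Note the star figures are rounded to the nearest 0.1K in the table, so the exact percentage (about +4.96%) is itself approximate; the summary's "up 5%" is consistent with that.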