Strongest current signal
Package downloads
2.6M/30d
Across npm and PyPI
AI compute company with wafer-scale chips
Best current coverage: 2.6M downloads/30d, 83 dependents, and 123 GitHub stars.
Lead signals
Cerebras's strongest coverage is in package pull, with downstream usage and engineering activity as supporting signals.
Research Brief
No recent Research Brief centers Cerebras yet. Start with the latest market reporting, then return here for the stored company signals on this page.
Reading
Package pull and downstream usage are the clearest current signals for Cerebras.
Package pull
2.6M/30d
Tracked package pull across npm and PyPI
Downstream usage
83
Known dependent packages across both registries
GitHub attention
123
Main repository stars
Developer discussion
23/30d
Ranked #16 in tracked category discussion
Coverage includes npm and PyPI registries, GitHub, developer discussion, and recent company news where available. As of April 13, 2026.
Sustainability and maintenance signals from the primary public repository.
Background and reference details
Company background, categories, funding, and tools.
Cerebras Systems is an AI compute company that designs and builds specialized hardware and software for accelerating deep learning workloads. Their flagship product, the CS-2 system, features the Wafer-Scale Engine (WSE), the world's largest AI chip, enabling unprecedented performance for training large AI models. They aim to overcome traditional compute limitations, making them a significant player in the high-performance AI infrastructure ecosystem.
Raised $710M in total; DAI rank #66 suggests moderate developer adoption relative to funding.
Alpha Wave Ventures, Abu Dhabi Growth Fund
Altimeter Capital
Sequoia Capital
Vy Capital
Benchmark
6K npm · 25K PyPI (50% of company total)
A comprehensive software stack that optimizes AI model training and inference on Cerebras hardware, supporting popular frameworks like PyTorch and TensorFlow.
A complete AI supercomputer designed for deep learning, featuring the Wafer-Scale Engine 2 (WSE-2) for unparalleled compute performance.
The world's largest and fastest AI chip, designed to accelerate deep learning workloads with massive on-chip memory and bandwidth.
Public pricing snapshots collected for Cerebras
Source: Company pricing page · Updates: Weekly · Note: Extracted via automated page analysis; verify on source
Historical metrics for Cerebras
Cerebras: npm Downloads down 92% (69.6K to 5.8K). PyPI Downloads down 89% (221.0K to 24.6K). GitHub Stars up 2% (121 to 123).
| Date | npm Downloads | PyPI Downloads | GitHub Stars |
|---|---|---|---|
| Mar 15, 2026 | 69.6K | 221.0K | 121 |
| Mar 22, 2026 | 45.4K | 214.7K | 123 |
| Mar 29, 2026 | 92.3K | 214.3K | 123 |
| Apr 5, 2026 | 73.5K | 424.4K | 123 |
| Apr 12, 2026 | 47.4K | 172.6K | 123 |
| Apr 13, 2026 | 5.8K | 24.6K | 123 |
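The trend percentages in the summary above can be reproduced from the first (Mar 15) and last (Apr 13) rows of the table. A minimal sketch; the helper name `pct_change` is illustrative, not part of the dashboard's methodology:

```python
def pct_change(start: float, end: float) -> int:
    """Percent change from start to end, rounded to the nearest whole percent."""
    return round((end - start) / start * 100)

# Values taken from the Mar 15 and Apr 13 rows of the table above
npm_change = pct_change(69_600, 5_800)       # -92  (npm downloads down 92%)
pypi_change = pct_change(221_000, 24_600)    # -89  (PyPI downloads down 89%)
stars_change = pct_change(121, 123)          # 2    (GitHub stars up 2%)
```

Note that the final row is a single-day snapshot (Apr 13), so the steep download drop may reflect a partial reporting window rather than a sustained decline; verify against the weekly rows before drawing conclusions.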