The IPO of Cerebras AI
Written by Christopher Walsh, Partner @ 7GC
Cerebras, a leading semiconductor company, announced its Nasdaq IPO in October 2024.
As we highlighted in August 2024:
“…capital markets are cyclical and will be the catalyst for the asset class. While we are not macro experts, we have seen substantial institutional investor and retail public equity demand in the past 24 months led by technology investment. Whether we see a decline in interest rates at a certain rate or not, generative AI investment and investor appetite should turn capital markets within the next 12 months in our view. A catalyst that will likely create an upturn in LP sentiment, turning on the virtuous cycle all over again.”
As Matt Turck keenly pointed out, Cerebras is a "pure play AI" company. Public market investors have had limited ways to invest in the generative AI wave outside of Nvidia and the "picks and shovels" supply chain and utility companies that Nvidia carries with it. This deal scarcity should drive strong demand that could set the stage for the next wave of "AI adjacent" companies.
What Does Cerebras Do?
Cerebras is a leader in the semiconductor segment for high-performance computing and is based in Sunnyvale, CA. It is the first "pure-play" generative AI company to file since the release of ChatGPT in 2022. The company plans to trade on the Nasdaq under the ticker "CBRS." Citigroup is the lead underwriter on the offering, and the company should trade within the next few weeks, barring CFIUS concerns. Cerebras was founded in 2016, and in its most recent quarter (ending June 30, 2024), the company generated $69.7M of revenue, growing 1,118% year-over-year.
Semiconductor businesses are highly complex, and Cerebras certainly falls into this camp, although its business model is not.
Cerebras has developed an A.I. compute platform specifically designed to accelerate complex AI workloads, mainly focused on the computational demands of Generative AI (GenAI) applications. Their approach is novel relative to Nvidia, with their product focused on inference versus training.
The company's thesis is that GPUs are too small, and scaling them out is therefore highly inefficient, especially for inference. Management argues that GPUs have too little memory bandwidth, which drives low performance as GPU cores sit idle waiting for data – they can run at less than 5% utilization on certain inference tasks. Unsurprisingly, this poor efficiency necessitates larger GPU deployments, dramatically driving up the cost of inference.
The S-1 pitch boils down to the idea that GPU companies (i.e., Nvidia) have no clear solution to the memory size and memory bandwidth problems, and that Cerebras' solution is therefore better suited for the next leg of generative AI.
Their Product
The core of their platform is the Wafer-Scale Engine (WSE), a proprietary chip encompassing an entire silicon wafer. The WSE is roughly 57 times larger than other commercially available processors like GPUs. Given its size, it has far larger on-chip memory (7,000 times that of Nvidia GPUs) and is not reliant on networking bandwidth.
Cerebras WSE-3 vs. the Latest GPU
To house the WSE, the company has the Cerebras System ("C.S."), a complete AI system that houses the WSE and provides the necessary power, cooling, and connectivity infrastructure. The structure is designed to easily integrate with standard data center builds.
The Cerebras "C.S." AI System
As a third step, the company has configured what they call the Cerebras AI Supercomputer, which connects multiple C.S. units, creating a large-scale AI supercomputer. This architecture enables customers to train massive AI models without the complexity of distributed computing.
The Cerebras AI Supercomputer
Cerebras' AI compute platform is designed to dramatically accelerate AI workloads, particularly training and inference for large-scale GenAI models. By leveraging wafer-scale integration and a comprehensive software stack, Cerebras aims to provide a high-performance, user-friendly solution that reduces time-to-solution, simplifies programming, and lowers power consumption compared to traditional GPU-based approaches. This technology enables customers to develop and deploy advanced AI applications more efficiently.
The company makes bold performance claims in the S-1, stating its wafer delivers 10 times faster training time-to-solution and over 10 times faster output generation than other top GPU-based inference solutions (enabling real-time interactivity for AI applications and agents).
Company Milestones
Despite filing for an IPO, the company has only been operating for eight years:
2016: Cerebras Systems was incorporated in April in Delaware
2017: Raised $25M in Series B and $65M in Series C financing
2018: Raised $80M in Series D financing
2019: Raised $272M in Series E financing
2020: Shipped the first CS-1 Cerebras System
2021: Shipped the first CS-2 Cerebras System and raised $254M in Series F financing
2023: Entered into commercial agreements with strategic customer G42
2024: Introduced the CS-3 Cerebras System and inference solutions and entered into new commercial agreements with G42 for up to $1.7B in backlog orders
Go-to-Market
Cerebras Systems primarily sells through a combination of direct sales and partnership models targeted to address the rapidly expanding A.I. market. The company offers both on-prem and cloud-based solutions to provide maximum flexibility to its customers – the cloud revenue model is consumption-based.
With each deployment, the company captures revenue from the hardware it sells, plus services revenue, including support services. Over 78% of revenue stems from hardware sales alone.
The sales and marketing today can be broken down into three teams internally:
Standard Sales Team (AEs): Focus on customer-centric (sales-led) product development
Field ML Team: This team provides customers with access to AI expertise to ensure they are leveraging the technology effectively.
Applied ML Team: Includes product applications engineers, marketing, and business development/strategy teams, fostering a comprehensive customer support structure.
Unlike SaaS businesses, the Cerebras sales cycle is relatively volatile, and the company discloses neither its total customer count nor the typical length of its sales cycle. Instead, the company flags its inability to time sales as a risk factor, stating it can vary substantially from customer to customer. The company also does little to detail how it will expand its customer base or what land-and-expand looks like with existing customers.
There are some high-level disclosures on Cerebras' financial and business KPIs that are important to call out. Total revenue for FY23 increased by $54.1M, or 220% YoY, due to a substantial increase in hardware revenue. This rapid ramp was driven by a ~4x increase in the number of AI systems sold. While the ramp was extremely impressive, G42, a UAE-based AI technology group, represented 83% of all revenue in FY23. This relationship also ties back to the company's ownership structure, which we dive into in more detail below (see the capitalization section).
Market Opportunity
Cerebras believes its full suite of AI computing solutions addresses use cases for training, inference, software, and expert services. It estimates the total addressable market (TAM) for its AI computing platform to be approximately $131B in 2024, growing to $453B in 2027, representing a 51% compound annual growth rate (CAGR). This TAM is comprised of three core markets:
AI Compute for Training: Based on market estimates in Bloomberg Intelligence research, Cerebras estimates the TAM for AI Training Infrastructure to be $72 billion in 2024, growing to $192B in 2027, a 39% CAGR.
AI Compute for Inference: Cerebras believes the inference opportunity is enormous, as the market is in the early phases of its adoption cycle. The company estimates the TAM for AI Inference to be $43B in 2024, growing at an estimated 63% CAGR to $186B in 2027.
Software and Expert Services: Based on market estimates in Bloomberg Intelligence research, Cerebras estimates the TAM for GenAI Software and Services to be $16B in 2024, growing to $75B in 2027, a 67% CAGR.
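The growth rates above can be tied back to the 2024 and 2027 endpoints with the standard CAGR formula. A quick sanity check of the S-1's figures:

```python
# Sanity check of the S-1's TAM CAGR figures (all amounts in $B).
# The 2024/2027 endpoints come from the filing; the CAGR formula is standard.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over `years` years."""
    return (end / start) ** (1 / years) - 1

segments = {
    "Training":            (72, 192),   # 2024 -> 2027
    "Inference":           (43, 186),
    "Software & Services": (16, 75),
}

for name, (v2024, v2027) in segments.items():
    print(f"{name}: {cagr(v2024, v2027, 3):.0%}")

# Total TAM: $131B -> $453B
print(f"Total: {cagr(131, 453, 3):.0%}")
```

All four figures round to the 39%, 63%, 67%, and 51% CAGRs disclosed in the filing.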
Competition
Cerebras is one of the closest competitors to Nvidia, dating back to GPT-3, where its performance benchmarked well relative to GPU technology. However, the problem for Cerebras is limited accessibility, starting with the hardware and continuing through its cloud offering. The limited accessibility of their hardware drives higher costs, with each server costing millions of dollars, and their compute is only accessible through their own cloud offering, which limits development flexibility, stymying usage and adoption.
This developer ecosystem pales in comparison to the Nvidia ecosystem, where people are developing on a vast array of systems, with some systems using a single GPU and others using tens of thousands of GPUs on-prem or via any cloud provider. Unsurprisingly, this has led to a specific and concentrated customer base for Cerebras, with their largest customer being G42 (see next section).
Despite this, Cerebras is genuinely one of the first vendors ever to release information that can be benchmarked against Nvidia in AI training for large models. Other vendors like Intel, AMD, and Graphcore have built dedicated training chips but fail to benchmark in large-scale clusters. This is abundantly clear in the SemiAnalysis benchmarking below:
Cerebras vs. Nvidia Benchmarking
Source: Semianalysis; as of 12.01.2022
While training is slower for smaller models, training time is a function of how large a cluster of chips you can obtain – a trend we see in real time with cluster sizes over 100K and scaling to 300K. Cerebras can train significantly faster for larger parameter models while being demonstrably slower for smaller ones.
Given they are positioning themselves directly against Nvidia, they spend much of the S-1 business section benchmarking against GPU clusters, pointing out that a) models are scaling in size and b) inference will continue to scale. Both tailwinds tie nicely to Cerebras' positioning as they look to strengthen their market position relative to Nvidia.
On training size:
“…Recent models like GPT-4 and Gemini are over 10 times larger in parameter size than GPT-3… These hundreds or thousands of GPUs then need to constantly communicate with each other across a network, creating extreme communication bottlenecks and power inefficiencies. This is an ongoing cost and slows down time-to-solution, as the delicate balance of bottlenecks needs to be reconfigured every time the ML developer wants to change the model architecture, model size, or run on a different number of GPUs.”
On Inference:
“GPUs have relatively low memory bandwidth.. This leads to low performance as GPUs cores are idle while waiting for data – they can run at less than 5% utilization on inference tasks. Low utilization and limited memory bandwidth impact the responsiveness and throughput of GPU-based systems and hinders real-time applications for larger models. This can limit the capability and adoption of emerging inference applications which are especially latency-sensitive… This inefficiency also necessitates larger GPU deployments and dramatically drives up the cost of inference.”
Wafer-scale integration keeps all critical data on-chip, producing 7,000 times more memory bandwidth than the leading GPU solution. This integration allows the WSE-3 to deliver over ten times lower latency for real-time GenAI inference.
While this does seem to hold some novelty as a potential alternative, the company largely avoids addressing 1) accessibility and the developer ecosystem and 2) cost relative to GPU technology. These two factors are clearly decisive for many prospective customers, as evidenced by the sub-scale and highly concentrated customer base.
Customer Concentration and G42
Group 42 Holding Ltd, doing business as G42, is an Emirati AI development holding company based in Abu Dhabi, founded in 2018. G42 previously had strong ties to China, which raised U.S. government concerns that G42 had diverted U.S. technology to Chinese companies or the Chinese government. As of February 2024, G42 has divested its stakes in China.
G42 initially acquired a ~1% stake in Cerebras back in 2021. The filing also highlights an agreement for G42 to invest an incremental $335M (Series F-2) by April 15, 2025. G42 also has an option (the "G42 Option") to purchase more shares in conjunction with large purchases of hardware and services from Cerebras.
This relationship becomes more concerning when you learn that G42 contributes the majority of Cerebras' revenue, representing ~83% of total FY23 revenue and ~87% of 1H FY24 revenue. This concentration is mainly due to a commitment from G42 to purchase $1.4B of hardware and services from Cerebras, to be pre-paid before February 2025. This contract would fall within the purchase order sizing required to receive shares at a discounted IPO price.
Investor Ownership
Cerebras has raised six preferred equity rounds, with the company last being valued at $4.2B in a Series F financing round in November 2021. Outside of G42, other key investors include Benchmark, Foundation Capital (Series A), Eclipse (Series B & C), Coatue Management (Series D), Altimeter Capital (Series D), and Alpha Wave (Series E & F).
Capitalization History Since Inception
Source: Pitchbook; as of 10.08.2024
Reviewing the capitalization, these investors also hold the largest ownership stakes, with Foundation Capital, Benchmark, Alpha Wave, Coatue, and Altimeter each owning more than 5% of the company. Cumulatively, they own 67.2M shares, which is greater than the total ownership of all executive officers and directors combined (57.5M shares).
In 2024, the company also raised a Series F-1 convertible note from various investors, issuable into up to 27,285,129 shares. G42 agreed to purchase approximately 22.9M shares (~83% of the round), convertible at $14.66 per share, for anticipated gross proceeds of $335M.
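As a quick check, the disclosed F-1 terms tie out, with the share count and conversion price roughly reproducing the anticipated proceeds:

```python
# Verifying that the disclosed Series F-1 terms are internally consistent:
# ~22.9M shares at a $14.66 conversion price should approximate the $335M
# of anticipated gross proceeds from G42. Both inputs are rounded figures
# from the filing, so the result is approximate.

g42_shares = 22_900_000          # ~22.9M shares (rounded)
conversion_price = 14.66         # $ per share
gross_proceeds = g42_shares * conversion_price

print(f"~${gross_proceeds / 1e6:.0f}M implied gross proceeds")
print(f"~{g42_shares / 27_285_129:.0%} of the 27,285,129-share authorization")
```

The implied proceeds land within ~$1M of the $335M disclosed, and G42's share of the round lands near the ~83% cited (the small gap is rounding in the share count).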
The below looks at the current value of 5%+ investors at the company’s Series F-1 conversion share price:
Implied Value of Largest Shareholders at Last Valuation
Source: Cerebras S-1, 7GC Analysis
The F-1 security also stipulates that if G42 purchases more than $500M in one purchase order, and less than $5.0B in aggregate computing cluster purchases, the Company will grant G42 the option to purchase additional shares of Series F-1. The share count is calculated as:

(i) the total aggregate purchase price for the G42 Option, which will equal 10% of the value of the relevant purchase order, divided by (ii) a share price 17.5% below the average closing price per share of the Company's common stock over the trailing 30-day period.
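The mechanics above can be sketched in a few lines. The $1B order size and $20 average closing price used here are hypothetical inputs for illustration, not figures from the filing:

```python
# Illustrative mechanics of the G42 Option as described in the filing:
# the option covers shares worth 10% of a qualifying purchase order,
# priced at a 17.5% discount to the trailing 30-day average close.
# The order size and average price below are hypothetical placeholders.

def g42_option_shares(purchase_order: float, avg_30d_close: float) -> float:
    option_value = 0.10 * purchase_order     # 10% of the order value
    strike = avg_30d_close * (1 - 0.175)     # 17.5% below the 30-day average close
    return option_value / strike

shares = g42_option_shares(purchase_order=1_000_000_000, avg_30d_close=20.00)
print(f"{shares:,.0f} shares at a ${20.00 * (1 - 0.175):.2f} strike")
```

On these hypothetical inputs, a $1B order would entitle G42 to roughly 6.1M shares at a $16.50 strike. The structure effectively hands G42 an equity kicker proportional to its hardware spend.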
The convertible note represents a substantial, 48% discount to the company's 2021 Series F round, implying a market capitalization of around $2.2B.
Financial Summary
We have looked to compartmentalize Cerebras’ reported financials shown below:
Source: Cerebras S-1, 7GC Analysis
What you see headlining is strong top-line growth (+220% FY23 YoY) coupled with an upside-down margin structure. The company's FCF losses for FY23 were twice total revenue, and operating income margins were relatively flat in the first six months of FY23. FY23 hardware gross margins stood at ~20%; 1H FY24 tells a better story at 36%, an 80% relative improvement over FY23.
This growth has to be viewed with a cautious scope given the heavy concentration of its single largest customer, G42. Luckily, we can gain a better perspective through the S-1 details:
1H FY24 G42 Revenue Bridge
Source: Cerebras S-1, 7GC Analysis
This 1H 2024 incremental revenue was primarily a function of G42, via a deal contracted in April 2024 for purchase orders worth at least $300M, with a $300M prepayment received in May 2024. Cerebras signed another agreement in May 2024 for G42 to purchase $1.4B of hardware and services by February 2025, coupled with an additional $178M in computing service revenue. Stripping out the G42 noise, the company did around $17.8M in total non-G42 revenue in the first six months of FY24.
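The two disclosed data points (G42 at ~87% of revenue and ~$17.8M of non-G42 revenue) are enough to back into an approximate 1H FY24 total:

```python
# Backing into 1H FY24 total revenue from the two disclosed figures:
# G42 at ~87% of revenue and ~$17.8M of non-G42 revenue. Both inputs are
# rounded, so the implied total is approximate.

non_g42_revenue = 17.8        # $M, per the S-1 bridge
g42_share = 0.87              # G42's approximate share of 1H FY24 revenue

total_revenue = non_g42_revenue / (1 - g42_share)
g42_revenue = total_revenue - non_g42_revenue
print(f"Implied 1H FY24 total: ~${total_revenue:.0f}M (of which G42: ~${g42_revenue:.0f}M)")
```

This implies roughly $137M of total 1H FY24 revenue, with G42 contributing around $119M of it.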
Valuation
While public market comparables for "pure-play" generative AI companies are scarce, there is no shortage of semiconductor companies. Similar public AI businesses fall into 1) data center "picks and shovels" or 2) classic semiconductor businesses. Unlike Cerebras, all of these companies enjoy strong gross margins and superior free cash flow.
Cerebras’ Public Market Comparable Universe
Source: CapitalIQ; 10.08.24
Across our data center and semiconductor cohorts, you will notice revenue multiples supported by strong expected free cash flows and significant operating margins. Nvidia and Broadcom command 48% and 51% FCF margins, and the average FCF margin across our entire cohort is 27%.
Source: CapitalIQ; 09.23.24
The only semiconductor business without positive FCF is Intel, which trades at a 2.2x forward revenue or a 7.6x forward EBITDA multiple. These businesses are dramatically different from zero-marginal-cost SaaS businesses that can easily trade on forward revenue growth. Semis are highly capital-intensive as they look to establish a technical moat that allows them to attain pricing power. Cerebras, in particular, still needs to establish the historical precedent that its current product offering can command that.
Gross Margin Ramp vs. Semi Comparables
Source: CapitalIQ; 10.09.24
However, as seen in the section above, their gross margin dollars and margin percentage are increasing – moving from the teens to 40% in Q2 2024. This trend will have to continue on a similar trajectory for investors to gain confidence in underwriting the business during its ongoing IPO roadshow. Based on a simple comparison to market leaders, there is upside if adequately articulated.
Cerebras vs. Nvidia at Time of IPO
Source: CapitalIQ, Nvidia S-1, Cerebras S-1; 10.09.24
Comparing Cerebras today to Nvidia at the time of Nvidia's IPO, the companies share similar scale and gross profit margin structures, but Cerebras fails to match Nvidia's operating leverage. Following its 1999 IPO, Nvidia traded in a 4.0x-5.5x forward revenue range over the next 24 months. Nvidia maintained similar gross margin leverage over its first 10 years, with the 10-year average sitting at ~36%.
Illustrative Valuation Sensitivity Tables
Source: CapitalIQ, 7GC Analysis; 10.09.24
Given the company's massive burn, a more granular DCF approach has limited utility, so we set an illustrative estimate for gross profit and revenue for the next twelve months (NTM). Again, this is illustrative, as we do not have forward projections.
We assumed a baseline 171% NTM revenue growth, down from 220% FY23 growth, and a 38% NTM GM margin. Our assumptions result in a valuation range between $3.0B - $3.1B enterprise value. This assumes that investors can accept customer concentration risks among other risks.
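Working backward from the EV range to the multiples it implies makes the sensitivity more concrete. The $550M NTM revenue base below is a hypothetical placeholder (the post does not disclose its dollar base); the 38% gross margin is the stated assumption:

```python
# Backing the implied multiples out of the $3.0B-$3.1B EV range.
# NTM revenue here is a hypothetical placeholder, not a disclosed figure;
# the 38% gross margin is the stated assumption.

ntm_revenue = 550.0             # $M, hypothetical base
ntm_gross_margin = 0.38         # stated assumption
ntm_gross_profit = ntm_revenue * ntm_gross_margin

for ev in (3000.0, 3100.0):     # $M, the post's EV range
    rev_multiple = ev / ntm_revenue
    gp_multiple = ev / ntm_gross_profit
    print(f"${ev / 1000:.1f}B EV -> {rev_multiple:.1f}x NTM revenue, "
          f"{gp_multiple:.1f}x NTM gross profit")
```

On this placeholder base, the range works out to roughly 5.5x NTM revenue, in line with where Nvidia traded post-IPO. The gross-profit multiple is the more honest lens here, given Cerebras' margins sit well below the comp set.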
Given Cerebras is one of only three VC-backed IPOs in 2024, its IPO performance will shape the market's receptivity to future sub-$1B-revenue listings, especially AI companies. We find it highly likely that IPO pricing will be conservative to draw significant book demand. The floor to watch is $2.2B, the valuation at which the company raised capital in 2024. The company has already shown a willingness to take capital at a lower share price through its Series F-1 convertible security issued in May 2024, though that deal had certain commercial aspects tied to it.
Final Thoughts
Cerebras Strengths:
Strong growth + margin leverage: Cerebras is growing revenue at a furious clip, up 220% YoY in FY23 and over 10x comparing 1H FY24 vs. 1H FY23. This scale should allow the company to continue expanding gross margins as costs remain fixed and new orders increase. We have seen early signs of that, with gross margins scaling from ~20% to ~40% in just 12 months.
David vs. the Nvidia Goliath: Cerebras is truly one of the only solutions that can benchmark directly against Nvidia, and it provides a novel approach with certain advantages in memory bandwidth and latency that could drive robust utility as generative AI use cases proliferate.
Retail demand: IPOs are highly accretive trades for institutional investors that receive an allocation in the IPO. These investors aren't subject to lock-up and have the flexibility to sell at any point on the opening day. Given the stock will be viewed as the first pure-play AI company going public, we can expect significant retail demand following the pricing. This demand will likely supersede fundamental analysis, and many investors will be drawn to that potential upside during the book-running process.
Cerebras Concerns / Risk Factors:
Customer concentration: Cerebras has significant customer concentration, with one customer, G42, accounting for 83% and 87% of total revenue in 2023 and the first half of 2024, respectively. This exposes Cerebras to substantial risk from any adverse change in demand from G42 or the strategic relationship. Losing or seeing reduced purchases from this crucial customer could materially harm Cerebras' business and financial results.
Geographic mix: Cerebras faces clear CFIUS risks in connection with the UAE-based G42 primary purchase, under which G42 agreed to purchase shares of Cerebras' Series F-2 preferred stock (or Class N common stock if after the IPO) for $335 million. While the two firms initially filed a joint voluntary CFIUS notice for the purchase of voting shares, they amended the agreement to non-voting shares and subsequently withdrew the notice, believing non-voting shares are not subject to CFIUS jurisdiction. The transaction could still be deemed reviewable by CFIUS and, if rejected, could hamper the company's ability to raise capital and impact the overall commercial relationship with G42.
Management team: CEO Andrew Feldman was previously named in an SEC enforcement action and DOJ prosecution related to his role at a prior company, Riverstone Networks, in the early 2000s. While the issues appear to be from a much earlier time, association with prior legal actions is a risk factor for the CEO's reputation.
Path to profitability: Cerebras has incurred significant net losses since inception and expects to continue incurring losses to fund operations and R&D. In 2023, the net loss was $127M. While revenue is proliferating, jumping 220% from 2022 to 2023, Cerebras will need to continue scaling revenue substantially to achieve profitability, especially with significant R&D investments. The path to profitability relies heavily on rapidly expanding the customer base and use cases beyond the current concentrated business.
Cerebras is the first generative AI IPO since the release of ChatGPT in 2022, and it will be watched closely across institutional and retail investor channels. It will also be carefully watched by the next wave of potential IPOs waiting in the private markets. Given the impressive technological breakthroughs that put Cerebras in the conversation with Nvidia, retail demand for the stock should drive a successful offering. Its pricing multiple and post-IPO trading performance will be predicated on management's ability to build confidence in the current backlog and continued margin leverage over the next 4-8 quarters.