Databricks chief calls out ‘insane’ AI startup valuations while backing long‑term enterprise AI bets


Databricks CEO Ali Ghodsi has warned that multi‑billion‑dollar artificial intelligence startups with no revenue are evidence of a “huge bubble” forming in parts of the AI market, even as he argues that practical, data‑driven enterprise AI remains underexploited. Speaking at Fortune’s Brainstorm AI conference in San Francisco, Ghodsi criticized investors for funding “insane” valuations, explained why the $134 billion data and AI company is holding back from an initial public offering, and outlined where he believes real value—and real risks—lie in the next phase of AI adoption.


Ghodsi’s ‘bubble’ warning on AI startup valuations

In remarks reported by Fortune, Ali Ghodsi, CEO of analytics and AI platform Databricks, described what he sees as speculative excess around certain artificial intelligence startups. Referring to companies “worth, you know, billions of dollars with zero revenue,” he said, “that’s clearly a bubble, right, and it’s, like, insane.” Ghodsi added that he sees “a huge bubble in many, many portions of the market,” though he stopped short of calling the entire AI sector a bubble.

Databricks, privately valued at around $134 billion according to multiple press reports, sits at the center of the enterprise data and AI infrastructure market. Ghodsi, who holds a PhD in computer science, has long argued that sustainable value in AI depends on high‑quality data pipelines and real business outcomes, not just model performance benchmarks or user growth.

His comments arrive after several years in which funding for generative AI startups surged. Market trackers such as CB Insights and PitchBook have documented tens of billions of dollars in venture capital and strategic investment flowing into model developers, tooling providers, and application‑layer AI companies since late 2022, following breakthroughs in large language models (LLMs). Some of these firms have raised money at valuations in the tens of billions of dollars despite limited or no recurring revenue.

“Companies that are worth, you know, billions of dollars with zero revenue, that’s clearly a bubble, right, and it’s, like, insane.”
— Ali Ghodsi, Fortune Brainstorm AI, San Francisco, as reported by Fortune

While Ghodsi did not single out specific companies, his remarks echo broader concerns raised by some economists and investors about “AI exuberance.” Comparisons are often drawn to the dot‑com bubble of the late 1990s, when internet startups with minimal revenue achieved sky‑high valuations before a sharp correction in 2000–2001.


‘Bad vibes’ in Silicon Valley and investor fatigue

Beyond valuations, Ghodsi described what he sees as deteriorating sentiment in Silicon Valley’s AI ecosystem. He told the audience that “the vibes in the Valley are bad,” adding that even venture capitalists helping to fund the boom privately acknowledge how stretched the market feels.

According to Fortune’s account, Ghodsi said that in private conversations some investors have joked about stepping away from dealmaking for several months. “Maybe I should just go on a break for, like, six months and come back and it’ll be, like, really financially good for me,” he quoted unnamed VCs as saying. The remark suggests a belief that valuations or deal dynamics may look more attractive after a cooling‑off period.

Ghodsi also endorsed critiques of “circular financing” in AI, where companies and investors are entangled in ways that can prop up valuations. In this structure, AI startups might invest in or buy services from one another—sometimes facilitated by shared investors—creating revenue and growth figures that may not reflect sustainable external demand.

He predicted that this “circular aspect” would worsen before it corrects, saying, “I think like 12 months from now, it’ll be much, much, much worse.” However, he argued that recent “wobbles” in the market are healthy, in that they encourage company leaders to pause aggressive expansion and reassess fundamentals.

Other voices in the investment community remain more optimistic. Some venture capitalists contend that while individual companies may be overvalued, structural shifts—such as AI‑driven productivity gains, new software categories, and infrastructure modernization—justify strong capital flows. They argue that early‑stage revenue may not be the best metric for frontier technologies where adoption curves and business models are still forming.


Why Databricks is holding off on an IPO

Ghodsi’s skepticism about current market conditions helps explain Databricks’ reluctance to go public immediately. While the CEO has previously acknowledged “flirting” with an initial public offering, he told Fortune’s audience that remaining private gives the company more flexibility amid volatility.

According to his remarks, many peer companies rushed to list during the 2021 technology boom. By 2022, however, some of those newly public firms were forced into aggressive cost‑cutting as interest rates rose and investors rotated out of high‑growth tech stocks. Ghodsi said that while others were trimming headcount, Databricks—which had stayed private—was able to hire “thousands of people.”

He suggested that if an AI bubble does burst, a private structure would allow Databricks to continue investing in long‑term research and enterprise AI utility without responding to daily fluctuations in public market sentiment. This approach mirrors a broader debate in the startup ecosystem over the right timing for IPOs in capital‑intensive AI and data infrastructure businesses.

Some market observers counter that public listings can provide valuable discipline and transparency. Public investors, they note, may push management teams to demonstrate sustainable margins, diversified customer bases, and clear paths to profitability—factors that can temper speculative behavior. Others argue that in fast‑moving sectors like AI, the scrutiny of public markets could make it harder to pursue long‑horizon bets that may not pay off for years.


Real‑world hurdles: security, governance and legacy data

While investor enthusiasm remains high, Ghodsi argued that the practical rollout of AI inside large organizations is being slowed less by technology constraints than by organizational and risk‑management challenges. Databricks, which helps enterprises manage and analyze data across clouds, works with many customers whose data systems are a decade or more old.

“The big thing holding you back” as a large organization, he said, “is that you can’t actually do anything because you’re so worried about getting hacked.” Security and data governance, he added, are the main bottlenecks. Many enterprises face heightened concerns about exposing sensitive information to third‑party models, meeting regulatory requirements, and protecting against breaches as AI tools interact with core systems.

Ghodsi also pointed to what he called an “absolute mess” in legacy data architecture—a result of four decades of layering new software and vendors on top of each other. The outcome is siloed, duplicated, and inconsistent data that must be cleaned and unified before advanced analytics or generative AI can operate reliably. This challenge is central to Databricks’ own value proposition and to competitors such as Snowflake, Google Cloud, Microsoft Azure, and others offering data platforms.
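To make the “siloed, duplicated, and inconsistent” problem concrete, the following is a minimal illustrative sketch of the kind of normalization and deduplication work such unification involves. The table names, columns, and records are hypothetical, and real enterprise pipelines add schema mapping, lineage tracking, and governance far beyond this example.

```python
# Illustrative only: unifying customer records from two hypothetical silos
# (a CRM export and a billing export) before analytics or AI can use them.
import pandas as pd

crm = pd.DataFrame({
    "customer_id": [101, 102],
    "email": ["Ann@Example.com ", "bob@example.com"],
    "region": ["EMEA", "AMER"],
})

billing = pd.DataFrame({
    "cust_id": [102, 103],
    "email_address": ["bob@example.com", "carol@example.com"],
    "region": ["US", "APAC"],
})

# Align the two schemas so the sources can be combined.
billing = billing.rename(columns={"cust_id": "customer_id",
                                  "email_address": "email"})
combined = pd.concat([crm, billing], ignore_index=True)

# Normalize inconsistent formatting, then drop duplicate identities.
combined["email"] = combined["email"].str.strip().str.lower()
unified = combined.drop_duplicates(subset="email", keep="first")

print(unified)
```

Even this toy example surfaces the judgment calls such work requires: the same customer appears with region “AMER” in one system and “US” in another, and deciding which value survives is a governance question, not just a technical one.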

Another emerging obstacle, according to Ghodsi, is the rise of “AI lawyers” inside corporations. These in‑house or external legal specialists are tasked with interpreting evolving AI regulations, drafting acceptable‑use policies, and reviewing model deployments. While advocates say such roles are essential to ensure compliance and protect consumers, Ghodsi suggested that intensive legal scrutiny is also slowing down experimentation and implementation.

Policy experts note that regulatory frameworks are still taking shape. The European Union’s AI Act, various U.S. state‑level initiatives, and sector‑specific rules in healthcare and finance all influence how enterprises adopt AI. Legal and compliance teams often push for more transparency into how models work, what data they are trained on, and how outputs are monitored—all areas that may require additional tooling and documentation.


Where Ghodsi sees AI’s real economic value

Despite his warnings about speculative excess, Ghodsi remains bullish on specific areas of AI that he believes can deliver tangible economic value, particularly in the enterprise context. He highlighted “AI agents” and what he referred to as “vibe coding”—the practice of producing software largely through natural‑language prompts to AI coding assistants—as examples of high‑utility applications.

AI agents are software systems that autonomously perform tasks—such as generating reports, spinning up infrastructure, or responding to operational events—using large language models in combination with tools, APIs, and structured data. “For the first time we’re seeing over 80% of the databases that are being launched on Databricks are not being launched by humans but by AI agents,” Ghodsi said, describing a shift in how technical resources are provisioned on the platform.
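As a rough illustration of the pattern described above—and not Databricks’ actual mechanism—an agent loop can be sketched as a model deciding which tool to invoke for a task and the surrounding code executing that call. Everything here is a hypothetical stand‑in: the stubbed model and the `launch_database` tool are invented for the example.

```python
# Minimal, hypothetical agent loop: a "model" picks a tool and arguments,
# and the runtime executes the call. This is an illustrative sketch, not
# any vendor's implementation.
import json

def launch_database(name: str, size_gb: int) -> dict:
    """Hypothetical provisioning tool the agent is allowed to invoke."""
    return {"status": "created", "name": name, "size_gb": size_gb}

TOOLS = {"launch_database": launch_database}

def fake_llm(task: str) -> str:
    """Stand-in for a real LLM call: returns a tool invocation as JSON."""
    return json.dumps({
        "tool": "launch_database",
        "args": {"name": "reporting_db", "size_gb": 50},
    })

def run_agent(task: str) -> dict:
    """Ask the 'model' which tool to call for the task, then execute it."""
    decision = json.loads(fake_llm(task))
    tool = TOOLS[decision["tool"]]
    return tool(**decision["args"])

if __name__ == "__main__":
    print(run_agent("Provision a database for the weekly sales report"))
```

The point of the pattern is that the human states an outcome while the agent handles the provisioning steps, which is consistent with Ghodsi’s claim that most new databases on the platform are now launched by agents rather than people.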

By contrast, Ghodsi argued that the foundation model layer, dominated by providers such as OpenAI, Google, Anthropic, and others, is becoming increasingly competitive and commoditized. He suggested that margins at this layer could be relatively thin, as model quality converges and cloud providers bundle access into broader offerings.

In his view, the larger revenue opportunity lies in the application layer built on top of these models. This includes specialized systems for areas such as drug discovery in life sciences, automated research workflows in finance, personalized recommendation engines in retail, and AI‑assisted software development. Here, domain knowledge, proprietary data, and integration with existing business processes can provide defensible advantages.

Some analysts agree with this “stack” view of AI economics, arguing that while model providers are crucial, much of the value will accrue where AI is tightly coupled with specific industry problems. Others, however, believe that the largest foundation model companies may still capture outsized value by controlling distribution channels, ecosystems, and proprietary training data.


AI leadership, internal politics and governance

Beyond technology and regulation, Ghodsi pointed to internal corporate politics as a drag on AI adoption. He described a “tussle” inside many organizations as executives compete to become the leading “AI person,” potentially leading to fragmented strategies and duplicated projects.

His advice to CEOs was blunt: “Pick one person for your company” to lead AI efforts, rather than creating what he called a “three‑headed monkey” of competing leaders. Centralizing responsibility, he argued, can clarify priorities and accelerate decision‑making on investments, vendor selection, and risk management.

Corporate governance experts emphasize that AI leadership structures vary. Some organizations establish a Chief AI Officer or an AI Center of Excellence to coordinate initiatives, while others embed AI responsibilities within existing functions such as the Chief Data Officer, Chief Information Officer, or Chief Digital Officer. Regardless of model, they say, effective oversight typically involves cross‑functional input from IT, security, legal, compliance, and business units.


Historical context: tech bubbles and AI investment cycles

Ghodsi’s remarks fit into a longer history of technology investment cycles marked by rapid capital inflows, speculative valuations, and subsequent corrections. Economists often reference:

  • The dot‑com boom (mid‑1990s to 2000), when early internet companies went public at high valuations before many business models were proven.
  • The social media and mobile app surge (late 2000s to mid‑2010s), which produced a smaller but still significant wave of richly valued startups.
  • The cryptocurrency and Web3 boom (2017 and again in 2020–2021), which saw rapid appreciation and volatility in digital assets and blockchain projects.

In each case, some companies failed or saw valuations collapse, while others emerged as dominant, profitable platforms. Analysts who are optimistic about AI often point to those historical examples to argue that even if a near‑term correction occurs, long‑term value creation could be substantial.

Skeptics, by contrast, focus on potential over‑investment in overlapping capabilities—particularly in foundation models and general‑purpose AI platforms—as well as on the risk that regulatory limits, security incidents, or slower‑than‑expected enterprise adoption could weigh on growth.

Ghodsi’s position, as reflected in his Fortune interview, appears to straddle these views: warning that some current valuations are “insane,” while simultaneously asserting that AI agents, data platforms, and targeted applications will generate what he sees as durable, long‑term returns.


AI infrastructure in practice

The contrast between speculative startup valuations and the painstaking work of modernizing data infrastructure underscores the divide Ghodsi sees between hype and utility in enterprise AI.

[Image: Visualization of enterprise data infrastructure, where AI agents increasingly automate tasks such as launching and managing databases.]

Readers seeking more detail on the perspectives summarized here and on broader AI market dynamics can consult:

  • Fortune’s original coverage of Ali Ghodsi’s remarks at Brainstorm AI (Fortune).
  • Market analyses by research firms such as PitchBook and CB Insights on venture funding for AI startups.
  • Regulatory updates from the European Commission and U.S. agencies on AI governance and compliance obligations.

Conclusion: between caution and conviction on AI’s future

Ali Ghodsi’s comments underscore a dual reality in today’s AI landscape: pockets of exuberant valuation and investor fatigue coexisting with steady, often complex progress in enterprise deployment. His prediction of a worsening “circular” financing environment contrasts with his confidence in AI agents, data platforms, and industry‑specific applications as long‑term value drivers.

For companies navigating this environment, the debate highlighted in his remarks centers on timing and focus: how to invest in AI capabilities and governance without overreacting to market cycles, and how to balance security, regulation, and internal politics against the potential gains from automation and new data‑driven services. As with previous technology waves, the eventual outcome may hinge less on short‑term valuation swings and more on which organizations successfully translate AI capabilities into durable business results.