Inside the Regulatory Squeeze: How Big Tech and Generative AI Are Being Reined In

Regulators around the world are rapidly tightening the rules on Big Tech and generative AI, reshaping competition, data protection, and AI governance. This article explains what is happening, why it matters for platforms, startups, and open-source communities, and how new antitrust and AI-specific frameworks like the EU AI Act could determine whether the next era of AI is dominated by a few giants or remains open and competitive.

The regulatory spotlight on large technology platforms and their generative AI offerings has moved from abstract worries about “Big Tech power” to specific enforcement actions, draft laws, and binding codes. Competition authorities, data protection regulators, and newly created AI oversight bodies are converging on one core question: how do you protect innovation and fundamental rights without locking in the dominance of a few global firms?

This article maps the fast-changing landscape: the antitrust cases in the U.S. and EU, the rise of AI-specific rules like the EU AI Act, the ripple effects on startups and open-source ecosystems, and the unresolved tensions between innovation, competition, and public protection.

Regulators reviewing documents in front of digital icons representing big technology platforms and AI
Regulators worldwide are scrutinizing how Big Tech deploys AI at scale. Image: Pexels / Lukas.

Mission Overview: Why Big Tech and Generative AI Are Under Pressure

Generative AI has moved from labs into billions of devices, embedded in search, office software, operating systems, and social platforms. This rapid integration has triggered overlapping concerns:

  • Market power: Incumbent platforms may use distribution advantages to favor their own AI services.
  • Data protection and privacy: Foundation models can be trained on massive, often opaque datasets that may include personal or copyrighted data.
  • Safety and misinformation: Deepfakes, hallucinations, and automated persuasion raise societal risks.
  • Accountability: Existing laws (consumer protection, product safety, non-discrimination) must be adapted to probabilistic AI systems.
“The question is no longer whether we regulate AI, but how we do so in a way that keeps markets open and people safe.” — Margrethe Vestager, Executive Vice-President, European Commission

In policy podcasts, Hacker News threads, and publications like The Verge, Wired, and Recode, this “regulatory squeeze” is now a central storyline for the future of AI.


Technology Landscape: From Foundation Models to Platform Integration

To understand the regulatory response, it helps to look at how generative AI is actually built and deployed by large platforms.

Foundation Models and General-Purpose AI

Modern generative AI systems are typically large-scale foundation models—deep neural networks trained on diverse corpora of text, images, code, and other data. They can then be adapted, via fine-tuning or prompt engineering, to specific tasks:

  • Text models: Large language models (LLMs) for chatbots, summarization, and coding assistance.
  • Vision models: Systems that generate or interpret images and video.
  • Multimodal models: Architectures that integrate text, images, and potentially audio or video.

These models are increasingly offered as general-purpose AI (GPAI) or “general-purpose AI models” (GPAIM), a concept that is central in regulatory frameworks such as the EU AI Act.

Deep Integration into Existing Products

Big Tech firms are weaving generative AI directly into dominant products:

  1. Search engines that answer in paragraphs instead of just returning links.
  2. Productivity suites where AI drafts emails, documents, and presentations.
  3. Operating systems with integrated AI assistants that manage files, apps, and system settings.
  4. Developer tools that autocomplete entire functions or generate boilerplate code.

This technical integration is a key antitrust concern because it can blur the line between a neutral platform and a self-preferencing gatekeeper.

Laptop screen showing AI-related code and visualizations in a dark workspace
Generative AI foundation models are integrated deep into operating systems and productivity tools. Image: Pexels / Tima Miroshnichenko.

Antitrust Front: Self-Preferencing, Bundling, and Platform Power

Antitrust investigations increasingly focus on how large platforms bundle their in-house AI assistants and models with already-dominant products. Key themes include:

Self-Preferencing and Default Choices

Authorities in the U.S., EU, and UK are probing whether:

  • Browsers and mobile operating systems favor a platform’s own AI assistant by default.
  • Search results prioritize the platform’s AI-generated answers over links to competitors or open-source tools.
  • App stores impose conditions that discourage third-party AI apps or charge discriminatory commissions.
“AI is the latest layer where gatekeepers can entrench their position. Defaults matter, and so do the terms under which rivals can access users.” — Excerpted from a 2025 speech by a senior EU competition official

Data Access and “Must-Have” Inputs

Another focal point is data foreclosure—whether dominant platforms can deny or restrict access to critical datasets or computing resources needed to train rival models. Regulators are exploring:

  • Interoperability obligations for certain APIs.
  • Data portability rules that let businesses move their data between cloud and AI providers.
  • Non-discrimination requirements for cloud credits and AI-specific hardware rentals.

On policy forums like Lawfare and Stanford HAI policy briefs, scholars debate whether traditional antitrust tools (market definition, merger review) are agile enough for AI-era dynamics.


AI-Specific Regulation: The EU AI Act and Beyond

While antitrust addresses market structure, new AI-specific laws focus on risk, transparency, and accountability. The EU AI Act—which reached political agreement in late 2023 and has been phasing in obligations through 2025–2026—remains the most comprehensive example.

Risk-Based Approach

The EU AI Act classifies AI systems into risk categories:

  • Unacceptable risk: Systems banned outright, such as certain forms of social scoring.
  • High risk: AI in critical sectors (healthcare, finance, employment, education, law enforcement), subject to strict conformity assessments and documentation.
  • Limited risk: Systems primarily facing transparency requirements (e.g., chatbots that must disclose they are AI).
  • Minimal risk: Most other applications, where best practices are encouraged but not mandated.

General-Purpose AI and Foundation Models

A major innovation is that the Act explicitly targets general-purpose AI models, especially large foundation models. Obligations include:

  1. Technical documentation detailing training processes, capabilities, and limitations.
  2. Policies and tools to ensure downstream deployers can comply with the Act.
  3. Transparency about training data sources (at least at a high level).
  4. Enhanced testing and evaluation for “systemic risk” models—those with very high compute and reach.

The European Commission’s AI regulatory framework page and detailed Q&A documents explain how these rules will be enforced and updated.

Global Ripple Effects

Other jurisdictions, including the U.S., UK, Canada, and several Asia-Pacific countries, are borrowing elements of this approach:

  • The U.S. has issued AI executive orders emphasizing safety testing, watermarking of synthetic media, and agency-specific guidelines.
  • The UK is pursuing a sector-led, “pro-innovation” framework, with regulators like the FCA and CMA developing AI playbooks.
  • OECD and G7 processes aim to harmonize core principles such as safety, fairness, and accountability.
Government buildings with European Union flags representing regulatory institutions
The EU AI Act has become a global reference point for AI regulation. Image: Pexels / Pixabay.

Scientific Significance and Societal Stakes

The regulatory squeeze is not just a legal or business story; it shapes the trajectory of AI research itself. Constraints on data use, documentation rules, and safety tests can directly influence:

  • What kinds of models are feasible to train.
  • How open or closed their weights, datasets, and benchmarks can be.
  • How researchers share and reproduce results.

Balancing Open Science and Safety

In academic circles, a central debate is how far openness should go for powerful models. Full weight releases enable reproducibility and innovation, but also raise risks of misuse. Safety-focused labs and organizations advocate staged releases, red-teaming, and capability evaluations.

“We need norms for responsible publication that preserve the engine of open science while recognizing the dual-use nature of frontier AI.” — Adapted from commentary by Dario Amodei and colleagues

Regulatory experiments, such as sandbox programs and structured model evaluations, are now part of the scientific workflow, not an afterthought. Initiatives like the Stanford Center for Research on Foundation Models (CRFM) explore standardized benchmarks and documentation (e.g., “model cards” and “datasheets for datasets”) that regulators increasingly reference.


Key Milestones in the Regulatory Squeeze

From roughly 2020 onward, several milestones mark the pivot from concern to concrete action. A simplified timeline:

  1. Pre-2022: Antitrust cases focus on search, mobile OS, app stores, and ad tech; AI is a peripheral issue.
  2. 2022–2023: Public launch of highly capable generative models triggers new hearings on deepfakes, training data, and concentration of compute.
  3. Late 2023: Political agreement on the EU AI Act, including provisions for general-purpose AI and foundation models.
  4. 2024–2025: AI-specific executive orders, voluntary safety commitments, and model evaluations become standard for major deployments.
  5. 2025–2026: Enforcement actions begin to test how existing competition and data protection rules apply to bundled AI assistants.

Each step has reshaped expectations for due diligence: what kind of logging, auditing, and documentation must accompany an AI product launch on a global platform.

Close-up of a judge's gavel symbolizing legal milestones and regulation
Court cases and new statutes are testing how old rules apply to new AI realities. Image: Pexels / Pixabay.

Challenges: Innovation, Compliance Costs, and Open-Source Tensions

The shift toward tighter regulation has not been frictionless. Different stakeholders face different pain points.

For Startups and Smaller Labs

Startups often lack large compliance teams. Their key worries include:

  • Legal uncertainty: Ambiguous definitions of “high-risk” uses or “systemic risk” models.
  • Compliance overhead: Documentation, risk assessments, and data provenance checks eating into limited runway.
  • Liability allocation: Unclear rules on who is responsible when a startup fine-tunes a large model from a major provider.

Many founders now treat regulatory capability as strategic infrastructure, investing in AI governance skills alongside engineering.

For Open-Source Ecosystems

Open-source maintainers and communities grapple with questions such as:

  • How to release models or code without assuming impossible legal risk.
  • Whether upstream foundation model providers must furnish safety tools and documentation for downstream users.
  • How new rules might inadvertently favor closed, well-resourced incumbents that can absorb compliance costs.
“Poorly designed AI regulation will not tame Big Tech—it will entrench it, by making it impossible for anyone else to compete.” — Common refrain in policy debates and forums such as Hacker News

For Regulators Themselves

Agencies must build technical capacity fast. Evaluating safety claims about large models, inspecting training pipelines, or monitoring systemic risks requires:

  • Access to independent expertise in machine learning and security.
  • Tools for model evaluation, red-teaming, and interpretability.
  • International coordination to avoid regulatory arbitrage.

Practical Tools: How Organizations Can Adapt

For teams building or integrating generative AI, a structured governance approach can reduce both legal and operational risk.

Core Elements of an AI Governance Program

  • Model inventory: Maintain a catalog of all models in use, including source, version, and purpose.
  • Risk classification: Map each use case to relevant regulations (e.g., high-risk sectors, GPAI obligations).
  • Data governance: Track data lineage, consent basis, and retention policies.
  • Evaluation and monitoring: Regularly test for bias, robustness, and misuse potential.
  • Incident response: Define procedures for rollbacks, updates, and user notifications when issues arise.

Practitioners often consult resources from organizations such as the NIST AI Risk Management Framework and the OECD AI Observatory for practical checklists and templates.

Helpful Reading and Hardware

For technical leaders, books such as “The AI Governance Handbook” (a popular choice among practitioners in the U.S.) provide accessible frameworks for building compliant AI pipelines. Teams running local experiments may also benefit from modern GPUs and workstation setups that support reproducible model evaluations.


Conclusion: Will the Next AI Era Be Open or Dominated by a Few Giants?

The regulatory squeeze on Big Tech and generative AI is not a passing phase; it is the new baseline. Antitrust authorities are probing how AI is bundled into dominant platforms, while AI-specific laws such as the EU AI Act create direct obligations for providers of powerful foundation models.

Whether this environment ultimately curbs or cements Big Tech’s power depends on execution. Well-designed rules can:

  • Preserve room for startups and open-source innovators.
  • Protect citizens from data abuse, discrimination, and misuse of synthetic media.
  • Encourage more rigorous, transparent science around AI capabilities and limitations.

Poorly designed rules, by contrast, could load smaller players with disproportionate compliance burdens and leave complex global obligations manageable only by the largest firms. As debates unfold across legislatures, agencies, and online communities, researchers, developers, and product leaders all have a stake in shaping a framework that is both safe and genuinely competitive.


Additional Resources and Next Steps

To stay current on the fast-evolving intersection of antitrust, data protection, and AI governance, consider:

For organizations building AI today, it is wise to:

  1. Assign explicit responsibility for AI governance and regulatory monitoring.
  2. Document model and data decisions as if they may be scrutinized later—which they increasingly are.
  3. Participate in public consultations and standards-setting efforts, ensuring that the voices of builders, not just incumbents, shape the rules.

The intersection of competition law, data protection, and AI governance is becoming a core competency for any serious AI initiative. Teams that invest in understanding this landscape now will be better positioned to innovate responsibly as the rules of the game continue to evolve.


References / Sources

Continue Reading at Source : Recode