Japan’s Antitrust Watchdog Targets AI Search Engines Over News Content Usage

Asia Daily

Japan’s Regulatory Action Against AI Search Engines Marks Global Shift

TOKYO – The Japan Fair Trade Commission (JFTC) has announced plans to launch a fact-finding investigation into search engines that utilize generative artificial intelligence, signaling growing government scrutiny of how major technology companies use news content. The commission believes that some practices of tech companies, including AI’s use of media organizations’ news articles without permission, may constitute an abuse of a dominant position and other violations under Japan’s Antimonopoly Law.

This investigation represents a significant development in the ongoing tension between AI developers and news publishers worldwide. The JFTC’s action comes amid growing concerns that AI-powered search engines are threatening the business models of news organizations by scraping content and presenting it as summarized answers without directing users to the original sources or compensating publishers.


The probe will target major providers of AI-powered search engines, including U.S.-based companies Google LLC and Microsoft Corp., startup Perplexity AI Inc., as well as Japan’s LY Corp. The investigation may also cover other companies, such as U.S.-based OpenAI, which provides the AI chatbot service ChatGPT. These companies are at the forefront of developing generative AI systems that have transformed how users search for and consume information online.

Understanding AI-Powered Search Engines

AI-powered search engines represent a significant evolution from traditional search tools. Unlike conventional search engines that provide a list of links to websites in response to a query, these new systems can understand questions asked in a conversational tone and provide direct, synthesized answers. The technology uses large language models trained on vast amounts of data collected from the internet to generate summarized responses to user queries.

This approach is generally regarded as more convenient than traditional search because it provides direct answers without requiring users to visit multiple websites. Users can ask complex questions in natural language and receive comprehensive responses that synthesize information from multiple sources. That very convenience, however, has raised concerns among news publishers about the impact on their business models and their ability to sustain quality journalism.


Generative AI systems like ChatGPT and Perplexity work by ingesting enormous amounts of text data from the internet, including news articles, books, and other publications. They then use this training data to identify patterns and relationships, allowing them to generate human-like responses to queries. While the technology is sophisticated, critics argue that it essentially repurposes the work of journalists and content creators without proper attribution or compensation.

The Economic Threat to News Organizations

The rise of AI-generated search results poses an existential threat to the news industry’s traditional business model. News agencies generate revenue primarily through displaying advertisements on their websites, which depends on users clicking through to read full articles. The spread of AI-generated news summaries that directly answer user queries could lead to a substantial decline in that income, potentially undermining the financial sustainability of professional journalism.

This problem is complicated by the fact that major tech companies often have substantial market power in search and digital advertising. When these companies incorporate AI into their search engines without negotiating licensing agreements with news publishers, they effectively extract value from the news ecosystem while returning little to those who create the content. Critics argue this constitutes an abuse of dominant market position, as established platforms leverage their market power to dictate terms that disadvantage content creators.


The impact on news publishers could be severe. Industry analysts note that when users receive complete answers from AI without visiting publisher sites, news organizations lose not only advertising revenue but also the opportunity to build relationships with readers, promote subscriptions, and demonstrate the value of their journalism. Over time, this could reduce the resources available for investigative reporting, local news coverage, and other essential journalism that requires significant investment.

Historical Context: Platform Dominance and News Revenue

The current situation with AI search engines echoes earlier tensions between tech platforms and news publishers. In the 2010s, social media platforms like Facebook dramatically changed how news was distributed online. They initially drove significant traffic to publishers, but later algorithm changes that prioritized other content types led to substantial declines in referral traffic. Many news organizations that had optimized their operations for social media distribution suffered revenue losses when these platforms changed their priorities.

Similarly, Google’s dominance in search has long given the company significant power over news publishers’ visibility and revenue. Publishers have complained for years about Google’s practices, including using snippets of news content in search results that may satisfy users’ information needs without requiring them to click through to publisher sites. The shift to AI-generated answers represents an acceleration of this trend, potentially making it even less likely that users will visit original news sources.


The JFTC’s investigation is not the commission’s first foray into examining the relationship between tech platforms and news media. In 2023, the commission released a report on a fact-finding investigation into news media that raised significant concerns about platform practices. That report noted that if major tech companies use articles while paying nothing or unreasonably low fees, the practice may constitute an abuse of a dominant position and violate Japan’s Antimonopoly Law.

The concept of “abuse of a dominant bargaining position” under Japanese antitrust law is particularly relevant to this investigation. This legal framework prohibits companies with substantial market power from imposing unfair terms on smaller trading partners. The JFTC has signaled that it believes AI companies using news content without permission or offering inadequate compensation may fall under this prohibition.


Japan’s approach to AI and copyright has historically been relatively permissive. In 2019, Japan revised its Copyright Act to include Article 30-4, which broadly permits the ingestion and use of copyrighted works for information analysis, including the training of AI models. This made Japan one of the world’s most AI-friendly countries regarding copyright, even allowing commercial use of copyrighted materials for AI training. The JFTC’s current investigation suggests that while copyright law may permit certain uses, antitrust law may still impose obligations on dominant platforms.

The Agency for Cultural Affairs’ Recent Guidance

Adding complexity to the legal landscape, Japan’s Agency for Cultural Affairs (ACA) released draft guidelines on “AI and Copyright” in January 2024, which were revised after receiving nearly 25,000 public comments. These guidelines clarify that while AI training on copyrighted materials is generally permitted, there are important limitations.

The guidelines distinguish between using copyrighted content for “information analysis” versus for the “enjoyment” of thoughts and sentiments expressed in a work. Ingesting content for pure data analysis may be allowed without consent, but use intended to generate outputs perceived as creative expressions of copyrighted works, including mimicking specific creators’ styles, may require permission.


Furthermore, the guidelines address the concept of “unjust harm” to copyright holders, which occurs when AI output conflicts with the copyright holder’s market or inhibits potential marketing channels. This suggests that even if AI training is legally permitted under copyright law, practices that harm publishers’ ability to monetize their content may face legal challenges.

Global Context: Similar Scrutiny Worldwide

Japan is not alone in examining the practices of AI search engines and their impact on news publishers. AI-powered search engines are facing heavy scrutiny in Europe, where the European Commission has begun investigating Google over alleged violations of EU competition law. The European Union has also developed comprehensive regulatory frameworks including the Digital Markets Act and AI Act that address some of these concerns.

The European Commission’s Digital Markets Act designates certain large online platforms as “gatekeepers” subject to specific obligations, including requirements to ensure fair competition and to refrain from self-preferencing their own services. These regulations could potentially address some of the concerns raised by AI search engines, though their application to generative AI is still being developed.


In the United States, regulatory approaches have been more fragmented. The Federal Trade Commission (FTC) under the Biden administration had signaled an aggressive approach to regulating AI through existing authority, issuing warnings that using AI tools with discriminatory impacts or making unsubstantiated claims about AI capabilities may violate the FTC Act. The FTC has also taken enforcement action against companies that deceived or harmed consumers through AI technologies.

The United States lacks comprehensive federal legislation specifically regulating AI development or deployment. Various states have enacted their own AI regulations, and Congress has considered numerous AI bills covering a range of issues. The regulatory landscape in the U.S. thus presents a patchwork of state-level requirements and existing federal laws applied to new technologies.

Specific Legal Actions Against AI Companies

Beyond regulatory investigations, AI companies are facing direct legal challenges from news publishers and content creators. Several news organizations have filed lawsuits alleging copyright infringement related to AI training data. The New York Times has sued OpenAI and Microsoft, claiming copyright infringement of news content. These cases raise fundamental questions about whether training AI models on copyrighted materials constitutes fair use under copyright law.


Other legal actions include lawsuits from artists, photographers, and writers whose work has allegedly been used to train AI image and text generators. These cases seek compensation for past unauthorized use of content while also attempting to establish legal precedents that protect intellectual property in the AI era. The outcomes of these cases could substantially reshape the legal requirements for AI development.

Startup Perplexity AI, one of the companies named in the JFTC investigation, has faced specific legal challenges. The Chicago Tribune has sued Perplexity, alleging it used copyrighted articles without permission and bypassed paywalls to generate content. Encyclopedia Britannica and Merriam-Webster have also filed lawsuits against Perplexity over copyright and trademark violations. These cases highlight the legal risks AI companies face when incorporating news content into their systems without proper agreements.

What the Investigation Means for AI Companies

For the AI companies targeted by the JFTC investigation, the probe represents both a near-term compliance challenge and a potential harbinger of broader regulatory action. Depending on the findings, companies may face requirements to change their business practices, including potentially negotiating licensing agreements with news publishers or modifying how their AI systems present information.

The investigation could also influence how AI companies approach their relationships with content creators globally. If Japan establishes precedents requiring compensation or permission for using news content, other jurisdictions may follow similar approaches. This could increase the costs of developing and operating AI systems, as companies would need to establish licensing frameworks and compensation mechanisms.


Some AI companies have already begun pursuing partnerships with news publishers. OpenAI has struck agreements with the Associated Press and Axel Springer, among others, to access archived articles for training large language models in exchange for licensing fees and experimental use of OpenAI software. These partnerships represent one potential model for addressing the concerns raised by the JFTC and other regulators.

However, the scale of licensing challenges is substantial. AI models are trained on enormous datasets encompassing billions of articles and other content items. Establishing licensing agreements with potentially millions of rights holders would be administratively complex and prohibitively expensive. This has led some AI companies to argue for broad fair use exemptions or to develop opt-out mechanisms that allow publishers to block their content from being used in training.
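The opt-out mechanisms mentioned above typically build on the long-standing robots.txt convention, under which a website lists crawler user agents and tells them which paths they may access. As an illustrative sketch (using crawler names that OpenAI and Perplexity have publicly documented, GPTBot and PerplexityBot), a publisher wishing to block AI training crawlers while remaining open to ordinary search indexing might publish a file like this:

```
# robots.txt — illustrative example, not taken from any real publisher
# Block documented AI crawlers from the entire site
User-agent: GPTBot
Disallow: /

User-agent: PerplexityBot
Disallow: /

# Allow all other crawlers (e.g. conventional search indexing)
User-agent: *
Allow: /
```

Compliance with robots.txt is voluntary rather than legally binding, however, which is one reason publishers and regulators argue that opt-out mechanisms alone are insufficient protection.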

What the Investigation Means for News Organizations

For news publishers, the JFTC investigation offers hope that regulators will address practices they see as threatening their viability. If the investigation leads to requirements that AI companies negotiate licensing agreements or pay for news content, it could establish new revenue streams that help support journalism.

However, news organizations face practical challenges in negotiating with large tech companies that have substantially greater market power and resources. Even if regulatory pressure forces AI companies to the bargaining table, publishers may struggle to secure favorable terms without collective bargaining power or regulatory frameworks that establish fair compensation standards.


Some publishers have pursued direct legal action rather than waiting for regulatory intervention. Lawsuits filed against AI companies seek compensation for past unauthorized use of content. These legal strategies represent complementary approaches to regulatory action.

The long-term sustainability of news may require more than just licensing deals with AI companies. Industry analysts note that even successful negotiations may not fully offset the broader disruption to news business models caused by AI. Publishers may need to diversify their revenue streams, develop direct relationships with audiences, and find new ways to demonstrate the unique value of professional journalism in an AI-saturated information environment.

Potential Outcomes and Timeline

The JFTC has characterized its current action as a “fact-finding investigation,” which suggests the commission is in an information-gathering phase rather than preparing immediate enforcement action. Such investigations typically involve collecting data from companies, analyzing market dynamics, and assessing whether specific practices violate existing laws.

Following the investigation, the JFTC is expected to report any potential violations of the Antimonopoly Law and issue recommendations for improvement. This could result in administrative guidance, voluntary compliance commitments from companies, or formal enforcement actions if violations are identified. The timeline for this process is unclear, but similar investigations in other jurisdictions have typically taken months to years to complete.

The investigation is also expected to assess what improvements have been made on issues already identified in the JFTC’s 2023 report on news media transactions. That report warned that if a dominant platform operator used its stronger position to create a disadvantage, it would constitute “an abuse of a superior bargaining position under the Anti-Monopoly Law.” Examples cited included unilaterally changing contract terms or setting excessively low usage fees.

Regardless of the specific outcomes, the investigation signals that Japanese regulators are paying close attention to the impact of AI on news markets. Even if formal enforcement action is not taken immediately, the scrutiny itself may encourage companies to modify their practices to avoid potential legal risks.

Broader Implications for the Digital Economy

The JFTC’s investigation is part of a broader global reassessment of how digital platforms interact with content creators. The rise of AI has accelerated concerns about platform power that have been building for years, as companies like Google, Meta, and Amazon have come to dominate various aspects of the digital economy.

Policymakers worldwide are grappling with how to apply existing legal frameworks to rapidly evolving AI technologies. Competition law, copyright law, and consumer protection regulations were all developed before the current generation of AI systems, raising questions about whether they are adequate to address new challenges or need to be updated.

The investigation also highlights the tension between innovation and protection of existing industries. AI advocates argue that overly restrictive regulations could stifle technological progress and hinder beneficial applications. Conversely, creators and publishers argue that without adequate protections, their economic viability is threatened, which could reduce the quality and diversity of content available in the digital ecosystem.

How Japan resolves these tensions could influence approaches in other countries. As a major economy with significant technological capabilities, Japan’s regulatory choices often carry weight internationally. The JFTC’s balanced approach—using existing antitrust law rather than creating entirely new AI-specific regulations—may serve as a model for other jurisdictions facing similar challenges.

The Path Forward for AI and News

Finding sustainable solutions to the tensions between AI developers and news publishers will likely require multiple approaches across legal, technological, and business domains. Regulatory action like the JFTC investigation can help establish fair rules of the road, but it cannot alone create a sustainable ecosystem for journalism in the AI era.

Technological solutions may also play a role. Some developers are exploring ways for AI systems to better attribute sources and drive traffic to original content, potentially aligning AI incentives with publisher interests. Technical standards for content provenance and attribution could help ensure that creators receive appropriate credit and compensation when their work is used by AI systems.

Business model innovation will be equally important. News organizations are exploring new revenue approaches beyond traditional advertising and subscriptions, including membership models, events, consulting, and diversified media offerings. AI companies are also developing new business models that could potentially share revenue with content creators.

Ultimately, a sustainable relationship between AI and journalism will require recognition of mutual dependence. AI systems need high-quality training data to function effectively, and journalism needs sustainable economic models to continue producing the content that informs public discourse. Finding ways to align these interests may determine whether AI ultimately supports or undermines the future of professional journalism.

The Bottom Line

  • The Japan Fair Trade Commission is launching a fact-finding investigation into AI search engines’ use of news articles, targeting Google, Microsoft, Perplexity AI, LY Corp, and potentially OpenAI.
  • The commission believes AI use of news articles without permission may constitute an abuse of dominant position under Japan’s Antimonopoly Law.
  • AI-powered search engines generate summarized answers from internet data, potentially reducing traffic to news sites and threatening publisher revenue models.
  • The investigation follows a 2023 JFTC report on news media that warned about platform practices harming publishers.
  • Japan’s 2019 copyright law permits AI training on copyrighted materials, but antitrust law may still impose obligations on dominant platforms.
  • Similar scrutiny is occurring globally, with European authorities investigating Google and U.S. regulators examining AI practices through existing legal frameworks.
  • Multiple lawsuits have been filed against AI companies by news publishers alleging copyright infringement.
  • Some AI companies have begun licensing deals with news publishers, but scaling these arrangements presents challenges.
  • The investigation is in a fact-finding phase, with potential outcomes including administrative guidance or enforcement actions.
  • The outcome could influence global approaches to regulating AI’s relationship with content creators.