China’s Gray Market for AI-Generated Content Rate Reduction: The New Academic Arms Race

By Asia Daily

The Rise of AIGC Rate Reduction Services in China

As China’s universities intensify their scrutiny of academic integrity, a new gray industry has emerged: artificial intelligence-generated content (AIGC) rate reduction services. These services, widely available on major e-commerce platforms like Taobao, promise to help students lower the detectable percentage of AI-generated text in their academic papers. The phenomenon has surged during the graduation season, raising concerns among educators, policymakers, and students alike about the implications for academic standards and the effectiveness of AI detection technologies.

At the heart of this industry are vendors advertising “manual modifications” and “24-hour online consultation,” often without clear pricing or educational credentials. Many insist on moving conversations and payments to private channels like WeChat, making transactions difficult to trace and complicating consumer protection efforts. The proliferation of these services reflects both the rapid adoption of AI tools in academic writing and the growing pressure on students to pass increasingly sophisticated AI-detection checks.

How Do AIGC Rate Reduction Services Work?

These services claim to reduce the AI-detection rate in academic papers by rewriting or modifying text to appear more human-like. Some vendors boast of proprietary databases and algorithms capable of identifying and replacing vocabulary or sentence structures commonly flagged by AI detectors. Prices vary widely: a basic 5,000-word thesis might cost as little as 32 yuan (about $4.45), while more customized, algorithm-driven modifications can cost as much as 20,000 yuan (about $2,780).

Despite the “manual modification” label, much of the work is still AI-driven. Vendors typically access academic databases, compare content, and use algorithms to adjust sentences and vocabulary. The process is designed to evade detection by popular AI-detection systems used by Chinese universities, such as the VIP Paper Check System and CNKI. However, the quality of these modifications is inconsistent, with many students reporting fragmented sentences, unprofessional language, and logical inconsistencies in the revised texts.
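Vendors do not disclose their pipelines, so any reconstruction is speculative. The sketch below is a minimal, hypothetical illustration of what the context-blind, rule-based rewriting they describe might look like; the substitution table is invented for this example and is not any vendor's actual tool.

```python
# Hypothetical sketch of naive "rate reduction" rewriting: blindly swap
# phrases assumed to be associated with machine-generated text.
import re

# Invented examples of "detector-flagged" phrasing mapped to looser
# alternatives; vendors claim far larger proprietary tables.
SUBSTITUTIONS = {
    r"\bfurthermore\b": "on top of that",
    r"\bdelve into\b": "dig into",
    r"\bit is important to note that\b": "note that",
    r"\bin conclusion\b": "all in all",
}

def naive_rewrite(text: str) -> str:
    """Apply phrase substitutions with no regard for context or register."""
    for pattern, replacement in SUBSTITUTIONS.items():
        text = re.sub(pattern, replacement, text, flags=re.IGNORECASE)
    return text

sample = ("Furthermore, it is important to note that these chapters "
          "delve into the mechanism. In conclusion, the method works.")
print(naive_rewrite(sample))
```

Even this toy version shows why results disappoint: substitutions applied without regard to grammar or capitalization (sentences now start lowercase mid-flow) produce exactly the stilted, fragmented prose customers describe.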

Why Has This Industry Emerged?

The rise of AIGC rate reduction services is closely tied to universities’ efforts to regulate the use of AI in academic writing. In response to concerns about academic fraud and the misuse of generative AI tools like ChatGPT, Baidu’s Ernie Bot, and Alibaba’s Tongyi Qianwen, many Chinese universities have introduced policies requiring students to disclose AI usage and setting thresholds for acceptable levels of AI-generated content in theses. Some institutions require the AIGC rate to fall below a threshold, typically set between 20% and 40%, for a thesis to pass; others treat the rate as a reference for supervisors’ final decisions.

This regulatory push has created a “pseudo-demand,” as described by Luo Xueming, a senior expert at the Guangdong Modern Urban Industrial Technology Research Institute. The introduction of AIGC detection systems has not only spurred the development of rate reduction services but also expanded the gray industry chain to include ghostwriting and PowerPoint creation. The lack of clear academic ethical boundaries regarding AI-polishing, plagiarism, and adaptation has allowed these services to flourish in a regulatory gray area.

Inside the Student Experience: Pressure, Anxiety, and Workarounds

For many students, the pressure to meet AI-detection standards is intense. Wen Suyu, a senior at a university in Guangdong, spent hours revising her dissertation to lower its AI-detection rate, only to see minimal improvement. Desperate, she turned to a rate reduction service, which successfully reduced the detection rate but left her text sounding unnatural and robotic. Her experience is echoed by thousands of students across China, who share tips and anxieties on social media platforms like Xiaohongshu.

Common strategies for reducing AI-detection rates include:

  • Rewriting sentences to avoid patterns typical of AI-generated text, such as consistent sentence structures and the use of certain transition words.
  • Using multiple AI tools to diversify language and style, making the text less likely to be flagged by detection algorithms.
  • Manually editing AI-generated drafts to inject more personal voice and irregularities.

Despite these efforts, many students find the process time-consuming and the results unpredictable. Detection outcomes can vary significantly between different systems, with some students reporting discrepancies of over 50% in AI-detection rates for the same text across platforms.

How Do AI Detection Tools Work—and Are They Reliable?

AI content detectors analyze text for patterns, sentence structures, vocabulary diversity, and other features that distinguish human writing from machine-generated content. The most widely used systems in China, such as the VIP Paper Check System and CNKI, claim to detect text produced by major AI models and assign a suspected AIGC rate. However, the reliability of these tools is a subject of ongoing debate.

According to Zong Chengqing, a researcher at the Institute of Automation, Chinese Academy of Sciences, AI-generated text follows statistical patterns produced by the mathematical rules of large models, and detection systems apply similar mathematical analysis to judge whether a text matches those machine-generated patterns. While some experts claim detection accuracy rates above 80%, real-world results are less consistent.
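Neither the VIP system nor CNKI publishes its scoring method, so as a purely illustrative stand-in, the toy scorer below reduces the statistical idea Zong describes to two surface features, sentence-length uniformity and vocabulary repetition, with arbitrary weights. Real detectors rely on model-based probability analysis far more sophisticated than this.

```python
# Toy "AIGC rate" scorer: machine text is assumed to tend toward uniform
# sentence lengths and repetitive wording. Features, weights, and the
# 0-1 scale are invented for illustration only.
import re
import statistics

def toy_aigc_score(text: str) -> float:
    """Return a crude 0-1 'suspicion' score from two surface features."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    if len(lengths) < 2 or not words:
        return 0.0
    # Uniform sentence lengths read as "machine-like": low spread, high score.
    uniformity = 1.0 / (1.0 + statistics.pstdev(lengths))
    # Repetitive vocabulary reads as "machine-like": low type-token ratio, high score.
    repetition = 1.0 - len(set(words)) / len(words)
    return round(0.5 * uniformity + 0.5 * repetition, 3)

choppy = "I tried again. It failed again. I gave up. Then it passed."
flowing = ("After two failed attempts I nearly gave up. Yet the third "
           "revision, longer and far messier, sailed through review "
           "without a single flag.")
print(toy_aigc_score(choppy), toy_aigc_score(flowing))
```

Notably, the terse human-sounding sample scores higher than the flowing one. Shallow statistics of this kind help explain both false positives on clipped human prose and the large cross-platform discrepancies students report, since every system chooses its own features and weights.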

Science and Technology Daily, an official publication of China’s Ministry of Science and Technology, has warned that overreliance on AI detection tools is a form of “technological superstition.” The publication highlighted cases where classic essays and original research were incorrectly flagged as AI-generated, underscoring the limitations and potential for false positives. In one widely cited example, a 100-year-old Chinese essay was rated as over 60% AI-generated by a popular detection tool.

An editorial in Science and Technology Daily cautions:
“We aspire to eliminate the negative effects of AI with clear-cut solutions, but using AI to detect AI is essentially still a form of technological superstition.”

These challenges have led some universities, both in China and abroad, to reconsider or even abandon the use of AI detection tools, emphasizing the importance of faculty judgment and holistic evaluation of academic work.

Regulatory Response: China’s Push for AI Content Labeling

In response to the proliferation of AI-generated content and related gray industries, China is moving to tighten regulations. New rules set to take effect in September 2025 will require all AI-generated content—including text, images, audio, and video—to be clearly labeled. The Cyberspace Administration of China (CAC), along with other government agencies, has mandated both explicit and implicit markers for AI content, with penalties for non-compliance.

Key provisions of the new regulations include the following (an illustrative sketch follows the list):

  • Explicit labels visible to users and implicit digital watermarks embedded in metadata.
  • Mandatory compliance checks by online service providers before content is published.
  • Record-keeping requirements for service providers and users posting AI-generated content.
  • Strict prohibitions on the removal or alteration of AI content labels.
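The rules distinguish explicit, user-visible labels from implicit machine-readable markers, but the reporting above does not specify a technical format. The sketch below is a hypothetical illustration of that two-layer idea; the JSON field names are invented and do not reflect the CAC’s actual specification.

```python
# Hypothetical two-layer AI-content label, per the explicit/implicit split
# described above. Field names are invented for illustration and are not
# the regulation's actual technical specification.
import hashlib
import json

def label_ai_content(text: str, generator: str) -> dict:
    """Wrap AI-generated text with a visible notice and embedded metadata."""
    explicit_notice = "[AI-generated content]"  # explicit, user-visible label
    implicit_marker = {                          # implicit, machine-readable marker
        "ai_generated": True,
        "generator": generator,
        # A content digest lets platforms detect label stripping or tampering.
        "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
    }
    return {
        "display_text": f"{explicit_notice}\n{text}",
        "metadata": implicit_marker,
    }

record = label_ai_content("Sample generated paragraph.", generator="some-llm")
print(json.dumps(record["metadata"], indent=2))
```

Because the implicit marker travels as ordinary metadata alongside the content, any re-encoding or copy-paste step can silently discard it, one concrete reason the enforcement problems described next are hard to solve.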

The regulations are part of a broader campaign to combat misinformation, academic fraud, and online deception. However, enforcement remains a challenge, especially for real-time applications and cross-platform content. Technical limitations in watermarking and metadata, as well as the ease with which labels can be removed or altered, complicate efforts to ensure transparency and accountability.

The rapid evolution of AI technology has outpaced the development of comprehensive legal and ethical frameworks. Oversight of the AIGC rate reduction industry is complicated by the involvement of multiple authorities, including those responsible for education, market supervision, internet security, and science and technology. The lack of clear guidelines on what constitutes acceptable AI assistance versus academic misconduct leaves students, educators, and service providers navigating a murky landscape.

Experts like Chu Zhaohui, a researcher at the China National Academy of Educational Sciences, argue that strengthening academic ethics education is a practical approach to addressing these challenges. Rather than relying solely on detection tools, universities are encouraged to focus on cultivating independent thinking, research skills, and integrity among students.

As Chu puts it:
“A practical approach would be to strengthen academic ethics education among students.”

Li Juan, an associate professor at Central South University’s Law School, estimates that 60% to 80% of students have used AI tools in their theses this year, from refining sentences to composing large portions of content. While she acknowledges improvements in language quality, she also notes the risk of “AI hallucinations”—convincing but false statistics or legal cases generated by AI. Ultimately, she emphasizes the role of supervisors in reviewing flagged content and guiding students through the revision process.

Consumer Risks and the Dark Side of the Gray Industry

Despite their popularity, AIGC rate reduction services are fraught with risks for consumers. Many students have reported difficulty obtaining refunds, poor-quality modifications, and vendors who become unreachable after payment. The deliberate use of private communication channels and untraceable payment methods is designed to prevent consumers from gathering evidence for complaints or legal action.

These practices not only undermine consumer rights but also perpetuate a cycle of mistrust and dissatisfaction. As the industry continues to grow in secrecy, calls for greater oversight and clearer ethical standards are mounting.

In Summary

  • A gray industry of AIGC rate reduction services has rapidly expanded in China, driven by universities’ adoption of AI-detection tools for academic papers.
  • These services use a mix of AI and manual editing to lower the detectable percentage of AI-generated content, but quality and reliability vary widely.
  • Students face significant pressure to meet AI-detection standards, leading to widespread use of both AI tools and rate reduction services.
  • AI detection tools themselves are controversial, with concerns about false positives, inconsistent results, and overreliance on technology.
  • China is introducing new regulations requiring clear labeling of all AI-generated content, but enforcement and technical challenges remain.
  • Experts advocate for stronger academic ethics education and greater reliance on faculty judgment rather than solely on detection tools.
  • Consumer risks in the gray industry are significant, with many students reporting poor service and lack of recourse.