China AI computing vouchers cut costs for SMEs and fill idle data centers

Asia Daily

Why China is rolling out AI computing vouchers

China is trying a new way to put powerful artificial intelligence tools within reach of smaller companies. City governments from Beijing and Shanghai to Chengdu and Shenzhen are issuing computing power vouchers, public subsidies that lower the price of renting high performance servers for AI training and inference. The goal is twofold: make it affordable for small and medium sized enterprises to build and test AI models, and put underused capacity in the country’s fast growing data center network to work.

Shanghai has earmarked about 600 million yuan for compute vouchers that can cover up to 80 percent of rental fees, plus 100 million yuan for data vouchers focused on large language model training. Shenzhen has announced a 500 million yuan annual program with standard subsidies of up to 50 percent for enterprises and institutions, and 60 percent for startups. Chengdu is expanding a pilot with 100 million yuan for research institutions. Shandong is offering 30 million yuan now and planning a much larger 1 billion yuan investment in local AI infrastructure. Beijing and Henan have begun taking applications for similar schemes.

These vouchers sit inside a larger national strategy. The National Development and Reform Commission rolled out the mechanism in December 2024 to reduce research and development costs for smaller firms and to connect more users to a nationwide computing fabric. China’s big build out of data centers, much of it in western provinces with cheaper power, has left pockets of idle machines. Industry reports say some facilities have been running at only 20 to 30 percent of capacity. Policymakers want to route AI workloads into those racks, raise utilization, and accelerate adoption. The China Academy of Information and Communications Technology has estimated that each yuan invested in computing power can generate three to four yuan in economic output. That multiplier is one reason local governments are moving quickly.

A new lever in China AI industrial policy

This subsidy push fits a wider plan to expand AI capabilities by 2030. Research groups note that Beijing is using many tools at once, including public funding, state backed labs, talent programs, and now subsidized access to compute. US led export controls have made advanced chips harder to buy, turning compute into a pinch point for startups. Vouchers can soften that constraint by opening doors to public and commercial data centers at lower prices while China backs domestic hardware and software. For deeper background on this policy approach, see RAND’s analysis of China’s full stack AI strategy and a complementary view from MERICS on self reliance and the AI stack.

How the vouchers work and who qualifies

Computing power vouchers are city run credits. Eligible users include small and medium sized enterprises, startups, universities, and research institutes. Recipients redeem credits at participating data centers, which can be state owned, university operated, or commercial. The vouchers typically cut a share of the bill for renting clusters of graphics processing units, storage, and networking, with coverage ratios that vary by city. Some localities also issue data vouchers that reduce the cost of training data, data labeling, or curated corpora for large language models. Supported workloads range from pretraining and fine tuning to computer vision projects and scaled inference.
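The mechanics are easiest to see with numbers. The sketch below is a hypothetical illustration of how a coverage ratio and a per project cap interact; the GPU hour price, run size, and cap are placeholder figures, not any city’s published formula.

```python
def subsidized_cost(gpu_hours: float, price_per_gpu_hour: float,
                    coverage_ratio: float, subsidy_cap: float) -> dict:
    """Estimate what a renter pays after a compute voucher is applied.

    coverage_ratio: share of the bill the voucher covers (e.g. 0.5 to 0.8).
    subsidy_cap:    maximum subsidy per project, in yuan.
    All figures are illustrative; real programs add eligibility checks,
    approved provider lists, and settlement rules.
    """
    gross = gpu_hours * price_per_gpu_hour               # full rental bill
    subsidy = min(gross * coverage_ratio, subsidy_cap)   # voucher pays the lesser
    return {"gross": gross, "subsidy": subsidy, "net": gross - subsidy}

# Hypothetical fine tuning run: 512 GPUs for 72 hours at 10 yuan per GPU hour,
# with a 60 percent voucher capped at 1.5 million yuan per project.
print(subsidized_cost(512 * 72, 10.0, 0.60, 1_500_000))
# gross 368,640 yuan; subsidy 221,184 yuan; net 147,456 yuan
```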

Subsidy rates and budgets in practice

  • Shanghai: about 600 million yuan for compute vouchers with up to 80 percent of rental fees covered, plus 100 million yuan in data vouchers tied to large language model training.
  • Shenzhen: a 500 million yuan annual program. Standard subsidies of up to 50 percent for companies, universities, and research institutions renting intelligent computing power. For startups the share rises to 60 percent. City notices also mention model and corpus vouchers, and project level caps for training support.
  • Chengdu: an expanded pilot that includes 100 million yuan in voucher aid for research institutions.
  • Shandong: 30 million yuan available now, with a proposed 1 billion yuan plan to expand local AI infrastructure.
  • Beijing and Henan: early stage schemes are live, with application windows open and providers enrolling.
  • Guizhou: updated rules in November 2024 extended vouchers to domestic adaptation services and training. Reported coverage reaches 30 percent of eligible costs, capped at 1.5 million yuan for adaptation and 5 million yuan for training.

Where the money is going: city snapshots

Shanghai is pairing compute and data support in a way that shows how cities see the full AI pipeline. Compute vouchers target the heavy lift of training and fine tuning. Data vouchers reduce the cost of acquiring or preparing training sets, including for large language models. By bundling the two, officials are trying to address the twin expenses that stop many small teams from moving past a proof of concept.

Shenzhen is leaning into applications as well as infrastructure. City measures point to model service providers and product builders that register their work with the National Internet Information Office. The intent is to push usable systems into the market, from enterprise chat assistants and coding copilots to AI features in consumer devices. Subsidy percentages rise for startups, a nod to the fact that young companies often face the steepest bills for early training cycles and cannot easily reserve capacity long in advance.

Chengdu and Shandong are using vouchers to build ecosystems. Chengdu’s pilot steers funds toward research institutions that can seed local talent and projects. Shandong is combining a starter voucher pool with a plan for major infrastructure investment, which can attract providers that want predictable off take in the form of subsidized users. Beijing and Henan are at earlier stages, but are already signing up data center partners and opening application portals.

Guizhou shows the inland strategy in action

Guizhou has become a flagship region for moving data and compute inland. Under the national East Data West Computing approach, capacity in inland provinces serves demand from the coast. Reports from Data Centre Magazine say Guizhou’s intelligent computing capacity has reached 85 EFLOPS, with more than 98 percent classed as intelligent computing rather than traditional storage. About 90 percent of the hardware is described as domestically produced. Forty eight key data centers are in service or under construction across the province, including 28 large sites. Storage capacity totals 25 exabytes. Those headline numbers indicate a shift from storage focused facilities to a more balanced model that favors AI training and inference.

Energy is central to the model. Electricity consumption in Gui’an New Area rose sharply in early 2025. Local authorities have added substations and dedicated lines to ensure reliable power for high density racks. Hydropower is a large contributor, with renewable capacity rising to roughly a third of total generation by May 2025. Voucher measures introduced in late 2024 extended subsidies to domestic adaptation of software and to training services, with cost coverage capped in the low millions of yuan per project. That package tries to link infrastructure, energy, and direct user support so that inland data centers can deliver useful cycles to businesses across the country.

Filling idle capacity with East Data West Computing

China’s data center map expanded quickly during the past few years. The East Data West Computing plan encourages companies and public bodies to send data to western regions with cheaper land and power, then serve users over long haul networks. That build out created pockets of spare capacity, especially in facilities that came online before demand caught up. Utilization rates as low as 20 to 30 percent appear often in press accounts and industry analyses. Vouchers are a demand side tool to raise utilization rates and recover capital expenses.

For providers, voucher funded jobs can smooth usage patterns. Training cycles often arrive in bursts when teams schedule multiday runs on thousands of GPUs. If cities coordinate voucher calendars and queue management, providers can pack jobs more tightly, fill off peak windows, and offer predictable slots to startups that need to plan. For users, a lower net price per GPU hour can turn a two month runway into six months of experiments and iterations. That is especially crucial for teams that are fine tuning rather than training from scratch. Fine tuning can deliver good results on midrange clusters, so a lower price makes those runs feasible.
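The runway arithmetic is simple division: a voucher that covers 60 to 70 percent of each GPU hour roughly triples what a fixed budget buys. The sketch below uses hypothetical figures to show how the two month versus six month comparison falls out.

```python
def runway_months(budget_yuan: float, monthly_gpu_hours: float,
                  price_per_gpu_hour: float, coverage_ratio: float) -> float:
    """Months of experiments a fixed budget buys at a given voucher coverage."""
    net_price = price_per_gpu_hour * (1.0 - coverage_ratio)
    return budget_yuan / (monthly_gpu_hours * net_price)

# Hypothetical team: 1 million yuan budget, 50,000 GPU hours per month at 10 yuan each.
print(runway_months(1_000_000, 50_000, 10.0, 0.0))   # no voucher  -> 2.0 months
print(runway_months(1_000_000, 50_000, 10.0, 0.67))  # 67% voucher -> about 6.1 months
```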

Export controls, Nvidia, and domestic chips

Compute costs and access are shaped by geopolitics. US led export controls restrict the sale of top tier accelerators to Chinese customers. That has complicated procurement for companies that want the latest Nvidia hardware. There have been headlines about plans to equip dozens of new data centers with large orders of Nvidia Hopper GPUs. Those reports raise legal and logistics questions under current restrictions and have not been verified in detail. The broad effect of export controls is clear. China is leaning harder on domestic chips, universities, and local cloud providers to fill the gap, while some firms explore overseas hosting options.

Domestic alternatives, such as Huawei’s Ascend line, are improving. Many developers still rely on global open source frameworks like PyTorch and TensorFlow, while local stacks such as MindSpore and PaddlePaddle develop in parallel. That split can complicate training plans. Code portability, software maturity, and operator expertise all influence which clusters a team can use. Vouchers help by lowering the threshold for trying compatible hardware. They cannot erase the performance gap between cutting edge US hardware and current domestic options, but they can reduce time to first results for many companies.
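For teams deciding where a voucher funded job can actually run, the first practical question is often whether their code targets CUDA or an Ascend NPU backend. The snippet below is a minimal sketch of that check, assuming PyTorch with Huawei’s torch_npu adapter installed on Ascend clusters; the fallback order and device names are illustrative, not a recommendation from any provider.

```python
import torch

def pick_device() -> str:
    """Return whichever accelerator backend is present on the rented cluster."""
    try:
        # torch_npu is Huawei's Ascend adapter for PyTorch; importing it is
        # assumed here to register the "npu" device type.
        import torch_npu  # noqa: F401
        if torch.npu.is_available():
            return "npu"
    except (ImportError, AttributeError):
        pass
    if torch.cuda.is_available():
        return "cuda"
    return "cpu"

device = torch.device(pick_device())
batch = torch.randn(8, 512).to(device)  # a sample batch lands on whatever is present
```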

What SMEs stand to gain and where they may struggle

Lower priced access to compute changes what smaller teams can attempt. A company that could only afford to rent a handful of GPUs for short windows can now book longer runs, try more variants, and move from a prototype to a field test. Universities and labs gain budget flexibility, which can bring more students and early career researchers into large model work. Local governments gain a way to monetize idle capacity and to justify earlier capital spending on data centers.

Potential benefits

  • Cheaper training and fine tuning cycles that expand the range of feasible experiments.
  • Faster prototyping and shorter time from idea to pilot in sectors such as manufacturing, logistics, finance, and healthcare.
  • Better utilization of inland data centers, which can support energy and network planning and reduce stranded investment.
  • More balanced access to compute beyond big tech platforms, giving smaller firms a chance to compete in niche applications.
  • Encouragement for data providers and labeling firms through paired data vouchers, particularly in cities such as Shanghai and Shenzhen.

Likely challenges

  • Administrative friction during application, verification, and scheduling, especially when demand spikes near deadlines.
  • Hardware and framework compatibility, since some models are tuned for specific accelerators and software stacks.
  • Uneven regional coverage, with richer cities able to fund deeper subsidies and build stronger ecosystems.
  • Temporary effects if vouchers do not convert into sustained demand once subsidies expire.
  • Risk of crowding out private cloud providers if voucher rules steer users only to specific centers.

Measuring whether the policy works

Early numbers are sparse, but a few indicators are emerging. Guizhou authorities report that by January 2025, 177 vouchers had been issued nationwide, supporting computing power transactions worth more than 10.5 billion yuan. That figure bundles large enterprise deals and voucher backed transactions, so it is not a clean read on subsidy effects. It does show that the market for compute is expanding quickly. Over the next year, watch three metrics. First, utilization rates at western data centers and the spread between peak and off peak usage. Second, voucher take up by genuine small and medium sized enterprises versus large firms. Third, the time it takes to move from application to scheduled compute time and completed jobs.
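None of these indicators require new data collection, only consistent reporting. As a rough sketch, the snippet below computes all three from hypothetical voucher records; the field names and figures are assumptions, not any program’s published schema.

```python
from dataclasses import dataclass
from datetime import date
from statistics import median

@dataclass
class VoucherRecord:
    firm_size: str        # "sme" or "large"
    applied: date         # application submitted
    first_run: date       # first scheduled compute job started
    gpu_hours_used: float

records = [  # hypothetical sample data
    VoucherRecord("sme", date(2025, 3, 1), date(2025, 3, 18), 40_000),
    VoucherRecord("large", date(2025, 3, 2), date(2025, 3, 9), 250_000),
    VoucherRecord("sme", date(2025, 3, 5), date(2025, 4, 2), 12_000),
]

# 1. Facility utilization: voucher hours plus other billed hours over capacity.
capacity_gpu_hours = 1_000_000
other_billed_hours = 180_000
utilization = (sum(r.gpu_hours_used for r in records) + other_billed_hours) / capacity_gpu_hours

# 2. Take up by genuine SMEs versus large firms.
sme_share = sum(r.firm_size == "sme" for r in records) / len(records)

# 3. Time from application to the first scheduled run.
median_wait_days = median((r.first_run - r.applied).days for r in records)

print(f"utilization {utilization:.0%}, SME share {sme_share:.0%}, median wait {median_wait_days} days")
```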

Transparency will matter. Clear reporting on who receives vouchers, what workloads are funded, and what outcomes are achieved can improve trust in the schemes. City governments can also share wait time data and success rates for scheduling, which would help teams plan projects. Providers can publish compatibility matrices for hardware and frameworks so that applicants know what will run where. If programs evolve in this direction, vouchers can function as a bridge between large, capital intensive infrastructure and the many smaller teams now trying to build useful AI systems.

International and domestic context

China’s voucher push comes amid intense investment in AI worldwide. The country is trying to align public spending with private innovation. Analysts see the most progress when public incentives match the needs of developers and users. In China’s case, that match looks strongest where vouchers meet ready workloads like fine tuning and inference for industry specific applications. Many companies are not training giant frontier models. They are adapting open source or commercial models to their data and use cases, which is well suited to subsidized clusters and shorter training runs.

Local plans also tie into national goals for the AI Plus initiative referenced in the 2025 Government Work Report. The policy encourages the spread of AI across traditional sectors and the development of intelligent terminals and robotics. Vouchers are one of the practical tools to connect those aims to the factory floor, the warehouse, and the clinic. If the programs reduce the cost of experimentation and expand the pool of teams with access to compute, they will change the pace and breadth of adoption across many industries.

Highlights

  • Multiple Chinese cities are issuing computing power vouchers that cut AI rental fees for SMEs by 50 to 80 percent, with Shanghai and Shenzhen leading large programs.
  • Shanghai set aside about 600 million yuan for compute vouchers and 100 million yuan for data vouchers, while Shenzhen launched a 500 million yuan annual scheme with higher subsidies for startups.
  • Chengdu expanded a pilot with 100 million yuan for research institutions, and Shandong paired a 30 million yuan pool with a plan to invest 1 billion yuan in AI infrastructure.
  • Voucher programs launched after a December 2024 national directive and aim to lift utilization at underused data centers that in some cases run at 20 to 30 percent capacity.
  • Guizhou illustrates the inland strategy, reporting 85 EFLOPS of intelligent computing capacity, rising renewable power, and voucher rules that support adaptation and training.
  • US export controls limit access to top tier accelerators, pushing more workloads to domestic chips and software; vouchers reduce costs but cannot erase performance gaps.
  • Benefits for SMEs include cheaper training cycles and faster prototyping, while challenges include administrative friction, compatibility issues, and uneven regional coverage.
  • Early signals to track include voucher take up by smaller firms, time from application to scheduled runs, and higher utilization at western facilities.