Foxconn and Nvidia to Build Taiwan’s Largest Supercomputing Cluster by 2026

Asia Daily

A new anchor for AI compute in Asia

Foxconn is building a 1.4 billion dollar supercomputing center in Taiwan in partnership with Nvidia, with operations targeted for the first half of 2026. Once online, the site will host Taiwan’s largest advanced GPU cluster and the first installation in Asia built around Nvidia’s Blackwell GB300 platform. The facility, rated at 27 megawatts (MW), is being developed by Visonbay.ai, Foxconn’s new unit for AI supercomputing and cloud services.

The project arrives at a moment when businesses are racing to secure capacity for training large language models, running foundation model inference at scale, and deploying robotics and autonomous systems. Modern AI workflows require dense GPU clusters that can move data between chips at very high speeds while keeping power and heat in check. By combining Nvidia’s newest architecture with Foxconn’s fast growing role as a manufacturer and integrator of AI racks, the Taiwan center is designed to serve those needs for customers across the region.

With 27 MW of power, the site sits at the high end of single building deployments in Asia. It is not a mega campus, yet it is large enough to train state of the art models, run national scale research, and serve commercial clients that do not have the time or capital to assemble their own facilities. The build targets a short time to service, driven by factory built rack systems and an emphasis on liquid cooling to keep performance high and power usage effectiveness tight.

What Blackwell GB300 brings to the table

Nvidia’s Blackwell generation focuses on larger models, faster training, and better cost per unit of compute. The GB300 line pairs Grace CPUs with Blackwell Ultra GPUs, high bandwidth memory, and an updated NVLink fabric that knits dozens of GPUs together into a single pool. In practice, that makes a cluster more efficient at moving parameters and activations between GPUs, which speeds up both training and inference for very large models.

These systems are designed to run at very high density. Heat loads far exceed the limits of traditional air cooling, so vendors are shifting to direct liquid cooling and other advanced designs that move heat away from hot components with far greater efficiency. The Nvidia rack scale systems paired with GB300 GPUs reflect that shift, with plumbing, manifolds, and controls built into the racks from the factory.

Blackwell also adds platform features for enterprise deployments, including enhanced security and confidential computing support. That matters for organizations that need to protect data and model weights during training and inference while meeting regulatory and internal compliance requirements.

How an AI rack works

An AI rack houses servers packed with GPUs, high speed switches, and storage, along with the power distribution and cabling that bind the system together. Inside the rack, NVLink lets GPUs talk to each other at extremely high bandwidth with low latency. Between racks, high speed Ethernet or InfiniBand connects the cluster so that work can be distributed across many racks when a job is too large for a single rack to handle.
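
To make that concrete, here is a minimal sketch of the communication step at the heart of distributed training, assuming a PyTorch cluster launched with torchrun; the tensor sizes are illustrative. The NCCL library underneath picks the fastest transport it can find, NVLink between GPUs in the same server and InfiniBand or Ethernet between racks.

    # Minimal gradient synchronization sketch (PyTorch, launched via torchrun).
    import torch
    import torch.distributed as dist

    def sync_gradients():
        # NCCL routes traffic over NVLink inside a server and over
        # InfiniBand or Ethernet between servers.
        dist.init_process_group(backend="nccl")
        torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

        # Stand-in for one step's gradients; all_reduce sums them across
        # every GPU so each copy of the model stays in sync.
        grads = torch.ones(4096, device="cuda")
        dist.all_reduce(grads, op=dist.ReduceOp.SUM)

        if dist.get_rank() == 0:
            print(f"synchronized across {dist.get_world_size()} GPUs")
        dist.destroy_process_group()

    if __name__ == "__main__":
        sync_gradients()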

Why liquid cooling is becoming standard

Liquid cooling removes heat much more effectively than air, which helps data centers push more compute into each square meter of floor space. It also reduces energy wasted on fans, which can help lower a site’s power usage effectiveness (PUE). For operators, liquid cooling adds new maintenance practices, but the trade pays off when GPU density is high and efficiency and noise targets are strict.
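
The PUE figure itself is simple arithmetic: total facility power divided by the power that actually reaches IT equipment. The short sketch below uses made up numbers, not measurements from any real site, to show why cutting fan and chiller energy moves the ratio.

    # PUE = total facility power / IT equipment power.
    # All values are illustrative, not measurements from a real site.
    def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
        return total_facility_kw / it_equipment_kw

    it_load = 20_000                      # kW reaching servers and switches
    print(pue(it_load + 7_000, it_load))  # 1.35, common for air cooled rooms
    print(pue(it_load + 2_400, it_load))  # 1.12, the range liquid cooling targets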

Rent versus build, a shift in AI economics

At Foxconn’s tech day, Nvidia framed a clear calculation for customers weighing capital expense against speed. Companies can buy and run their own clusters, or they can rent compute capacity from operators that specialize in keeping GPU fleets current. Rapid advances in GPU performance make long asset lifecycles difficult for buyers that do not plan to refresh frequently.

Alexis Bjorlin, a vice president at Nvidia, told attendees that the rental model can be more attractive as product cycles shorten and models evolve quickly. She said the approach can boost flexibility and returns for many businesses.

“Renting compute resources may offer a far better return on investment, enabling flexibility and enabling companies to scale their compute according to both product and business cycles.”

Foxconn’s new unit Visonbay.ai is expected to offer that model in Taiwan, giving enterprises access to GB300 capacity without the wait, staffing, and facility work that private builds require. A service model also supports public sector and research buyers that need predictable budget flows rather than large upfront costs. For Nvidia, the setup increases the installed base for Blackwell, while for Foxconn it creates a recurring services business layered on top of its manufacturing role.
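
A back of envelope sketch of that rent versus build calculation appears below; every figure in it is a hypothetical placeholder, not pricing from Nvidia, Foxconn, or Visonbay.ai.

    # Rent versus build, back of envelope. All prices are hypothetical.
    HOURS_PER_YEAR = 8760

    def build_cost(capex: float, annual_opex: float, years: float) -> float:
        # The buyer carries the full purchase price even if a faster GPU
        # generation ships before the hardware is written off.
        return capex + annual_opex * years

    def rent_cost(rate_per_gpu_hour: float, gpus: int, years: float) -> float:
        # The renter pays only for the window used and can move to newer
        # hardware when the provider refreshes its fleet.
        return rate_per_gpu_hour * gpus * HOURS_PER_YEAR * years

    # A 1,024 GPU cluster needed for a two year project, at made up prices.
    print(f"build: ${build_cost(60e6, 8e6, 2):,.0f}")  # build: $76,000,000
    print(f"rent:  ${rent_cost(2.5, 1024, 2):,.0f}")   # rent:  $44,851,200

With those placeholder numbers, renting wins for a short project; a buyer that keeps a cluster busy for five or more years would see the comparison tilt back toward ownership.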

Foxconn’s bet on AI infrastructure

Foxconn has spent the past two years building a second growth engine around AI and cloud infrastructure. The company says it can turn out roughly 1,000 AI racks each week and plans to increase that run rate next year. Chairman Young Liu has also mapped a long term capital plan, committing to invest 2 to 3 billion dollars each year in AI projects and capacity.

Beyond the data center, Foxconn and Nvidia are taking AI into production lines. The goal is to apply vision systems, digital twins, and robotics to squeeze out defects, raise yields, and shorten the time it takes to bring new products to market. The collaboration is already active and is expected to inform how Foxconn designs and operates its own factories.

Spencer Huang, a product leader at Nvidia focused on robotics and the son of Nvidia founder Jensen Huang, described the work as a practical push to make smarter factories.

“Nvidia is working with Foxconn to bring AI to factories and manufacturing lines.”

Foxconn also used the event to show progress in its electric vehicle program. The Model A, designed by Japanese engineers, is aimed at Japan’s market. Foxconn plans to build the vehicle in Japan once volumes support a local plant, part of a strategy to serve automakers that want to outsource more assembly.

How large is 27 megawatts

Power is the simplest way to compare data centers built for AI. At 27 MW, the Foxconn Nvidia site will be a major installation for Taiwan, large enough to host thousands of cutting edge GPUs. Hyperscale campuses run far larger, sometimes 100 to 300 MW across multiple buildings, but this project is focused on a single high density cluster that can deliver training and inference for many customers at once.
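
A rough translation from megawatts to GPUs helps put the figure in context. The rack power and overhead numbers below are assumptions for illustration, since the facility’s exact configuration has not been published.

    # Converting 27 MW into racks and GPUs. Rack draw and overhead are
    # assumed values, not published specs for this site.
    SITE_MW = 27
    RACK_KW = 130        # assumed draw of one liquid cooled GB300 class rack
    GPUS_PER_RACK = 72   # NVL72 style rack scale design
    OVERHEAD = 1.15      # assumed power lost to cooling and distribution

    it_kw = SITE_MW * 1000 / OVERHEAD
    racks = int(it_kw / RACK_KW)
    print(f"~{racks} racks, ~{racks * GPUS_PER_RACK} GPUs")
    # ~180 racks and roughly 13,000 GPUs, consistent with "thousands of
    # cutting edge GPUs"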

Capacity of that scale supports research labs and enterprises across sectors. Semiconductor firms can simulate chip designs and photolithography steps. Biotech teams can run protein structure discovery and drug screening with generative models. Fintech and e-commerce companies can fine tune large language models for recommendation, fraud detection, and customer support. Robotics developers can train policies and run synthetic data pipelines at production speed.

Taiwan’s ecosystem advantage

Taiwan offers a tight supply chain for advanced compute. TSMC fabricates many of the chips, contract manufacturers build the servers and racks, and a mature network services market connects customers to cloud providers. Hosting a GB300 cluster in Taiwan tightens feedback loops between hardware, system integration, and early adopters. The location also reduces shipping time and import complexity for regional buyers.

Who is likely to use the cluster

Local universities and national labs will pursue AI research that benefits from sovereign compute on the island. Regional cloud providers and software firms can book capacity for model training, fine tuning, and inference. Industrial groups with factories in Taiwan and Japan may use the cluster for simulation and computer vision workloads tied to production. The mix should help keep utilization high across changing market cycles.

Manufacturing power with weekly output in the thousands

Making AI racks at scale is a complex job. Each unit must be assembled, plumbed for liquid cooling, wired for very high bandwidth, and validated under load before shipment. Foxconn’s line rate of about 1,000 racks per week shows how far the company has moved beyond phone assembly toward mission critical infrastructure. That scale shortens lead times for customers and helps the new Taiwan center fill quickly once it opens.

Foxconn also plans to build data center equipment with partners abroad. Earlier plans included work with SoftBank at a former Foxconn electric vehicle site in Ohio, part of a broader push to supply equipment for large AI programs in the United States. The global scope of those agreements gives Foxconn logistical options as bottlenecks shift between regions.

Energy, cooling and sustainability

Large AI clusters concentrate power and heat. That raises familiar questions about how sites connect to the grid, how they source cleaner electricity, and how they handle water. Operators in Taiwan have been expanding use of power purchase agreements for wind and solar, and they are pursuing heat reuse and chiller upgrades to reduce waste. A move to direct liquid cooling cuts fan energy, improves heat capture, and makes high GPU density practical.

Liquid systems need careful design and high quality controls to avoid downtime. Service teams must be trained to handle quick disconnects, coolant chemistry, and leak detection. The upside is more predictable thermal performance and a better PUE across the year, including during the hot summer months common in many parts of Asia.

What this means for Nvidia and the AI market

For Nvidia, the project does two things at once. It accelerates deployment of the Blackwell family into production and it expands the market for compute rental models. The arrangement deepens Nvidia’s ties to a key manufacturing partner, while opening a path for customers to access top tier GPUs without running procurement cycles or building new halls.

For buyers, the timing lines up with a wave of model updates. Enterprises planning to refresh from previous generation accelerators can move workloads to GB300 instances with higher memory bandwidth and better performance per watt. Research teams can reserve time on the cluster for training runs that would be impractical to schedule on public clouds during peak demand. The added capacity also aids national programs that want sovereign AI resources inside Taiwan.

Timeline, investment and what to watch

Foxconn says the Taiwan site will be ready in the first half of 2026. The company has board approval to procure equipment for AI compute clusters and a supercomputing center through late 2026, suggesting a staged build out and room to expand if demand materializes quickly. The 27 MW project, paired with an annual investment plan of 2 to 3 billion dollars for AI, positions Foxconn to grow both as a supplier of AI racks and as a service operator through Visonbay.ai.

Customers will be watching for details on availability of GB300 racks, the service catalog that Visonbay.ai plans to offer, and connectivity options into regional networks. The center’s performance on efficiency and uptime will set a benchmark for future Blackwell deployments in Asia, where many sites are now planning liquid cooled rooms and modular designs tailored for GPU clusters.

Key Points

  • Foxconn and Nvidia are building a 1.4 billion dollar supercomputing center in Taiwan, scheduled to be ready by the first half of 2026.
  • The site will be Taiwan’s largest advanced GPU cluster and the first in Asia to run on Nvidia’s Blackwell GB300 platform.
  • The data center is rated at 27 MW and will be developed and operated by Foxconn’s unit Visonbay.ai.
  • Nvidia promotes compute rental as a flexible way to access top tier GPUs, a model Visonbay.ai plans to offer.
  • Foxconn can manufacture about 1,000 AI racks per week and plans to increase output next year.
  • Spencer Huang of Nvidia said the companies are working together to apply AI in factories and manufacturing lines.
  • Foxconn plans to invest 2 to 3 billion dollars annually in AI to meet rising demand in 2026 and beyond.
  • The project highlights Taiwan’s growing role in advanced compute while raising new focus on energy, cooling, and efficiency goals.