TECHNOLOGY
Chinese Entities Utilize U.S. Cloud Services to Circumvent AI Export Restrictions
State-linked Chinese entities are reportedly accessing advanced U.S. artificial intelligence (AI) technologies and high-end chips through cloud services provided by companies like Amazon Web Services (AWS), despite stringent U.S. export restrictions. These revelations come from a review of over 50 tender documents from Chinese databases, which show that at least 11 Chinese entities have sought to procure restricted U.S. technologies or cloud services.
The U.S. government has placed significant restrictions on the export of high-end AI chips to China, citing concerns that they could enhance the Chinese military's capabilities. However, these regulations do not cover remote access to such technologies via cloud services, creating a loophole that Chinese organizations appear to be exploiting.
For instance, Shenzhen University, through an intermediary company, Yunda Technology Ltd Co, accessed AWS cloud servers powered by Nvidia's A100 and H100 chips, which are banned from export to China. The tender document reveals that the university spent approximately 200,000 yuan ($27,996) on this service for an unspecified project. Similarly, Zhejiang Lab, a research institute developing its own large language model (LLM), GeoGPT, planned to spend 184,000 yuan on AWS cloud computing services because local providers such as Alibaba could not supply sufficient computing power.
These practices have raised alarms in the U.S., with legislators and officials expressing concerns about the potential military and strategic advantages these technologies could afford China. Michael McCaul, chair of the U.S. House of Representatives Foreign Affairs Committee, highlighted the urgent need to close this loophole, which allows foreign entities to bypass export restrictions through remote cloud access.
In response, the U.S. Department of Commerce has been working on new regulations to tighten controls on cloud services. Proposed rules would require U.S. cloud service providers to verify users' identities and to report when customers train large AI models that could have malicious applications.