Louise Matsakis of Semafor reported an important scoop this week: the White House’s upcoming executive order on AI may require cloud computing platforms to share customer information with regulators. If this scoop turns out to be true, this EO will basically apply a “know your customer” (or KYC) type scheme to the realm of AI, effectively turning clouds into banks.

Is this a good approach?

While I’m not generally in favor of (too much) regulation, if we, as a society, have agreed on an intention to regulate (which appears to be the case with generative AI), then it’s better to use analogous rules that are already working elsewhere and fit them to the existing industry landscape than to come up with something totally new.

So if the goal is to prevent bad actors or foreign adversaries from accessing AI computing power, as training and inference chips become strategically critical resources, then yes, I think applying this KYC-like scheme to the clouds is reasonable and practical. Let's expand on why.

Image credit: Unsplash / Tabrez Syed via Semafor

Regulating Within A Well-Baked Industry

The cloud computing industry is maturing and the landscape is becoming increasingly well-baked. This is not to suggest that competition isn’t still fierce and there won’t be more innovation and growth. But the key players – the hyperscalers like AWS, Azure, GCP, Oracle, and to some extent AliCloud, Huawei Cloud, and Tencent Cloud – are well-established. (See my previous post on the global data center footprint comparison between the American and the Chinese clouds.)

This industry maturity makes regulating certain aspects of generative AI more straightforward, because almost all users and builders of future AI applications will have to access computing resources via one or two of these massive cloud platforms. Sure, there will be the occasional deep-pocketed or compliance-sensitive enterprise that spends the money and manpower to buy and own GPUs and build its own AI training infrastructure. But, by and large, companies that want to build or use AI will rent the resources from the cloud.

This is good news for pragmatic, well-intentioned regulators for two reasons.

First, there is only a small universe of hyperscalers that matters. Unlike banking, where there are more than 4,000 banks in the US and 36 of them have assets of over $100 billion, there are literally only a handful of clouds that are relevant in the AI age. We can list them all here: AWS, Azure, GCP, Oracle, IBM, and a few upstarts in the Nvidia orbit, like CoreWeave and Lambda Labs. This is a great setup that makes enforcement practical. (I’m, of course, leaving out the Chinese clouds since they are outside the jurisdiction of US regulators.)

Second, most of these hyperscalers already serve large banks and other financial services institutions as their IT infrastructure providers, so they are familiar with the complex set of regulatory compliance requirements that the financial industry must adhere to. Banks are arguably the most demanding, but also the most lucrative, customers for cloud platforms. On the technical side, banks require the highest level of data accuracy, consistency, and security, since they store people’s money. On the compliance side, they need highly granular capabilities to meet various audit requirements, like KYC. It is a monumental undertaking to get a bank to move to the cloud, but once they do, they stay for the long haul. That’s why AWS touts its work with HSBC and Standard Chartered, Azure does the same with BlackRock and RBC, and GCP highlights its collaboration with Goldman Sachs and Deutsche Bank. Banks are the customers that bring in more customers from all industries.

Hyperscalers are already good at compliance. They have to be for business reasons. Routing AI-related regulatory concerns through the clouds by tapping into their existing compliance processes, perhaps with some small additions here and there, is the most effective and least cumbersome way to reach regulatory goals. This approach would have prevented embarrassing loopholes, like Chinese tech companies on the US entity list simply renting Nvidia GPUs from various cloud service providers after they were barred from buying those chips.

I would even venture to say that requiring cloud platforms to share customer information with regulators would be good for new AI startups too. With AI being regarded as a strategic capability for national competitiveness, geopolitics is always in the air, and which customer from which country is using which AI service is always under scrutiny. Putting the compliance onus on the hyperscalers or specialized AI clouds would remove a huge burden that few startups have the resources to bear.

Possible Drawbacks

Just like any regulatory approach, regulating AI through the clouds has its drawbacks. Here are a couple I can think of.

Compute Threshold Hard to Draw: if the amount of compute used by a customer is what triggers reporting from the cloud provider to the government, drawing the right line can be difficult, if not impossible. As Matsakis noted in her reporting, with the cost of compute to train AI models continuing to come down, a compute threshold is too fast-moving a target. In my view, if the regulation’s goal is to preemptively prevent AI threats, especially from foreign actors, then a compute threshold is not the right trigger. In this case, the “who” is the most important factor. If a terrorist organization from the Middle East or a SenseTime (or any other blacklisted Chinese company) runs even a tiny workload using AI chips racked in an AWS data center in the UAE or South Africa, wouldn’t an American regulator want to know?

The cleanest way this regulation would work is a constant cross-checking process between the hyperscalers’ customer lists and the Commerce Department’s entity list, the Treasury Department’s OFAC list, the State Department’s Foreign Terrorist Organizations list, and other similar lists the US government currently maintains. Blacklisted entities aren’t stupid, of course, and are already using subsidiaries and shell companies to obfuscate their identities when trying to access sanctioned computing resources in the cloud. A random new customer that appears out of nowhere and starts using AI compute resources should trigger an automatic report. Enforcement will require extra vigilance and cooperation between the hyperscalers and the regulators.
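To make this cross-checking idea concrete, here is a minimal sketch in Python of what such a screening pass could look like. Everything in it is hypothetical and illustrative: the `WATCHLIST` contents, the `Customer` fields, the 30-day “new account” window, and the `needs_report` logic are stand-ins I invented for this post, not any real hyperscaler process or government API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative watchlist; a real system would ingest the Commerce entity
# list, Treasury's OFAC SDN list, State's FTO list, and similar sources.
WATCHLIST = {"blacklisted networks ltd", "example shell holdings"}

@dataclass
class Customer:
    name: str
    country: str
    account_created: datetime
    gpu_hours_last_30d: float  # AI accelerator usage in the past 30 days

def needs_report(c: Customer, now: datetime) -> bool:
    """Decide whether a customer should be reported to regulators.

    Two triggers, mirroring the argument above:
    1. Identity: any watchlist match, regardless of workload size.
    2. Anomaly: a brand-new account that immediately consumes AI compute.
    """
    if c.name.lower() in WATCHLIST:
        return True  # the "who" trigger: even a tiny workload counts
    is_new_account = now - c.account_created < timedelta(days=30)
    if is_new_account and c.gpu_hours_last_30d > 0:
        return True  # a random new customer appearing out of nowhere
    return False

if __name__ == "__main__":
    now = datetime.utcnow()
    roster = [
        Customer("Example Shell Holdings", "AE", now - timedelta(days=3), 12.5),
        Customer("Established Retailer", "US", now - timedelta(days=900), 4000.0),
    ]
    for c in roster:
        if needs_report(c, now):
            print(f"file report for {c.name}")
```

A production system would obviously need fuzzy name matching and beneficial-ownership resolution to catch the shell-company obfuscation described above; exact string matching is used here only to keep the sketch short.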

The Big Get Bigger: this may sound counterintuitive, but more AI regulatory requirements placed on the hyperscalers will only make them stronger in the marketplace. These already-big players will get bigger. There will be less room for new entrants to disrupt this market, unless they have some special relationship with, and backing from, an existing big player, like the relationship between CoreWeave and Nvidia, the AI kingmaker. It is not surprising that this “KYC the cloud” idea has been pushed by Microsoft and OpenAI; being the regulatory targets in this case benefits them, the incumbents.

This is a classic example of “regulatory capture”, where regulation empowers the incumbents, promotes more rent-seeking, reduces competition, and produces net-negative effects for society. I don’t have a good solution for this drawback. It has happened time and time again in the US, given the prominent role that corporate lobbying plays in the American lawmaking process. Benchmark’s Bill Gurley gave a compelling presentation a couple of weeks ago on this very subject, citing past examples of regulatory capture in the telecom and pharmaceutical industries, as well as the impending AI regulations being pushed by Sam Altman and others.

My view sits somewhere in the middle. I don’t think zero regulation on AI is right. I don’t think lots of regulations that clearly only benefit the incumbents are right either. There are many bone-headed, counterproductive ways to regulate AI. However, requiring some customer reporting transparency from the hyperscalers and treating the clouds like banks is not one of them.

AI Is Turning Cloud Computing Into Banks

Louise Matsakis of the American tech publication Semafor reported an important scoop this week: the White House’s upcoming executive order on AI may require cloud computing platforms to share customer information with regulators. If this news is true, the effect of this executive order would be to implement a “know your customer” (KYC) type scheme in the AI space, regulating cloud platforms like banks.

Is this a good approach?

While I’m usually not in favor of (too much) regulation, if a society has already reached consensus on the necessity of and intention behind regulation, which appears to be the current attitude toward generative AI, then the best approach is to take analogous rules that are already working in other fields and adapt them to the existing industry landscape, rather than inventing a new set of rules from scratch.

So if the ultimate goal of regulation is to prevent bad actors or hostile foreign organizations from accessing AI computing power, now that training and inference chips are regarded as critical strategic resources, then I think applying this KYC-style approach to the clouds is a reasonable and practical method. This post will unpack why.

Image credit: Unsplash / Tabrez Syed via Semafor

Regulating Within A Mature Industry

The cloud computing industry is maturing, and the overall landscape is increasingly stable. This is not to say that competition is no longer fierce, or that there won’t be more innovation and growth. But the key players – AWS, Azure, GCP, Oracle, and to some extent the Chinese vendors like AliCloud, Huawei Cloud, and Tencent Cloud – are all firmly established. (See my previous post comparing the global data center footprints of the American and Chinese clouds.)

This industry maturity makes regulating certain aspects of generative AI simpler, because almost all users and builders of future AI applications will have to access computing power through one or two of these large cloud platforms. Of course, there will occasionally be a few deep-pocketed or compliance-sensitive enterprises willing to spend serious money and manpower to buy and own GPUs and build their own AI training infrastructure. But on the whole, most companies that want to build AI applications will rent resources from the cloud.

This is good news for pragmatic, well-intentioned regulators, for two reasons.

First, the cloud platforms that truly carry weight and deserve attention can be counted on one hand. Compare this with the financial world: the US has more than 4,000 banks of all sizes, 36 of which hold over $100 billion in assets, yet in the AI era only a few cloud platforms really matter. We can list them all: AWS, Azure, GCP, Oracle, IBM, and a few upstarts in the Nvidia orbit, such as CoreWeave and Lambda Labs. This state of the industry makes enforcing regulation practical and feasible. (Of course, I’m excluding the Chinese cloud vendors, since they fall outside the jurisdiction of US regulators.)

Second, most of these hyperscalers already provide the IT infrastructure for large banks and other financial services institutions, so they are familiar with the complex regulatory compliance requirements the financial industry must follow. Banks are without question the most demanding, but also the most lucrative, customers for cloud platforms. On the technical side, banks demand extremely high data accuracy, consistency, and security, because what they store is customers’ money. On the compliance side, they need to efficiently pull highly granular information to satisfy various audit requirements, such as KYC. Moving a bank onto the cloud is an enormous undertaking, but once it’s there, it becomes a long-term customer that won’t easily move away. That’s why AWS touts its work with HSBC and Standard Chartered, Azure makes the same pitch with BlackRock and RBC, and GCP highlights its collaboration with Goldman Sachs and Deutsche Bank. Banks are the customers that attract more customers from every industry.

Hyperscalers are already very good at compliance. For business reasons, they have to be. Channeling AI-related regulatory requirements through the cloud platforms by tapping into their existing compliance processes, perhaps with some small additions here and there, is the most effective and least cumbersome way to reach regulatory goals. Had this approach been adopted earlier, it could have prevented the embarrassing loopholes that have already surfaced, such as Chinese tech companies on the US entity list simply renting Nvidia GPUs from various cloud service providers once they were barred from buying high-end chips.

I would even venture to say that requiring cloud platforms to share customer information with regulators would actually benefit new AI startups too. With AI regarded as a strategic asset for every country’s competitiveness, geopolitical factors are everywhere, and which customer from which country is using which AI services and compute is always under scrutiny. Placing the compliance responsibility on the hyperscalers or specialized AI clouds would lift a huge compliance burden off startups.

Possible Drawbacks

No regulatory approach is perfect, and regulating AI through cloud computing has its flaws too. Here are two that I can think of.

Drawing A Compute Line Is Hard: if the amount of compute a customer uses is the trigger for a cloud vendor to report customer information to the government, then deciding how much compute warrants a report, i.e., where exactly to draw that line, is very difficult, if not impossible. As Matsakis pointed out in her reporting, as the compute cost of training AI models keeps falling, that line moves far too fast. In my view, if the goal of regulation is to preemptively head off AI threats, especially threats from abroad, then compute is not the right trigger. In that case, the “who” is the most important factor. If a terrorist organization in the Middle East, or a company already on the US blacklist like SenseTime, uses AI chips in an AWS data center in the UAE or South Africa, no matter how small the workload, wouldn’t a US regulator want to know?

The cleanest way to implement this regulatory regime is to constantly cross-check the hyperscalers’ customer lists against the Commerce Department’s entity list, the Treasury Department’s OFAC list, the State Department’s Foreign Terrorist Organizations list, and other similar lists the US government currently maintains. Of course, blacklisted organizations are not stupid; many are already using subsidiaries and shell companies to obscure their identities in order to obtain compute resources from the major cloud platforms. A new customer that pops up out of nowhere and immediately starts consuming AI compute resources should automatically trigger a report. Actual enforcement will require close cooperation and extra vigilance between the hyperscalers and the regulators.

The Big Get Bigger: this may sound counterintuitive, but imposing more AI regulatory requirements on the hyperscalers will only make them stronger in the marketplace. The big players will get bigger. Startups hoping to break into this AI cloud computing market will have even less room, unless they have some special relationship with, and backing from, an existing big player, like the relationship between CoreWeave and Nvidia, the kingmaker of the AI world. It’s no surprise that this “KYC the cloud” proposal was pushed jointly by Microsoft and OpenAI; in this case, being the targets of regulation actually works in the incumbents’ favor.

This is a classic example of “regulatory capture”, where regulation empowers the existing big players, encourages more rent-seeking, reduces competition, and produces a net-negative effect on society. I don’t have a good solution for this drawback either. In the US, this has happened time and time again, because corporate lobbying occupies a hugely influential position in the American legislative process. Bill Gurley, a partner at the well-known VC firm Benchmark, gave a compelling presentation on this very subject a few weeks ago, citing past examples of regulatory capture in the telecom and pharmaceutical industries, as well as the impending AI regulations being pushed by Sam Altman and others.

My view sits somewhere in the middle. I don’t think zero regulation of AI is right. Nor do I think a pile of regulations that clearly only benefit the existing big players is right. Plenty of foolish, counterproductive ways of regulating AI are already being debated in various countries. But requiring the hyperscalers to provide some customer information and transparency, and treating cloud platforms like banks, is not one of them; it’s a good idea.