EU, China, US: Three Different Attitudes Toward AI Regulation

One interesting aspect of the fast-evolving landscape of generative AI is the global consensus on the need to regulate it. This was not the case at the dawn of the Internet or in the early days of mobile social media. Perhaps precisely because regulators regret their “go with the flow” attitude toward those two previous platform shifts, now that generative AI is ushering in a new one (at least, that is the consensus in tech circles), the desire to regulate has never been stronger.

Even though it has only been nine months since ChatGPT first launched, we already have a good sense of how the three largest economic bodies – the EU, China, and the US – plan to regulate generative AI. There are many good analyses out there on the substance of these regulations. So instead, I will compare the “why” behind the attitudes and motivations of European, Chinese, and American regulators.

It is arguably more important for both large companies and startups to develop a sense of the attitudes and incentives that drive regulators than to simply know what the rules say. Attitudes and incentives are, after all, what portend future behavior, and the future of generative AI is by no means clear or settled.

EU: Chasing the Brussels Effect

EU regulators are the first to start regulating generative AI, and that “need to be first” is driven by their desire to achieve the so-called “Brussels effect.” This is not speculation on my part; EU lawmakers have explicitly cited the Brussels effect as a key reason to quickly pass the EU’s AI Act, while spurring AI innovation is a lesser, secondary goal. That’s why, when the EU parliament passed the AI Act in June, it took pains to tell everyone that this is the “world’s first comprehensive AI law.”

Why is the Brussels effect such a big deal? At the risk of sounding cynical: it is the most direct way for EU regulators to gain power and relevance globally. And they’ve done it at least once before, with GDPR (the General Data Protection Regulation).

Let’s illustrate the Brussels effect with GDPR. The effect manifests itself in two ways: de facto (in practice) and de jure (by law).

By being the first, most comprehensive, and most stringent set of rules on data privacy, GDPR forced every company, big and small, to comply as long as it had a non-trivial number of European users and a website that collects cookies. That’s why we all see those accept-or-reject cookie notifications – an example of GDPR’s de facto Brussels effect. Even though these companies don’t necessarily have to comply in the same way for their users in Brazil or India, internally, it is more straightforward to comply up to the toughest standard than to maintain different levels of compliance for different countries. Both Facebook and Microsoft did exactly that in 2018 – applying GDPR standards across all their users in all geographies. In practice, GDPR is a global standard, not just a European one.
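To make the “comply up to the toughest standard” logic concrete, here is a minimal sketch in TypeScript. Everything in it is hypothetical (the policy names, country mappings, and functions are my own illustrations, not any company’s actual compliance code), but it shows why one strictest-standard code path is simpler to maintain than per-country branching:

```typescript
// Hypothetical consent policies, ordered from least to most stringent.
type ConsentPolicy = "none" | "optOut" | "gdprOptIn";

// Per-jurisdiction compliance: every new market adds a branch to maintain.
function policyByCountry(countryCode: string): ConsentPolicy {
  switch (countryCode) {
    case "DE":
    case "FR":
    case "IE":
      return "gdprOptIn"; // EU member states: explicit opt-in required
    case "BR":
      return "optOut"; // illustrative placeholder, not actual Brazilian law
    default:
      return "none";
  }
}

// De facto Brussels effect: one global code path, pinned to the strictest
// standard, so the per-country branching above becomes unnecessary.
const globalPolicy: ConsentPolicy = "gdprOptIn";

function canSetCookies(userConsented: boolean): boolean {
  // Under the strictest standard, cookies require explicit opt-in everywhere.
  return globalPolicy !== "gdprOptIn" || userConsented;
}

console.log(policyByCountry("BR")); // "optOut" under per-country branching
console.log(canSetCookies(false));  // false: no consent, no cookies, anywhere
console.log(canSetCookies(true));   // true: explicit opt-in granted
```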

GDPR also became the basis (or at least the inspiration) for other data privacy laws passed since. The best example is the California Consumer Privacy Act. This is an instance of the de jure effect, where the “first to market” law becomes the influential foundation for similar laws that follow.

As you can see, to achieve the full Brussels effect, you have to be both the first and the toughest. And that’s exactly what is happening with the EU AI Act.

The Act’s risk-first approach, which enumerates lists of “unacceptable risks” and “high risks” covering a wide swath of industries and use cases, is arguably the toughest framework we’ve seen yet. It’s so tough that Stanford’s Center for Research on Foundation Models has concluded that none of the foundation models on the market, from GPT-4 to LLaMA, comes anywhere close to compliance. EU lawmakers also appear to be dictating what they deem to be unacceptable or high risks without much input from, or collaboration with, industry players, in stark contrast to both China’s and the US’s approaches. Procedurally, the European Commission is pushing hard to wrangle all the EU member states into adopting the AI Act before the end of this year, so the rules can take effect in early 2024.
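To give a rough feel for that risk-first structure, here is another short TypeScript sketch. The tier names paraphrase the Act’s commonly described four tiers, and the use-case mappings and obligation summaries are my own loose illustrations, not the Act’s legal text:

```typescript
// The AI Act's risk-first structure, paraphrased into four tiers.
type RiskTier = "unacceptable" | "high" | "limited" | "minimal";

// Hypothetical mapping of use cases to tiers, for illustration only.
const exampleTiers: Record<string, RiskTier> = {
  "social scoring by governments": "unacceptable",
  "CV screening for hiring": "high",
  "customer-facing chatbot": "limited",
  "spam filtering": "minimal",
};

// Obligations scale with the assigned tier, not with the underlying model.
const obligations: Record<RiskTier, string> = {
  unacceptable: "prohibited outright",
  high: "conformity assessments, registration, human oversight",
  limited: "transparency and disclosure duties",
  minimal: "largely untouched",
};

for (const [useCase, tier] of Object.entries(exampleTiers)) {
  console.log(`${useCase} -> ${tier}: ${obligations[tier]}`);
}
```

One design consequence worth noting: the compliance burden follows the use case, which may help explain why general-purpose foundation models, which span many use cases at once, fare so poorly in the Stanford analysis above.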

We won’t know for a few years whether the EU AI Act will achieve the Brussels effect; this analysis from Brookings is skeptical of that potential. But there is little doubt that the EU regulators’ main motivation is to relive the power trip they once enjoyed with GDPR, while promoting technology innovation comes in a distant second.

EU parliament headquarters in Brussels

China: Regulate Like A Startup

China has also solidified its generative AI regulations at record speed, though that speed is not motivated by a desire to achieve the Brussels effect.

The Cyberspace Administration of China (CAC) first released a set of draft rules in April. Three months later, after taking in comments and input from industry players, a set of interim (or provisional) rules was released and scheduled to take effect on August 15. Regulations that would set the limits and guardrails on how 1.4 billion people use generative AI came together in about four months. Six other agencies, in addition to the CAC, also signed on to the rules to give them more enforcement teeth and consistency.

This speed of execution is more like a startup’s. This “startup-like” quality of Chinese tech regulators was well articulated by Kendra Schaefer in a recent Sinica podcast episode. Because Chinese regulators have previously released rules on synthetic AI and deepfakes, they are also not starting from scratch (in contrast to American regulators), so they have the foundation to move at “startup speed.”

Many of the rules in the first draft were very stringent, some impossibly so, but the regulators put them out anyway to get feedback (or pushback). The consensus analysis of the final version released earlier this month is that the rules are less stringent, more reasonable, and somewhat watered down. This post from Pekingnology has a solid comparison of the changes between the draft and the soon-to-be-promulgated interim version. Matt Sheehan also did a good, fast-twitch thread on the differences.

If I were to sum up Chinese regulators’ attitude in one line, it would be: startup pragmatism with redlines.

ChatGPT’s release made it painfully obvious that China is still behind the US in AI innovation, and US sanctions on high-end GPUs are widening that gap. But if left unregulated, generative AI would quickly touch many of the redlines that no company can cross in China, when it comes to products or outputs that could shape or influence public opinion. All Chinese entrepreneurs and executives are well aware of where those lines are. Most multinationals are also aware of them, and used to put up with them in order to access the Chinese market, though they are less willing to do so these days.

So the Chinese regulators need to thread the needle, acting fast but not crudely, in order to reinforce the redline guardrails without dampening innovation. The initial draft in April was most certainly crude, but it got the process started. The current interim version limits the compliance hurdles to generative AI services with the capacity to guide public opinion or mobilize society – reinforcing the redlines while leaving some room to innovate in other use cases, like enterprise B2B software. When this version takes effect in August, edge cases will pop up, and regulators will keep changing the rules as they see fit.
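As a hypothetical sketch of that scoping logic (the predicate below is my own simplification of the interim rules’ “capacity to guide public opinion or mobilize society” test, not the CAC’s actual legal language):

```typescript
// Hypothetical attributes of a generative AI service, for illustration only.
interface GenAIService {
  name: string;
  publicFacing: boolean;          // offered to the general public in China?
  canShapePublicOpinion: boolean; // simplification of the rules' key test
}

// As I read the interim rules, the heavy compliance hurdles attach only to
// services with the capacity to guide public opinion or mobilize society.
function needsFullCompliance(s: GenAIService): boolean {
  return s.publicFacing && s.canShapePublicOpinion;
}

const consumerChatbot: GenAIService = {
  name: "consumer chatbot",
  publicFacing: true,
  canShapePublicOpinion: true,
};

const b2bCodingAssistant: GenAIService = {
  name: "enterprise B2B coding assistant",
  publicFacing: false,
  canShapePublicOpinion: false,
};

console.log(needsFullCompliance(consumerChatbot));    // true: full compliance
console.log(needsFullCompliance(b2bCodingAssistant)); // false: room to innovate
```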

The rulemaking around generative AI is by no means finished in China, but the rulemakers’ attitude and iterative approach are rather clear.

US: Laissez-Faire and Learn

The attitude of US regulators and lawmakers, thus far, is what I call: laissez-faire and learn. Even though there has been a lot of activity – from congressional hearings and White House meetings to a hodgepodge of bill proposals – nothing close to a concrete set of rules has been put forward by either the legislative or the executive branch to regulate generative AI.

Last month, Senate Majority Leader Chuck Schumer announced a framework he is personally pushing to help Congress come up with comprehensive legislation on AI. However, the most tangible next step is a series of listening sessions in the fall for members of Congress to learn more about AI’s potential and risks. Unlike its Chinese counterparts, Schumer admitted, Congress is “starting from scratch” when it comes to legislating generative AI.

Last week, the White House convened the executives of seven leading companies that make foundation AI models (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, OpenAI) to commit to making their AI safe. The pledge basically amounted to these companies saying “we promise to do the right thing”, while the resulting eight commitments are mostly things these companies are already doing.

President Biden speaking on July 21 about commitments that seven companies made to manage the risks of artificial intelligence. Photo Credit: Kenny Holston/The New York Times

I don’t think this lack of concrete regulatory action is necessarily a bad thing or a dereliction of duty. I can think of a few reasons why US regulators are currently motivated to not do much.

First, it is entirely plausible that legislators and executive branch officials honestly don’t know enough about AI to make rules. In that case, learning before regulating is the right thing to do.

Second, there is real fear and insecurity in Washington that regulation could kill innovation and diminish America’s technological lead in AI, especially vis-à-vis China. While this fear may be unfounded, as this Foreign Affairs article argued, it is a powerful narrative that US regulators are very susceptible to. It is also a line of argument that tech incumbents (like the ones who showed up at the White House to make those voluntary commitments) are likely pushing. And why not? It has worked before with social media. When Mark Zuckerberg showed up in Congress in 2018 to try to stave off regulation of Facebook, one of his main arguments was the fierce competition he was feeling from Chinese Internet companies, casting himself as the lesser of two evils (would you rather have Xi or Zuck?).

Third, pre-emptive regulation is perhaps just not in the American DNA. We like to wait for problems to get really bad before we regulate. The Great Financial Crisis needed to happen before Congress passed Dodd-Frank. Even then, the regulation got watered down over time, just enough for another financial crisis, albeit a more minor one, to happen with SVB and First Republic Bank.

This decidedly laissez-faire attitude certainly has many drawbacks. Many voices from the media and think tank world will harshly criticize this approach. It’s worth keeping in mind, though, that the media industry in particular feels very threatened by generative AI, is already filing lawsuits, and would like to see strong regulations ASAP to protect itself. However, as I noted in a previous post, the US’s more light-touch, grassroots-oriented way is distinct from the Chinese and EU approaches, which are both top-down. And it is just as legitimate an approach as any other to strike the elusive balance between innovation and safety (broadly defined).

Whether you are a large tech firm with plenty of legal resources or a young startup, it’s hard enough to stay updated on the latest regulatory movements in the EU, China, and the US, let alone any new variations that may pop up from India, Japan, Brazil, or Abu Dhabi, each trying to exert its own sovereignty over generative AI. Thus, knowing the attitudes and the “why” behind different national regulators’ actions is a helpful shorthand.

If you forget everything you’ve read so far, just remember: the US is the most open, the EU is the least open, and China (knowing its redlines) is somewhere in between.
