One interesting aspect of the fast-evolving landscape of generative AI is the global consensus on the need to regulate it. This was not the case at the dawn of the Internet or in the early days of mobile social media. Perhaps precisely because regulators regret their "go with the flow" attitude during those two previous platform shifts, now that generative AI is ushering in a new one (at least, that is the consensus in tech circles), the desire to regulate has never been stronger.
Even though it has only been nine months since ChatGPT's launch, we already have a good sense of how the three largest economic bodies – the EU, China, and the US – plan to regulate generative AI. There are many good analyses out there on the substance of these regulations. So instead, I will compare the "why" behind them: the attitudes and motivations of European, Chinese, and American regulators.
It is arguably more important for both large companies and startups to develop a sense of the attitudes and incentives that drive regulators, rather than simply knowing what the rules say. Attitudes and incentives are, after all, what portend future behaviors, and the future of generative AI is by no means clear or settled.
EU: Chasing the Brussels Effect
EU regulators were the first to start regulating generative AI, and that "need to be first" is driven by their desire to achieve the so-called "Brussels effect." This is not speculation on my part; EU lawmakers have explicitly cited the Brussels effect as a key reason to quickly pass the EU's AI Act, while spurring AI innovation is a lesser, secondary goal. That's why, when the European Parliament passed the AI Act in June, it took pains to tell everyone that this is the "world's first comprehensive AI law."
Why is the Brussels effect such a big deal? At the risk of sounding too cynical, it is the most direct way for EU regulators to gain power and relevance globally. And they've done it at least once before, with GDPR (the General Data Protection Regulation).
Let's illustrate the Brussels effect with GDPR. The effect manifests itself in two ways: de facto (in practice) and de jure (by law).
By being the first, most comprehensive, and most stringent set of rules on data privacy, GDPR forced every company, big and small, to comply, as long as it had a non-trivial number of European users and a website that might collect cookies. That's why we all see those accept-or-reject cookie notifications – an example of GDPR's de facto Brussels effect. Even though these companies don't necessarily have to comply in the same way for their users in Brazil or India, internally it is more straightforward to comply with the toughest standard everywhere than to maintain different levels of compliance for different countries. Both Facebook and Microsoft did exactly that in 2018 – applying GDPR standards to all of their users in all geographies. In practice, GDPR is a global standard, not just a European one.
GDPR also became the basis (or at least the inspiration) for other data privacy laws passed since, the best example being the California Consumer Privacy Act. This is an instance of the de jure effect, where the "first to market" law becomes the influential foundation for future similar laws.
As you can see, to achieve the full Brussels effect, you have to be both the first and the toughest. And that’s exactly what is happening with the EU AI Act.
The Act's risk-first approach – enumerating lists of "unacceptable risks" and "high risks" that cover a wide swath of industries and use cases – is arguably the toughest framework we've seen yet. It's so tough that Stanford's Center for Research on Foundation Models has concluded that none of the foundation models on the market, from GPT-4 to LLaMA, comes anywhere close to compliance. EU lawmakers also appear to be dictating what they deem unacceptable or high risks without much input from, or collaboration with, industry players, in stark contrast to both China's and the US's approaches. Procedurally, the European Commission is pushing hard to wrangle all the EU member states into adopting the AI Act before the end of this year, so the rules can take effect in early 2024.
We won't know for a few years whether the EU AI Act will achieve the Brussels effect; this analysis from Brookings is skeptical of that potential. But there is little doubt that the EU regulators' main motivation is to relive the power trip they once felt with GDPR, while promoting technology innovation comes in at a distant second.
China: Regulate Like A Startup
China has also solidified its generative AI regulations at record speed, though that speed is not motivated by a desire to achieve the Brussels effect.
The Cyberspace Administration of China (CAC) first released a set of draft rules in April. Three months later, after incorporating comments and input from industry players, a set of interim (or provisional) rules was released and scheduled to take effect on August 15. Regulations that will set the limits and guardrails on how 1.4 billion people use generative AI came together in about four months. Six other agencies, in addition to the CAC, also signed on, giving the rules more enforcement teeth and consistency.
This speed of execution is more like that of a startup. The "startup-like" quality of Chinese tech regulators was well articulated by Kendra Schaefer in a recent episode of the Sinica podcast. And because Chinese regulators have previously released rules on synthetic AI and deepfakes, they are not starting from scratch (in contrast to their American counterparts), so they have the foundation to move at "startup speed."
Many of the rules in the first draft were very stringent, some impossibly so, but the regulators put them out anyway to solicit feedback (or pushback). The consensus analysis of the end result, released earlier this month, is that the rules are less stringent, more reasonable, and somewhat watered down. This post from Pekingnology has a solid comparison of the changes between the draft and the soon-to-be-promulgated interim version. Matt Sheehan also did a good, rapid-fire thread on the differences:
If I were to sum up Chinese regulators’ attitude in one line, it would be: startup pragmatism with redlines.
ChatGPT's release made it painfully obvious that China is still behind the US in AI innovation, and US sanctions on high-end GPUs are widening that gap. But left unregulated, generative AI would quickly touch many of the redlines that no company can cross in China – anything whose products or outputs could shape or influence public opinion. All Chinese entrepreneurs and executives are well aware of where those lines are. Most multinationals are also aware of them, and used to put up with them in order to access the Chinese market, though they are less willing to do so these days.
So Chinese regulators need to thread the needle – acting fast without acting crudely – to reinforce the redline guardrails without dampening innovation. The initial draft in April was most certainly crude, but it got the process started. The current interim version limits the compliance hurdles to generative AI services with the capacity to guide public opinion or mobilize society – reinforcing the redline while leaving room to innovate in other use cases, like enterprise B2B software. When this version takes effect in August, edge cases will pop up and regulators will keep changing the rules as they see fit.
The rulemaking around generative AI is by no means finished in China, but the attitude and iterative approach of the rulemakers is rather clear.
US: Laissez-Faire and Learn
The attitude of US regulators and lawmakers, thus far, is what I call: laissez-faire and learn. Even though there has been a lot of activity – from congressional hearings and White House meetings to a hodgepodge of bill proposals – nothing close to a concrete set of rules has been put forward by either the legislative or executive branch to regulate generative AI.
Last month, Senate Majority Leader Chuck Schumer announced a framework he is personally pushing to help Congress come up with comprehensive legislation on AI. However, the most tangible next step is a series of listening sessions in the fall, where members of Congress can learn more about AI's potential and risks. Unlike its Chinese counterparts, Congress, as Schumer himself admitted, is "starting from scratch" when it comes to legislating generative AI.
Last week, the White House convened the executives of seven leading companies that make foundation AI models (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, OpenAI) to commit to making their AI safe. The pledge basically amounted to these companies saying “we promise to do the right thing”, while the resulting eight commitments are mostly things these companies are already doing.
I don't think this lack of concrete regulatory action is necessarily a bad thing or a dereliction of duty. I can think of a few reasons why US regulators are currently content to not do much.
First, it is entirely plausible that legislators and executive branch officials honestly don’t know enough about AI to make rules. In that case, learning before regulating is the right thing to do.
Second, there is real fear and insecurity in Washington that regulation could kill innovation and diminish America's technological lead in AI, especially vis-à-vis China. While this fear may be unfounded, as this Foreign Affairs article argued, it is a powerful narrative that US regulators are very susceptible to. It is also a line of argument that tech incumbents (like the ones who showed up at the White House to make those voluntary commitments) are likely pushing. And why not? It has worked before with social media. When Mark Zuckerberg showed up in Congress in 2018 to try to stave off regulation of Facebook, one of his main arguments was the fierce competition he was feeling from Chinese Internet companies and that he was the lesser of two evils (do you want Xi or Zuck?).
Third, pre-emptive regulation is perhaps just not in the American DNA. We like to wait for problems to get really bad before we regulate. The Great Financial Crisis had to happen before Congress passed Dodd-Frank. Even then, the regulation was watered down over time, enough to let another financial crisis, albeit a more minor one, happen with SVB and First Republic Bank.
This decidedly laissez-faire attitude certainly has many drawbacks, and many voices from the media and think tank worlds will harshly criticize it. It's worth keeping in mind, though, that the media industry in particular feels very threatened by generative AI – it is already filing lawsuits – and would like to see strong regulations ASAP to protect itself. However, as I noted in a previous post, the US's lighter-touch, grassroots-oriented way is distinct from the Chinese and EU approaches, both of which are top-down. And it is just as legitimate an approach as any other for striking the elusive balance between innovation and safety (broadly defined).
Whether you are a large tech firm with plenty of legal resources or a young startup, it’s hard enough to stay updated on the latest regulatory movements in the EU, China, and the US, let alone any new variations that may pop up from India, Japan, Brazil, or Abu Dhabi, each trying to exert their own sovereignty over generative AI. Thus, knowing the attitudes and the “why” behind different national regulators’ actions is a helpful shorthand.
If you forget everything you’ve read so far, just remember: the US is the most open, the EU is the least open, and China (knowing its redlines) is somewhere in between.