Ever since Large Language Models (LLMs) became synonymous with AI, thanks to ChatGPT, the consensus has been that LLMs created in China lag behind those from the US, and that the gap will persist. For a variety of reasons – from tighter government regulations, to a lack of quality training data, to difficulties in acquiring the best Nvidia GPUs (sanctions!) – I agree with this view.

However, this consensus obscures China’s underestimated convening power in AI. This power was on full display last weekend in Beijing.

What Is Convening Power?

I got my firsthand taste of convening power when I was working at the White House under President Obama.

Convening power, in a (colloquial) nutshell, is an institution’s ability to gather every person who matters on an issue and put them in a room, no matter how much these people may dislike or rival each other. As the symbol of American power, the White House wields massive convening power. This power is granularly applied to every issue, every industry, every country, and down to every room in the building.

During the Obama years, we gathered big bank CEOs to address the Great Financial Crisis, health insurance CEOs to reform healthcare, and foreign leaders to strengthen bilateral relations. Depending on the importance of the problem, these gatherings would take place in the Oval Office, the Roosevelt Room, or the State Dining Room (very important), or a conference room in the adjacent office building (less important).

When trying to solve thorny issues with many competing factions, getting all the parties into a single room and meeting cordially is oftentimes the hardest part. The White House often plays that convener role. When the White House invites you, you show up, no matter who else is there, and everyone's on their best behavior. That’s convening power at the highest level.

In the AI context, the Biden administration exercised this convening power in early May with a meeting of the CEOs of Microsoft, Alphabet, OpenAI, and Anthropic. They were given the Roosevelt Room treatment (very important).

White House AI meeting in the Roosevelt Room on May 4th.

You don’t need to have worked in or been to the White House to intuitively grasp how convening power works. Nor is the White House the only or best example of convening power. During the Panic of 1907, John Pierpont Morgan led the rescue effort and hashed out the plan with other bankers in his newly-built library in New York City; Morgan’s library, not the White House, was the pinnacle of convening power for that crisis.

JP Morgan’s library shortly after its completion

Convening power is quite similar, in fact, to the familiar Chinese concept of “face” or mianzi. The institution that commands the most convening power is, effectively, the one that commands the most “face giving” (给面子).

In the field of AI, one organization that seems to command a surprising amount of convening power (or “face giving”) is the Beijing Academy of Artificial Intelligence (BAAI).

Beijing Academy of AI Conference

Last week, the BAAI held its 5th annual AI conference in Beijing. The speaker lineup was a who’s who of the generative AI industry and scientific community. The fact that the conference took place on a Friday and Saturday deterred few from showing up. (In fact, hosting events on weekends, from small meetups to large conferences, is a common occurrence and distinct cultural trait of the Chinese engineering and startup community – weekdays are for working on your company, not going to events that look like work.)

The conference organizers proudly boasted the lineup:

  • Four Turing award winners (Geoffrey Hinton, Yann LeCun, Joseph Sifakis, and Andrew Yao)
  • Max Tegmark (MIT professor and president of the Future of Life Institute, which called for a six-month pause on AI development that Elon Musk signed)
  • Stuart Russell (Berkeley professor, whose seminal AI textbook every computer science student who studied AI has read or referenced at some point)
  • Co-founders of buzzy AI startups like Anthropic (Christopher Olah) and Midjourney (David Holz)
  • Sam Altman (CEO of OpenAI, whose celebrity power drew media attention, even from the Wall Street Journal, to the otherwise very technical and academic event)
  • Many other scientists, researchers, and practitioners from the likes of Nvidia, Meta, Google, etc.

What’s noteworthy is that every speaker either attended in person (like Tegmark and Russell) or spoke live remotely (like Altman and LeCun), even though speakers were given the option to pre-record. The head of BAAI, professor Huang Tiejun of Peking University, could barely contain his (justifiably) proud smile when introducing Yann LeCun, sharing that LeCun was dialing in live from France, where the local time was 4am. (This revelation was met with loud cheers and applause from the audience.)

That’s the ultimate gesture of “face giving.”

Professor Huang introduces Yann LeCun and Max Tegmark. You can watch the session recording: https://2023-live.baai.ac.cn/2023/live/?room_id=10009090

The BAAI – a nonprofit research lab with the backing of the Ministry of Science and Technology and the Beijing city government – is punching well above its weight. Huang shyly suggested during his remarks that there is probably no other AI conference in the world that can convene a speaker lineup of the same caliber and star power; I think he is probably right.

Convening Power is Important in AI Regulations

If you are pessimistic about China’s growing opaqueness and skeptical of its AI capabilities, this BAAI conference may have surprised you. And if you are on the pro-regulation side of the generative AI debate, then the BAAI’s convening power is especially worth noting.

Being able to gather the best technical and scientific minds (as opposed to business and policy minds) is at the heart of getting AI regulations right. That’s why Altman, who's been flying around the world advocating for regulations, felt the need to speak live to BAAI’s audience and pitch talented Chinese AI researchers to contribute to the cause, even though his own company's products are deliberately made unavailable in China.

No serious person who believes that a global AI regulatory framework is warranted should also think that such a framework would work without China. If China chooses to actively lead this global regulatory undertaking, its convening power may be flexed in ways that few western policymakers understand or appreciate. The few remaining optimists of the state of US-China relations should almost cheer on the rapid emergence of generative AI as an existential threat that may force the US and China to cooperate on some level (in ways that climate change should but really hasn’t). The specter of robots killing us all may be our best chance of seeing Biden and Xi in a room together.

In no way am I suggesting that the BAAI somehow holds the same convening power as the White House. I’m sure that if the Biden administration wanted to gather the same group of scientists in the Roosevelt Room, they would all show up.

Just because you command convening power does not mean you always wield it. Thus far, the Biden White House has taken a “light touch” on all things AI. Besides the four-CEO meeting I mentioned above and $140 million from the National Science Foundation to build seven more AI research institutes, the only other public gesture is endorsing the upcoming DEFCON 31 hacker conference as the designated venue to publicly assess the AI safety of leading foundation models.

Personally, I think this grassroots approach – tapping into the energy and hands-on participation of developers, hackers, and businesses – is quite interesting and distinct from the Chinese and EU approaches, which are decidedly top-down. Every well-meaning regulator is trying to thread the needle between AI safety and AI innovation (not to mention nation-to-nation competition). No one knows what the perfect balance is.

Whether it is the “EU way”, the “Chinese way”, or the “American way”, the end result depends on who shows up. And that is more easily observable. We now know who showed up to the BAAI Conference in Beijing. In a few months, we will see who shows up to do some “face giving” at the AI Village at DEFCON 31 in Las Vegas, and who just sends a pre-recorded video.

