I’ve written many posts on the global public cloud ecosystem lately, from in-depth comparisons of the big ones (AWS, Alibaba Cloud, Azure, Google Cloud Platform) to a few of the less prominent but still very competitive ones (IBM, Oracle, Tencent).
All of these clouds spawned out of already-large tech companies, many of which built massive technical infrastructure to support their original businesses, then turned those resources into services to rent out in the form of a cloud. As I have discussed before, Google has always had the most global and technologically advanced infrastructure among its peers, because it needed to support a dominant search engine and many other services and apps that must be “always on” around the world. The one company whose technical requirements and demands most closely resemble Google’s is Facebook.
Just Not Interested
So why has Facebook never built a cloud business of its own? It’s definitely not because it doesn’t have the technology. In fact, Facebook has been incredibly successful at inventing new technologies across multiple layers of the technical stack to support its huge user base and new features, which are increasingly AI-intensive. There would certainly be companies willing to pay to use those technologies for their own use cases. Instead of monetizing them, Facebook has chosen to open source them, allowing anyone to use and modify them for free.
A few fairly well-known examples:
- Cassandra, RocksDB (databases)
- GraphQL (app server layer)
- React (front end)
- PyTorch (deep learning AI)
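To make the last item concrete, here is a minimal, purely illustrative PyTorch sketch (assuming PyTorch is installed, e.g. via `pip install torch`) showing the kind of deep learning building block that Facebook open sourced rather than sold:

```python
# Illustrative sketch: a tiny feed-forward network built with PyTorch,
# Facebook's open-sourced deep learning library. Shapes and layer sizes
# here are arbitrary, chosen only for demonstration.
import torch
import torch.nn as nn

# A two-layer network mapping 8 input features to 2 output scores.
model = nn.Sequential(
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Linear(16, 2),
)

# A batch of 4 random example inputs, just to run the model end to end.
batch = torch.randn(4, 8)
scores = model(batch)
print(scores.shape)  # torch.Size([4, 2])
```

Anyone can take this library, modify it, and build a product on top of it, all without paying Facebook a cent.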
Its preference for open source, as opposed to direct monetization, extends to the very heart of any cloud platform -- data centers. In 2011, when Facebook brought its brand-new data center in Prineville, Oregon online, it also open sourced that data center’s entire design, from custom-built servers to a cooling system driven by cold outside air, as opposed to the electric-powered water chillers of traditional data center design.
The open sourcing of this design led Facebook to initiate the Open Compute Project, a foundation to shepherd and evolve these designs further with other tech companies. Among the large public cloud providers, Alibaba, Google, IBM, Microsoft, and Tencent are all members of the Open Compute Project. Amazon, the market leader, is not. It’s quite possible that these members are all leveraging the designs Facebook initially came up with to build their new data centers today. And Facebook doesn’t seem to mind one bit.
Clearly, Facebook didn’t care much for the enterprise IT business back then and didn’t think innovative data centers were even a competitive advantage worth protecting (unlike Google). Its infrastructure is built for the single purpose of growing its user base like crazy and selling ads to them.
Is it too late to change its mind?
Too Crowded, or Room for More?
Global spending on public cloud computing was $229 billion in 2019 and will grow to $500 billion in 2023, according to IDC. North America is projected to account for more than half of that growth, with Western Europe second at close to 20%.
There’s plenty of growth left to go. But no one is standing still.
The way I see it, there is room for more -- one more -- if the new entrant has enough differentiating technologies and a track record of innovation. And Facebook could be that one, given its history of innovating at every level of the stack. Most players in the industry are still in what I call “land grabbing mode”: selling the cloud basics of compute and storage wired together by quality networking, before moving up the value chain with higher-end, differentiating services like machine learning APIs and large-scale distributed databases.
Facebook currently has 15 data centers around the world: 11 in the United States, three in Europe, and one in Asia. The U.S. is the biggest and most lucrative cloud market at the moment, so if Facebook does decide to rent out a portion of its capacity as a cloud service, it already has strong coverage to support it. If it can turn its AI capabilities, already trained on its billions of users and an ungodly number of pictures, videos, and texts, into easily consumable APIs, a hypothetical “Facebook cloud” could offer very attractive, differentiating services beyond the basics of compute, storage, and network.
The tougher part is perhaps not technical, but human. Facebook will have to build out an enterprise sales and support team that is culturally and operationally very different from running an ad-supported social network. Google, also a primarily ad-supported company, suffered from this “identity crisis” and is still trying to catch up to AWS and Azure, despite offering what many believe is a superior technical product.
Technology never just sells itself, no matter how good it is.
Even if Facebook solves the technical and personnel challenges, and marshals the will and financial resources (which it has) to go into the cloud business, its biggest hurdle is trust, given its problematic record on data privacy.
During the early days of the public cloud, the mid to late 2000s give or take, the biggest resistance to the cloud among large companies was the perceived loss of privacy and control. That’s why most cloud users were small startups or greenfield projects that did not have stringent privacy requirements and did not need to migrate any data. (Large enterprise CIOs are generally risk-averse and hate migration of any sort.) That concern has subsided somewhat, as shown by the growing usage of cloud services generally. But it’s still there, which is what gives rise to the “private cloud” and the “hybrid cloud”.
That’s the rationale behind AWS’s Outposts offering and GCP’s Anthos. Both AWS and GCP (and others) are willing to provide the software and hardware to build a completely separate cloud (“private cloud”) inside a large enterprise’s own facility (“on-premises”) with network connectivity to their existing public cloud (“hybrid cloud”). All this extra work is intended to alleviate large companies’ privacy, compliance, security, and regulatory concerns, while giving them maximum control. These concerns are especially acute for enterprises in tightly-regulated industries like financial services, healthcare, energy, and telecom. But these are deep-pocketed customers, so all the extra work is worth it.
Engaging with these types of customers must begin from a baseline of trust. And if you have a trust deficit, real or perceived, the road ahead is rough. Google is already suffering from its own trust deficit with the healthcare industry, caused by its questionable record on data privacy. AWS has steered clear of most industries’ concerns, except among retail and e-commerce companies, though that is due more to competitive market dynamics than to data privacy. Alibaba Cloud’s trust deficit is rooted in trade wars, nationalism, and a strong suspicion of its home country’s government that goes far beyond technology or loosely-worded privacy policies. Microsoft’s Azure is the only major player without a trust deficit, which is perhaps a main driver of Azure’s impressive growth, despite what I believe to be inferior technology and architecture.
Facebook’s troubling history with data privacy is well-documented at this point. Can it overcome it eventually and build a meaningful cloud business?
Never say never. It wasn’t that long ago, maybe five years at most, when Facebook was a company that could do no wrong. And now it’s a company that could do no right. In reality, the truth is always somewhere in between the extremes.
As COVID-19’s impact in the U.S. and around the world intensifies, normal freedoms of activity and movement that we all take for granted are being restricted everywhere. A second-order effect of these restrictions may be diminished concern for digital privacy, as our personal physical privacy is curtailed to combat the spread of the coronavirus -- an opportunity for Facebook to resuscitate its reputation.
It’s possible that Facebook has already decided that it has missed the cloud gravy train, and so be it. Instead, it has moved on to the next generation of distributed computing technology -- the blockchain -- which would explain the significant amount of capital, both financial and political, that Facebook as a company and Mark Zuckerberg as an individual have put into the Libra project.
In classic Facebook fashion, when it announced Libra in June 2019, the initial code base was open sourced, much like the data center designs that seeded the Open Compute Project. Unlike the Open Compute Project, however, the Libra Association has since faced widespread media skepticism, opposition from multiple central banks and regulators, intense questioning by the U.S. Congress, and the withdrawal of many initial members.
Launching a cloud would be a lot easier, with few regulatory hurdles and a straightforward business case. But maybe Facebook just wasn’t and still isn’t interested. It has a lot more to prove and show to the world, it seems, than just generating another revenue stream.