Newsletter 10 - Navigating the AI Labyrinth: Silicon Valley's Modern Odyssey
In the United States, this week we celebrate Thanksgiving. In this season of gratitude, I want to thank you for your readership and for the time you spend with this newsletter. If you do celebrate it, I wish you a very Happy Thanksgiving. Research suggests that gratitude can buffer the negative psychological outcomes associated with stressful life events, so giving thanks is a proven feel-good tactic; that is why Thanksgiving is truly my favorite holiday.
~~
This is the 10th edition in this newsletter series, and I am making some changes. First, I will move to a bi-weekly cadence and migrate the whole process to Substack. The structure will also shift toward presenting complex information (especially related to AI) in a simplified way. In the Nota Bene section below, I explain the introspection and feedback that led me here.
~~
Navigating the AI Labyrinth: Silicon Valley's Modern Odyssey
A Nice Midwestern Boy Looks for Meaning
Sam Altman grew up the oldest of four siblings in suburban St. Louis, born to a dermatologist and a real estate broker. During his sophomore year at Stanford, he started Loopt with support from Y Combinator and worked on it so hard that he developed scurvy!
By his own admission, it never really caught on the way other startups founded around the same time did, and he was forced to sell. He made a reputed $5 million but felt lost. He traveled to India and spent time in ashrams trying to figure things out, becoming heavily influenced by Advaita Vedanta philosophy. Vedanta is one of the six key schools of philosophy based on the ancient Indian texts called the Vedas; it proposes a “non-dualistic” perspective, unifying universal and individual consciousness. As I understand his worldview, a lot of it seems to have stuck with him.
He then led Y Combinator, successfully steering investments, startups, and founders to the storied place YC achieved in Silicon Valley. At that time, a philosophical discourse was circulating in the valley ecosystem around a concept called “Effective Altruism,” which advocates “using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis.” People who pursue the goals of effective altruism may choose careers based on the amount of good they expect the career to achieve, or donate to charities with the goal of maximizing impact.
We Must Save Humanity
Immersed in these thoughts and with the emerging possibilities of AI, in 2015 Altman co-founded OpenAI, as a nonprofit, with Elon Musk and four others — Ilya Sutskever, Greg Brockman, John Schulman, and Wojciech Zaremba. The mission of the 501(c)(3) is to create “a computer that can think like a human in every way and use that for the maximal benefit of humanity.” The idea was to build good AI and dominate the field before less altruistic people took control. Effective Altruism influenced OpenAI's formation by promoting a focus on long-term impacts, emphasizing the need for risk mitigation in AI development, advocating for the global distribution of AI benefits, encouraging collaboration and openness in AI research, and instilling a strong ethical framework to guide AI advancements.
Where am I going with this?
I was working on this newsletter over last weekend to lay out for you how the various key players in the AI tech space from the events I mentioned in Newsletter 6 ended up. It would have been a good follow-on from last week’s newsletter (#9) and indeed from Newsletter 6. Then, this news emerged.
This past Friday, Sam Altman was reportedly fired from OpenAI by the board. It is rumored that the person who delivered the message was Ilya (yes, that Ilya!). Sam had a new job at Microsoft by Sunday evening, and with the threat of most of OpenAI’s people quitting (indeed, a few top people immediately did), by this past Wednesday Sam was back as CEO of OpenAI and there is a new board!
I had to pivot! I reworked my draft to not only tell you the histories of the key people from Newsletter 6, but also to compile the background on a philosophical divide that ostensibly influenced the events of this past weekend. To tell this story, you needed to know who Sam is and what might influence his thinking.
~~
AI is like Tolkien’s One Ring that Binds Them All
In The Lord of the Rings: The Fellowship of the Ring, Galadriel uses a line during her opening monologue to describe why the Ring was never destroyed - even when it fell into the hands of Isildur, who seemed like the perfect person to rid Middle-Earth of its evil influence.
“The Ring passed to Isildur, who had this one chance to destroy evil forever, but the hearts of Men are easily corrupted. And the Ring of Power has a will of its own.”
Galadriel, a Noldorin lady who witnessed Middle-earth through its Three Ages, is one of the most significant characters in J.R.R. Tolkien’s Lord of the Rings (LOTR) trilogy; if you’ve seen any of the film adaptations, you’re likely well aware of who she is.
As you know, the Ring gives whoever uses it an extremely desirable and completely unique power, and even the most strong-willed can feel burning temptation when they come into contact with it.
Isildur, a king from Tolkien's Middle-earth, was a noble yet flawed character. He obtained the One Ring after cutting it from the dark lord Sauron's hand in a crucial battle. Although advised to destroy it, Isildur kept the ring as a token of victory and reparation. This decision showcased a mix of grief, pride, and the ring's immediate corrupting influence. The ring's power began to affect Isildur, inflaming his desires and clouding his judgment, a reflection of its malevolent nature. Ultimately, this led to his downfall, as the ring betrayed him, leading to his death and the ring's loss for centuries.
So OpenAI was set up to own this AI “ring” of power before evil forces could seize it for less-than-altruistic purposes, so that it could be guided for the benefit of humanity. But mark Galadriel’s words: “but the hearts of Men are easily corrupted”.
The Struggles of OpenAI, its Founder, and its Charter
In 2017, Sam considered running for California governor. He published a platform, the United Slate, outlining three core principles: prosperity from technology, economic fairness, and personal liberty. Altman abandoned his bid after a few weeks.
Please remember from Newsletter 6 that in 2017 the famous “Attention Is All You Need” paper came out from Google, describing “Transformers.” I described its radical approach there: very effective, but very expensive, as a way to build LLM-based generative AI applications.
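For readers who want a peek under the hood, the heart of the Transformer, “scaled dot-product attention,” can be sketched in a few lines of Python. This is a toy illustration with random numbers, not a production implementation:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core Transformer operation: each position blends information
    from every other position, weighted by query-key similarity."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of every query to every key
    # numerically stable softmax turns scores into attention weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V               # weighted blend of the values

# Three "tokens", each a 4-dimensional vector (toy numbers)
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): each token now carries context from the others
```

Stacking many such layers, with learned projections producing Q, K, and V, is essentially what makes training these models so compute-hungry at scale.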
Early in 2018, Musk tried to take control of OpenAI, claiming that the organization was falling behind Google. By February, Musk walked away, leaving Altman in charge.
In 2019, Sam joined OpenAI full time and created a “capped-profit” subsidiary. Building AI, especially after the Transformer-led revolution in LLMs, proved to be wildly expensive. That same year, he raised a billion dollars from Microsoft, roughly half in the form of Azure credits, to jointly develop new technologies for the Azure platform and “further extend” OpenAI’s large-scale AI capabilities. In exchange, OpenAI agreed to license some of its intellectual property to Microsoft, which the company would subsequently commercialize and sell to partners, and to train and run AI models on Azure as OpenAI worked to develop next-gen computing hardware.
At that point, OpenAI Nonprofit’s board consisted of OpenAI LP employees Greg Brockman (Chairman & CTO), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Holden Karnofsky, Reid Hoffman, Shivon Zilis, and Tasha McCauley. Microsoft held no board seats. Reid Hoffman later left the board in early 2023.
A brief note on the board: it was majority independent. According to OpenAI, “Independent directors do not hold equity in OpenAI. Even OpenAI’s CEO, Sam Altman, does not hold equity directly. His only interest is indirectly through a Y Combinator investment fund that made a small investment in OpenAI before he was full-time.” The board, therefore, was still aligned to its charter, which, unlike a traditional commercial board’s duty of loyalty and duty of care to the company and its investors, answered to a higher “Effective Altruism” purpose. So technically, if Sam wasn’t pursuing that agenda, they could have fired him, and in fact they did.
Microsoft invested an additional $2 billion in OpenAI between 2019 and early 2023 (NYT). The tech giant also became a key backer of OpenAI’s Startup Fund, OpenAI’s AI-focused venture and tech incubator program. Earlier this year, it was rumored that Microsoft would receive three-quarters of OpenAI’s profits until it recovers an investment as high as $10 billion (rumored to also include Azure credits); once that was paid back, Microsoft would end up owning 49%, with additional investors including Khosla Ventures taking 49% and OpenAI retaining the remaining 2% in equity. There is also a profit cap that varies for each set of investors, which investors hope might return 20, 30, or at most 100 times their invested money, but not unlimited upside. Microsoft never got onto the board.
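To make the “profit cap” idea concrete, here is a toy sketch. The numbers and the `capped_return` helper are hypothetical, invented for illustration; they are not OpenAI’s actual terms:

```python
# Toy model of a "capped-profit" structure: an investor's return is
# limited to a multiple of their investment; anything above the cap
# flows back to the controlling nonprofit.

def capped_return(invested: float, gross_payout: float, cap_multiple: float):
    """Split a payout between an investor and the nonprofit under a cap."""
    cap = invested * cap_multiple
    to_investor = min(gross_payout, cap)      # investor is paid up to the cap
    to_nonprofit = max(gross_payout - cap, 0.0)  # overflow goes to the nonprofit
    return to_investor, to_nonprofit

# A $1B investment with a 100x cap, against a hypothetical $250B payout:
investor, nonprofit = capped_return(1e9, 250e9, 100)
print(investor / 1e9, nonprofit / 1e9)  # 100.0 150.0
```

In other words, however spectacular the outcome, the investor’s upside stops at the cap, which is the structural difference from an ordinary equity stake.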
Nevertheless, OpenAI was now beholden to Microsoft and Microsoft to it. Some employees quit, upset at the mission creep away from “the maximal benefit of humanity.”
The Darth Vader Effect: the Rebels Turn into Another Empire
One such set of rebels included Dario Amodei, the former VP of research at OpenAI, who launched Anthropic in 2021 as a public benefit corporation, taking with him a number of OpenAI employees, including OpenAI’s former policy lead Jack Clark and Dario’s sister, Daniela Amodei, who led OpenAI’s policy/safety teams. These were the folks who had worked on GPT-3 and Reinforcement Learning from Human Feedback (RLHF). Since its founding, Anthropic has published 14 research papers showing how to build language models that are reliable and controllable. Anthropic also has an unusual corporate structure. Besides being a “public benefit” corporation, its public filing shows two classes of shares, one with 10 times the voting rights of the other. Interestingly, earlier this year TechCrunch reported that “Anthropic aims to raise as much as $5 billion over the next two years”. According to TC, “a pitch deck for Anthropic’s Series C fundraising round discloses these and other long-term goals for the company”.
I guess this group of rebels maybe had a change of heart and felt that Sam shouldn’t be the only one allowed to raise money “for the benefit of saving humanity”. It was also reported in the same article that “Google is also among Anthropic’s investors, having pledged $300 million in Anthropic for a 10% stake in the startup. Under the terms of the deal, which was first reported by the Financial Times, Anthropic agreed to make Google Cloud its “preferred cloud provider” with the companies “co-develop[ing] AI computing systems.”
Then in September, another blockbuster was reported. According to GeekWire, “Amazon was investing up to $4 billion and taking a minority stake in Anthropic”. As part of this investment, Anthropic committed to make Amazon Web Services its main cloud provider. This news came less than eight months after Anthropic declared its allegiance to Google Cloud. The companies said Anthropic will build, train, and deploy its AI models on AWS Trainium and Inferentia chips. The startup will also expand its support for Amazon Bedrock, the AWS service that provides access to AI foundation models for cloud customers to use in building their own apps and services.
But Anthropic’s history was also convoluted. Other Anthropic backers include James McClave, Facebook and Asana co-founder Dustin Moskovitz, former Google CEO Eric Schmidt, and founding Skype engineer Jaan Tallinn. Most interestingly, the pitch deck revealed that Alameda Research Ventures, the sister firm of Sam Bankman-Fried’s collapsed cryptocurrency startup FTX, was a “silent investor” in Anthropic with “non-voting” shares, responsible for spearheading Anthropic’s $580 million Series B round. Anthropic expects Alameda’s shares to be disposed of in bankruptcy proceedings within the next few years. As it turns out, prior to late 2022, a major funder of the “Effective Altruism” movement was Sam Bankman-Fried.
We must really mark Galadriel’s words, “but the hearts of Men are easily corrupted”.
Who’s on First
Let’s go back and take stock of everyone who worked on AlexNet, word2vec, and the “Attention Is All You Need” paper, as I outlined in Newsletter 6, and you will see a thread.
In 2012, AlexNet was the creation of Alex Krizhevsky in collaboration with Ilya Sutskever and Geoffrey Hinton.
Ilya ended up at OpenAI, was on the board, and was ostensibly the one who told Sam that he was fired. By Sunday he had a change of heart and signed a letter, along with most OpenAI employees, threatening to quit if Sam wasn’t brought back. As of right now he is at OpenAI but not on the board.
Alex lost interest in his work in 2017 and left to join Dessa, which was then acquired by Square. He briefly surfaced at VC firm Two Bear Capital and is now lying low somewhere.
Dr. Hinton is colloquially referred to as the “Godfather of AI”. After a long career, in May 2023, Hinton announced his resignation from Google to be able to "freely speak out about the risks of A.I." That month, Hinton said in an interview with the BBC that AI might soon surpass the information capacity of the human brain, describing some of the risks posed by these chatbots as "quite scary". Hinton explained that chatbots can learn independently and share knowledge: whenever one copy acquires new information, it is automatically disseminated to the entire group, allowing AI chatbots to accumulate knowledge far beyond the capacity of any individual.
In 2013, Tomáš Mikolov built word2vec.
He worked at Facebook from 2014 till 2020 and is now a Senior Researcher at the Czech Institute of Informatics, Robotics and Cybernetics.
Mikolov has argued that humanity might be at a greater existential risk if an artificial general intelligence is not developed.
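To give a flavor of what word2vec actually produces: it turns each word into a vector of numbers, and words used in similar contexts end up with similar vectors. The famous “king - man + woman ≈ queen” arithmetic can be sketched with toy hand-made 3-dimensional vectors (real word2vec learns 100-300 dimensional vectors from billions of words; these numbers are invented for illustration):

```python
import numpy as np

# Toy 3-d embeddings, hand-crafted so the analogy works.
vecs = {
    "king":   np.array([0.9, 0.8, 0.1]),
    "queen":  np.array([0.9, 0.1, 0.8]),
    "man":    np.array([0.1, 0.9, 0.1]),
    "woman":  np.array([0.1, 0.1, 0.9]),
    "banana": np.array([0.5, 0.5, 0.5]),  # an unrelated word for contrast
}

def cosine(a, b):
    """Similarity of two vectors, ignoring their lengths."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# king - man + woman should land nearest "queen"
target = vecs["king"] - vecs["man"] + vecs["woman"]
best = max((w for w in vecs if w not in {"king", "man", "woman"}),
           key=lambda w: cosine(target, vecs[w]))
print(best)  # queen
```

The remarkable thing Mikolov showed is that vectors with this analogy-preserving geometry emerge automatically from training on raw text.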
In 2017, Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin, then researchers at Google, wrote the now-famous paper “Attention Is All You Need”.
Ashish left Google by the end of 2021 to co-found the startup Adept AI, “an ML research and product lab building general intelligence by enabling humans and computers to work together creatively”. He has since left and is working on a startup in stealth.
Noam Shazeer founded and runs Character AI. This year it raised $150 million at a $1 billion valuation in a funding round led by Andreessen Horowitz; other investors include former GitHub CEO Nat Friedman, Elad Gil, A Capital, and SV Angel.
Niki Parmar also left along with Ashish to co-found Adept but is now also working on a stealth startup; I’m not sure if it’s the same one as Ashish’s.
Jakob Uszkoreit started Inceptive in 2021 with funding from a16z Bio + Health, NVentures (the venture arm of NVIDIA), Obvious Ventures, S32, and others. On their website they claim they “are creating tools to develop increasingly powerful biological software for the rational design of novel, broadly accessible medicines and biotechnologies previously out of reach.”
Llion Jones left Google to co-found Sakana AI, which claims to be a “Tokyo-based R&D company on a quest to create a new kind of foundation model based on nature-inspired intelligence”.
Aidan N. Gomez co-founded Cohere in 2019. Cohere has raised $170 million to date from institutional venture capital firms, including Tiger Global Management and Index Ventures, and has a number of associations with Google. Google Cloud AI chief scientist Fei-Fei Li and Google fellow Geoffrey Hinton were early backers of Cohere, and Cohere also has a partnership with Google to train large language models on the company’s dedicated hardware infrastructure. In a strange twist, Cohere has started Cohere For AI, structured as a nonprofit research lab. The desire to be perceived as effectively altruistic runs deep!
Łukasz Kaiser works at OpenAI and has not quit!! 😀
Illia Polosukhin has run Near, a blockchain/Web3 startup he founded, since 2017.
So talent is getting organized along philosophical lines: whether or not we should worry about AGI. It is becoming clear who in the big tech arms race each is aligning with, and how they will navigate the concern, the anticipated regulatory ecosystem, and the boundless opportunity of the coming times.
“War has come to middle earth!”
A Philosophical Battle
We are now in the throes of a philosophical debate between the purists, focused on AI solely as a benefit to humanity and prepared to destroy AI-related entities if needed, and those who believe that using capitalism to build and control AI is the right way to go. As The Economist puts it, “on one side are the “doomers”, who believe that, left unchecked, AI poses an existential risk to humanity and hence advocate stricter regulations. Opposing them are “boomers”, who play down fears of an AI apocalypse and stress its potential to turbocharge progress.”
It is this straightforwardly ideological battle that culminated in Sam’s firing and the removal of certain board members as he was brought back. Two people removed from OpenAI’s board this week stand out in this conflict: Ilya, notably influenced by Geoffrey Hinton, a very concerned and public “doomer”, and Helen Toner, a researcher and director of strategy and foundational research grants at Georgetown University’s Center for Security and Emerging Technology. According to CNBC, “Toner offered what could be seen as public criticism of OpenAI in an October paper, a decision with which Altman reportedly took issue. The paper suggested that OpenAI’s launch of ChatGPT undermined the company’s efforts to develop AI safely, by spurring other tech companies into launching their own competing chatbots and forcing them to “accelerate or circumvent internal safety and ethics review processes.”” Most of Big Tech, along with Marc Andreessen and his Techno-Optimist Manifesto, oppose the doomer view; they want “effective accelerationism” so we can use AI for the benefit of humanity faster.
The War of the Tech Titans
Big Tech, concerned for the future of humanity and who might have access to AI and possible AGI? Let me just call BS on that right now. Big Tech is behind on controlling this tech. Microsoft, through some kind of Gandalf-like magic (another Tolkien LOTR reference), has gotten everything onto its table for a pittance of an investment. Google clearly managed to squander a distinct advantage. AWS is further behind, now trying to get back in via Anthropic. IBM is out in the wilderness after its decade-old failed Watson debacle. Meta has taken a completely radical approach by open-sourcing its LLM models and tech, ostensibly to dilute any proprietary advantage that first movers may have while attracting independent developers to its platform. VCs love open source because they can back smaller potential investments, but incumbents will call for regulation to freeze such threats out. The dramatic hand-wringing and calls for regulation from these incumbents do smack of duplicity: regulations they help influence solidify their advantage and freeze out smaller upstarts.
Not to mention the chip war.
Someday they are going to make a movie about this. What a time to be alive!!
Take care of yourself,
-abhi
Nota Bene:
Feedback clearly indicated that simplified explanations of LLMs, AI, and other topics resonated the most; my “musings” on leadership less so. Philosophical angles were generally appreciated. I will incorporate this.
Writing weekly is a professional obligation, and if I am to research and write simplified explanations, more time is needed. This is why I will move to a bi-weekly cadence. Expect only one more edition this year, in mid-December.
Every time people joined, a lot of effort went into adding them, sending past editions, etc. Substack will automate all this for me. I also got feedback suggesting this. I looked at Medium, Beehiiv, and Notion as well, but Substack seemed the easiest to adopt.