Andreessen Horowitz Founders Notice AI Models Hitting a Ceiling
Despite continuing to bet on early-stage AI and chip companies, the founders of venture capital firm Andreessen Horowitz say they have seen AI model development slow in recent years. Two years ago, OpenAI's GPT-3.5 model was “ahead of everyone else,” said Marc Andreessen, who founded Andreessen Horowitz with Ben Horowitz in 2009, in a podcast released yesterday (Nov. 5). “Sitting here today, there are six that are at that same level. They're sort of hitting the same ceiling on capabilities,” he added.
That doesn't mean the investment firm has lost faith in the technology. One of the most aggressive investors in the AI space, Andreessen Horowitz earlier this year committed $2.25 billion in funding for AI-focused applications and infrastructure, and it has led investments in notable companies including Mistral AI, a French startup founded by former DeepMind and Meta (META) researchers, and Airspace Intelligence, an aerospace company that uses AI to improve air travel.
Despite their embrace of the new technology, Andreessen and Horowitz admit there are limits to growth. In the case of OpenAI's models, the jump in capability from GPT-2 to GPT-3 to GPT-3.5, compared with the smaller jump from GPT-3.5 to GPT-4, shows that “we have really slowed down the amount of development,” said Horowitz.
One of the main challenges for AI developers has been the global shortage of graphics processing units (GPUs), the chips that power AI models. OpenAI CEO Sam Altman last week cited the demands of allocating compute as causing the company to “face a lot of constraints and difficult decisions” about which projects to focus on. Nvidia, the leading GPU maker, has described the shortage as making customers “intense” and “sensitive.”
In response to this demand, Andreessen Horowitz recently launched a chip-lending program that provides GPUs to its portfolio companies in exchange for equity. The firm was reportedly working to build a stockpile of 20,000 GPUs, including Nvidia chips. Chips aren't the only computing concern, however, according to Horowitz, who pointed to the need for more power and cooling in the data centers that house GPUs. “Once they get the chips we won't have enough power, and once we have power we won't have enough cooling,” he said on yesterday's podcast.
But computing requirements may not be the biggest hurdle to advancing AI models' capabilities, according to the venture capital firm. The availability of the training data needed to teach AI models how to behave is becoming a problem. “The big models are trained by going out on the Internet and pulling in all the human-generated training data, human-generated text and increasingly video and audio and everything else, and there's only so much of that,” Andreessen said.
Between April 2023 and April 2024, 5 percent of all data and 25 percent of data from the highest-quality sources were restricted by websites that do not allow their text, images and videos to be used for AI training, according to a recent study from the Data Provenance Initiative.
The issue has become so significant that the big AI labs are “hiring thousands of programmers and doctors and lawyers to actually hand-write the answers to questions in order to be able to train their AIs,” Andreessen added, a sign of how constrained training data has become. OpenAI, for example, has a “Human Data Team” that works with AI trainers to collect specialized data to train and test its models. And many AI companies have begun working with startups like Scale AI and Invisible Tech, which hire human experts with specialized knowledge in medicine, law and other fields to help fine-tune AI models' responses.
Such practices fly in the face of fears about AI-driven unemployment, according to Andreessen, who noted that the shrinking supply of training data has led to the unexpected hiring of humans to help train AI models. “There is a paradox in this,” he said.