In March, the release of GPT-4 sent the VC ecosystem into a frenzy. Investors were all asking the same question: which new companies were going to make it big? Capital poured in. By one estimate, VCs invested almost five times as much into Generative AI startups in the first half of 2023 as they did during the same period of 2022 (Source: PitchBook). Several Generative AI startups took advantage of the fundraising rush, raising hundreds of millions of dollars without a single cent of revenue.
Now, it is time for us to have a say. TCV seeks to invest in companies that have proven product-market fit, a history of execution, and sustained revenues. Over the years, we have made many AI investments, but we have not yet made any native Generative AI investments. That time is coming. So we wanted to offer our thoughts on the risks and opportunities in the AI space as we see them.
What are we talking about, in its simplest form?
Recent AI tools are so impressive that it is easy to get carried away with the “what ifs.” We are asking ourselves: to what extent are we buying the hype? And to what extent are we seeing a paradigm shift in technology? So far, we see the following strengths, weaknesses and opportunities:
Just don’t believe the hype
LLMs are enormously powerful, but they aren’t a silver bullet (yet)
To focus the discussion further, in a recent internal exercise we assessed which business characteristics were a) weaknesses and b) strengths in the era of AI. The full table is shown below, but a few themes stand out:
Weakness 1: Publicly available data. Some companies draw on large bodies of freely available text and images. They aggregate it, curate it, and then deliver content to customers. This business model directly competes with many Generative AI models, which scrape the internet to train themselves. The problem is that Generative AI models do it better.
True, these companies will often say that, on top of the publicly available data, they are creating their own content. An AI chatbot, though, could make exactly the same claim.
Weakness 2: Routine cognitive tasks. In analyzing the labor market, economists classify jobs as being either routine or non-routine, and cognitive or non-cognitive. Routine cognitive jobs account for about a quarter of total jobs across the United States. These are jobs which require some element of brainpower, but where there is little creativity, accuracy does not have to be perfect, and the job on day 1 is similar to the job on day 2.
Some businesses are highly reliant on routine cognitive tasks. For example, personalization, translation, and localization services can be automated by AI at near-zero cost. So too can repetitive data entry and processing tasks. The same goes for content creation, if the product is something that merely has to be “good enough” rather than genuinely compelling.
Weakness 3: Old-school AI. There are numerous examples of AI systems being deployed at scale. Just consider the autocomplete function when searching on Google. Some building-management systems use AI to help with heating and cooling. Some worry that the providers of these services will soon be outcompeted by the vastly more capable tools associated with Generative AI.
Characteristics that might cause a business to be more vulnerable to AI disruption/disintermediation
Strength 1: Trust. A growing share of services in the modern economy could be described as YMYL – or, “your money, your life.” These are services where the stakes are high and mistakes are costly, such as healthcare, education, and insurance.
In these industries, the barriers to adoption of AI could be high. Managers will be nervous to trust something that is almost human, but not quite human. In addition, these industries are often highly regulated. As a range of evidence shows, technological progress in highly regulated industries tends to be slow. If you’re still filling out paper forms when you go to the doctor, how likely is the practice to adopt Generative AI any time soon?
Strength 2: Network effects. Companies that rely heavily on understanding customer data have a virtuous compounding moat. These companies can focus on using AI to augment human capability through their own channels. Proprietary data and processes can be used to build better algorithms or models. These companies will drive AI use cases, rather than be displaced by the technology.
Strength 3: Human-to-human contact. Some brands depend on an emotional connection with customers, which is typically built through empathy, high-touch sales, or some other form of personal interaction. As of today, AI cannot fully replicate human connection – and may never be able to do so. The same is true where content is predominantly about shared experiences and social capital.
More trivially, AI is unlikely to disintermediate companies that rely on physical services. We believe tradespeople like electricians and stonemasons are well-positioned to weather whatever comes next.
Conversely, characteristics that might make a business more defensible against AI disruption: a sustainable moats framework
Given these risks and opportunities, what should growth companies do today? This is a question that companies pose to us on a near-daily basis. First, the company should consider whether AI can provide a real advantage. Second, the company should explore the best path to a production use case. This often means combating human capital limitations, addressing data privacy concerns, or having some ability to accurately evaluate model performance.
Where we hear problems arise in the LLM toolchain
Source: Arize, TCV
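The last point above – having some ability to accurately evaluate model performance – can be made concrete with a small offline evaluation harness. The sketch below is purely illustrative: the case list and the `model_answer` stub are hypothetical stand-ins for a real labeled dataset and a real model call.

```python
# Minimal sketch of an offline evaluation harness for an LLM use case.
# The eval cases and the model_answer stub are hypothetical; in practice
# the model call would go to a hosted or self-hosted LLM.

def exact_match_accuracy(cases, model_fn):
    """Fraction of cases where the model's answer matches the expected one."""
    hits = sum(
        1
        for prompt, expected in cases
        if model_fn(prompt).strip().lower() == expected.strip().lower()
    )
    return hits / len(cases)

def model_answer(prompt):
    # Stub standing in for a real model call, so the harness runs end to end.
    canned = {"Capital of France?": "Paris", "2 + 2?": "4"}
    return canned.get(prompt, "unknown")

eval_cases = [
    ("Capital of France?", "Paris"),
    ("2 + 2?", "4"),
    ("Largest ocean?", "Pacific"),
]

score = exact_match_accuracy(eval_cases, model_answer)
print(f"exact-match accuracy: {score:.2f}")
```

Even a harness this simple forces a company to write down what “correct” means for its use case, which is often the hardest part of moving from demo to production.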
Here’s how to think about incorporating AI. There are, we believe, many potential barriers to adoption, as the table below outlines.
LLMs and Generative AI have shown impressive potential, but there are limitations that may hinder widespread adoption
In the short term, companies can draw on publicly available LLMs – large language models – like GPT-4 to start thinking about proof of concept. That is all well and good, but the proper use of AI goes far beyond simply using the occasional chatbot. Instead, it involves the full-scale reorganization of firms, as well as their in-house data. What does that mean in practice? At a lower level, it means leveraging a publicly available model with context from the company’s own data and products. At a higher level, it means training or fine-tuning a foundation model, where the company maintains full control and ensures data privacy.
Selecting the right approach to LLMs
Source: Arize, TCV
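The “lower level” approach described above – leveraging a publicly available model with context from the company’s own data – can be sketched in a few lines. Everything here is an illustrative assumption, not a specific vendor’s API: the document store, the keyword-overlap retrieval (a stand-in for embedding similarity), and the `call_llm` stub.

```python
# Illustrative sketch of grounding a public LLM with company context.
# Retrieval here is naive keyword overlap; real systems typically use
# embedding similarity over a vector store.
import re

COMPANY_DOCS = [
    "Refund policy: customers may return products within 30 days.",
    "Support hours: weekdays 9am-6pm Eastern.",
    "Pricing: the Pro plan costs $49 per seat per month.",
]

def tokens(text):
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, docs, k=1):
    """Rank docs by keyword overlap with the query; return the top k."""
    q = tokens(query)
    ranked = sorted(docs, key=lambda d: len(q & tokens(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    """Assemble a prompt that grounds the model in retrieved context."""
    context = "\n".join(retrieve(query, docs))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

def call_llm(prompt):
    # Stub: a production system would send the prompt to a hosted model.
    return "(model response)"

prompt = build_prompt("What is the refund policy?", COMPANY_DOCS)
print(prompt)
```

The design choice is the point: the model stays generic and replaceable, while the company’s proprietary data supplies the differentiated context – which is where the moat discussed earlier actually lives.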
Operationalizing AI efficiencies is also top of mind for our portfolio companies. It is not lost on us that the best performing companies will be those that continuously think about their cost structures and how to leverage new technologies to improve margins. Some of our portfolio companies are creating “synthetic P&Ls” to understand what their optimal cost structures could look like by leveraging AI, while other best-in-class companies are employing AI “SWAT teams” to ascertain how they might use AI internally and in their products.
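A “synthetic P&L” of the kind described above is, at its core, simple arithmetic: take today’s cost structure and apply assumed AI automation rates per cost line to estimate what operating margin could look like. The sketch below is a toy model; every figure, cost line, and automation rate is hypothetical.

```python
# Toy "synthetic P&L": estimate margin impact of per-line AI automation.
# All figures and rates are hypothetical, for illustration only.

def synthetic_pnl(revenue, cost_lines, automation_rates):
    """Return (current_margin, synthetic_margin).

    automation_rates maps a cost line to the fraction of that line
    assumed to be saved through AI; unlisted lines are unchanged.
    """
    current_costs = sum(cost_lines.values())
    synthetic_costs = sum(
        cost * (1 - automation_rates.get(line, 0.0))
        for line, cost in cost_lines.items()
    )
    current_margin = (revenue - current_costs) / revenue
    synthetic_margin = (revenue - synthetic_costs) / revenue
    return current_margin, synthetic_margin

revenue = 100.0  # all figures in $M, purely illustrative
cost_lines = {"support": 20.0, "content_ops": 15.0, "engineering": 30.0, "g_and_a": 10.0}
automation_rates = {"support": 0.40, "content_ops": 0.50, "engineering": 0.10}

current, synthetic = synthetic_pnl(revenue, cost_lines, automation_rates)
print(f"operating margin today: {current:.0%}, with AI leverage: {synthetic:.0%}")
```

The value of the exercise is less in the final number than in forcing explicit, line-by-line assumptions about where AI can actually remove cost – assumptions that can then be tested against pilot results.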
We are still at an early stage. The world has quickly moved from traditional AI systems to Generative AI. Things are bound to continue to change, and over time our thinking about the risks and opportunities of AI will continue to evolve. We look forward to receiving your comments.