I read this post recently:
TLDR1:
Tavel outlines three scenarios that could unfold in the next 12-18 months:
1. Reduced Competition: A potential decrease in competition among LLM providers could alleviate pricing pressures, benefiting startups.
2. Innovation Focus: As LLMs evolve, innovation will increasingly rely on researchers to create unique offerings, moving beyond mere computational power.
3. Market Disruption: The most concerning scenario involves LLM providers potentially competing with their own API developers by creating advanced asynchronous agents capable of automating tasks traditionally handled by specialized applications.
To navigate this shifting landscape, Tavel advises startups to develop a network effect, capture proprietary data, and target overlooked verticals to maintain a competitive edge. She emphasizes the importance of being customer-focused to quickly adapt and add value in a rapidly evolving market.
This is something I’ve been thinking about for some time. If you consider how much money has poured into the big proprietary LLM creators, and how their margins will erode over time… they will need to do something else.
The way I think about it is by looking at the cloud providers. AWS originally offered much simpler services and APIs than it does today. If you think about the basics, it’s EC2 and S3… but these quickly became commoditised, and the providers needed to offer more value on top of these fundamentals. Today you have things like Aurora and Kinesis on AWS, or BigQuery and Pub/Sub on GCP, where the cloud providers have built software from the ground up to run on their own clouds. These services provide additional value and can carry higher margins, while still increasing consumption of the fundamentals.
I foresee that OpenAI, Anthropic, Cohere, Mistral… will need to do the equivalent of this for AI, in order to adapt to the commoditisation of basic LLM calls. All of the general-purpose AI applications are in danger of having an offering from an AI provider compete with them. Coding bots/assistants, document generation, video generation, unstructured data ingestion… if many companies are funded to apply AI in these ways, then these will be the first things the AI providers offer themselves. That funding is actually a signal to the AI providers that a particular use case is in high demand and worthwhile for them to offer directly.
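To make the commoditisation point concrete, here is a minimal sketch of what a “basic LLM call” looks like when a provider exposes an OpenAI-compatible endpoint, as several now do. Swapping providers becomes little more than changing a base URL and a model name, which is exactly the dynamic that pushes these calls towards commodity pricing. The base URLs and model names below are illustrative assumptions, not recommendations - check each provider’s documentation before relying on them.

```python
# Minimal sketch: the "basic LLM call" looks nearly identical across providers
# that expose OpenAI-compatible endpoints. Base URLs and model names are
# illustrative assumptions.
from openai import OpenAI

PROVIDERS = {
    "openai":  {"base_url": "https://api.openai.com/v1", "model": "gpt-4o-mini"},
    "mistral": {"base_url": "https://api.mistral.ai/v1", "model": "mistral-small-latest"},
}

def complete(provider: str, prompt: str, api_key: str) -> str:
    """Send the same chat request to whichever provider is cheapest today."""
    cfg = PROVIDERS[provider]
    client = OpenAI(base_url=cfg["base_url"], api_key=api_key)
    response = client.chat.completions.create(
        model=cfg["model"],
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Usage (hypothetical): complete("mistral", "Summarise this post in one sentence.", key)
```

When the call itself is this interchangeable, the only durable differentiation sits above it, which is why the providers will move up the stack.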
In order to compete with the AI providers on these applications, you would have to provide a service or application that is much better than what they offer. This is similar to how Snowflake and Databricks exist today despite all the cloud providers having their own offerings. Because they are better products/suites than what the cloud providers offer (except BigQuery, which is comparable), they are able to win business and sell to those cloud providers’ customers.
Imagine, though: if it were AWS - rather than GCP, the smallest of the three major cloud providers - that had a data warehouse as good as BigQuery, Snowflake and Databricks would probably not be where they are today. Every time I see a company that is a GCP shop, they use BigQuery as their data warehouse. Companies on AWS or Azure tend to use Snowflake or Databricks.
I think there is much more security in offering a niche AI application, where the specialised knowledge required is high and the appeal is narrower but more lucrative. Sarah’s point about having proprietary data as a moat is also true. Imagine a provider of healthcare data that is not available publicly - they could use that data to offer a specialised AI application no one else could build. Both of these effects are in play with what I’m working on at Cube. No one else is going to be able to build specialised AI applications that interact with Cube in the way we can, as we get to control both sides of the interface in a unique way. We then get to generate a metadata moat from usage of our AI applications, too.
Examples of insecurity in a basic offering already exist. Google looks to be becoming both a cloud and an AI provider at the same time, with competitive LLMs and LLM applications. A lot has been said about its NotebookLM and its ability to create podcast-style audio from blog posts. So, I gave it a try to see for myself, using this post. See the results below:
Until recently, this segment of AI applications was being funded by VCs and was booming. I can’t see that continuing for long. You may see some really great players that offer a huge number of features on top of the basic offering2, and this is why they can still compete, but I don’t think there is room for more than ten of them to become big companies in any segment.
Another thing that concerns me is the number of AI applications that have been funded and founded during the recent AI boom. Many of the products created are great3, but some of them are simply snake oil. If people buy too much snake oil before real medicine presents itself, they may not trust the medicine. I’m hearing people say things like “AI doesn’t work”, “AI doesn’t provide any value”, “This is just another over-hyped era”… Yes, there is a lot of hype, but with some discipline and grit in building products centred around AI - doing it well even if it’s harder - there are really valuable products to be made.
I am concerned that if too much snake oil is produced, we might destroy trust in real products. The consequences of this are really stark, too. So much money has been poured into AI, and so many publicly traded stocks have the value of AI baked in, that a significant drop in confidence in the value of AI could cause a financial crisis. It is widely thought that, without the effect of AI boosting the value of large listed companies over the last couple of years, there would have been a recession.
I think it’s avoidable - we’ve reached a point now where investors and founders should know what is a real product and what is far too thin or simply snake oil. Let’s get to work on real products of real value and fulfil the promise of AI.
1. https://www.perplexity.ai/search/summarise-https-www-sarahtavel-EHqgneSySDud_cTJE2N8lw
2. Imagine a Riverside or Goldcast-type company that offered the performance of an editing studio using AI, and could generate great marketing content from blogs or recorded Zoom calls.
3. Taking stock, I use: Cursor (an AI-first IDE), Warp (an AI-assisted terminal), various AI note-takers for calls, Perplexity for research (Google now offers similar functionality), Spotify’s AI DJ, image generation (mostly in Substack, as above)… and I’m not going back. I want more automation in my tooling, not less. I’m looking forward to Apple Intelligence, as it will potentially touch many of the interactions I have with my devices.