If answering an analytical question were free but still moved at the same speed as a request working its way through a data team’s ticket queue, we might ask a few more questions. But if we got an answer back as soon as we asked it, we’d investigate everything. Analytical bots would be in meetings what Google is at a bar: the immediate arbiter of any dispute. Moreover, just as it was Google - and not a free library card - that made us all amateur researchers and historians, it’s the instant feedback loop between question and answer that would finally make us all the “citizen analysts” we’ve been hyping for years.
I’ve been speaking to data team leads about how many ad-hoc requests they get, how well used their dashboards are, how many users access them every day… I’ve been doing this to approximate how much a tool like Delphi would be used to inform many aspects of product development, pricing, growth estimates, etc.
While I think this is helpful for understanding a lower bound of usage, it’s more or less useless for estimating any kind of upper bound. It’s like trying to figure out how many Google searches people do today based on some multiplier of combined Yellow Pages, library and Microsoft Encarta usage from 1996. You could have estimated it would be higher for sure, probably orders of magnitude higher, but not exactly which order.
With this kind of B2C use case, the multiplier can be particularly high. You move from having limited data and access:
Your Yellow Pages sat at home next to your landline, so you could only really use it when you were there. Therefore, you would only use it when you really needed to do something like order a taxi or takeaway food (two other areas where we’ve since seen consumption change dramatically).
Your library was a 10 minute walk away (but the bigger, better one was a bus ride), therefore you’d batch things you needed to do or get and do them at most once a week: homework research, a new fiction book to read…
Microsoft Encarta was on a CD-ROM, so you had to kick your family member off the PC, log in to your account, get past Windows 95 BSOD issues, find CD-ROM 1 and put that in first, browse through the topics, find CD-ROM 3 (did I lend that to someone?), breathe on the back of CD-ROM 3 and pray that this, plus wiping it on your t-shirt, would compensate for the scratches… You might do this once a day if you really needed to, for homework etc.
To being able to search for anything at a whim, with very low latency: you can search on the move, you can search in an underground tunnel, you can search whilst on Zoom, you can search whilst in bed, you can search without picking up your phone… When you can’t search because your train or car is passing through a dead spot, you get angry (withdrawal)!
Imagine if you had to pay a nominal fee, say 10 cents per search, and there was no free alternative… we might use it a bit less, but probably not a lot less. The explosion in use makes it possible to run the service so efficiently that it would quickly become almost free. Think about how quickly the cost of SMS messaging fell… over the course of my university education I went from a contract where I paid ~15 cents a message to having unlimited use.
Artisanal Value
Search is also a use case where there is next to no artisanal value. As I mentioned before, I think with something like art - yes, you would generate more if it were instant to create on demand, but not that much more. When do I want art? Most often, when I want pictures to intersperse in content, or to have prints at home. My wife has covered every wall in our house with at least two pieces of art and we’re now full up - we often have a conversation where we see something new we really like but don’t buy it because we have nowhere to put it.
I don’t think owning digital art that you can view on a computer whenever you want is that big an industry, although Pinterest may disagree. So, our consumption of art is limited. We’re already maxed out. There isn’t a lot of difference between us buying AI-generated art and the prints we already like, own and occasionally buy. We do occasionally buy art directly from an artist, especially if we’re travelling or at an event… I doubt this will change with AI at all. We like meeting the artist or seller, looking at what they’ve got, then choosing something. That print of Klimt’s Tree of Life though… the next AI piece we like more could easily replace it.
Search is more or less the opposite case - we just want the best possible answer, as soon as possible. There is an upper bound - it would be rare for a human to want to know more than a thousand things a day, but that really is quite a lot. Occasionally, I hear that the elderly struggle with this - they were used to asking other people for help, and asking Google or some computer interface is alien to them… they enjoyed the conversation and social interaction, its artisanal value; for me, that’s a last resort. However, I hear this less and less now. Search has changed who we are as people and also our culture. Those who identify with the previous culture will soon die out.
Will we have another bifurcation of people who identify with pre-LLM and post-LLM culture? I’m not sure, but I would bet against it. Anyone who is used to search, and the progression it has already made towards giving you a short written answer, will find it an easy slip to start relying on LLMs instead of having to manually sift through results. People will start to sigh and say: “I guess I’m going to have to do this the old-fashioned way,” when their pocket agent doesn’t give them a good answer.
There are also other examples where people enjoy the experience. When I was at Lyst (a fashion aggregator and search platform), we noticed a return to the shops after Covid restrictions ended. Some shoppers missed the experience of going to a shop and looking at things, touching them, trying before buying, having a human assistant suggest things to try… especially at the high end.
In business, artisanal value only matters if it delivers a viable ROI, as many are discovering in the current economic climate… it’s not enough to be a “good” or “nice” company: it has to be a lean, mean, profit-making growth machine. It’s almost certain that, at an operational level, artisanal value won’t provide the ROI to be viable. If you want to know something basic that isn’t that complicated (beyond the mess of your data engineering stack, abstracted away), having a human guide you to each answer doesn’t make sense.
Even at a tactical level, the ability LLM-powered tools have shown to handle questions like “Which is better…?” has impressed - although there will still be many instances where having a human frame or challenge the question according to its intention could prove valuable.
At a strategic level, LLMs will struggle to answer novel multi-phase, multi-layered questions where the person asking doesn’t even know the depth required in their answer. This is where artisanal analytics is here to stay for some time - parts of it may be answered using LLMs, but the composition and coherence of the work is difficult to automate. There is also a lot at stake with these kinds of decisions - these are the sorts of decisions where a company may decide to take huge financial risk. Spending the time and money to get the very best human analyst available to do solid analytical research makes sense here, and is still very cost effective. Even if LLMs do become capable, it will still be worth having a human use traditional methods to “kick the tyres” and think “outside the box” - strategy doesn’t change quickly enough and the cost of the human analyst labour is a rounding error in these situations.
Dimensions of constraint
Unlike search, which can be and is used whenever people are awake, insight-on-demand (IoD) will be used more heavily during working hours. However, it will also be scheduled for regular output, unlike consumer use of LLMs and search.
Unlike consumer use, where anything even daydreamed of will be queried, business use will usually be aligned to option exploration or decision-making. Currently, however, the number of decisions made by businesses is constrained by human bandwidth. If you could ask a system to optimise an activity, there is no reason for it not to operate at a much higher cadence than is possible for a human.
For example, a marketing team may optimise spend on a daily basis, and sometimes intra-day if they see performance decline sharply; if an AI system was monitoring performance, it would course-correct to make relatively small performance improvements as often as it received new data to inform performance measurement.
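The monitor/act/feedback loop described above can be sketched as a toy budget reallocator. This is purely illustrative - the channel names, metrics and the `reallocate` helper are invented for the example, not any real marketing API - but it shows the kind of small, frequent course-correction a system could apply every time new performance data arrives:

```python
# A minimal sketch of a monitor/act/feedback loop (hypothetical example).
# Each time fresh metrics arrive, shift a small slice of budget from the
# worst-performing channel to the best-performing one.

def reallocate(budgets, metrics, step=0.05):
    """Move `step` (5%) of the worst channel's budget to the best channel,
    ranked by return on spend."""
    roi = {ch: metrics[ch]["revenue"] / metrics[ch]["spend"] for ch in budgets}
    best = max(roi, key=roi.get)
    worst = min(roi, key=roi.get)
    if best != worst:
        delta = budgets[worst] * step
        budgets[worst] -= delta
        budgets[best] += delta
    return budgets

budgets = {"search": 1000.0, "social": 1000.0, "display": 1000.0}
metrics = {
    "search":  {"spend": 1000.0, "revenue": 3200.0},
    "social":  {"spend": 1000.0, "revenue": 1800.0},
    "display": {"spend": 1000.0, "revenue": 900.0},
}
budgets = reallocate(budgets, metrics)
# "display" (lowest ROI) gives up 5% of its budget to "search" (highest ROI)
```

A human team might run something like this once a day; an automated system could run it on every data refresh, which is exactly the cadence difference being described.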
Consumer use doesn’t usually have this kind of monitor/act/feedback-and-repeat pattern outside of financial services, and even there it’s only for the wealthiest consumers.
As you can tell by how many moving parts I see above, I’m not even certain that business use of IoD will be any less or more than consumer use of Search.
One constraint is the number of people who actually do knowledge work - a sizeable minority (~1bn people).
All I know is that IoD is much much larger than BI, especially once you venture into the why, what next, actions to take and subsequent automation.