There is a lot of concern about Generative AI and its ramifications for humanity. People’s views sit on a continuum, ranging from welcoming and adopting the technology to calling for a Butlerian Jihad, and right now every position on that continuum seems to be represented fairly evenly.
While I’m not an AI researcher, I’m fairly sure that LLMs won’t become sentient (no comment on the other systems being developed). I am also fairly sure they will reduce the need for some jobs and hugely augment others, including mine.
Let’s go back to a time well before computers, well before we could engineer a synthetic intelligence…
Whenever I take the small train ride around the zoo near my house (I have small children and an unlimited pass to the zoo, which is a 15-minute drive from my house - you can guess how much time I spend there 🦒), the train passes the Gray Wolf enclosure and the driver reminds me that all breeds of dog are descended from this species.
Humanity has pre-trained (bred) and fine-tuned (trained) the wolf over millennia to serve the purposes we desire. It’s a kind of automation that predates computers, or even machines: biological machines, with intelligence, that we have shaped.
We have ones we use to gather sheep for us, that don’t attack the sheep and are intelligent enough to work out how to force large flocks into tight spaces. There are even competitions (sheepdog trials) to produce and train the best, kind of like Kaggle.
We have ones to help us police each other.
We have ones to help guide the visually impaired. Interestingly, this intelligence was pre-trained for retrieving during hunting, then fine-tuned for guiding.
We even have ones we use as weapons, again pre-trained for herding sheep, but fine-tuned for military use.
Moving away from dogs, we have also used animals almost like lambda functions, for controlling rodent populations. Because this form of intelligence poses no threat to us, we have allowed it to operate, and to scale up and down, independently, with some consequences in terms of unintended casualties 🐦.
We have also got this wrong at times: animals we introduced have run amok and caused a huge amount of damage, the cane toad in Australia being the textbook example.
All of the uses above will probably have an AI equivalent in the coming decades, if not sooner, but my point is that we have coexisted with intelligence other than our own, and harnessed it, for millennia. It is slower to scale and limited in capability, but it’s not an alien concept to us.
Part of why LLMs have made us think differently is that we’ve never had another intelligence that could use language the way we do, let alone better than we can.
Yes, AI could be used for herding sheep (Boston Dynamics’ next project 😂), guiding the visually impaired (Tesla FSD, speakers, a helmet and a couple of GoPros), policing (already happening), war (already happening, probably) and pest control (a drone with a thermal camera and a BB gun). However, LLMs offer possibilities that training or using animals never could: any task using language can, in theory, be offloaded to a system leveraging LLMs.
Imagine an email client that wrote draft replies for you, based on your previous writing style and the context of the conversation. If the email asked for a meeting or call, it could offer calendar slots, and automatically accept the resulting invites if they land in those slots. I’m loving Superhuman, but it’s lacking a bit of AI magic. I could imagine the next big email/calendar client displacing the incumbents by offering features like these and more.
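To make that concrete, here’s a minimal sketch of the draft-reply idea. Everything in it is illustrative: call_llm is a hypothetical stub for whichever model API you use, and the function names and prompt shape are mine, not any product’s.

```python
# Minimal sketch of the draft-reply idea. call_llm() is a hypothetical
# stand-in for whatever LLM API you use; nothing here is a real product API.

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around your LLM of choice (stubbed here)."""
    raise NotImplementedError("plug in your model call")

def draft_reply(thread: list[str], my_sent_emails: list[str],
                free_slots: list[str]) -> str:
    """Draft a reply in my style, offering calendar slots if a meeting was asked for."""
    style = "\n---\n".join(my_sent_emails[-5:])  # a few recent emails as a style reference
    convo = "\n---\n".join(thread)
    prompt = (
        "Draft an email reply for me.\n"
        f"Examples of my writing style:\n{style}\n\n"
        f"Conversation so far:\n{convo}\n\n"
        f"My free calendar slots: {', '.join(free_slots)}\n"
        "If the sender asked for a meeting or call, offer slots from the list."
    )
    return call_llm(prompt)
```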
I have recently started using Fireflies.ai and have been pleased by the quality of its call transcriptions and summaries, complete with action points and questions. There are many others offering a similar service.
There are now AI tools to help you:
Write copy, longform copy, shortform copy, copy for marketing, copy for social, copy for enterprise...
Make chatbots - the full extent of this, and of AI agents in general, has yet to be understood. You could imagine that, fine-tuned on documentation and given access to a stock ordering system, these could replace ecommerce websites: “I want a hat for less than $50, adjustable, with a mesh back, produced sustainably, in the colour red.” (There’s a sketch of this after the list.)
Create art
Create video
Create websites
Do data analysis
Detect whether content was AI-created!
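As promised above, here’s a rough sketch of the chatbot-as-storefront idea: an LLM turns a natural-language request into a structured query against an inventory system. Both call_llm and search_stock are hypothetical stand-ins, not real APIs.

```python
# Sketch of a shopping chatbot: natural language in, structured stock query out.
# call_llm() and search_stock() are hypothetical stubs, not real APIs.

import json

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around your LLM of choice (stubbed here)."""
    raise NotImplementedError

def search_stock(filters: dict) -> list[dict]:
    """Hypothetical inventory lookup; imagine this queries your ordering system."""
    raise NotImplementedError

def shop_via_chat(request: str) -> list[dict]:
    """Turn a natural-language shopping request into a structured stock query."""
    prompt = (
        "Return JSON with keys category, max_price_usd, colour, attributes "
        f"for this shopping request: {request}"
    )
    filters = json.loads(call_llm(prompt))  # e.g. {"category": "hat", "max_price_usd": 50, ...}
    return search_stock(filters)

# shop_via_chat("A hat under $50, adjustable, mesh back, sustainably produced, in red.")
```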
However, most, if not all, of these tools keep a “human in the loop”, as there is an element of risk in allowing this kind of content to be published or presented without supervision. Much as shepherds were not replaced by sheepdogs but augmented by them, we will augment all of the professions above with AI as a first, and possibly last, step.
This augmentation will improve over time, because the editing happens in the same place as the generation. When you correct or edit the content the model generates, you’ve handed it perfect training data for improving.
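A rough sketch of what harvesting those corrections might look like. The field names and JSONL format are illustrative, though the shape resembles the preference pairs some fine-tuning methods consume.

```python
# Sketch: capture user edits as (prompt, rejected, chosen) training examples.
# Field names and the JSONL format are illustrative, not any pipeline's spec.

import json
from pathlib import Path

LOG = Path("edits.jsonl")

def record_edit(prompt: str, model_draft: str, user_final: str) -> None:
    """Log the human-corrected version alongside the model's original draft."""
    if user_final.strip() == model_draft.strip():
        return  # the user accepted the draft unchanged; nothing new to learn
    with LOG.open("a") as f:
        f.write(json.dumps({
            "prompt": prompt,
            "rejected": model_draft,  # what the model generated
            "chosen": user_final,     # what the human actually wanted
        }) + "\n")
```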
I think one trend will be a shift of this augmentation towards the true consumer and away from the people who stand in the middle of the value chain. If you’re going to have a “human in the loop”, why shouldn’t it be the one you’re selling to?
Creative work will get pushed towards the consumer, who will choose what they want without needing someone else to make designs for them. This is fine in creative and artistic fields, as there isn’t really a right or wrong. Traditional methods will survive here too, but they will be considered artisanal and priced accordingly. The middle will not hold, though: there will be only the best AI-generated content at the low end and human-made work at the high end.
Where accuracy is important - for example, in many types of writing (all of non-fiction), data work, anything involving money or law, almost anything B2B today - there will need to be a human arbiter of truth alongside the AI. In data, that may be an analyst or an analytics engineer.
Where there is a “human in the loop”, it starts to look like man working with man’s best friend - except it might be a new best friend, and a BFF at that. I used to think that phrasing a Google search to get good results was an art form. I’m fairly sure that learning to work augmented by AI is the future of work. I hope it leads to fewer people and animals being used as intelligent machines to fulfil services, and to us just using machines as machines instead.
Many people are worried about legal issues and AI breaking the law. There was a similar discussion a few years ago, when self-driving cars were thought to be just around the corner. It turned out we already had relevant law: the law governing horses, which settles whether breeder or rider is liable in the event of an accident. I think this principle generalises; we already have law to govern intelligence - the law we have for people. AI and LLMs shouldn’t do things that we aren’t allowed to do. It then becomes a question of restriction, enforcement (which may need engineered solutions, rather than relying entirely on self-enforcement) and liability assignment, not of making new law.