Recently, I’ve been hearing that information security teams, who already typically conduct a vendor evaluation before onboarding a vendor and granting it access to company data, have started requesting a second review when that vendor also offers AI functionality. At this rate, nearly every vendor will have AI functionality by the end of the year, and running two security evaluations for each one is very inefficient. A year or two ago, you might have been able to say that only a minority of vendors had AI features… not today, and not ever again.
One interesting example was a company where at least 20%¹ of staff used Grammarly. Now, Grammarly isn’t some unheard-of vendor; it's well known to nearly all information workers and used by a vast number of them, too. The company recently asked everyone who had Grammarly installed to uninstall it, because it realised that Grammarly has AI features… Consider what the company is trying to protect itself from: liability for a data breach at Grammarly, against which it has inadequate protection because employees are on Grammarly’s free tier rather than an enterprise plan.
I understand why the information security team felt the need to tell employees to stop using Grammarly here. It isn’t really going to secure the data, though; that horse bolted years ago through employees’ everyday use. It is primarily a liability shift: the employees have now been instructed not to use Grammarly, so the company has likely taken reasonable steps to avoid invalidating its insurance policy should a data breach occur.
However, my main problem is how reactive the information security team is being here. Of course, an information security team cannot preempt every possible vendor an employee might use, especially when the tool choice is made by engineering, who will know their tools better than the information security team does. But where tools are general-purpose and used by all employees, I think we should expect information security to be more proactive. For example, it is becoming common for employers to provide employees with enterprise ChatGPT accounts, because unofficial usage was already so widespread that inappropriate data leakage was all but certain. This is exactly the type of tool an information security team should proactively offer employees, to prevent data leaking into free-tier or non-corporate accounts.
If you look at the tools a typical employee uses (office suite, operating system, browser, email, calendar), all of them will soon have AI features or integrations. It makes more sense for information security to be opinionated about which of these it considers secure, from both a software and an AI architecture perspective, and to propose which ones procurement should buy. Going halfway doesn’t make sense.
For example, imagine your company used Google Workspace for email, calendar, office applications, and more. Enterprise Google Workspace accounts have Gemini built into nearly every app, and they come with the standalone Gemini app (used much like ChatGPT) and NotebookLM, one of the most advanced AI tools available. It isn’t good enough for an infosec team to say it disapproves of the AI features and AI applications that are available through Workspace accounts. If they are accessible, at least a minority of employees will use them, and even where the majority tries to stay in line with the infosec policy, usage will probably still happen accidentally.
Clearly, tools like Grammarly are widely used by employees, probably at every company. It would make sense to get an enterprise account with pro features for all employees, instead of waiting for someone to request an infosec review they may not even have realised was needed.
The way employees work is undergoing a change we haven’t seen since the dawn of office computing. Whole tool categories are being upended or transformed by AI. The best employees want to maximise their capabilities with AI and want their teams to lean the same way. Information security teams are at risk of becoming an obstacle. Yes, sometimes they should be one, but they could also do with being proactive and opinionated, especially during this era of rapid change.
If you think Anthropic fits your security profile better than OpenAI, consider purchasing a Claude enterprise subscription for your company. Give everyone access, then instruct them to stop using ChatGPT; you have now provided an alternative. You can’t simply say no and offer nothing in its place; it won’t work. I also think that where vendors have already been given access to sensitive data, worrying about their new AI features is pointless. Those vendors will already be ensuring that liability and risk do not materially increase from offering AI features such as BYOLLM (bring your own LLM), and that they hold the appropriate commercial licences and insurance. Verify that vendors have these measures in place as part of your standard information security approval process, rather than running a separate one.
¹ I say “at least” because the number of workers was presumably calculated from installations of some combination of the desktop app and browser extensions on corporate laptops. I doubt that would have captured web use without an extension, or every possible browser extension.
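To illustrate why such an install-based count is only a lower bound, here is a minimal, purely hypothetical sketch of how it might be produced from an endpoint inventory export. The CSV file name, its columns, and the Grammarly identifiers are all assumptions for the example, not any real MDM schema:

```python
import csv

# Hypothetical MDM/endpoint inventory export, one row per installed item:
# columns "user", "device", "item_name". Assumed schema, not a real product's.
INVENTORY_CSV = "endpoint_inventory.csv"

# Identifiers treated as "a Grammarly install"; illustrative names only.
GRAMMARLY_MARKERS = {"grammarly", "grammarly desktop", "grammarly for chrome"}

def estimate_adoption(path: str) -> float:
    users = set()            # every user seen in the inventory
    grammarly_users = set()  # users with a detectable Grammarly install
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            users.add(row["user"])
            if row["item_name"].strip().lower() in GRAMMARLY_MARKERS:
                grammarly_users.add(row["user"])
    # Lower bound: anyone using grammarly.com in a browser without the
    # extension, or via an unlisted extension, is invisible to this count.
    return len(grammarly_users) / len(users) if users else 0.0

if __name__ == "__main__":
    print(f"At least {estimate_adoption(INVENTORY_CSV):.0%} of staff have Grammarly installed")
```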