CTOs are constantly asked about the latest technology craze, and I am a skeptical, “show me” kind of CTO. I remember the NFV craze and implemented a very early NFV product to see if it would succeed on the market (the technology worked; processes and inertia made it “fail”). I have been following the evolution of GenAI and, like most people, saw promise in the technology but knew it needed time to mature. ChatGPT, in particular, kept getting better, and before long it had reached a level where it could be used for things far more interesting than answering questions. I was especially interested in the In-Q-Tel piece that Tom mentioned in his blog last week, because it hypothesized that Defensive AI would likely cancel out Offensive AI. Then I saw the launch of two malicious derivatives, WormGPT and FraudGPT, and suddenly I got very interested in GenAI.
When presented with a new technology, I gauge its impact on the customers I serve by assuming it can do half of what it advertises (which is often still optimistic). GenAI has the potential to drastically upscale the capabilities of bad hackers, which is the most dangerous thing I can imagine. Anyone who grew up watching WarGames and Hackers saw high school students taking down major computer systems and thought, “I can do that!” Then they found out how hard it was, even after downloading all the tools they could find online.
What if they didn’t have to do any of that?
Imagine I am a teenager who has had a bad experience with a brand. To get revenge, I want to attack one of that brand’s manufacturing plants. I know nothing about the plant and am not a hacker. So I ask my AI to find out who the critical network manager at the plant is, study their online profiles and writing style, and then write targeted phishing emails to their employees to steal credentials. I can also ask the AI to code network reconnaissance tools based on the vendors the company uses (often identifiable from vendor announcements, public RFP releases, etc.) and to determine whether any of that equipment has known vulnerabilities. Then I wait for the phishing to work, use the tools to penetrate the network, and see what I can do.
The UK’s National Cyber Security Centre published a report, “The near-term impact of AI on the cyber threat.” I highly recommend reading it, because it shows that the threat is near-term; there really isn’t time to wait for Defensive AI to arrive. The report includes a table that should scare any OT network administrator:
Source: UK NCSC: “The near-term impact of AI on the cyber threat”
The two most common first steps in an attack are reconnaissance and phishing, and the chart shows that both will be uplifted by AI. Malware tools then exploit the vulnerabilities those two vectors uncover, and lateral movement begins once the attacker is inside the network.
What if these attack vectors were simply not available to hackers?
That would cut off any attack at the knees, breaking the cyber kill chain.
And that is what we mean when we say BlastShield is AI-resistant. Phishing doesn’t work. Reconnaissance fails. Malware can’t be installed if the systems can’t be seen or reached. We are not using AI to defend OT networks; we are blocking the attack vectors that AI uses to start hacks.
The way to help Defensive AI (when it becomes a reality) win the battle against Offensive AI is to minimize the attack surface it must defend. That is the task I am taking on at BlastWave, and I invite you to see what we can do to protect your OT network.
Experience the simplicity of BlastShield to secure your OT network and legacy infrastructure.