AI is changing the world of cybersecurity
In today’s digital age, cybersecurity is like a fortress wall that protects our online world from the intruders and dangers lurking outside. But guess what? Both the fortress walls AND the outside world are evolving faster than ever before. We live in an era of constant, fast change. And we must change too, or we’ll fall behind… or worse. There are new dangers out there, but also new ways to protect ourselves.
The importance of protecting yourself online
Whether you’re an individual or a business, protecting yourself online is crucial:
- Imagine someone gaining access to your bank accounts, leading to drained savings, ruined credit scores, and endless legal battles to reclaim your identity.
- Your personal information could be sold on the dark web, leading to constant harassment from scammers and fraudsters.
- Additionally, compromised social media accounts might damage your reputation, impacting your job prospects or relationships.
Neglecting your online security can cause serious problems that extend well beyond the online world, affecting your financial stability, mental well-being, and overall quality of life.
ChatGPT
If you’re following the news a bit, you’ve probably heard about AI & ChatGPT, and how powerful and cool it is, right? While most people are focused on the benefits that AI brings, we must not forget that “with great power comes great responsibility”.
What if this power is used against you, to attack your online presence, identities, financials, and data? That’s right, it can be very dangerous! Well, perhaps not ChatGPT, but the technology behind it is.
Let’s dive a little deeper into this technology, how it affects keeping our stuff safe, and how we can get ready for it.
AI, the Hero of cybersecurity
Ever heard about Autonomous AI or “AgentGPT”? You can see it as a small army of AI-powered digital helpers that can do a lot of work, all by themselves. These little guys can help cybersecurity teams with important jobs like:
- Testing for weak spots (penetration testing)
- Checking if users are really who they say they are
- Predicting where danger might come from
- Taking actions in case of danger
- …
Sounds amazing, right? Microsoft has actually built something like this, called “Microsoft Security Copilot”. This level of automation goes far beyond what we know today, and it empowers security teams in a way that was not possible before.
The dark side of AI
Just like AI can be used for good, it can of course also be used to do bad things.
Take the little AI helpers we mentioned earlier, but ask them to crack passwords, break into secure computer systems, pretend to be your friend, spread bad software, or even crash websites and servers.
I’ll make these risks a bit clearer in the following paragraphs.
Risk #1: Deepfakes
Imagine if you couldn’t trust your eyes and ears. With the rise of “deepfake videos”, that has become really hard at times. You might have seen them on the internet already. If not, you should definitely Google the “Tom Cruise deepfake” or “Simon Cowell deepfake”.
Deepfakes are digital disguises created by computers. These disguises make someone look, speak and act like someone else, or make it seem like they’re doing things they never actually did.
One of the leading names (in a positive way) in the deepfake scene is Belgian YouTuber, Chris Ume. But how are deepfakes potentially harmful for your business?
Deepfake security risk example
Imagine you’re working in HR. You’re interviewing a potential new database administrator – let’s call him Martin – over Zoom, Google Meet or Teams, which is quite common nowadays. His profile checks out, he passed all the reference checks and you decide to hire him. As a database admin, he will have access to highly sensitive customer data. He looks like a great fit for this new remote position, but there is only one problem …
The person you interviewed was in fact not Martin, but a cyber-criminal using Martin’s identity to impersonate him via Deepfake technology. Instead of having to hack your organization to access your customers’ data, you’ve simply handed him the keys to walk straight through the front door!
Risk #2: Voice Cloning
What if your finance department suddenly receives a phone call from “their boss”, telling them to transfer a large amount of money to a certain company? Or your tech support receives a call from their manager, asking them to reset his password?
When pointed out like this, it’s quite obviously a scam. But imagine someone who sounds exactly like your boss, shouting at you and threatening repercussions. Wouldn’t you do as he says?
Chances are very, very likely that you would. Because with AI-powered voice cloning software, anyone can be impersonated.
Risk #3: Verification Fraud
With tools such as DALL-E, Midjourney or Stable Diffusion, hackers can easily generate fake videos and photographs of you. These can then be used to pass identity and security checks, which pose very clear risks: Bank accounts can be accessed, transfers can be authorized, or even fake assets can be created to secure financial loans in your name.
Risk #4: AI and quantum computers
Let’s talk about quantum computers. Despite sometimes being lumped in with “supercomputers”, they are a fundamentally different kind of machine, with the potential to solve certain problems far faster than any classical computer. As of now, quantum computers are not widely available for general use and are mainly found in the specialized research labs of big companies like IBM, Google and Microsoft.
Now imagine that the little AI-powered helpers we mentioned earlier had access to these quantum computers. They could cause serious problems when – for some reason – they’re used for evil:
- Huge threats to our existing cryptographic systems
- Compromise of digital signatures, enabling the forgery of documents and impersonation of individuals
- Expedited password cracking and reverse engineering
- Brute-force attacks
- High speed vulnerability scanning and breaching
- …
In short: tasks that would take hackers weeks or months can be done in mere days, minutes or even seconds with this kind of computing power.
So, in the wake of this revolutionary technology, we should re-evaluate and adapt our encryption methods and security measures accordingly.
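To make the impact on encryption concrete: a quantum computer running Grover’s algorithm can search N possible keys in roughly √N steps, which effectively halves the bit strength of a symmetric key. This is a well-known rule of thumb (and one reason experts recommend moving to longer keys), sketched here in a minimal Python calculation:

```python
import math

def effective_security_bits(key_bits: int) -> int:
    """Grover's algorithm searches N keys in ~sqrt(N) steps,
    roughly halving the effective bit strength of a symmetric key."""
    return key_bits // 2

for bits in (128, 192, 256):
    classical_guesses = 2 ** bits            # brute-force work on a classical machine
    grover_steps = math.isqrt(classical_guesses)  # ~sqrt(N) quantum search steps
    print(f"{bits}-bit key: ~2^{effective_security_bits(bits)} effective security bits")
```

So a 128-bit key that is safe against classical brute force offers only about 64 bits of security against a large-scale quantum attacker – which is why 256-bit keys are the common recommendation for a post-quantum world.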
But what if we fight “fire with fire”?
We believe that the most effective solution is for companies to adopt AI and machine learning for general security and authentication purposes. Fighting AI, using AI. But what does that entail exactly?
You could, for example, invest in liveness detection to catch impersonation attempts. This technology helps distinguish genuine user interaction from attempts to trick authentication systems with a photograph or a pre-recorded video.
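A common building block of liveness detection is a challenge-response flow: the system asks the user to perform a random, unpredictable action and only accepts a matching response within a short time window. The sketch below models just that protocol logic in Python – the challenge list, timings and function names are illustrative assumptions, and a real system would use computer vision to confirm the action was actually performed:

```python
import random
import time

# Hypothetical sketch of a challenge-response liveness check.
# Pre-recorded footage can't react to a challenge it has never seen,
# so an unpredictable prompt plus a tight deadline filters it out.

CHALLENGES = ["blink twice", "turn head left", "smile", "raise eyebrows"]
MAX_RESPONSE_SECONDS = 5.0  # canned video can't respond this quickly

def issue_challenge() -> tuple[str, float]:
    """Pick an unpredictable challenge and timestamp it."""
    return random.choice(CHALLENGES), time.monotonic()

def verify_response(challenge: str, observed_action: str,
                    issued_at: float, responded_at: float) -> bool:
    """Accept only if the observed action matches the challenge
    and the response arrived within the deadline."""
    return (observed_action == challenge
            and (responded_at - issued_at) <= MAX_RESPONSE_SECONDS)

challenge, t0 = issue_challenge()
print(verify_response(challenge, challenge, t0, t0 + 1.2))  # live user: True
```

The key design choice is randomness: if the attacker could predict the challenge, they could simply pre-record the right clip.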
Another option is to invest in AI security technologies, such as Microsoft’s Security Copilot, Microsoft Sentinel or simply Microsoft Defender. These use automated investigations and machine learning to enhance threat detection, response, and your overall cybersecurity posture.
But implementing new technologies alone will not be enough. Adaptation and awareness are just as important.
More AI Protection best practices
In addition to investing in AI solutions for cybersecurity, there are several best practices we advise you to follow:
- Make a risk assessment of your current procedures. Check how interviews for sensitive positions are carried out. Implement authentication techniques to verify the applicant’s identity, or avoid relying solely on online interviews.
- Communicate deepfake awareness throughout your company. Help your employees spot deepfakes by watching for telltale signs such as unnatural glare, lip movement and facial hair.
- Implement verification security guidelines so that “a regular phone call” is never enough to make substantial changes or decisions.
- Create a process for securing the permissions of any AI bot that needs access to your critical systems. How do you control what type of information it’s accessing? How do you educate it on the sort of emails it can write?
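The verification guideline above boils down to one rule: a request that arrives over one channel must be confirmed over an independent, pre-registered channel before anyone acts on it. Here is a minimal sketch of that rule in Python – the channel names, `Request` structure and `may_execute` function are illustrative assumptions, not an actual product:

```python
from dataclasses import dataclass

@dataclass
class Request:
    requester: str       # who claims to be asking
    channel: str         # channel the request arrived on
    confirmed_via: set   # channels that independently confirmed it

# Channels registered in advance, outside an attacker's control
REGISTERED_CHANNELS = {"callback_to_number_on_file", "signed_email", "in_person"}

def may_execute(req: Request) -> bool:
    """A sensitive action needs at least one confirmation on a
    registered channel *different* from the one the request used."""
    independent = req.confirmed_via & (REGISTERED_CHANNELS - {req.channel})
    return len(independent) >= 1

# "The boss" calls and demands a transfer: not enough on its own.
call_only = Request("CFO", "phone_call", set())
print(may_execute(call_only))  # False

# Same request, confirmed via a callback to the number on file.
confirmed = Request("CFO", "phone_call", {"callback_to_number_on_file"})
print(may_execute(confirmed))  # True
```

Note that the confirming channel must differ from the requesting one: a cloned voice calling back on its own line proves nothing.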
But most importantly, you should regularly verify your current security posture by conducting a thorough cybersecurity assessment. How well are you protected right now and what should your top priority be?
Want more information about Cyber Risk Assessments? Reach out to us.
Conclusion
We are entering uncharted territory at a fast pace. All these new advancements in AI and quantum computing are as exciting as they are scary, and they demand that we stay alert and prepared.
As cybersecurity professionals, it’s our duty to stay on top of these trends. And to keep learning and adapting to ensure that we harness the benefits of these developments, while also effectively mitigating the potential risks.