The widespread adoption of advanced AI systems has enabled both individuals and businesses to leverage these technologies for various tasks, such as content generation and code creation.
However, this accessibility has also created opportunities for cybercriminals to exploit AI for sophisticated attacks. Threat actors can use AI to automate and accelerate attacks, enhancing their capabilities in areas like malware development, password cracking, and social engineering.
Cybercriminals have increasingly employed AI for malicious purposes, including using AI tools like ChatGPT to write harmful software and automate multi-user attacks.
AI can also capture sensitive information from users’ devices and operate autonomous botnets with swarm intelligence. Kaspersky’s research on AI in password cracking revealed alarming results.
Alexey Antonov, Lead Data Scientist at Kaspersky, stated, “We analysed this massive data leak and found that 32% of user passwords are not strong enough and can be reverted from encrypted hash form using a simple brute-force algorithm and a modern GPU 4090 in less than 60 minutes.”
He added, “We also trained a language model on the password database and tried to check passwords with the obtained AI method. We found that 78% of passwords could be cracked this way, which is about three times faster than using a brute-force algorithm.
Only 7% of those passwords are strong enough to resist a long-term attack.”
AI is also exploited for social engineering, generating realistic phishing messages and deepfakes that make scams harder to detect. Additionally, adversaries target AI algorithms through prompt injection and adversarial attacks, potentially compromising AI-driven systems.
As AI becomes increasingly embedded in our lives, addressing these vulnerabilities is critical. Kaspersky continues to use AI technologies to detect and counter threats, ensuring robust protection against evolving cyber risks.
As AI becomes more integral to daily life, addressing these vulnerabilities is crucial. Kaspersky continues to use AI to protect customers, employing various models to detect threats and researching AI vulnerabilities to enhance security and counter offensive AI techniques.