Foreword
Since the dawn of the Jedi, in a galaxy far, far away, a wise person once said: “Think evil. Do good.” The premise was simple, but became the rallying cry for the Hacking Exposed franchise. To defend yourself from cyberattacks, you must know how the bad guys work. Your strongest defense is a knowledgeable offense. And so the red team and hacker mindset was born.
As defenders (through offensive knowledge) we got better at understanding and preventing attacks. But the bad guys got better too, especially at automating and building intelligence into their attacks to bypass the controls put in place by the defenders. Now, with the near-ubiquitous use of AI and ML around the world, the bad guys are once again one step ahead, leveraging these technologies to their malicious ends. And around and around we go.
We are at the dawn of AI/ML's application to social engineering, and we need to understand it better. Our adversaries are turning language into a weapon, wielding it with social intelligence to create an illusion of conversation that can bypass 99 percent of human reasoning. And with automated systems like AutoGPT and Pentest GPT coming online, the prospect of a fully automated, synthetic hack sequence using AI/ML is clearly upon us.
With the 2023 social engineering attacks on MGM, Caesars, Clorox, and the Ethereum-based market maker Balancer, the world now knows what we as cybersecurity professionals have known for decades: that humans (users ...