From the first rumbles of hype around the latest culture-shattering AI tools, developers and the coding-curious alike have been using them to generate code at the touch of a button. Security experts quickly pointed out that much of the code being produced was poor quality and vulnerable, and that, in the hands of those with little security awareness, it could send an avalanche of insecure apps and web applications crashing down on unsuspecting consumers.
And then there are those who have enough security knowledge to use it for, well, evil. For every mind-blowing AI feat, there seems to be a counter-punch: the same technology put to nefarious use. Phishing campaigns, deepfake scam videos, malware creation, general script kiddie shenanigans — these disruptive activities are now achievable far faster, and with lower barriers to entry.