Businesses rely on AI/ML for its extremely fast processing, but it has also been shown to create massive blind spots, according to Ben Pick, senior application security consultant at nVisium.
“The decision tree algorithms used by AI/ML are based on assumptions and are frequently shown to have severe obstructions and oversights,” Pick said in an email interview.
There is also the complexity of AI/ML's additional use cases and capabilities, which don't necessarily overlap with security. AI/ML specialists often contribute to projects alongside the normal development team, which makes building security into AI/ML functions even more challenging.
This struggle to incorporate security into devices' AI/ML functions plays directly into threat actors' search for exploitable vulnerabilities. As much as AI/ML can assist in securing applications, it cannot remove all risk.
“A hacker’s main goal would be to corrupt the inputs to confuse the decision-making algorithms,” said Pick. “This could lead to a piece of duct tape over a speed limit sign—causing an automated vehicle to speed up to an unsafe velocity—or facial recognition incorrectly identifying a person.”
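The kind of input corruption Pick describes is often called an adversarial example: a small, targeted change to the input that flips a model's decision. A minimal sketch of the idea, using a hypothetical toy linear classifier (the weights, inputs, and labels below are illustrative, not from any real system):

```python
import numpy as np

# Toy linear classifier: w @ x + b > 0 means "stop sign", else "speed limit".
# All weights and inputs are hypothetical, for illustration only.
w = np.array([1.5, -2.0])
b = 0.1

def predict(x):
    return "stop sign" if w @ x + b > 0 else "speed limit"

x = np.array([0.4, 0.2])        # clean input, classified as "stop sign"

# Attacker's step: nudge the input against the decision boundary,
# in the spirit of the fast gradient sign method (FGSM).
eps = 0.2
x_adv = x - eps * np.sign(w)    # small, targeted perturbation

print(predict(x))               # "stop sign"
print(predict(x_adv))           # flipped to "speed limit"
```

A perturbation this small can be imperceptible to humans (the duct-tape analogue) while completely changing the model's output, which is why defenders cannot treat model inputs as trusted.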