RisingAttacK: New Method Renders Objects “Invisible” to AI Image Analysis Systems
Researchers at North Carolina State University have developed a novel method for deceiving artificial intelligence systems tasked with image analysis. The technique, dubbed RisingAttacK, can effectively render objects invisible to AI, even when those objects are clearly present in a photograph.
The essence of the approach lies in imperceptible alterations to the image—changes so subtle that they escape human notice. Two pictures may appear identical to the naked eye, yet an AI model might detect a car in one and fail to recognize it in the other, despite the vehicle being plainly visible in both.
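To make the idea concrete, the sketch below shows a classic gradient-sign perturbation (an older, generic technique, not RisingAttacK itself) that nudges each pixel by at most 2/255, far below what the eye can notice, and checks whether a pretrained classifier changes its prediction. It assumes PyTorch and torchvision are installed; the input file car.jpg is hypothetical.

```python
# Illustrative only: a generic FGSM-style perturbation, NOT the RisingAttacK algorithm.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
x = preprocess(Image.open("car.jpg").convert("RGB")).unsqueeze(0)  # hypothetical image
x.requires_grad_(True)

# Clean prediction.
logits = model(normalize(x))
label = logits.argmax(dim=1)

# One gradient step against the predicted class, kept within an invisible budget.
loss = torch.nn.functional.cross_entropy(logits, label)
loss.backward()
epsilon = 2.0 / 255.0                                   # per-pixel change, imperceptible
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    print("clean top-1 class:    ", label.item())
    print("perturbed top-1 class:", model(normalize(x_adv)).argmax(dim=1).item())
    print("max pixel change:     ", (x_adv - x).abs().max().item())
```

If the attack works, the two printed class indices differ even though the two images are visually indistinguishable.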
Such attacks pose serious threats in fields where computer vision systems are integral to safety. Malicious actors could, for instance, prevent autonomous vehicles from recognizing traffic lights, pedestrians, or other vehicles on the road.
The risks extend to medicine as well. Hackers could compromise radiographic devices, leading AI-driven systems to produce erroneous diagnoses. Security technologies that rely on automated image recognition are similarly exposed.
RisingAttacK operates in a series of calculated steps. First, it identifies all of the visual features in an image. It then determines which of those features matter most for the attack's goal. Though computationally intensive, this process allows the method to make extremely small, precisely targeted changes.
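The paper's actual optimization is not reproduced here, but the general flow described above (rank input features by how strongly they influence the target output, then perturb only the highest-ranked ones by a tiny amount) can be sketched roughly as follows, reusing the model, normalize, and x variables from the previous snippet. The function name and the choice of k are illustrative assumptions.

```python
import torch

def topk_feature_attack(model, normalize, x, target_class, k=1000, step=2.0 / 255.0):
    """Hedged sketch: suppress `target_class` by nudging only the k most influential inputs."""
    x = x.clone().detach().requires_grad_(True)
    score = model(normalize(x))[0, target_class]    # model's confidence in the target object
    score.backward()

    grad = x.grad.detach()
    # Rank every input value by the magnitude of its influence on the target score.
    flat = grad.abs().flatten()
    topk_idx = flat.topk(k).indices

    # Build a sparse perturbation that touches only the top-k entries,
    # pushing each in the direction that lowers the target score.
    delta = torch.zeros_like(flat)
    delta[topk_idx] = -step * grad.flatten()[topk_idx].sign()
    return (x.detach() + delta.view_as(x)).clamp(0, 1)

# Example use with the variables from the previous sketch:
# x_adv = topk_feature_attack(model, normalize, x, target_class=label.item())
```

The real method involves considerably more sophisticated optimization; the point of the sketch is only to show why concentrating changes on the most influential features keeps the overall modification minimal.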
“We sought to discover an effective method for breaching computer vision systems, as these technologies are frequently deployed in scenarios that directly impact human health and safety,” explained Tianfu Wu, one of the study’s authors.
The researchers tested their method against four widely used computer vision models: ResNet-50, DenseNet-121, ViT-B, and DeiT-B. The results were unequivocal: RisingAttacK succeeded against every one of them.
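Readers who want to probe these architectures themselves could use a rough evaluation harness along the following lines. The model names are those used in the timm model zoo (an assumption; the paper's exact checkpoints may differ), attack_fn stands in for any perturbation routine such as the sketches above, and nothing here reproduces the paper's actual benchmark.

```python
# Hedged sketch of an evaluation loop over the four architectures named above.
import timm
import torch

MODEL_NAMES = ["resnet50", "densenet121", "vit_base_patch16_224", "deit_base_patch16_224"]

def evaluate(attack_fn, images):
    """images: list of (1, 3, 224, 224) tensors, assumed already preprocessed per model."""
    for name in MODEL_NAMES:
        model = timm.create_model(name, pretrained=True).eval()
        unchanged = 0
        for x in images:
            x_adv = attack_fn(model, x)                 # craft the perturbation
            with torch.no_grad():
                same = model(x).argmax(1) == model(x_adv).argmax(1)
            unchanged += int(same.item())
        print(f"{name}: {unchanged}/{len(images)} predictions survived the attack")
```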
The team emphasizes the importance of identifying such vulnerabilities. Only by acknowledging the existence of these threats can meaningful and robust defenses be devised. Accordingly, they are already working on countermeasures to neutralize such attacks.
The scientists are now exploring the feasibility of adapting RisingAttacK to target other types of AI systems, including large language models. This line of inquiry may reveal the true extent of the risks posed by adversarial manipulation.