Internet users are using AI tools to upscale and “enhance” blurry FBI surveillance photos of a person of interest in the Charlie Kirk shooting, but the “enhanced” images invent fictional details rather than reveal hidden information. The practice shows how AI upscaling can mislead a criminal investigation by inferring features that were never present in the low-resolution originals.
Why this matters: AI upscaling has a documented history of fabricating details, including past incidents in which it turned a pixelated photo of Barack Obama into a white man and added nonexistent features to a photo of Donald Trump. “Enhanced” images like these can actively harm legitimate investigations.
What happened: The FBI posted two blurry surveillance photos on X, seeking information about a person of interest in the Charlie Kirk shooting.
The technical problem: AI upscaling tools don’t uncover hidden details in pixelated images. They extrapolate, inferring what might plausibly be there based on patterns in their training data, so the output is a guess rather than evidence.
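To see why, consider a toy illustration (not from the original reporting): many different high-resolution images collapse to the same low-resolution crop, so any upscaler has to guess at the missing detail. The sketch below uses random arrays as stand-ins for real photos and Pillow’s resize as a stand-in for an AI upscaler; the names and numbers are illustrative assumptions, not anything the FBI or the AI tools actually use.

```python
import numpy as np
from PIL import Image

rng = np.random.default_rng(0)

# Two distinct 64x64 grayscale images; random noise stands in for real faces,
# differing only in a small patch of fine detail.
face_a = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
face_b = face_a.copy()
face_b[20:40, 20:40] = rng.integers(0, 256, size=(20, 20), dtype=np.uint8)

def downscale(arr, size=(8, 8)):
    # Simulate a low-resolution surveillance crop with a box (averaging) filter.
    return np.asarray(Image.fromarray(arr).resize(size, Image.BOX))

low_a, low_b = downscale(face_a), downscale(face_b)

# The fine detail that distinguishes the two originals is largely destroyed by downscaling.
print("difference at full resolution:", np.abs(face_a.astype(int) - face_b.astype(int)).mean())
print("difference at low resolution: ", np.abs(low_a.astype(int) - low_b.astype(int)).mean())

# Any upscaler mapping the 8x8 crop back to 64x64 cannot know which original produced it;
# an AI model fills that gap with detail learned from training data, not from the photo.
restored = np.asarray(Image.fromarray(low_a).resize((64, 64), Image.BICUBIC))
print("restored vs. original A:      ", np.abs(restored.astype(int) - face_a.astype(int)).mean())
```

A generative super-resolution model differs from the bicubic resize used here mainly in that its guesses look far more convincing, which is precisely what makes the resulting images misleading in an investigation.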
Past examples: Incidents like the Obama and Trump upscales noted above show how dangerously inaccurate AI upscaling can be when applied to real-world images.