The Threat of Deepfake Satellite Imagery

Maxar satellite image of homes and a neighborhood in Altadena, California, captured January 6, 2025, before the Altadena fire. Satellite image © 2025 Maxar Technologies.

You might encounter a striking satellite image on social media, showing, for instance, a military installation ablaze. The critical question is: is it genuine?

Advances in artificial intelligence have made fabricated content dramatically easier to create. Satellite imagery has long been a reliable means of verifying information for journalists, governments, and the public. But AI now jeopardizes what was once considered an unimpeachable source of truth.

An AI-fabricated satellite image shared online probably won't single-handedly ignite a conflict or deceive the armed forces of a well-resourced nation such as the U.S., which operates its own array of satellites for verification. But such images can still be powerful instruments for swaying public perception, eroding our broader information environment.

Numerous instances of deepfake satellite images have emerged this year alone.

In June, Ukraine used drones to strike Russia's prized long-range bombers. High-resolution images of the aftermath quickly circulated online, showing several destroyed Russian bombers (and a transport aircraft) reduced to scorched wreckage. Yet the bold Ukrainian operation was also accompanied by fabricated images implying a greater success than the 10 Russian warplanes officially reported destroyed.

Later that month, after U.S. and Israeli strikes on sites associated with Iran's nuclear program, more fakes surfaced. One image showed a crowd surrounding a damaged Israeli F-35, while a video was falsely claimed to be footage from an Iranian missile's internal sensors. These visuals conveyed the impression of a more robust Iranian retaliation than Tehran actually carried out.

Similar incidents followed the four-day conflict between India and Pakistan in May: social media users on both sides circulated fabricated satellite images purporting to show that their country's armed forces had inflicted greater destruction than they actually had.

With over half the global population on social media, altered satellite images can spread immensely far, with nearly instantaneous effect. We have already seen how a single fake image can influence real-world events: in 2023, a picture falsely showing an explosion near the Pentagon moved the stock market until local authorities debunked it as a fabrication.

Experts have highlighted these dangers before, but contemporary fakes are becoming increasingly hard to distinguish from authentic images, even as they become simpler to produce.

Until recently, the models underlying freely available online tools could produce only rudimentary AI-generated satellite images, and their limitations showed in blurry, zoomed-out results. Today, crafting a high-quality fake requires nothing more than readily available software and the ability to type a prompt into your AI of choice.

Therefore, combating deceptive satellite images must be a collaborative, society-wide endeavor. Governments, media organizations utilizing such imagery, and commercial providers all have a role in helping their audiences recognize signs of manipulation.

Within the media landscape, news outlets that use satellite images in their reporting should provide or link to a clear explanation of their verification process, as some already do. Explaining how they corroborate satellite visuals with information gathered on the ground can significantly strengthen readers' confidence in reliable journalism.

Commercial providers, for their part, should offer resources or personnel to authenticate images purported to originate from them whenever feasible. Third-party software for identifying AI-generated images does exist, but it is imperfect and perpetually struggling to keep pace with ever more sophisticated models capable of producing highly realistic visuals.
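One simple form such a resource could take is a reference archive that lets journalists compare a suspect image against the provider's authentic original of the same scene. The sketch below illustrates the idea in Python using the open-source Pillow and ImageHash libraries; the file names and archive layout are hypothetical, not a description of any provider's actual service, and a perceptual-hash match is only a first-pass consistency check, not proof of authenticity.

```python
# Illustrative sketch: first-pass comparison of a suspect satellite image
# against a provider's authentic reference copy using perceptual hashing.
# The paths below are hypothetical stand-ins; a real provider service would
# need far more robust measures (e.g., cryptographically signed metadata).

from PIL import Image
import imagehash


def hash_distance(suspect_path: str, reference_path: str) -> int:
    """Return the Hamming distance between the two images' perceptual hashes.

    0 means visually identical at hash resolution; larger values mean the
    suspect image diverges substantially from the authentic reference.
    """
    suspect_hash = imagehash.phash(Image.open(suspect_path))
    reference_hash = imagehash.phash(Image.open(reference_path))
    # ImageHash overloads subtraction to yield the Hamming distance.
    return suspect_hash - reference_hash


if __name__ == "__main__":
    # Hypothetical files: an image circulating on social media, and the
    # provider's archived original of the same scene and date.
    distance = hash_distance("viral_image.png", "archive/scene_2025_06_01.png")
    if distance <= 8:  # threshold is a judgment call, tuned per use case
        print(f"Distance {distance}: consistent with the archived original.")
    else:
        print(f"Distance {distance}: diverges from the archive; treat with suspicion.")
```

Even a check like this can only show consistency with a scene the provider actually imaged; it cannot authenticate imagery of locations or dates absent from the archive, which is why the human expertise mentioned above remains essential.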

Information about how malicious actors employ deceptive tactics could also be incorporated into government publications. The Swedish government's brochure "In Case of Crisis or War," for instance, explains how foreign powers might use disinformation during periods of conflict and offers guidance on countering it. The Finnish government publishes a guide with additional insights into influence operations and tools for analyzing photos and videos encountered during crises.

Other nations should adopt similar approaches. While the U.S. Department of Defense’s Emergency Preparedness Guide touches upon media literacy, it lacks comprehensive details on the types of falsified content adversaries might generate.

Deceptive AI-generated material is clearly a pervasive problem, and satellite imagery will be an increasingly large part of it. This form of misinformation and disinformation demands far greater attention.