Kate Middleton Photo: Why the editing and manipulation controversy could be just the beginning


It was apparently intended to put an end to the speculation. But the latest image released of the Princess of Wales has only led to more questions.

As soon as the image was posted, people began noticing inconsistencies: a sleeve that seemed to disappear, blurred patches around the edges of the clothing. Many suggested it had been edited, and several UK and international photo agencies became so concerned that they withdrew the image, telling the world they could not be sure it was genuine.

The day after its publication, a new statement attributed to Kate appeared in a tweet. “Like many amateur photographers, I experiment with editing from time to time,” she said. “I wanted to express my apologies for any confusion caused by the family photograph we shared yesterday. I hope everyone celebrating had a very happy Mother's Day. C.”

Areas of the photo that appear to be edited (Prince of Wales/Kensington Palace/PA Wire)

The post does not say how the edits were actually made: what changes were applied, or what software was used to make them. While it has generated plenty of speculation about artificial intelligence, there is nothing to indicate whether or not AI was used on the image.

But the suggestion that it was edited in the way that “many amateur photographers” edit could be a sign that doctored images are becoming both more common and more convincing. There is a long history of misleading images, but they have never been easier to create than they are now.

In fact, edited images are now so common that the people taking them may not even realize they are doing it. New phones and other cameras include technology that attempts to improve images, but can also change them in unknown ways.

Google's new Pixel phones, for example, include a “Best Take” feature that is a key part of their marketing. It is an attempt to solve a problem that has plagued photography since people first used it for group portraits: in any set of photographs of a group of people, somebody is sure to blink or look away. Wouldn't it be nice to be able to combine all the best parts into one enhanced, composite image?

The Google Pixel 8 Pro was officially unveiled on October 4, 2023 (Google)

That is what the Pixel does. People can take a burst of similar photos, and the phone will line them up and find people's faces. Those faces can then be swapped: the face of a blinking person can be replaced with one from another frame, blended in so the join is invisible.
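Google has not published how Best Take is implemented, but the basic idea — find a face in one frame of a burst and blend it into another — can be sketched with off-the-shelf tools. Here is a minimal illustration in Python using OpenCV's bundled face detector and Poisson blending; the file names are placeholders, and it assumes the burst frames are already aligned, which a real pipeline would handle itself:

```python
# Toy sketch of burst compositing: replace the first detected face in one
# frame of a burst with the same region from another frame.
# Not Google's algorithm -- just an illustration of the general idea.
import cv2
import numpy as np

base = cv2.imread("burst_frame_1.jpg")   # frame where someone blinked
donor = cv2.imread("burst_frame_2.jpg")  # frame where their eyes are open

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
gray = cv2.cvtColor(base, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

x, y, w, h = faces[0]                    # assume the first face is the blinker
patch = donor[y:y + h, x:x + w]          # same region in the donor frame
mask = np.full(patch.shape[:2], 255, dtype=np.uint8)
center = (x + w // 2, y + h // 2)

# Poisson blending hides the seam so the swapped face looks integrated
composite = cv2.seamlessClone(patch, base, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("composite.jpg", composite)
```

Even this crude version produces a picture of a moment that never quite happened — which is exactly the debate the feature has prompted.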

Also recently, users of newer Samsung phones noticed that their cameras seemed to be overlaying a different Moon on photos they had taken. If they pointed the camera at a blurry image of the Moon, new details appeared that were not actually there; the behaviour only came to light after an investigation on Reddit.

Controversy ensued, and Samsung admitted that its phones include a built-in “deep learning-based AI detail enhancement engine” that can detect the Moon and add details that were not actually present when the image was taken. Samsung said the system was built to “improve image details”, but some affected customers complained that they were ending up with images of the Moon they never actually took.

It has also become ever easier to change parts of a photo after taking it. Adobe has introduced a tool called “generative fill” in Photoshop: users can select part of a photo, tell an AI what they would like in its place, and the software makes it happen. A mismatched sweater, for example, can be swapped for a more flattering one in a matter of seconds.
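Adobe has not detailed the model behind generative fill, but the same select-and-replace workflow can be reproduced with an open-source inpainting model. A minimal sketch using Hugging Face's diffusers library follows; the file names and prompt are placeholders, and the mask is simply an image that is white wherever the photo should be repainted:

```python
# Sketch of a "generative fill"-style edit with an open-source inpainting
# model -- the same idea as Adobe's tool, not its actual implementation.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

photo = Image.open("portrait.jpg").convert("RGB").resize((512, 512))
mask = Image.open("sweater_mask.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="a plain navy-blue wool sweater",  # what to put in the masked area
    image=photo,
    mask_image=mask,                          # white pixels get repainted
).images[0]
result.save("portrait_edited.jpg")
```

The point is how little the edit demands of the user: no retouching skill, just a selection and a sentence.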

The many controversies have prompted conversations about what an image actually is. Photographs may never have been a simple matter of light hitting a sensor, but they have become much more complicated in recent years. The era of “computational photography” means that devices process images in ways that can make them more attractive but less accurate; readily available editing tools mean that precise changes to photographs are no longer confined to the darkroom.

Much of the recent conversation around image manipulation has focused on generative artificial intelligence, which makes it easy to edit images or create them outright. But concerns about fake images go back much further: Photoshop, software so widespread that its name became synonymous with misleading edits, was first created in 1987, and the first faked images appeared almost as soon as modern photography was invented.

However, the rise of AI has raised new concerns about how fake images could damage trust in any type of image, and has generated new work to try to prevent that from happening. This has included a new focus on detecting and removing misleading images from social media, for example.

The same tech companies that are building tools to edit images are also looking for ways to help people spot those edits. Adobe has a system called “Content Credentials” that lets users record whether and how an image has been edited; OpenAI, Google and others are exploring invisible watermarks so that people can check where an image came from.
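The watermarking schemes under development (such as Google DeepMind's SynthID) are proprietary and designed to survive cropping and re-compression. The underlying idea, though — hiding a machine-readable signal in pixel values no viewer can see — can be shown with a deliberately fragile toy: embedding a text tag in the least significant bits of an image. This is a sketch of the concept only, not any company's scheme, and it is destroyed by almost any re-encoding:

```python
# Toy invisible watermark: hide an ASCII tag in the least significant bit
# of the red channel. Real watermarks are built to survive edits;
# this one only survives lossless formats such as PNG.
import numpy as np
from PIL import Image

def embed(image_path: str, tag: str, out_path: str) -> None:
    pixels = np.array(Image.open(image_path).convert("RGB"))
    bits = np.array([int(b) for byte in tag.encode() for b in f"{byte:08b}"],
                    dtype=np.uint8)
    flat = pixels.reshape(-1, 3)          # view onto the same pixel buffer
    flat[:len(bits), 0] = (flat[:len(bits), 0] & 0xFE) | bits
    Image.fromarray(pixels).save(out_path, format="PNG")

def extract(image_path: str, n_chars: int) -> str:
    flat = np.array(Image.open(image_path).convert("RGB")).reshape(-1, 3)
    bits = flat[:n_chars * 8, 0] & 1
    chars = [int("".join(str(b) for b in bits[i:i + 8]), 2)
             for i in range(0, n_chars * 8, 8)]
    return bytes(chars).decode(errors="replace")

embed("photo.png", "made-with-ai", "tagged.png")
print(extract("tagged.png", len("made-with-ai")))  # -> "made-with-ai"
```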

Some useful information is already hidden in image files. Today's cameras embed details in the files they create about what equipment was used and when the picture was taken, for example, although that information is easy to delete.
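That embedded information is EXIF metadata, and both reading it and stripping it take only a few lines. Here is a minimal sketch with the Pillow library; the file names are placeholders:

```python
# Read a photo's EXIF metadata -- camera model, capture time, software --
# then save a copy with the metadata stripped.
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("photo.jpg")
for tag_id, value in img.getexif().items():
    # Map numeric tag ids to names, e.g. Model, DateTime, Software
    print(f"{TAGS.get(tag_id, tag_id)}: {value}")

# Deleting it is just as easy: copy the pixels into a fresh image,
# leaving the metadata behind.
clean = Image.new(img.mode, img.size)
clean.putdata(list(img.getdata()))
clean.save("photo_no_exif.jpg")
```

This is why such metadata helps honest provenance checks but cannot, on its own, prove anything about a disputed image.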

Traditional photo agencies have long had rules prohibiting edited or misleading images. But those rules require the agencies to exercise some discretion: correcting colours is a routine part of a photographer's work, for example, and agencies often distribute photographs from outside sources that they cannot necessarily verify, as was the case with the photograph of Kate.

The Associated Press, which was one of the first agencies to pull the image, says in its code of ethics for photojournalists that “AP photographs must always tell the truth” and that “we do not digitally alter or manipulate the content of a photograph in any way.”

Those firm words are not quite as absolute as they seem. The AP does allow “minor adjustments in Photoshop”, such as cropping an image or correcting its colours. But the purpose of those, it says, is to “restore the authentic nature of the photograph”.

Similarly, the AP code does allow images that “have been provided and altered by a source”. But it says the caption must explain this clearly, and it requires that the transmission of such images be approved by an experienced photo editor.

The agency has similar rules for AI-generated images: they cannot be used to add or remove elements from a photo, and they cannot be used at all if they are “suspected or proven to be false representations of reality”. There was no indication that Kate's image had anything to do with AI, and neither the AP nor the other photo agencies mentioned the technology in their statements. But however it was edited, the image emerged into a world more attuned than ever to both the ease and the danger of misleading pictures.

Much of the work on these kinds of standards has been done over the past year or so, since ChatGPT launched and sparked new enthusiasm for artificial intelligence. But that work has produced new standards for flagging misleading images, new scrutiny even of photographs taken decades earlier, and new awareness of how easy it is to fool people. It may be easier than ever to create fake images, but that may also have made it much harder to get away with it.
