The tech giant admits it uses AI to ‘enhance’ snaps.

  • The latest Samsung phones have a “Scene Optimizer” feature on their cameras
  • It uses AI to “enhance” the characteristics of certain objects in the shot, such as the moon
  • One Reddit user dubbed the effect “fake” and said it was applying a lunar surface texture



Samsung users are furious after discovering their smartphone’s “Space Zoom” camera tool uses artificial intelligence (AI) to “enhance” photos.

A Reddit user called out the tech giant for its “fake” photos after purposely blurring a photo of the moon and then taking a picture of that photo with his Galaxy S23 Ultra smartphone.

The resulting image was much more detailed, showing craters and surface textures not visible in the original.

User ibreakphotos says this proves Samsung’s AI “does most of the work” as “the optics aren’t capable of resolving the details you see”.

The company has responded, admitting that it uses AI to detect when the moon is in the frame, but hasn’t directly rebutted the internet investigator’s accusation.

A Reddit user called out the tech giant after intentionally blurring a photo of the moon and then snapping a photo of that photo with his Galaxy S23 Ultra smartphone. Pictured: The blurred original photo
The image the phone produced after taking a blurry photo
Samsung has admitted that its “Space Zoom” camera tool uses artificial intelligence (AI) to “enhance” photos, but hasn’t said exactly how. Pictured: The original photo of the moon before the Reddit user blurred it

“Because the moon is tidally locked to the earth, it’s very easy to train your model on other moon images and just slap that texture on when a moon-like thing is spotted,” they wrote.
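The mechanism the user alleges (a detector trained on moon photos that blends a stored texture into anything moon-shaped) could be caricatured in a few lines. Everything here is a hypothetical toy: the “detector” is a crude brightness heuristic, not a trained model, and the names and thresholds are invented.

```python
import numpy as np

def looks_like_moon(img, lo=0.02, hi=0.5):
    """Toy stand-in for a trained moon classifier: flag an image whose
    bright region covers a moon-disc-sized fraction of the frame."""
    frac = (img > 0.5).mean()
    return lo < frac < hi

def apply_canned_detail(img, texture):
    """Blend a stored high-detail texture into the bright region.
    This illustrates the behaviour the Reddit user alleges, not
    Samsung's actual pipeline."""
    if not looks_like_moon(img):
        return img  # e.g. a plain dark sky: leave untouched
    mask = img > 0.5
    out = img.copy()
    out[mask] = 0.5 * out[mask] + 0.5 * texture[mask]
    return out

# A featureless bright disc stands in for an out-of-focus moon photo.
yy, xx = np.mgrid[:64, :64]
disc = ((yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2).astype(float) * 0.8
rng = np.random.default_rng(1)
texture = np.clip(disc + rng.normal(0, 0.2, disc.shape), 0, 1)  # "crater" detail

enhanced = apply_canned_detail(disc, texture)
```

Run on the blurry disc, the sketch adds “detail” that was never captured by the optics, which is exactly the complaint; run on an empty frame, it leaves the image alone.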

They also found that turning off the Scene Optimizer feature on the camera produced a blurry photo of the moon, as expected.

HOW DOES SAMSUNG’S SCENE OPTIMIZER WORK?

Samsung smartphones’ “Scene Optimizer” feature is said to use AI to “enhance” details of certain objects, including the moon, in photos taken with the camera.

The algorithm has been trained on photos of the moon, so it can tell when one is being shown to the camera lens.

The camera then automatically adjusts its focus and the brightness of the image to make it as clear as possible before taking multiple pictures.

Finally, the AI detail enhancement engine will “eliminate residual noise and enhance image detail even further.”

Samsung doesn’t elaborate on how it does this, leaving open the possibility that it uses its training data to create a realistic lunar surface texture that isn’t present in the original image.
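The uncontroversial part of the pipeline described above (merging several exposures to suppress noise, then boosting fine detail) can be sketched in a few lines of Python. This is a toy illustration with invented parameters, not Samsung’s actual algorithm:

```python
import numpy as np

def multi_frame_merge(frames):
    """Average several noisy exposures of the same scene.
    Averaging N frames with independent noise cuts noise variance by ~N."""
    return np.mean(np.stack(frames), axis=0)

def unsharp_mask(img, amount=1.0):
    """Crude detail enhancement: boost the difference between the image
    and a blurred copy of itself (a 3x3 box blur here)."""
    pad = np.pad(img, 1, mode="edge")
    blurred = sum(
        pad[1 + dy : 1 + dy + img.shape[0], 1 + dx : 1 + dx + img.shape[1]]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
    ) / 9.0
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)

rng = np.random.default_rng(0)
scene = np.tile(np.linspace(0.2, 0.8, 64), (64, 1))  # stand-in "moon" scene
frames = [scene + rng.normal(0, 0.05, scene.shape) for _ in range(8)]

merged = multi_frame_merge(frames)
enhanced = unsharp_mask(merged)
```

Multi-frame merging genuinely recovers detail that any single noisy exposure lacks; the dispute is over what the final, undisclosed “enhancement” stage adds on top of it.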

In addition, if two blurred moons appear in the original image, the technology generates and applies a realistic moon texture to only one of them.

The user called the photos of Earth’s natural satellite taken with the Space Zoom lens “fake” as it “adds detail where there is none.”

“While some may think this is your camera’s capability, it actually isn’t,” they wrote.

“And it’s not sharpening, it’s not adding detail from multiple frames, because in this experiment all frames contain the same amount of detail.

“None of the frames have the craters etc because they’re intentionally out of focus, but the camera somehow miraculously knows they’re there.”

They accuse Samsung of being “deceptive” in its marketing, as the South Korean company claims the new lens can “focus on even more distant objects or people.”

Images of the moon – a distant object in low light – are a good test of a camera’s performance and capabilities and are therefore often used as an example in advertising campaigns.

However, since the post went viral, Samsung has responded, admitting that Scene Optimizer “uses advanced AI to recognize objects” and “enhances details.”

“With the launch of the Galaxy S21 series, Scene Optimizer has been able to recognize the moon as a specific object during the capture process and apply the feature’s detail enhancement engine to the shot,” it wrote.

“When you take a picture of the moon, your Galaxy device’s camera system uses this deep learning-based AI technology and multi-frame processing to further enhance the details.”

The Reddit user noted that when the Samsung camera is shown two blurred moons, the technology only generates and applies a realistic moon texture to one of them

The blog post goes on to say that the AI algorithm has been trained on photos of the moon, meaning it can tell when one is being shown to the camera lens.

It then automatically adjusts its focus and the brightness of the image to make it as clear as possible before taking multiple pictures.

Finally, the AI detail enhancement engine will “eliminate residual noise and enhance image detail even further.”

However, it does not elaborate on how it does this, leaving open the possibility that it uses its training data to generate a realistic lunar surface texture not present in the original.

ibreakphotos wrote: “There’s a difference between additional processing à la super-resolution, when combining multiple frames to recover detail that would otherwise be lost, and here, where you’ve trained a particular AI model on a set of lunar images to spot the moon and slap the moon texture on it (when there is no detail at all to recover, as in this experiment).”

The Scene Optimizer algorithm has been trained to recognize photos of the moon so that it can recognize when it is being shown to the camera lens

Other Reddit users are not happy with this. One commented that they were “disappointed” after being impressed with their Samsung’s camera.

Another said they “would have been ok if they had been more transparent”.

Their reasoning is that Scene Optimizer adds known lunar features that should, in theory, appear in the photo, rather than fabricating false ones.

But others disagree, saying it “makes the whole endeavor of taking a picture of the moon pointless.”

“There are literally thousands of images available to download from the internet at much higher resolution for each phase of the moon,” wrote one Reddit user.

A Samsung spokesman responded to the allegations, saying: “Samsung strives to deliver superior photo experiences in all conditions.

“When a user takes a photo of the moon, the AI-based scene optimization technology recognizes the moon as the main object and takes multiple shots for the multi-frame composition, after which the AI improves the details of the image quality and colors.

“No image overlays will be applied to the photo.

“Users can disable the AI-based Scene Optimizer, which disables automatic detail enhancements for all photos taken.”

The AI detail enhancement engine will “eliminate residual noise and enhance image details even further”. Samsung doesn’t elaborate on how it does this, leaving open the possibility that it uses its training data to create a realistic lunar surface texture that isn’t present in the original image
Images of the moon – a distant object in low light – are a good test of a camera’s performance and capabilities and are therefore often used as an example in advertising campaigns

The debate over whether AI-enhanced or fully generated imagery can be put in the same class as photography is ongoing.

Last month, an Instagram photographer who gained thousands of followers thanks to his photorealistic portraits admitted he created them using the AI ​​software Midjourney.

He then received backlash from his followers, who felt that he had “dishonestly misled people” into thinking they were photos taken with a camera.

“The creative process is still very much in the hands of the artist or photographer, not the computer,” AI artist Jos Avery told Ars Technica.

He added that while it seems “correct” to disclose when an image has been generated by AI, the photographic industry has not always been open to elements of deception in the past.

Mr. Avery said: “Does that extend to people who wear makeup in photos? What about cosmetic surgery?

“Every commercial fashion photo has a heavy dose of photoshopping, including the replacement of prominent bodies on magazine covers.”

AI-generated photos of “people” at a party look eerily realistic – until you take a closer look

Artificial intelligence can now allow people to invent a social life, creating images that suggest they attended a party that never happened and with friends that didn’t exist.

Twitter user Miles used the Midjourney AI system to generate a realistic-looking collection of images showing women smiling at the camera and men raising their cups in a toast.

The pictures appear to be candid shots of friends at a party, but a closer look can give you nightmares.

The “people” grin with mouths crammed full of teeth, hands grow out of their hips, and tattoos look like mold on their skin.

However, the excessive number of fingers has captivated the internet, with one user saying they look like “a nest of alien appendages sprouting to devour their host”.


A Twitter user used the Midjourney AI system to create images of people at a party. Although they look realistic, many of the “humans” were created with more than five fingers on their hands
