Samsung Says It Adds Fake Detail to Moon Photos via “Reference” Photos

Samsung’s Galaxy S23 advert showing its moon photography mode.

Taking a photo of the moon on a Samsung device returns a detailed photo of the moon. Some people are mad about it.

The problem is that Samsung’s software fakes details that the camera can’t actually see, leading a Reddit user named ibreakphotos to accuse the company of “faking” moon photos. The user’s post demonstrates a way to trick Samsung’s moon detection, and it went so viral that Samsung’s press site had to respond.

Samsung’s incredibly niche “Moon Mode” performs certain photo edits when you point your smartphone at the moon. The Galaxy S20 Ultra launched in 2020 with a “100x Space Zoom” (it was really 30x), with that moon feature as one of its marketing gimmicks. The mode still features heavily in Samsung’s marketing, as you can see in this Galaxy S23 advert, which shows someone with a giant tripod-mounted telescope jealous of the supposedly incredible moon photos a pocket-sized Galaxy phone can take.

We’ve known how this feature works for two years (Samsung’s camera app includes AI capabilities specifically for moon photos), although we got a little more detail in Samsung’s latest post. The Reddit post claimed that this AI system can be tricked: ibreakphotos said you can take a picture of the moon, strip out all of its detail by blurring and compressing it in Photoshop, then take a picture of the monitor, and the Samsung phone will add the detail back. The camera was allegedly caught inventing details that didn’t exist at all. Combine that with AI being a hot topic, and the upvotes for fake moon photos started rolling in.
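For the curious, the detail-destroying step of the Redditor’s test is easy to reproduce without Photoshop. Here’s a minimal sketch in Python using the Pillow library; the specific resolution, blur radius, and file names are our own assumptions, and the point is only that no genuine lunar detail survives the process:

```python
# Sketch of the detail-destroying step from ibreakphotos' test
# (parameters are assumptions, not the Redditor's exact values).
from PIL import Image, ImageFilter

img = Image.open("moon_original.jpg")            # any sharp moon photo
img = img.resize((170, 170))                     # downscale to crush fine detail
img = img.filter(ImageFilter.GaussianBlur(4))    # blur away whatever is left
img.save("moon_degraded.jpg", quality=30)        # heavy JPEG compression

# Display moon_degraded.jpg full-screen on a monitor, photograph it with a
# Galaxy phone, and see whether crater "detail" comes back anyway.
```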

To some degree, though, using AI to color in details applies to every smartphone photo. Small cameras take bad photos. From a phone to a DSLR to the James Webb Space Telescope, bigger cameras are better: they simply collect more light and detail. Smartphones have some of the smallest camera lenses around, so they need a lot of software to produce photos of anywhere near reasonable quality.

The industry term for this is “computational photography.” In general, many photos are captured in rapid succession when you press the shutter button (and even before you press it!). These frames are aligned and merged into a single photo, cleaned up, denoised, run through a series of AI filters, compressed, and saved to your flash storage as a rough approximation of whatever you pointed your phone at. Smartphone makers have to throw as much software as possible at the problem, because nobody wants a phone with a huge, protruding camera lens, and plain smartphone camera hardware can’t keep up.
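To make that concrete, here’s a toy example of the simplest non-AI piece of such a pipeline: aligning a burst of frames and averaging them to cut noise. Real pipelines are vastly more elaborate, and the file names and translation-only alignment here are illustrative assumptions, not how any particular phone works:

```python
# Toy burst-stacking sketch with OpenCV: align frames to the first shot,
# then average them. Averaging N aligned frames reduces random sensor
# noise by roughly a factor of sqrt(N).
import cv2
import numpy as np

frames = [cv2.imread(f"burst_{i}.png") for i in range(8)]  # assumed burst files
ref_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)

stack = [frames[0].astype(np.float32)]
for frame in frames[1:]:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    warp = np.eye(2, 3, dtype=np.float32)  # start from the identity warp
    # Estimate a simple translation between this frame and the reference
    _, warp = cv2.findTransformECC(ref_gray, gray, warp, cv2.MOTION_TRANSLATION)
    aligned = cv2.warpAffine(frame, warp, (frame.shape[1], frame.shape[0]))
    stack.append(aligned.astype(np.float32))

merged = np.mean(stack, axis=0).astype(np.uint8)
cv2.imwrite("merged.png", merged)
```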

On the left, Redditor ibreakphotos snaps a picture of a computer screen showing a blurry, cropped, compressed photo of the moon; on the right, Samsung fills in a whole bunch of detail.

But lighting aside, the moon basically looks the same to everyone. The moon spins, the Earth spins, and the two orbit each other; gravitational forces have locked the moon in “synchronous rotation,” so we always see the same side of it, with only a slight “wobble” relative to Earth. If you build an incredibly niche camera mode for your smartphone specifically geared toward moon photography, you can pull off a lot of fun AI tricks.

Who would know if your camera stack just lied and pasted professionally taken, pre-existing photos of the moon into your smartphone image? That’s exactly what Huawei was accused of in 2019. The company reportedly packed photos of the moon into its camera software, and if you snapped a photo of a dim lightbulb in an otherwise dark room, Huawei would put lunar craters on your lightbulb.

That would be pretty bad. But what if you took a step back and just inserted an AI middleman instead? Samsung took a bunch of photos of the moon, trained an AI on those photos, and then unleashed that AI on users’ moon photos. Does that cross a line? How specific are you allowed to get with your AI training use cases?

Samsung’s press release mentions a “detail enhancement engine” for the moon but doesn’t go into great detail about how it works. The post includes some unhelpful diagrams of the moon mode and the AI, which basically boil down to “a photo goes in, some AI stuff happens, and a photo comes out.”

In the company’s defense, AI is often referred to as a “black box.” You can train these machine learning models toward a desired result, but no one can explain exactly how they work. If you’re a programmer writing a program by hand, you can explain what each line of code does because you wrote it, but an AI is only “trained”; it programs itself. That’s partly why Microsoft is having such a hard time getting the Bing chatbot to behave.

Samsung’s “Detail Enhancement Engine” is fed a number of pre-existing moon images.

The press release mostly talks about how the phone recognizes the moon and how it adjusts brightness, but those points aren’t the problem; the problem is where the detail comes from. While there’s no quote we can pull, the image above shows pre-existing lunar images being fed into the “Detail Enhancement Engine.” The entire right side of that chart is quite suspect: it says Samsung’s AI compares your moon photo to a “high-resolution reference” and sends it back to the AI detail engine if it’s not good enough.
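Read literally, the chart describes a feedback loop. A hypothetical sketch of that loop might look like the following; every name here is a placeholder we invented, since Samsung publishes no code, and the similarity metric is a crude stand-in:

```python
# Hypothetical reading of Samsung's diagram, NOT Samsung's actual code:
# run an enhancement model, compare the output to a high-resolution
# reference, and loop until the result is "good enough."
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Crude stand-in metric: 1.0 means pixel-identical images."""
    diff = np.abs(a.astype(np.float32) - b.astype(np.float32))
    return 1.0 - diff.mean() / 255.0

def enhance_moon_photo(photo, reference, enhancer, max_passes=3, threshold=0.9):
    """Apply a (placeholder) detail-enhancement model repeatedly until
    the output is sufficiently close to the reference image."""
    result = photo
    for _ in range(max_passes):
        result = enhancer(result)                       # one AI detail pass
        if similarity(result, reference) >= threshold:  # good enough?
            break
    return result
```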

This feels like Samsung is cheating a bit, but where exactly should the line be for AI photography? You definitely wouldn’t want an AI-free smartphone camera; it would be the worst camera in its class. Even non-AI photos from a big camera are just electronic interpretations of the world. They aren’t “correct” references for how things should look; we’re just more used to them. Even objects viewed with the human eye are just electrical signals interpreted by your brain, and they look different to everyone.

It would be a real problem if Samsung’s enhancements were inaccurate, but the moon really does look like that. When a photo is essentially correct and looks good, it’s hard to argue against it. It would also be a problem if the moon detail were inaccurately applied to things that aren’t the moon, but photographing a Photoshopped image is an extreme case. Samsung says it will “improve Scene Optimizer to reduce possible confusion that can occur between taking a picture of the real moon and a picture of the moon,” but should it even bother? Who cares if you can fool a smartphone with Photoshop?

The AI black box in action. It starts with a photo, a lot of things happen in this neural network, then a moon is detected. Very helpful.

The key here is that this technique only works on the moon, which looks the same for everyone. Samsung can be very aggressive in generating AI detail for the moon because it knows what the ideal end result should look like. It still feels like Samsung is cheating, though, because this is a hyper-specific use case that doesn’t offer a scalable solution for other subjects.

You could never use an aggressive AI detail generator on a person’s face, because every face is different, and adding detail would make the photo look nothing like the person anymore. The equivalent AI technology would be Samsung training a model specifically on your face and then using that model to enhance photos it recognizes you in. One day, a company might offer hyper-personalized, at-home AI training based on your old photos, but we’re not there yet.

If you don’t like your enhanced moon photos, you can simply turn off the feature – it’s called “Scene Optimizer” in the camera settings. Just don’t be surprised if your moon photos look worse.
