iPhone 12 Camera: Mega Anamorphic Zoom

Rumors are making the rounds that the iPhone 12 or iPhone 13 — one of the upcoming iPhones — will finally increase the megapixel count on the camera system to a number greater than 12, with 64 being the flavor of the week.

I’m doing a whole series on the future of the iPhone. So, hit subscribe to make sure you don’t miss any of it.

Now, megapixels are all about quantity. They don’t always help, and can even hurt, image quality. So what is Apple even thinking?

Well…

Megapixels

Camera sensors are made up of pixels, just like screens are. But, instead of displaying images to you, these are all about capturing them for you.

A megapixel is just a million pixels. In other words, multiply the horizontal number of pixels by the vertical number of pixels, divide by a million, and you get the megapixel count.

For 12 megapixels, the current iPhone camera spec, that's just over 12 million pixels. 4032 x 3024 = 12,192,768, to be specific.
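If you want to sanity-check that arithmetic yourself, the whole thing fits in a couple of lines of Python. The 64-megapixel resolution below is a made-up, roughly-64 MP layout just to show the scale of the jump, not a confirmed sensor spec.

```python
def megapixels(width_px: int, height_px: int) -> float:
    """Megapixels are just width times height, divided by one million."""
    return width_px * height_px / 1_000_000

# The current 12 MP iPhone spec: 4032 x 3024 = 12,192,768 pixels
print(megapixels(4032, 3024))   # 12.192768

# A hypothetical 64 MP-class layout, purely for illustration
print(megapixels(9248, 6936))   # 64.144128
```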

Now, more megapixels aren’t necessarily better. Because you also have to factor in the size of the sensor.

See, the more you chop up a sensor, the greater the number of pixels — and megapixels — you get, but the smaller each of those pixels becomes. And the smaller the pixel, the worse it performs when it comes to capturing light. Which means the worse the photos it takes.

In other words, they’re super high resolution, they’re just super low quality at the same time. You get more but the more sucks.

What you really want to do to improve image quality is increase the sensor size, which increases the pixel size. Not pixel count. Pixel size. That way, each pixel can take in more light. That’s the kind of more that sucks less.

Within reason. End of the day, you want the best balance possible given the size constraints of the device, and the distortions and aberrations you can get if you push anything too far in any one direction.

And, for the last few years, Apple figured the best balance was holding the count at 12 megapixels while slowly improving just about everything else.

Pixel Binning

Now, some other phone makers have been really aggressive about increasing megapixel counts instead, especially in markets where they feel people care more about having the highest possible number on a spec sheet.

Recently, Samsung even put a 108 megapixel sensor in the Galaxy S20 Ultra. To keep those pixels from being too tiny, Samsung is using a fairly massive sensor. But, they’re also using pixel binning.

That’s a process where a group of pixels is averaged together to try to get a result that’s better than any of the individual pixels on their own. In this specific case, binning 9 pixels down to 1, or 108 megapixels back down to 12.

Same with things like quad Bayer filters, where each color filter — red, green, blue, green — covers a 2 by 2 grid of pixels instead of a single one, so same-colored neighbors can be binned together.

You can shoot the full 108 megapixels if you really want to, and in extremely bright, well-lit outdoor situations you’ll get usable results, but indoors or in lower light, that binning and that filtering will kick in, hard.
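To make the binning idea concrete, here's a toy, single-channel sketch (not Samsung's actual pipeline, which bins same-colored quad Bayer pixels and does a lot more besides). The point is just that averaging a 3 by 3 block trades resolution for noise.

```python
import numpy as np

def bin_pixels(sensor: np.ndarray, factor: int = 3) -> np.ndarray:
    """Average each factor x factor block of pixels down to a single pixel.

    A 3x3 bin turns a 108 MP readout into a 12 MP image: fewer pixels,
    but each output pixel has seen 9x the light, so far less noise.
    """
    h, w = sensor.shape
    h, w = h - h % factor, w - w % factor          # crop to a multiple of the bin size
    blocks = sensor[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

# Scaled-down stand-in for a noisy high-megapixel readout, binned 9-to-1
noisy = np.random.normal(loc=100, scale=20, size=(1200, 900))
binned = bin_pixels(noisy, factor=3)
print(noisy.shape, "->", binned.shape)             # (1200, 900) -> (400, 300)
print(noisy.std(), "->", binned.std())             # noise drops by roughly 3x
```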

Image Stacking

What Apple — and for that matter, Google — have been doing instead is something called image stacking.

Instead of shooting a ton of pixels and binning them down, the iPhone shoots a ton of exposures in rapid succession and then the image signal processor, the ISP on the A-series chipset, churns through them, pixel by pixel, and picks the best parts of each and every exposure, for each and every pixel.

It started with simple high dynamic range — HDR. Take a photo, take an overexposure, take an underexposure, combine them, and you get a photo with more detail in both the highlights and shadows.
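If you want a feel for that first, pre-machine-learning step, here's a crude exposure-fusion sketch. It's an illustration of the general idea, not Apple's Smart HDR: three brackets, blended per pixel by a weight that favors well-exposed values.

```python
import numpy as np

def merge_exposures(frames: list) -> np.ndarray:
    """Blend bracketed exposures, weighting each pixel by how well exposed it is.

    Pixels near the middle of the 0..1 range get the most weight; blown-out
    highlights and near-black shadows get the least.
    """
    stack = np.stack(frames).astype(np.float64)                # (n, height, width)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))   # favor mid-tones
    weights /= weights.sum(axis=0, keepdims=True)              # normalize per pixel
    return (weights * stack).sum(axis=0)

# Toy brackets: a normal shot, a brighter one, a darker one
normal = np.clip(np.random.rand(300, 400), 0, 1)
over   = np.clip(normal * 2.0, 0, 1)   # brighter exposure: highlights clip
under  = np.clip(normal * 0.5, 0, 1)   # darker exposure: highlights preserved
hdr = merge_exposures([normal, over, under])
print(hdr.shape, hdr.min(), hdr.max())
```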

But now, thanks to machine learning, the ISP can detect textures like clouds and ropes, even faces and parts of faces, and then process everything slightly differently to render out the best possible results for each and every element in the scene.

The advantage of pixel binning is speed: it’s doing less work on more data. Image stacking is the opposite. The ISP is fast, but it’s doing way more work, just on less data.

It’s not hard to imagine that if Apple did both, bin down 4 to 1, from 64 to 16, or something like that, and then apply Smart HDR stacking, semantic rendering, tonal mapping, and all the rest of the machine learning, they could potentially get even better results faster.

Zoom

The second part of the iPhone camera rumors involves zoom, which is something I’ve been complaining about for a while.

Huawei and Samsung have added periscopes to their camera systems to greatly increase optical zoom. Basically, because phones just aren’t thick enough for traditional zoom lenses, instead of having them stick half an inch out of the phone, they lay them on their sides, trading depth for width. Then they use mirrors to redirect the light. Like a periscope.

Google, by contrast, took their HDR+ image stacking technology and souped it up to make better digital zoom. Pretty much using the minute movements of the phone while taking a photo to capture extra data, and then computing the hell out of it to make the zoom look way sharper, and the colors way more accurate, than anything the sensor could capture by itself.
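Here's a heavily simplified sketch of that trick, sometimes called drizzle or multi-frame super-resolution. Real pipelines estimate the offsets from the frames themselves and reject anything that moved; this toy version assumes the sub-pixel offsets from hand shake are already known.

```python
import numpy as np

def superres_merge(frames, offsets, scale=2):
    """Merge low-res frames with known sub-pixel offsets onto a finer grid.

    Each frame's samples land at slightly different spots thanks to hand shake,
    so together they cover the finer grid.
    """
    h, w = frames[0].shape
    accum = np.zeros((h * scale, w * scale))
    count = np.zeros_like(accum)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, offsets):
        # Place each low-res sample at its sub-pixel position on the fine grid
        fy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        fx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(accum, (fy, fx), frame)
        np.add.at(count, (fy, fx), 1)
    count[count == 0] = 1          # never-sampled cells stay at zero
    return accum / count

# Four hand-shaken frames with half-pixel offsets fill in a 2x finer grid
frames  = [np.random.rand(100, 100) for _ in range(4)]
offsets = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
sharp = superres_merge(frames, offsets, scale=2)
print(sharp.shape)  # (200, 200)
```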

Apple could do something similar and adapt Smart HDR into Smart Zoom for the wide angle camera. Then, they could use a better telephoto, not even a periscope because that’s probably overkill when it comes to return on optics with current technology, and add Smart Zoom on top of it. That could work out to 3x optical and much better 10x digital on the iPhone 12. Maybe even 5x optical and good 20x-or-more digital on the iPhone 13.

That way you could capture your kids playing sports, your pets in the park, all the sights when you’re traveling — you know, when we can do all that stuff again. One day.

And like I’ve said numerous times before, it’s an area where Apple is currently behind, and not just behind other camera phones, but behind cameras in general.

Anamorphic

The last of the most recent iPhone camera rumors involves a completely new kind of camera. Not a 2 megapixel macro, sorry Marques!

No, an anamorphic lens.

Now, there are historical reasons why anamorphic lenses exist (they squeeze a wider image onto standard film so it can be stretched back out later), but it’s safe to say they remain in use today primarily for the way they look. They’re not round, so the bokeh they capture — the blurry lights in the background — isn’t round either.

And the lens flares you can get… well, if you’ve ever seen JJ Abrams’ Star Trek movie, you know all about the lens flares you can get with anamorphic.
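Setting the flares aside for a second, if the squeeze-and-stretch part sounds abstract, the entire mechanical idea fits in a few lines. This is purely illustrative, nothing Apple has announced; the 1.33x squeeze factor is just a common adapter value, assumed here for the example.

```python
from PIL import Image

def desqueeze(photo: Image.Image, squeeze: float = 1.33) -> Image.Image:
    """Undo an anamorphic lens's horizontal squeeze by stretching the width back out.

    A 1.33x anamorphic on a 16:9 sensor desqueezes to roughly a 2.4:1 widescreen frame.
    """
    width, height = photo.size
    return photo.resize((round(width * squeeze), height), Image.LANCZOS)

# A squeezed capture stretched back to its intended aspect ratio
capture = Image.new("RGB", (4032, 3024))          # stand-in for a raw sensor frame
wide = desqueeze(capture, squeeze=1.33)
print(capture.size, "->", wide.size)              # (4032, 3024) -> (5363, 3024)
```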

So, if anamorphic lenses in the modern age are less about practicality and more about artistry, why would Apple of all companies include one?

Maybe, and I say this ever so trepidatiously, precisely for the artistry. Think about it: if Apple of all companies is planning or even considering an anamorphic lens, it would be exactly for that look.

I mean, just close your eyes, can’t you just imagine the keynote demo?

Maybe it could be done computationally as well, and on the main wide angle camera, but either way, it’d be something incredibly fun to see.