Buttery Bokeh Portrait Mode for video. Higher quality ProRes formats. New, context-aware filters. And that’s in addition to better 5G, smaller notches, adaptive 120Hz refresh rates, next-generation A15 silicon, and all the other leaks already swirling around the iPhone 13, iPhone 12s, iPhone Mother of Dragons, iPhone The Dark Knight Notches… or whatever Apple decides to call the next set of iPhones coming our way in… holy wow, just a few weeks now! Let's break it all down!
2020 felt like it took… forever… and now 2020 Jr. is going by in the blink of an eye. From "it’s way too soon to talk about new iPhones" to "when did it become almost September again, and can I get these last 18 months back?"
But either way, sometime in the next 4 to 6 weeks, Tim Cook is going to take over our streams and screens and Good-Morning-yeet fresh new iPhones into our brainstems.
And according to Bloomberg’s Mark Gurman, those fresh new iPhones will include three fresh new photography features.
The first is Portrait Mode for video, or what he says Apple is internally calling Cinematic Video, a term Apple’s previously used for stabilization, as in keeping video looking smooth and steady.
Now, Apple won’t be the first to blurry-background video. Samsung tried it a few years ago with… very mixed results. And I’m sure some of you are already typing away to tell me Nokia actually invented it back in 1812.
Depth of field, bokeh, is just one of those things traditional cameras with big old lenses have always done and tiny camera phone lenses could just never compete with. Not on the physics.
But, instead of big sensors or big glass, Apple’s been able to throw big compute at the problem. Mostly using dual camera systems to derive some depth data from the small differences between them, and segmentation masking, which just means using machine learning to figure out the subject and apply the blur to everything else. And, of course, Apple insisted on doing it in real time in the viewfinder, with zero shutter lag, so what you saw was what you shot.
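To make that concrete, here’s a minimal Core Image sketch of the segmentation-mask half of the trick, assuming you’ve already derived a subject mask (white for the subject) from dual-camera depth data or a machine learning model. The `fakeBokeh` name and the fixed radius are mine; Apple’s real pipeline layers lens models on top and runs live in the viewfinder.

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

// A sketch only: blur everything the mask doesn't protect.
// Assumes `subjectMask` is white where the subject is, black elsewhere.
func fakeBokeh(image: CIImage, subjectMask: CIImage, radius: Float = 12) -> CIImage? {
    let filter = CIFilter.maskedVariableBlur()
    filter.inputImage = image
    // maskedVariableBlur blurs where the mask is bright, so invert the
    // subject mask to blur the background instead of the subject.
    filter.mask = subjectMask.applyingFilter("CIColorInvert")
    filter.radius = radius
    return filter.outputImage
}
```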
At first, all that was so computationally expensive it would redline an iPhone 7. But as Apple improved their silicon, they were able to add lens models to produce more realistic bokeh, dynamic aperture so you could adjust the amount of bokeh even after you took the shot, and, thanks to LiDAR on the iPhone 12 Pro, which collects much, much better depth data, Night mode portraits.
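And just to illustrate the dynamic aperture part, here’s a toy model of how an f-number slider could map to blur strength. The blur circle scales roughly with the physical aperture diameter, so inversely with the f-number; the constants below are illustrative, not Apple’s actual lens model.

```swift
// Toy model: wider aperture (smaller f-number) means more blur.
// `maxRadius` and `widestF` are made-up tuning values.
func simulatedBlurRadius(fNumber: Double, maxRadius: Double = 25, widestF: Double = 1.4) -> Double {
    max(0, maxRadius * (widestF / fNumber))
}

simulatedBlurRadius(fNumber: 1.4)  // 25.0 — maximum background melt
simulatedBlurRadius(fNumber: 16)   // ≈ 2.2 — nearly everything in focus
```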
But they still didn’t have the raw processing power to apply it to 4K 30, much less 4K 60 frames per second of video. Not even 1080p, it sounds like. Not well enough to ship, at least. Until now.
Which is why some of us still use traditional cameras for our A-roll, which is just the talking head stuff, and B-roll, which is all the glamor shots of people, products, and more.
The iPhone is already used in a bunch of specialty productions, but if Apple can nail computational lens effects in video, or even just start nailing them, it’ll be one less reason to lug around the big camera kits, at least for some shoots.
You know, we’d still need them for the big zooms, until Apple tweaks Smart HDR into Smart Zoom.
ProRes is the name for a family of high quality video codecs that Apple uses in Final Cut Pro, and that some cinema cameras, like ARRI and Blackmagic, and accessories, like Atomos and Sound Devices recorders, capture to directly. The flavors currently vary in resolution up to 8K, and in quality from various levels of lossy compression up to ProRes RAW.
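For context, the ProRes codec constants already exist in AVFoundation on Apple platforms, so a minimal, hedged sketch of writing ProRes with AVAssetWriter looks like the code below. Whether, and in which flavors, the next iPhone’s Camera app actually exposes this is exactly the open question; the function name, file type, and dimensions here are just placeholders.

```swift
import AVFoundation

// A sketch: configure an AVAssetWriter for ProRes 422 output.
// .proRes422 and .proRes4444 are real AVVideoCodecType constants;
// hardware encoding support varies by device.
func makeProResWriter(to url: URL) throws -> AVAssetWriter {
    let writer = try AVAssetWriter(outputURL: url, fileType: .mov)
    let settings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.proRes422,
        AVVideoWidthKey: 3840,
        AVVideoHeightKey: 2160
    ]
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
    input.expectsMediaDataInRealTime = true
    writer.add(input)
    return writer  // append sample buffers to `input`, then call finishWriting()
}
```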
So, ProRes is gorgeous, and is a dream to edit with, but it’s also hella thirsty. Like, just-take-all-your-drive-space hella thirsty. And that means it’ll be super interesting, really an open question, exactly what and how Apple implements it.
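How thirsty? A quick back-of-the-envelope, using the roughly 707 Mbps Apple’s ProRes white paper lists for ProRes 422 HQ at 4K30; treat the exact figure as approximate, since real-world rates vary with content.

```swift
// Minutes of 4K30 ProRes 422 HQ per storage tier (decimal gigabytes).
let bitrateMbps = 707.0
for storageGB in [128.0, 256.0, 512.0, 1024.0] {
    let megabits = storageGB * 8_000.0        // GB -> megabits
    let minutes = megabits / bitrateMbps / 60.0
    print("\(Int(storageGB)) GB ≈ \(Int(minutes)) min")
}
// 128 GB ≈ 24 min, 256 GB ≈ 48 min, 512 GB ≈ 96 min, 1024 GB ≈ 193 min
```

In other words, a base-storage iPhone could fill itself with well under half an hour of top-shelf ProRes.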
Mark says Apple will offer HD, so 1080p, and 4K.
In my perfect world, it’d be a highly optimized version of at least ProRes 422, with a 1 terabyte storage option to hold… a lot of it… and a Thunderbolt 4 port to pull it all off with, so I don’t have to use Lightning or ad-hoc Wi-Fi, you know, like an animal.
But I don’t always get the perfect world I want.
The part about filters may have some pro-level pros salivating at the thought of LUTs, or the look-up tables used to transform the flatter, higher dynamic range log or raw video files into… everything from standard color profiles, to other cinema camera styles, to vintage or classic film looks, or just otherwise transform them to taste.
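Under the hood, a LUT really is just a table: every input RGB maps to an output RGB. Core Image’s colorCube filter applies exactly that kind of table, so a hedged sketch, with a hypothetical `applyLUT` helper, might look like this; a .cube file from a color app is essentially this data as text.

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

// A sketch: apply a 3D color lookup table to an image.
// `cubeData` must contain dimension^3 RGBA float32 entries.
func applyLUT(to image: CIImage, cubeData: Data, dimension: Int) -> CIImage? {
    let filter = CIFilter.colorCube()
    filter.inputImage = image
    filter.cubeDimension = Float(dimension)  // e.g. 33 for a 33x33x33 cube
    filter.cubeData = cubeData
    return filter.outputImage
}
```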
But what it sounds like is more of an evolution of the current photo filters, now for video, with all the more advanced, machine-learned computer vision tech: scene understanding, so it can tell what’s a sky, what’s a tree, what’s a face, what’s an object; and semantic rendering, so the filter isn’t applied uniformly but contextually to each element in the scene.
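For a taste of the scene-understanding side, Apple already ships person segmentation in the Vision framework, via iOS 15’s VNGeneratePersonSegmentationRequest. A sketch like the one below returns a mask you could use to grade subject and background differently; Apple’s actual semantic rendering presumably goes further, separating skies, skin, hair, and so on.

```swift
import Vision
import CoreVideo

// A sketch: get a person-segmentation mask for one video frame.
// The mask could then drive per-element grading or filtering.
func personMask(for frame: CVPixelBuffer) throws -> CVPixelBuffer? {
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .balanced  // .fast / .accurate also available
    let handler = VNImageRequestHandler(cvPixelBuffer: frame, options: [:])
    try handler.perform([request])
    return request.results?.first?.pixelBuffer
}
```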
Mark says there’ll be filters to cool down and warm up video, for example, or to add more dramatic shadows and contrast, while preserving proper white balance.
And I’d love that… but I’d also love log and real-time LUTs. What? I have room in my heart for all of it. Give me all of it.