
iPhone 13 ProRes — New Video God Mode!

Rene Ritchie (RR): It's here. It's time. Apple ProRes is hitting the iPhone 13 Pro and Pro Max right now. 422 HQ, to be exact. Six gigabytes per minute, to be even more exact. But what does all of that even really mean? And how would you use it?

Let's do this Alex Lindsay style, with the man who knows as much about ProRes as anybody outside Apple, and who you may know from ILM, Industrial Light & Magic, and Queen Amidala's ship, or from streaming pretty much every major event in big tech. We've seen ProRes in a bunch of products. We've seen it in Blackmagic cameras, and the Atomos Ninja and Samurai recorders capture in ProRes. What does it mean to have the ability to record native ProRes on an iPhone?

Alex Lindsay (AL): It's unbelievable. The thing is, the iPhone can capture a lot of footage that's actually fairly usable. One of the big problems we had was storage. We weren't able to capture at a data rate that made sense for production. You constantly had this problem of not really getting the full quality the iPhone was capable of, because it just couldn't save to something like Apple ProRes.

RR: Yeah, that's the thing that really blew my mind here: two years ago at WWDC, they announced the new Mac Pro, and it had the Afterburner card in it, a ProRes accelerator. And now, just two years later, we have silicon blocks on the A15 chip doing hardware-accelerated ProRes encode and decode, and a whole new storage controller and storage system that can record fast enough to keep up. It feels like, in two years, Apple has taken a lot of a low-end Mac Pro and shoved it into my phone.

AL: Oh, it's incredible. The thing is, the phone is now something you could shoot real footage with. Now, does it replace shooting with a Blackmagic camera or an ARRI camera? No. Those cameras do a bunch of other things that are great, and we hear a lot of filmmakers, or people trying to market it, overselling that idea. Interchangeable lenses, all the creature comforts, and a larger chip with better low-light performance, and a lot of other things besides. These larger cameras are larger for a reason. That said: insert shots, documentaries where it might be hard to bring one of those cameras into a country or a location, training, corporate, and some films, all of these could use footage shot with the iPhone, because the camera itself is unbelievable.

RR: Yeah, and we've even heard that in big Hollywood productions they sometimes take shots with an iPhone just as a way of speccing things out, and then sometimes end up actually using them, because the iPhone's really good at high dynamic range, especially when you're transitioning from low light to bright light. Normal cameras don't adjust that fast, and Apple's got it to the point where the iPhone really can handle a lot of the stuff you throw at it.

AL: It really can. And we've definitely seen that, and one of the frustrations was, well, we didn't get it in Apple ProRes. So you had this H.264 or H.265 file that you couldn't do the final color adjustments on to match the rest of the film, because it was in this cheap format. Now you can do your tests with a phone and know that, if push comes to shove, you at least have more data to work with than you would have had before.

RR: ProRes is Apple's high-fidelity codec, but what does that really mean in terms of what you're getting over H.264 or H.265?

AL: Well, one of the things to understand is that all of this footage is compressed. ProRes has some version of compression, and H.265 has another version. All of these things take the raw data and try to make it smaller. ProRes is just a lot gentler about it.

With H.264 and H.265, there are two things we generally do to make footage smaller. First, there's what we call spatial compression. We take every frame, or a certain number of frames, and say, well, these colors are like each other. That math is very complex, and it's the big difference between H.264 and H.265, the science behind it. H.266 is already on its way, and people are already talking about it, but these are refinements to how we take those pixels and combine them in a mathematical formula that lets us reproduce them at a smaller size. We still throw away data, though. You see that when things look posterized, or when they don't look as detailed as they could, because the data had to go somewhere. And where you really see it is in complex images: shoot video of a forest, video of water, those kinds of things. It's just too much data.

The other way we compress is with what we call groups of pictures, or GOPs, and H.264 and H.265 tend to be long-GOP solutions. We say, well, this isn't changing too much, so let's take this frame and call it a key frame, and then 60 frames later we'll take another one. We hold on to those whole frames, but in between we have what are called P frames and B frames, predictive and bi-directional frames, that look at the data on either side, guess, and just track what's changing.

One of the things that's different about ProRes is that it carries the whole frame. Every frame is there; you don't have to jump from key frame to key frame to key frame. In addition, the data rate on most ProRes is obviously much, much higher than H.265 or H.264, so it can carry a lot more detail. It just has more data to work with. If you're working with 422 HQ or above, there are just tons and tons of data there.

The place to test this is to take your phone and shoot the same forest twice, once in H.265 and once in ProRes. You're going to see an enormous difference: what was a kind of swirly little design in H.265 will be leaves in Apple ProRes. And by the way, H.264 and H.265 are revolutionary. They have changed the way we look at video, and as delivery formats they're unbelievable. But as production formats, they're not so great.
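
To make the long-GOP versus all-intra distinction concrete, here is a toy Swift sketch. It models none of any real codec's internals, just the bookkeeping: in a long-GOP stream, decoding an arbitrary frame means walking back to the last key frame, while an all-intra stream like ProRes can decode any frame directly. The frame pattern and GOP length here are illustrative assumptions, not real encoder behavior.

```swift
// Toy model of long-GOP (H.264/H.265-style) versus all-intra (ProRes-style)
// frame sequences. Purely illustrative; real codecs are far more complex.

enum FrameType {
    case i // intra-coded: a whole picture, decodable on its own
    case p // predictive: stores differences from earlier frames
    case b // bi-directional: predicted from frames on both sides
}

// A long-GOP stream: one key frame, then P/B frames until the next key frame.
func longGOPStream(gopLength: Int) -> [FrameType] {
    (0..<gopLength).map { index in
        if index == 0 { return .i }
        return index % 3 == 0 ? .p : .b
    }
}

// An all-intra stream like ProRes: every frame is a complete picture.
func allIntraStream(frameCount: Int) -> [FrameType] {
    Array(repeating: .i, count: frameCount)
}

// To show frame N, a decoder must start at the nearest preceding key frame
// and decode forward. All-intra streams can decode any frame directly,
// which is part of why ProRes scrubs and edits so smoothly.
func framesToDecode(target: Int, in stream: [FrameType]) -> Int {
    let lastKeyFrame = (0...target).last(where: { stream[$0] == .i }) ?? 0
    return target - lastKeyFrame + 1
}

let gop = longGOPStream(gopLength: 60)
print(framesToDecode(target: 45, in: gop))                        // 46 frames of work
print(framesToDecode(target: 45, in: allIntraStream(frameCount: 60))) // 1 frame
```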

RR: You just mentioned ProRes 422 HQ, which is what Apple's going to be delivering in the camera app. We've also seen Filmic Pro use the APIs Apple's providing to do ProRes Proxy and LT. And on a computer you can use 4444, I forget how many fours are in there. What are the differences when you hear those numbers?

AL: Absolutely. So, Proxy is really designed as a preview format; it was never meant to be used for final output. LT, which is a lighter version, is just more compression, and that's really what we're talking about with all of these. They all carry 10 bits, for instance, instead of eight, but they get more compressed as you go down. 422 is a good baseline, LT is a lighter version of that baseline, and HQ has a higher bit rate. It's not that only HQ is 10-bit, but I'd say HQ is the lowest Apple ProRes where you should count on 10-bit, because 10 bits require more data rate. Even though the other flavors support it, the compression you're applying can overshadow any of the extra bits you had in the color channels. So HQ is where you really start paying attention to 10-bit.

HQ and below are 4:2:2, and what that means is, it's really Y'UV: for every four samples of luminance, the Y, you have two samples in each of the color channels, the U and the V. Essentially, your color is half the resolution of your luminance, the black-and-white information in your image. Because we really cue on luminance for sharpness, we can throw away a lot of that color data and it still works for us, as long as the black-and-white information is there. 4:2:2 takes advantage of that, and that lowers the size of the files.

4444 is also 12-bit, so it has two more bits in the color channels, and it carries an alpha channel that can control transparency. That's really valuable if we want to key something over top of something else. You don't really use that when you're shooting footage; it's when you're compositing computer-generated footage over something else that you save it out as ProRes 4444. Then you can get really fine transparency, of smoke, of transparent images, and all of that lives in the alpha channel. XQ is a higher version of that. And when it comes to pushing towards HDR, all of those bits make a difference.
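
For a rough sense of what those numbers buy, here is a back-of-the-envelope Swift sketch of uncompressed per-frame sizes at 4K. It ignores ProRes's actual compression entirely and only shows the relative cost of half-resolution versus full-resolution chroma and of extra bit depth; the 4K frame size, and modeling the alpha plane at the same bit depth, are assumptions for illustration.

```swift
import Foundation

// Uncompressed per-frame sizes for the Y'UV sampling schemes described
// above. Two chroma planes (U and V) each carry a fraction of the luma
// resolution given by the subsampling ratio.
func bytesPerFrame(width: Int, height: Int,
                   bitsPerSample: Int,
                   chromaSamplesPerFourLuma: Int, // 2 for 4:2:2, 4 for 4:4:4
                   hasAlpha: Bool = false) -> Double {
    let lumaSamples = Double(width * height)
    let chromaFraction = Double(chromaSamplesPerFourLuma) / 4.0
    var samples = lumaSamples * (1.0 + 2.0 * chromaFraction)
    if hasAlpha { samples += lumaSamples } // full-resolution alpha plane
    return samples * Double(bitsPerSample) / 8.0
}

let mb = 1024.0 * 1024.0

// ProRes 422 family: 4:2:2 chroma, 10-bit samples.
let p422 = bytesPerFrame(width: 3840, height: 2160,
                         bitsPerSample: 10, chromaSamplesPerFourLuma: 2)

// ProRes 4444: 4:4:4 chroma, 12-bit samples, plus an alpha channel.
let p4444 = bytesPerFrame(width: 3840, height: 2160,
                          bitsPerSample: 12, chromaSamplesPerFourLuma: 4,
                          hasAlpha: true)

print(String(format: "4:2:2, 10-bit:   %.1f MB/frame", p422 / mb))  // ~19.8
print(String(format: "4:4:4:4, 12-bit: %.1f MB/frame", p4444 / mb)) // ~47.5
```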
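
And for apps like the Filmic Pro example Rene mentioned, the hook Apple provides is AVFoundation: on ProRes-capable hardware, AVCaptureMovieFileOutput advertises ProRes codec types that an app can opt into. Here is a minimal sketch, assuming iOS 15.1 or later and an already-configured capture session; error handling is pared down, and the fallback order is just one reasonable choice, not a recommendation.

```swift
import AVFoundation

// Minimal sketch: ask an AVCaptureMovieFileOutput to record ProRes.
// Assumes iOS 15.1+ on a ProRes-capable iPhone, with the session and
// its inputs/outputs already wired up elsewhere.
func enableProRes(on output: AVCaptureMovieFileOutput) {
    guard let connection = output.connection(with: .video) else { return }

    // Which codecs the output can write depends on the device's active
    // format; on a ProRes-capable iPhone the ProRes types appear here.
    let available = output.availableVideoCodecTypes

    // Prefer 422 HQ (what the built-in Camera app records), then fall
    // back through the lighter flavors described above.
    let preferred: [AVVideoCodecType] = [.proRes422HQ, .proRes422,
                                         .proRes422LT, .proRes422Proxy]
    guard let codec = preferred.first(where: { available.contains($0) }) else {
        print("ProRes not available on this device/format; keeping default")
        return
    }
    output.setOutputSettings([AVVideoCodecKey: codec], for: connection)
}
```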

RR: So where do you see this making the biggest impact? It's probably not what you're going to shoot every birthday party in, but especially for pros who are using these as tools, where do you think it's going to make the biggest difference?

AL: Documentary footage is going to be something really interesting coming out of the iPhone. A lot of times we want to shoot great footage but have to get into places where we can't take a big camera, because it gets you into trouble, or it's hard to get there, or all these other bits and pieces. So for news gathering and documentaries, it makes a big difference to have really high-quality footage. I also think that with training and a lot of corporate video, you could shoot and produce something very high quality with these phones. Most of the internal training I do, the kind we send to clients to show how to use a product or a kit we've shipped them, I already shoot with my iPhone, because it's so fast and compact and shoots a great image. My biggest complaint was that it would have been nice to fix things up just a little bit, do a little more color correction, have a little better low light. All of those things are now being addressed inside the current phone. So I think it has a lot of options there, and I'm going to be shooting my kids' birthday parties with it too. Because here's the thing: when you're shooting that birthday party or that graduation, you want to know that, way down the road, that footage is something you're proud of. Twenty years from now, do you want to look back and go, I really wish I'd shot that with the best quality I could? I had a phone, that's all I had, and all I needed to do was have an extra drive and I would've had all this great footage.

RR: Yeah, it's so good that the only thing, and I know it's not reasonable to want multiple Thunderbolt controllers in a phone, but it's gotten to the point where, if I'm shooting six gigabytes a minute of ProRes HQ into a phone, I just want Thunderbolt.

AL: You really want something faster. That's a big problem if you're actually using it in production. We were talking about the idea of using this phone, and we realized we might actually have to use a couple of them, because we just wouldn't be able to get the data off as fast as we were shooting it. It adds up pretty quickly; even with one terabyte of storage, it's still problematic. So if I were going to use these in production, I'd probably have two of them, so I could be pulling footage off one while I was shooting with the other.
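
The arithmetic behind that two-phone workflow is easy to sanity-check. A quick Swift sketch, using the roughly six-gigabytes-per-minute figure for 4K30 422 HQ quoted earlier; the 480 Mb/s USB 2.0 ceiling for Lightning is an assumption here, and real-world transfer speeds will be lower still.

```swift
import Foundation

// Storage and offload math for ProRes 422 HQ at ~6 GB per minute.
let gbPerMinute = 6.0
let storageGB = 1024.0  // a 1 TB iPhone 13 Pro, ignoring system overhead

let minutesOfFootage = storageGB / gbPerMinute
print(String(format: "1 TB holds roughly %.0f minutes of 422 HQ",
             minutesOfFootage))  // ~171 minutes

// The camera writes faster than a USB 2.0 pipe can drain.
let writeMBps = gbPerMinute * 1024.0 / 60.0  // GB/min -> MB/s, ~102 MB/s
let usb2MBps = 480.0 / 8.0                   // megabits -> megabytes, 60 MB/s
print(String(format: "record at %.0f MB/s, offload at %.0f MB/s",
             writeMBps, usb2MBps))
// Every minute shot takes well over a minute to pull off the phone,
// hence the plan to alternate between two bodies.
```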

RR: I think especially now that we're beginning this big transition to HDR, and I expect that to be our universe going forward, having this kind of footage is going to be a big step up for that too.

AL: Yeah, I think the phone's capability towards HDR, both in the screen and in the camera, is really starting to affect how people look at their own TVs, and at other content where standard dynamic range isn't enough. And the reason is that you're looking at it all the time. When you look at my phone and the footage I shot, it looks really bright and very vibrant because it's got Dolby Vision. Then you look at someone else's TV and it's very drab. It doesn't look like what you want, and you're like, what's wrong? I have some stuff left over that's standard dynamic range, SDR, and I keep looking at it like, what's wrong with this image? I try to fix it, I try to figure out what's going on, and then I realize, oh, it's just an SDR monitor, so it's okay.