‘Great Photos! You Must Have a Great Camera!’

“Great photos! You must have a great camera!” If you take your craft seriously, the odds are high that you’ve heard these words. Audiences associate good images with great cameras, and for the longest time this (almost) accusation has bothered photographers who felt their skills were being downplayed. But the interesting bit is that we’re moving toward making the “great cameras = great photos” equation true! And those cameras fit in your pocket.

Speaking of great cameras, here’s an Arri Alexa with a cinema prime!

Before I got serious about photography and cinematography – more than ten years ago – I used to play with a Sony compact camera. Back then I believed that great photos could only be achieved with great cameras. Mine lacked everything I associated with great photos: shallow depth of field, wide dynamic range and beautiful color science (I did not know these terms back then).

When I got my first DSLR in 2008, a Canon Rebel XTi, I started to learn that a good camera indeed makes things better, but it won’t prevent you from taking plenty of crappy photos – as most of mine were. I’ve had this thing where I look at the total number of images shot on a given project and the number of images I process and export out of Lightroom. Back then, this used to be a 25:1 ratio. These days I’m at 3:1.

Over the last ten years, I’ve improved my photography skills considerably while also improving my gear – from the XTi I went to a 7D, then to a 5D MkIII and lastly to a Sony A7sII. Every time I switched cameras I remember being blown away by the new capabilities and the improvements in image quality – color reproduction, full-frame sensor and low-light sensitivity. Each of my cameras was stronger than the ones preceding it. That was never enough to guarantee some photos wouldn’t turn out bad anyway – out of focus, poorly lit, too contrasty, too shallow a depth of field, too much depth of field, and so on.

During this trajectory, I took more than a few photos I’m proud of, and many times I heard the bothersome, “Woah! This is such a great photo! Your camera must be amazing!”, as well as its reverse when people saw me working: “With a camera like that I bet all your photos turn out flawless.” Many of these people were close enough friends that I could explain that the camera is just a tool, and that without someone behind it pushing the right buttons, the quality of the photos is not guaranteed.

During my learning process, I also watched the rise of smartphones. I used to write a column for a photography magazine back in Brazil (2012-2013) and I saw several big photographers arguing about the validity of an image taken with a phone by an untrained photographer. This was a particularly hot topic in the journalism community. Regular folks (non-photographers) would be closer to a story when it broke, snapping photos on their phones and recording precious developments in real time – way before a photographer got to the scene.

The pros would get up in arms about media outlets using low-quality, phone-shot images. “These are not good photos!” they’d say. “Then you should’ve been there faster,” the magazines, newspapers and TV channels would reply. Phone cameras and lower entry prices for digital cameras represented the democratization of photography, an extreme boom in popularity. Everyone was now a photographer – but not everyone was able to make a living out of it, sometimes not even the established photographers from before the boom.

Until recently it was easy to tell whether a photo was taken with a phone or an actual camera. In their latest iterations, though, through the use of dual lenses and/or machine learning and automated processing, smartphones have seen an unparalleled upgrade in the images coming out of their cameras. This is where the lines of optical photography start to blur as we introduce the powers of computational photography.

Wikipedia has the perfect definition:

Computational photography … refers to digital image capture and processing techniques that use digital computation instead of optical processes. Computational photography can improve the capabilities of a camera, or introduce features that were not possible at all with film-based photography, or reduce the cost or size of camera elements.

Smartphones are taking advantage of their strong processors to bring in serious upgrades over their optically limited cameras.

The latest iPhones (7 Plus, 8 Plus, X) use two lenses — a wide-angle and a telephoto — to create a depth map of the scene in front of them. With that map, it’s easy to realistically simulate out-of-focus areas the way a full-sized camera would render them. The difference is that the depth map gives you the freedom to manipulate that data in ways your camera couldn’t: you can change the lighting of the scene to some extent, you can create impossibly shallow (yet accurate) depth of field, and you can even change your focus point after pressing the shutter.
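To make the idea concrete, here’s a rough sketch (my own illustration, not Apple’s actual pipeline) of how a depth map can drive synthetic background blur: each pixel gets blurred in proportion to how far its depth sits from a chosen focal plane, which is also what makes refocusing after the fact possible. The OpenCV/NumPy calls and file names are assumptions for the example.

```python
# Illustrative sketch only (not Apple's pipeline): simulate shallow depth of
# field from an RGB image plus a per-pixel depth map by blending in blurrier
# copies of the image the farther a pixel sits from the chosen focal plane.
import cv2
import numpy as np

def synthetic_bokeh(image, depth, focus_depth, max_blur=21, levels=5):
    """Blend progressively blurred copies of `image` based on how far each
    pixel's depth (0..1) is from `focus_depth` (also 0..1)."""
    distance = np.abs(depth.astype(np.float32) - focus_depth)
    distance /= max(distance.max(), 1e-6)  # normalize to 0..1

    result = image.astype(np.float32)
    for i in range(1, levels + 1):
        # Kernel size grows with blur level; must be odd for GaussianBlur
        k = int(1 + 2 * round(i * max_blur / (2 * levels)))
        blurred = cv2.GaussianBlur(image, (k, k), 0).astype(np.float32)
        # Pixels beyond this level's distance threshold take the blurrier copy
        mask = (distance > i / (levels + 1)).astype(np.float32)[..., None]
        result = result * (1 - mask) + blurred * mask
    return result.astype(np.uint8)

# Usage (placeholder file names): "refocus" after the fact by changing
# focus_depth and re-rendering.
image = cv2.imread("photo.jpg")
depth = cv2.imread("depth_map.png", cv2.IMREAD_GRAYSCALE) / 255.0
cv2.imwrite("bokeh.jpg", synthetic_bokeh(image, depth, focus_depth=0.2))
```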

None of this is particularly new — the Lytro camera kicked around with a similar concept ages ago — but it has never been so accessible and easy to play with. Apps like Focos and Anamorphic cost under $5 and let you fiddle with the results of your dual-camera shots.

Before: An original photo
After: The photo using a depth map in Focos.

Apple’s approach can be seen as conservative when compared to the solution implemented in Google’s Pixel 2, which relies on a single-lens camera and the full force of its artificial intelligence. The Pixel 2 stacks and aligns up to nine photos taken in a burst to achieve maximum dynamic range, and it creates its own depth map based on the camera’s movement and the parallax in the scene. Not only that, its AI has been taught what a person looks like, and as soon as it finds something that fits the bill, it makes sure that part of the shot is in focus. This leads to amazing photos coming out of a fairly inexpensive, light and multi-functional device when compared to a full-size camera. Plus, the photographer doesn’t need to make any decisions. Photos taken by my 7-year-old niece and photos taken by me can look just as good with the press of a single button.
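For a rough sense of the stacking half of that trick, here’s a minimal sketch (my own simplification; Google’s actual HDR+ pipeline is far more sophisticated) that aligns a handful of burst frames to the first one and averages them, which is the basic way stacking tames noise and protects dynamic range. The OpenCV ECC alignment and the file names are assumptions for the example.

```python
# Rough illustration of burst stacking (not Google's HDR+): align each frame
# of a handheld burst to the first one, then average the aligned stack.
import cv2
import numpy as np

def align_and_stack(frames):
    """Align each frame to the first via ECC, then average the stack."""
    reference = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    accumulator = frames[0].astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-6)

    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        warp = np.eye(2, 3, dtype=np.float32)  # translation + rotation model
        # Estimate the motion between frames (handheld camera shake)
        _, warp = cv2.findTransformECC(reference, gray, warp,
                                       cv2.MOTION_EUCLIDEAN, criteria)
        aligned = cv2.warpAffine(frame, warp,
                                 (frame.shape[1], frame.shape[0]),
                                 flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
        accumulator += aligned.astype(np.float32)

    return (accumulator / len(frames)).astype(np.uint8)

# Usage (placeholder file names): stack up to nine burst shots, echoing the
# Pixel 2's approach of merging a burst into one cleaner image.
burst = [cv2.imread(f"burst_{i}.jpg") for i in range(9)]
cv2.imwrite("stacked.jpg", align_and_stack(burst))
```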

If you want to read more about the technical wonders coming out of smartphones, Rishi Sanyal’s article “Why smartphone cameras are blowing our minds” on DPReview has been a great source of inspiration for my own article.

We’ve now reached a point where someone can say, “Great photos! You must have a great camera!”, attributing the quality of the images solely to the equipment used, and not be wrong!

At the same time that computational photography levels the playing field of day-to-day photography, it makes other skills stand out. Framing and lighting, for example, are things machines are not good at just yet, among other subtleties we pick up while honing our craft. Which is to say: if you’re only able to take good photos because you have a good camera, things are about to get tough!

Just to paint a clearer picture, all the photos in this post were taken with an iPhone 8 Plus and a Google Pixel 2.


About the author: Tito Ferradans is a cinematographer, VFX artist, and anamorphic enthusiast from Brazil who’s currently living in Vancouver, Canada. The opinions expressed in this article are solely those of the author. You can find more of his work on his website, Vimeo, Facebook, and Flickr. This article was also published here.
