Nearly two centuries have passed since French inventor Joseph Nicéphore Niépce took the first photograph in 1826, and since then, photography has been based on a simple law of physics: to take a better photo, you need a larger lens and a larger surface on which to project the image. These days, we call this surface the sensor.
Technology, and the digitalization that has come with it, has changed many things in our lives, photography included, forcing the rules of photography to be rewritten. Today, the cameras integrated into the mobile phones we all own can rival professional-grade cameras.
So what has changed, given that light-gathering sensors and lenses haven’t gotten any bigger?
Let me spoil the answer up front: software.
Welcome to the age of computational photography…
What is computational photography (CP)?
When taking a photo the traditional way, you press the camera’s shutter button, the shutter in front of the sensor opens, light reflected from whatever is in front of the camera hits the sensor, and the photo is captured.
As you can see in practice with the motion-photo feature on new-generation smartphones, when you press the shutter button on your phone’s screen, it doesn’t just take a single photo: it captures a few seconds of video, or several consecutive frames.
Some of these frames are dark because they didn’t gather enough light, while others are blurry because the camera moved at that moment. The processor analyzes the data that makes up all these frames, pixel by pixel, discarding the bad parts and combining the best to create that stunning final image.
In other words, computational photography (CP) can be defined as the art of correcting optical imperfections with software.
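To make that discard-and-combine step concrete, here is a minimal sketch, assuming grayscale frames and two invented heuristics (gradient variance for sharpness, distance from mid-gray for exposure); no vendor’s actual pipeline works exactly like this:

```python
import numpy as np

def merge_burst(frames):
    """Merge a burst of frames, favoring the sharp, well-exposed ones.

    `frames` is a list of HxW grayscale float arrays in [0, 1]; the
    scoring heuristics are illustrative, not any phone's real pipeline.
    """
    weights = []
    for frame in frames:
        # Sharpness proxy: variance of the gradient magnitude.
        # Frames blurred by hand shake score low here.
        gy, gx = np.gradient(frame)
        sharpness = np.var(np.hypot(gx, gy))
        # Exposure proxy: penalize frames that are mostly dark or blown out.
        exposure = 1.0 - abs(frame.mean() - 0.5)
        weights.append(sharpness * exposure)

    weights = np.array(weights)
    weights /= weights.sum()
    # Weighted average: good frames dominate, bad frames barely contribute.
    return sum(w * f for w, f in zip(weights, frames))
```

Real pipelines make this decision per tile rather than per frame, but the principle is the same: software, not optics, decides which light is kept.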
Stacking and HDR+
The magical methods and techniques used in photography are also the primary inspiration for CP. The first of these is Stacking and HDR+, considered the heart of modern photography.
Due to their limited dynamic range, small sensors find it difficult or impossible to capture very bright and very dark areas at the same time. To solve this, the camera takes photos at several exposures, from very short to moderate, and software overlays, or stacks, them: the brightest areas are taken from the darkest frame and the darkest areas from the brightest frame, producing a single image that reveals detail everywhere.
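A toy version of this stacking might look like the following, assuming two already-aligned grayscale exposures and a Gaussian “well-exposedness” weight; production HDR+ merges whole bursts with multi-scale blending rather than this single-scale blend:

```python
import numpy as np

def exposure_fusion(dark_frame, bright_frame):
    """Fuse a short (dark) and a long (bright) exposure of the same scene.

    Both inputs are HxW grayscale float arrays in [0, 1], already aligned.
    Each pixel is weighted by how well-exposed it is in each frame, so
    highlights come from the dark frame and shadows from the bright one.
    """
    def well_exposedness(img, sigma=0.2):
        # Gaussian weight peaking at mid-gray: near-black and near-white
        # pixels get almost no say in the result.
        return np.exp(-((img - 0.5) ** 2) / (2 * sigma**2))

    w_dark = well_exposedness(dark_frame)
    w_bright = well_exposedness(bright_frame)
    total = w_dark + w_bright + 1e-8  # avoid division by zero

    return (w_dark * dark_frame + w_bright * bright_frame) / total
```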
Making night photography more accessible
Photographing at night is a very old tradition, and in the past, you either used a flash or a long exposure to capture good nighttime photos.
Long exposures involve holding the shutter open for an extended time, leaving the path to the sensor clear so that light keeps accumulating. This method requires the camera to remain perfectly still while the photo is taken, which is why a sturdy tripod is essential: the longer the shutter stays open, the more light enters the sensor, and the sensor records everything that happens in front of it during that time. Handheld, any movement is smeared into the frame, so the photo comes out overly bright and blurry.
Instead of a single long exposure that keeps the shutter open for seconds, mobile phones take many short exposures in rapid succession. Even if your hand shakes while shooting, software aligns the frames and averages away the pixel noise, producing a clear, bright image.
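Here is a rough sketch of that align-and-average idea, assuming a purely global shift between frames estimated with phase correlation; real night modes align per tile and reject anything that moved between shots:

```python
import numpy as np

def align_and_merge(frames):
    """Align a burst of short exposures and average them to cut noise.

    `frames` is a list of HxW grayscale float arrays. Averaging N aligned
    frames reduces random sensor noise by roughly sqrt(N).
    """
    ref = frames[0]
    f_ref = np.fft.fft2(ref)
    acc = ref.astype(np.float64)

    for frame in frames[1:]:
        # Phase correlation: the peak of the normalized cross-power
        # spectrum marks the translation between frame and reference.
        cross = f_ref * np.conj(np.fft.fft2(frame))
        corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-8))
        dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
        # Undo the estimated hand-shake shift, then accumulate.
        acc += np.roll(frame, shift=(dy, dx), axis=(0, 1))

    return acc / len(frames)
```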
Now, everyone can take photos with depth of field
Bokeh, the background blur associated with professional cameras and careful control of focus, is a physical property of wide-aperture lenses. The bokeh effect in photos taken with mobile phones is entirely artificial.
To create this and similar effects, a phone scans the scene in 3D, using either two cameras with stereoscopic vision, a LiDAR sensor, or a technology like Dual Pixel. This scan lets the software determine which objects in the frame are in front and which are behind. It then cuts out the foreground subject, blurs the background, and composites the subject back on top, creating an artificial bokeh effect.
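A simplified version of that cut-blur-composite pipeline might look like this, assuming the depth map already exists and using a single invented `threshold` to split subject from background; real portrait modes blur progressively with distance and refine the mask around hair:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_bokeh(image, depth, threshold, blur_sigma=8.0):
    """Fake a shallow depth of field from an image plus a depth map.

    `image` is HxWx3 float in [0, 1]; `depth` is an HxW map where larger
    values mean farther away; `threshold` separates subject from background.
    """
    # Binary subject mask: everything closer than the threshold stays sharp.
    foreground = depth < threshold

    # Blur the whole frame, channel by channel, to mimic a wide aperture.
    blurred = np.stack(
        [gaussian_filter(image[..., c], sigma=blur_sigma) for c in range(3)],
        axis=-1,
    )

    # Composite the sharp subject back over the blurred background.
    mask = foreground[..., None].astype(image.dtype)
    return mask * image + (1.0 - mask) * blurred
```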
Semantic rendering
The processor in your small device no longer just detects light; it also detects what you’re photographing. This technology, called semantic rendering, analyzes the data that makes up the photo, processing each pixel while also determining what’s in the frame.
If you’re photographing a cloud, for example, semantic processing suppresses what would otherwise be treated as noise while leaving the cloud’s texture intact. If you’re taking a portrait, it sharpens the pixels around the hair to make it stand out, and tries to reduce facial blemishes while preserving skin tone.
We can define semantic rendering as a recipe applied individually to each area of a photo.
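As an illustration of such per-region recipes, here is a sketch that assumes a pixel-level label map from some segmentation model; the class IDs and adjustment strengths are invented for the example:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical labels a segmentation model might assign to each pixel.
SKY, SKIN, HAIR = 0, 1, 2

def semantic_render(image, labels):
    """Apply a different 'recipe' to each semantic region of a photo.

    `image` is an HxW grayscale float array in [0, 1]; `labels` is an HxW
    map of the class IDs above. The recipes are illustrative stand-ins
    for vendor-tuned pipelines.
    """
    out = image.copy()

    # Sky: smooth aggressively, since noise is most visible in flat areas.
    sky = labels == SKY
    out[sky] = gaussian_filter(image, sigma=3.0)[sky]

    # Skin: mild smoothing to soften blemishes without erasing texture.
    skin = labels == SKIN
    out[skin] = (0.6 * image + 0.4 * gaussian_filter(image, sigma=1.5))[skin]

    # Hair: unsharp masking to make individual strands stand out.
    hair = labels == HAIR
    out[hair] = (image + 0.8 * (image - gaussian_filter(image, sigma=2.0)))[hair]

    return np.clip(out, 0.0, 1.0)
```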
…
CP and all the technologies that come with it stretch the concept of realism in photography somewhat, but from another perspective, they democratize photography.
This has eliminated the need to buy expensive equipment, master complex shooting techniques, or learn photo-processing programs and spend hours in them just to take and edit a good photo.
Being familiar with those techniques and programs is still an advantage, but it is an advantage held by a small group of people worldwide.
For most people, simply touching a specific area of the screen seems sufficient to capture a beautiful photo; the software within the device handles the rest.

Was it recorded as it was, or was there manipulation?
There’s a question that comes from the future and whose answer lies hidden in the present: was visual history recorded as it was, or was there manipulation in photographs?
Photography is, in a sense, the art of recording things as they were. Perhaps the real issue we need to discuss is whether we are merely transforming what we already have, or creating a new method of visual communication altogether.
CP is well positioned as a tool for overcoming the physical limitations of hardware and for democratizing methods and techniques that were once out of reach. Well-taken photographs are no longer as difficult or unattainable as they once seemed, and that is a good thing for a father anywhere in the world who wants to improve a family photo taken with his children.
Everyone enjoys photos that are as close as possible to what the human eye sees, and everyone wants to take such photos, but what we sacrifice most along the way are reality and authenticity.
One of the most important details to be aware of is that CP doesn’t only enhance photos; it sometimes reconstructs details that aren’t present in the photograph by extrapolating them. The most obvious and controversial example is Moon Mode, in which some phones overlay a high-resolution moon photo from a database onto the moon you actually captured. Practices like this erode the “documentary” nature of a photo.
The uncontrolled proliferation of technologies like CP, which are not yet subject to any regulation, could lead to images ceasing to be photographs and transforming into algorithmically generated digital content.
This poses a significant risk that could undermine the credibility of visual history preserved after this period.
What is true beauty? Who decides if something is beautiful?
With the proliferation of social media, another important topic we’ve been discussing is the perception of beauty and what true beauty is.
Artificial intelligence models have learned the concept of a “good photograph” from popular images and, using this data, they refine and adapt each shot to the concept of beauty.
This perception of beauty I mentioned leads to a standardized template of saturated colors, smooth faces, and minimal shadows.
This tendency towards standardization interferes with photography as an art form and with the photographer’s freedom to express themselves, even to make mistakes.
Take a city photo against the light, or a deliberately gloomy portrait, and machine learning will step in to brighten the dark areas and inject cheer into the frame; that automatic correction leads to a kind of homogenization and the loss of the human touch.
…
It’s clear that CP represents a creative and significant turning point in the future of photography. Software that transcends the limitations of optical hardware not only reconstructs captured scenes with greater clarity, brightness, and accuracy, but also transforms what we see, and what we perceive, when we look at visual material.
What lies before us is the possibility that photography will no longer be a mere recording tool, but rather a piece of content that can be interpreted, calculated, and, if necessary, re-edited.
Therefore, those who work with photographs will face new challenges: not only keeping up with new techniques, but also grappling with the accuracy of what a photograph depicts, its ethical boundaries, and the delicate balance between what is expected of photography and what it actually delivers.
We will be discussing computational photography more, and as we do, the meaning of photography as we know it today will change…
References and further reading…
- Adobe launches a new ‘computational photography’ camera app for iPhones
- CS 448A – Computational photography
- NeRF: Representing scenes as neural radiance fields for view synthesis
- Computational photography: The production of perpetual targets
- Burst photography for high dynamic range and low-light imaging on mobile cameras
