
The Challenge of Capturing Sunglass Lens Views on Camera

Updated: Jul 12



Introduction


The ultimate prize in the sunglass industry is the ability to see how a lens will affect the view.


Being able to open a website and compare the view through different lenses for yourself would be an amazing tool.


However, it cannot be done. Let me explain why.



TL;DR


A short summary, in case you just want the explanation without all the details.


A camera records what comes through its lens on a sensor, and that sensor cannot adjust to changes in brightness and color nearly as well as our eyes and brains can.


The pupil behaves much like an adjustable camera aperture, regulating how much light enters the eye. However, it adjusts to the right amount of light faster, more frequently, and more accurately than any camera can. Rods and cones in our eyes, responsible for nighttime vision and for daytime and color vision respectively, allow our eyes to adapt to various lighting conditions in a level of detail that camera sensors cannot replicate. Adding sunglasses significantly alters the amount of light reaching the camera or our eyes, but our eyes and brains adapt quickly, while the camera needs to be significantly recalibrated.


More importantly, a camera has "white balance" (adjusting color temperature to capture true colors, making whites appear truly white) but not color memory. Color memory refers to how we remember how objects are supposed to look and subconsciously adjust so they appear correct. Unlike a camera, which captures a full-color image at once, our eyes and brain build and update the image over time. Technically, we all see color slightly differently because our memories and brains are different. Putting on a sunglass lens that alters colors will confuse a camera, while our brains will quickly rebuild our color vision to the appropriate balance. This is also why it's important to wear a new lens for a few minutes when trying it on, to give our brains time to customize the colors correctly for us.


We can also perceive the difference in clarity and color rendering between different lens materials and coatings. A camera cannot discern the difference between glass, nylon, polycarbonate, or plastic lenses. The contrast for us can be very noticeable.


Therefore, a camera view will never be an accurate representation of how the sunglass view would look to our eyes, and that's why no brand uses one to show it. They all use before-and-after pictures edited in image-processing software. That's the best we can currently produce: recreating the picture and color from our memories. But even then, our conscious memory will never render the colors and lighting as well as our subconscious does. And even if the colors were perfectly on point, the limitations of the screens we view the image on wouldn't do justice to what our eyes can do. The view will always be better in person, so if you like how it looks in the picture, you can be sure you will like it even more in the real world.


[Image: view with sunglasses]


If you want to understand this in greater detail, read on.


Adjusting to Changing Brightness

When we go outdoors in the middle of a clear day, the amount of light around us is overwhelming, and that's why we squint.


How much brighter is it outside than inside (assuming a sunny day vs a brightly lit office)?

Would you believe it's over 100x brighter!?


Even outdoors in the shade, it is over 50x brighter than indoors.


Our eyes adjust so quickly to the difference that we have a poor sense of the magnitude of light change between indoors and outdoors.


(It's not that people are terrible at perceiving brightness; it's that brightness perception, like loudness, is roughly logarithmic. On a base-10 logarithmic scale, a 100x increase is only 2 steps, so a person saying it's about twice as bright outside and a light meter reading 100x brighter are roughly consistent.)


When we put on a pair of sunglasses, there is an immediate reduction in the amount of light entering our eyes. Most sunglasses have a visible light transmission (VLT) of 10-20%, meaning only 10-20% of the light we were seeing before makes it through the lens; the other 80-90% is blocked.
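
To put rough numbers on this, here is a small back-of-the-envelope sketch in Python. The lux values and the 15% VLT figure are illustrative assumptions, not measurements.

```python
# Rough illustration only: typical assumed lux values, not measurements.
import math

office_lux = 500       # a brightly lit office
sunny_lux = 100_000    # direct midday sun
vlt = 0.15             # assume a sunglass lens that passes 15% of visible light

behind_lens_lux = sunny_lux * vlt

print(sunny_lux / office_lux)        # 200.0 -> roughly 200x brighter outside than in the office
print(behind_lens_lux / office_lux)  # 30.0  -> still ~30x brighter than indoors, even behind the lens
print(round(math.log10(sunny_lux / office_lux), 2))  # 2.3 -> only ~2.3 "steps" on a log scale,
                                                     # closer to how the difference actually feels
```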


Our brains and eyes very quickly adjust to this.


For example, no camera has a region-adaptable ISO that raises or lowers sensitivity depending on where something sits in our field of view and how much light our brain thinks that region needs. This is why photographers have always had to modify the light or burn-and-dodge. Our retinas can vary their sensitivity by region: they drop it for a bright sky and raise it for a dark foreground. Look out a window, then close your eyes. See the fuzzy negative image? That's the map of where your eye has varied its sensitivity (a built-in unsharp mask) so you can see out the bright window and around the inside of your house at the same time, with full contrast in both, which no camera can do.
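
To make the idea of region-by-region adaptation concrete, here is a minimal toy sketch in Python: estimate local brightness with a heavy blur, then compress each pixel relative to its neighborhood. The sigma and strength values are arbitrary assumptions, and this is only a loose analogy to what the retina does, not a real imaging pipeline.

```python
# Toy sketch of regional adaptation (Retinex/unsharp-mask style), not a real camera feature.
import numpy as np
from scipy.ndimage import gaussian_filter

def local_adaptation(image, sigma=50.0, strength=0.7):
    """Compress brightness relative to a blurred local average (image: grayscale floats in [0, 1])."""
    local_mean = gaussian_filter(image, sigma=sigma)    # the fuzzy "map" of regional sensitivity
    adapted = image / (local_mean ** strength + 1e-6)   # lift dark regions, tame bright ones
    return adapted / adapted.max()                      # renormalize for display

# Example scene: the right half (a bright window) is 100x brighter than the left half (a dim room).
scene = np.concatenate([np.full((256, 128), 0.005), np.full((256, 128), 0.5)], axis=1)
scene += np.random.default_rng(0).normal(0.0, 0.001, scene.shape)  # faint detail in both halves
adapted = local_adaptation(scene)   # detail in both halves becomes visible at once
```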


While the iris is best known for giving the eye its color, it is also made up of tiny muscles that control the pupil. Together, these act as the gatekeepers that regulate how much light enters the eye. In dim lighting, the muscles relax and the pupil dilates to let more light in. In bright environments, the muscles contract and the pupil constricts, limiting the light that enters.

[Image: breakdown of the eye]

Once light enters the eye through the circular pupil, it lands on the retina, the layer of light-sensing cells (photoreceptors) lining the back of the eye. These are divided into two groups according to their shape and function:

• Rods: responsible for nighttime vision; lower resolution, but far more numerous.

• Cones: responsible for daytime and color vision; fewer in number.


Combining these two receptor types allows our eyes to adapt to a wide range of lighting conditions, with the retina shifting the workload between rods and cones based on how much light the pupils let in.


A camera does have several ways of adjusting the amount of light coming through, but none are sophisticated enough to replicate what the eyes are capable of.


A camera regulates how much light comes in through several mechanisms:


1. Aperture: The aperture is the opening that controls the amount of light that enters the lens. A larger aperture (smaller f-stop number) lets in more light, while a smaller aperture (larger f-stop number) lets in less light.


2. Shutter Speed: The shutter speed controls the duration of the exposure. Faster shutter speeds allow less light in, while slower shutter speeds allow more light in.


3. ISO: The ISO setting controls the camera's sensitivity to light. Lower ISO settings (e.g., ISO 100) are less sensitive to light, while higher ISO settings (e.g., ISO 6400) are more sensitive.


4. Neutral Density (ND) Filters: ND filters reduce the amount of light entering the lens without affecting the color or quality of the light.


5. Polarizing Filters: Polarizing filters can reduce glare and reflections, which can also reduce the amount of light entering the lens.


6. Lens Hood: A lens hood blocks stray, off-axis light, reducing the lens flare and glare that would otherwise wash out the image.


7. Camera's Auto Exposure (AE) System: The AE system automatically adjusts the aperture, shutter speed, and ISO to achieve the proper exposure based on the lighting conditions.


8. Exposure Compensation: This feature allows the photographer to intentionally overexpose (brighten) or underexpose (darken) the image, which can also regulate the amount of light coming in.
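
To make the trade-off between the first three concrete, here is a short sketch using the standard exposure-value formula. The specific f-numbers, shutter speeds, and the 15% VLT figure are illustrative assumptions.

```python
# Standard photographic exposure math (textbook formula, not any specific camera's firmware).
# Each +1 EV means half as much light reaches the sensor; doubling ISO buys back one stop.
import math

def ev(f_number, shutter_s):
    """Exposure value: EV = log2(f_number^2 / shutter_time_in_seconds)."""
    return math.log2(f_number ** 2 / shutter_s)

print(round(ev(16, 1/125), 1))      # 15.0 -> the classic "Sunny 16" setting for a bright day
print(round(ev(2.8, 1/60), 1))      # 8.9  -> closer to a well-lit indoor scene, ~6 stops (~64x) less light
print(round(math.log2(1/0.15), 1))  # 2.7  -> stops the camera must open up, slow down, or raise ISO
                                    #        to compensate for an assumed 15% VLT sunglass lens
```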


A skilled photographer can use some or all of these to get the lighting as close to perfect as possible for a shot. However, the photographer is always comparing what the camera produces to what they see with their own eyes in real time. The image we build in our heads is always the standard by which we judge any picture.


Adjusting to Changing Colors


Our eyes and brains have the ability to interpret color in a far more advanced way than cameras do.


Our visual system maintains color constancy: the ability to perceive colors as consistent despite changes in lighting conditions. It does this in a relative, subjective, and ongoing manner, for every detail in the visual scene.


The human visual system is complex, involving photoreceptors and neurons in the eye along with intricate processing in the brain. The main relay for information from the retina is the lateral geniculate nucleus, located in the thalamus near the center of the brain. This nucleus routes information to the primary visual cortex and other visual areas, and the primary visual cortex processes it into a three-dimensional model of the world. For our purposes, visual areas V2 and V4 are the important ones.

[Image: parts of the brain related to sight]

Cells in V2 process orientation, spatial frequency, color, depth perception, and motion. Together with the primary visual cortex (V1), V2 registers wavelength, hue, and luminance. At this level, the brain likely performs something like a camera's "white balance adjustment," normalizing each channel, though in a far more complex and dynamic way.


It's V4 that does the real trick. V4 handles color consistency based on color memory. We remember how objects are supposed to look and adjust subconsciously so they appear correct. For instance, we know roses are red, snow is white, and certain fruits have specific colors. Research supports this, showing that humans have "memory colors," for example, for skin tones, which is why accurate white balance is crucial in portraits.


The fovea, the part of the retina creating the sharpest image and most sensitive to color, covers only one or two percent of the retina. Unlike a camera, which captures a full-color image at once, our eyes and brain build and update the image over time, with the fovea being crucial for high visual acuity tasks like reading and recognizing faces.


Cameras can't replicate this; their object recognition and processing aren't yet sophisticated enough. The tool they rely on instead is called "white balance".


In non-technical terms, white balance is how warm or cool the overall colors in your photograph look.


Your camera is pretty good at reproducing color because it has the ability to analyze the scene and compensate for overly warm or cool colors.


Usually, colors in your photos will look pretty close to the way they looked in real life. However, your camera is easily confused and can sometimes make the colors too warm or too cool.


The most obvious place to spot this problem is the parts of your scene that are, or should be, white. When you take a photo by candlelight, sometimes the whites will look kind of yellow or orange.


On a cloudy day, or when you’re in heavy shade, the whites might look a little blue.


This is called a “color cast,” and it happens because the color of the light source varies.


This warmth or coolness in the colors is referred to as “color temperature”.

[Image: photo with a warm color cast]

Your brain automatically deals with color cast, so it’s not something you’ll notice unless you’re looking out for it. However, your camera isn’t as smart as you and often needs your help.


Most cameras default to the “Auto” white balance setting, which works pretty well, most of the time.


In auto white balance mode, your camera examines the scene you’re trying to photograph and chooses a color temperature (in Kelvin) it thinks will work best.


However, your camera can easily get confused if the scene:

• Doesn’t contain any colors which are white, or close to white.

• Contains mostly one color (e.g. a lot of green grass, white snow, blue sea or sky.)

• Is illuminated by multiple light sources with different color temperatures.

• Or, in our case, is viewed through tinted sunglasses placed in front of the camera lens.


Though we can alter an image in post-processing, it is nearly impossible to match what our brains produce. Without a known neutral reference, we rely on simple auto white balance, presets, and manual adjustments, guided by our own memory colors. Automatic white balance could, in principle, work the same way our brains do by comparing scene information to a database, much as "matrix metering" does for exposure. However, current object recognition and processing are insufficient to implement this effectively. Maybe one day this will change.
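
To show what "simple auto white balance" means in practice, here is a minimal sketch of the classic gray-world method, one of the simplest automatic approaches. It is a generic textbook technique offered only as an illustration, not what any particular camera actually ships.

```python
# Gray-world auto white balance: assume the scene should average to neutral gray
# and rescale each channel toward that average. Real cameras use far more elaborate
# heuristics, but the underlying idea of normalizing channels is the same.
import numpy as np

def gray_world_awb(image):
    """image: float RGB array in [0, 1], shape (H, W, 3)."""
    channel_means = image.reshape(-1, 3).mean(axis=0)  # average R, G, B over the whole scene
    gains = channel_means.mean() / channel_means       # push each channel toward the overall gray level
    return np.clip(image * gains, 0.0, 1.0)

# A tinted sunglass lens in front of the camera shifts every channel mean, so this kind
# of correction "fixes" the whole image, including the tint the lens designer intended,
# and it has no memory of what the colors are supposed to look like.
```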


Adding a tinted sunglass lens with subtle color-enhancing light-filtering technology makes this difference even more pronounced. Our eyes and brains may take a few minutes to process the change in color cast and remake our vision in a way our brain is comfortable with, but a camera can be completely overwhelmed by the color shift.


Adjusting for Differences in Lens Material and Coatings


Another issue that cameras cannot handle is the subtle differences between lens materials and coatings.


We can perceive the difference in clarity and color rendering between various lens materials and coatings. Even though some plastics and other materials are almost as clear, glass has far less chromatic aberration. This makes colors look more authentic and more pleasing.


A camera cannot discern the difference between glass, nylon, polycarbonate, or plastic lenses. The difference is too subtle for its sensor, while our eyes have orders of magnitude better depth perception and effective "resolution".


This is especially true when viewing the image on a screen, which is the whole point of capturing it on camera. Screen resolution has progressed rapidly in recent years, but it will never match the "resolution" and clarity of seeing with your own eyes. Whatever we see on a monitor is always compared against what we see directly, and it will never be a perfect copy. With sunglasses shown on a screen, we are looking at a screen showing us what a camera sees through a sunglass lens, two extra steps removed from looking through the sunglasses yourself.


Conclusion: Seeing the Sunglass View with a Camera


In conclusion, it's impossible to accurately show how a lens will look through a camera or computer screen. Our eyes and brains work together in a way that's hard to replicate with technology. While cameras can take great pictures, they can't match the complexity of our vision. We here at Sunglass Science try our best to show and describe the view in the most accurate way we can, but nothing beats the real thing. So if you're thinking of getting new sunglasses, don't just rely on digital images - try them on and see for yourself.


