“Humans are still necessary because there are things that cannot be quantified. And maybe that was always the point to us. We are here to experience the things which cannot be described.”
— Mark Russell, Vanishing Point #1
On a recent trip to Puerto Rico, I stood at the precipice of El Castillo, one of the highest points of San Juan, and reeled.
As I stared out at the sea, I had the overwhelming perception of space: the vastness of the ocean spread before me, the many miles of sky and surf in my field of view. I felt tiny—no, compressed: crushed to a single point, swallowed by the distance and weight of all that open air.
I wanted to capture that unusual feeling, so I did as anybody living in the 21st century might do: I took out my phone and snapped a picture.
That photo’s about as sensuous as dishwater. As an observer, you can probably intuit the distances involved, but certainly you can’t feel them in the same way as I did. The grass, water, and sky are, for lack of a better word, flattened into color blocks equidistant from each other.
Which is fine. That’s how it goes. It’s normal, even, for a photo that has been twice disassembled and reassembled, first by my phone and now by your computer screen. We’re used to this sensory downgrade by now, but it’s worth stating for the record that our technology, as marvelous as it is, is simply not as marvelous as humans at perceiving environments.
Nor will it likely ever be.
Light, and How We Perceive It
Sight is an incredibly complex system. I’m not just talking about discerning shape or color. Somehow, our brains know roughly how far away an object is, simply by looking at it; even if we don’t know the exact distances involved, we can feel how far it is, in our guts. The same goes for texture, contrast, state of matter, speed—in some cases even temperature.
We are able to perceive all those characteristics (and more) by viewing reflected light. The Sun is a full-spectrum light source, meaning it emits every kind of electromagnetic radiation from X-rays to radio waves. Those light waves bounce off an object and into our eyeballs; our retinas convert them into electrical signals, which our brain then interprets. Et voilà: We see.
Each object we see reflects a wide spectrum of light. For example, a red apple isn’t just reflecting “red” light, as defined by the wavelength of 650 nanometers (nm), but it also reflects 600 nm light and 700 nm light and everything in between, as well as shades of yellows (in the 570s), greens (520s), oranges (590s), and even blues (480s). Every infinitesimal change in shape and texture leads to a different banquet of wavelengths reaching our eyes.
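If it helps to see that in toy form, here’s a minimal sketch in Python, using a made-up reflectance curve rather than measured data, of how a “red” surface returns a broad hill of wavelengths instead of a single spike at 650 nm:

```python
import numpy as np

# Wavelengths across the visible range, in nanometers.
wavelengths = np.arange(380, 701, 10)

# A toy reflectance curve for a "red" apple: a broad hill centered
# near 650 nm, with nonzero reflectance down through the yellows,
# greens, and blues. (Illustrative values, not measurements.)
reflectance = 0.05 + 0.75 * np.exp(-((wavelengths - 650.0) ** 2) / (2 * 60.0 ** 2))

for wl, r in zip(wavelengths, reflectance):
    print(f"{wl} nm | {'#' * int(r * 40)}")
```

Run it and you get a crude ASCII spectrum: tallest in the reds, but still registering light across much of the visible band.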
Our brain compares and contrasts all those many wavelengths of light to decode cues about our environment. A red apple that is darker on one side and lighter on the other has depth. If that apple is stippled with illumination of varying intensity, then it has rough texture. If its colors are faint in some spots and intense in others, then the apple is transparent; if it is ever so slightly redder on one side and bluer on the other, then it has motion. And so on.
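A deliberately crude way to see that compare-and-contrast in action, assuming a one-dimensional strip of brightness samples (real vision, of course, works on two-dimensional fields of many wavelengths at once):

```python
import numpy as np

# Toy brightness samples across an apple, left to right.
# (Illustrative numbers only.)
brightness = np.array([0.20, 0.28, 0.40, 0.55, 0.70, 0.82, 0.90])

# The cue isn't in any single sample but in how neighbors differ.
gradient = np.diff(brightness)
print("differences:", np.round(gradient, 2))

# A smooth, one-directional gradient reads as shading over a curved
# surface (depth); noisy, sign-flipping differences would read as
# rough texture instead.
print("smooth shading (a depth cue)?", bool(np.all(gradient > 0)))
```

The point of the toy isn’t the arithmetic; it’s that every one of those cues comes from relationships between measurements, not from any single measurement on its own.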
While our retinas can only process light in the visible range (roughly 380 to 700 nm in wavelength¹), it’s more than enough for us to navigate our world.
So when we read a physical comic book, our brains are unpacking and processing a vast matrix of information encoded in the light reflected from its pages. We aren’t just seeing the art printed on the page. We perceive the texture of the page itself; the glossiness of its paper; the pressure of its ink against the grain. These details imbue each panel with additional subtle contexts of weight, depth, and space for the contents within the panel. As a result, the medium becomes an indelible, inextricable aspect of the comic book—in other words, the medium is the art, and vice versa.
Reflection vs. Emission
That’s not a knock on digital comics by any means, but it’s worth understanding that digital screens just don’t activate our brain systems in the same way as physical comics do.
There’s a good reason for that: Digital screens emit light, rather than reflect it. Computer monitors, TVs, tablets, and other digital screens are nothing more than light boxes, beaming wavelengths into our eyes. And they are capable of emitting light in three colors only: red, green, and blue.
More precisely, the diodes in LED digital displays generate three extremely narrow bands of light, centered around (roughly) 630 nm (red), 530 nm (green), and 450 nm (blue). LCD displays instead start with a white backlight and pass its light through red, green, and blue filters that selectively block or transmit it. Either way, the end result is the same: three very narrow bands of light emitted from the screen. Thus, to create a color or image, the digital screen combines those three wavelengths in whatever relative intensities are needed for your retina to perceive the color in question (i.e., the RGB value).
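As a rough sketch, with band centers and widths that are assumptions rather than measurements of any particular panel, you can model a screen’s output for a given RGB value as three narrow spikes:

```python
import numpy as np

wavelengths = np.arange(380, 701, 10)

def band(center, width=15.0):
    """One narrow Gaussian band, standing in for a single primary."""
    return np.exp(-((wavelengths - center) ** 2) / (2 * width ** 2))

def screen_spectrum(r, g, b):
    """Emitted light for an RGB value in [0, 1]: a weighted sum of
    narrow bands near 630 nm (red), 530 nm (green), and 450 nm (blue)."""
    return r * band(630.0) + g * band(530.0) + b * band(450.0)

# A "red apple" pixel: mostly red, a little green, almost no blue.
spectrum = screen_spectrum(r=0.9, g=0.15, b=0.05)

for wl, s in zip(wavelengths, spectrum):
    print(f"{wl} nm | {'#' * int(s * 40)}")
```

Compare that printout with the broad apple curve sketched earlier: between the three spikes, the output is essentially zero. That emptiness is the missing variety the next few paragraphs are about.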
While our retinas collect emitted light from a screen much the same as they might collect the reflected light bouncing off objects, our brains process this light very differently, because the range of wavelengths collected is much smaller. By nature, it carries less information. What’s more, light emitted from a screen also tends to originate from its source with higher intensity than reflected light, with less variation in wavelength and brightness across a given surface area.
As such, we perceive digital media as more saturated and vibrant, with contrasts between borders that appear more intense. Colors appear deeper, brighter—but not necessarily richer.
We can’t parse many useful context clues about distance, texture, and motion from emitted light, because that light doesn’t contain enough variance in wavelength and intensity to give us that information. How far away is the art displayed on a phone screen? What texture is it? Our brains simply don’t have enough data to figure it out.
We Really Do Read Differently on a Screen
Once the retina receives the light-coded information, it’s up to the brain to disentangle it and make sense of it. Here again we find that physical media offers some advantages: studies show that we comprehend and remember what we read better from a printed source than from a screen.
There are a few reasons for that. For starters, the ability to read isn’t innate to humans like crying or breathing; we have to learn how to do it, each and every one of us. Which means that, unlike interpreting distance or texture from reflected light, the brain doesn’t come equipped with a special dedicated set of cortices just for reading words.
Instead, the brain must create its neural system for reading by co-opting the connections and cortices used for other functions. For example, we recognize letters by using the lobe that recognizes faces; we interpret grammar by using the lobe that produces speech; and we break words into sounds by using the region that gives us spatial awareness.
Yet our brain actually reads printed and digital words differently. When we read digital words, the brain uses different neural connections, ones that prioritize speed over comprehension. That’s because so much of what we read digitally is “short-form”: Think texts, tweets, headlines—they’re all short blasts of information, rarely more than 280 characters or 20 words in a line.
Sifting through this firehose of information requires our brains to digest a lot of disparate ideas quickly. Thus our brain relies on the functions for scanning and skimming, allowing us to read words faster and less thoroughly than we would in print. We get through more words faster, but we engage less of the long-term memory or comprehension capabilities that our brain has built up for print. (I guess I should’ve asked you to print this essay out before reading it!)
Lost Without My Mental Map
Reading on a screen also inhibits the creation of “mental maps”. That is to say, in order to recall a fact later, our brain remembers where we learned the fact just as much as what that fact is. For example, you might forget what your computer’s login password is, but you remember that you wrote it down on a yellow sticky note pasted to the right-hand side of your desk. Or you might remember that a superhero died in the last issue somewhere in the middle of the page; you can see the place on the page where the death happened, as much as you can remember the panel in which it occurred.
When you read words or comics on a screen, however, there’s no sense of place for your brain to orient to. A website is one long infinite scroll; a social media feed never ends. The location of text is constantly scrolling, constantly moving, with no set orientation. Yet all this information appears in the same physical location in space-time: on a screen inches or feet from your face.
Your brain simply doesn’t know how to create a mental map from that… so it doesn’t. Meaning you have less recall power over any particular fact you learned on a digital screen; and less ability to connect facts within a spatial web to encourage comprehension. The fact, the story, the superhero’s death—all gone, like tears in the rain.
Lest you think this only holds true for words, studies have shown that our brains also generate different mental responses to digital artwork than to physical artwork. That’s not to say we like art more or less when we see it in real life versus on a computer. Rather, our cognition of it (interest, confusion, surprise, boredom) changes on a mental and emotional level.
One study found that our emotional response to a physical painting was ten times stronger than our response to its digital reproduction. Physical artwork activated the region of the brain associated with consciousness and memory, but digital versions did not; and subjects’ attention lingered longer on the physical artworks’ visual points of focus, creating “sustained attention loops” that helped deepen their engagement and comprehension of the piece.
Just Something About Physical
Look, this is not a knock on digital comics. I understand why digital comics are popular. Who could argue against having tens of thousands of titles right at your fingertips, including out-of-print back issues or issues only released overseas? Or the convenience of carrying around a single e-reader (or your phone!), versus a disheveled stack of singles? Digital access to comics is objectively great.
But there really is something to a physical comic book, and it’s not just in your head—or, rather, it’s entirely in your head. A physical comic book stokes the evolutionary fires of your brain, allowing it to interpret a wealth of intelligence from the smallest perceptible changes in light. It gives your mind the opportunity to delight in the joy of doing the work it was made to do. We evolved to navigate the world via mental maps and interconnected rainbows of color, and it feels good, so good, to experience that which cannot be described.
¹ The rest of our bodies (and brains!) can react to light of all wavelengths. Consider how UV light improves one’s mood and cognition, or how infrared light can stimulate brain activity.