Digital Deluge: The Problem of Replacing Books with Code

At the beginning of the 21st century, we embarked on a vast experiment: replacing the physical repositories of our collective memory — books, maps, recordings — with computer code. How do we guarantee that this uncontrolled experiment with human memory will turn out well for us?

Over 40,000 years ago, humans discovered how to cheat death. They transferred their thoughts, feelings, dreams, fears, and hopes to physical materials that did not die. They painted on the walls of caves, carved animal bones, and sculpted stones that carried their mental and spiritual lives into the future. Over generations we have created sophisticated technologies for outsourcing the contents of our minds to ever more durable, compact, and portable objects. Each breakthrough in recording technology — from the creation of clay tablets 6,000 years ago to the invention of papyrus scrolls, printing, photography, audio recording, and now ultracompact, portable, and extremely fragile digital media — has added to the vast stores of knowledge that hold the key to our success as a species. In the digital age, we are dramatically expanding our capacity to record information, freeing us to pursue our curiosity at will and seek answers to ever more ambitious questions.

But every once in a while, we outsmart ourselves, and we have to scramble to catch up with our inventions. This is such a moment. The carrying capacity of our memory systems is falling dramatically behind our capacity to generate information. Since the creation of the World Wide Web in the 1990s and the growth of social media in the last decade, we feel increasingly overwhelmed by information. At the same time, we are intrigued — if not downright infatuated — with the power and promise of this abundance. We demand more and more — Big and Bigger Data. Yet it seems the more information we have, the less we feel in control of what we know. How do we catch up with ourselves now?

This is not the first time humanity has felt overwhelmed by the riches created by our ingenious inventions. Every innovation in information technology, going back to ancient Mesopotamians’ invention of cuneiform tablets, precipitates a period of overproduction, an information inflation that overpowers our ability to manage what we produce. Having more knowledge than we know what to do with while still eager to acquire more is simply part of the human condition, a product of our native curiosity.

But this moment is different in quality as well as quantity. We can no longer rely on the skills we have honed over millennia to manage our knowledge by managing physical objects, be they papyrus scrolls or paperback books. Instead, we must learn to master electrical grids, computer code, and the massive machines that create, store, and read our memory for us.

The consequences of going digital for the future of human memory came into sharp focus for me in 1997, when I led a team of curators at the Library of Congress in assembling, for the first time, a comprehensive exhibition of its collections. The library had just acquired its one-hundred-millionth item. From this abundance, we were to select several hundred items that would tell the 240-year story of the Library of Congress and, by extension, the American people.

We had much — too much — to choose from. Home to the United States Copyright Office and faithful to its founder Thomas Jefferson’s vision of creating a universal and comprehensive collection of human knowledge, the library has records in virtually every medium capable of carrying information, from rice paper and palm leaves to mimeographed sheets and onionskin paper, whalebones and deer hides, audio wax cylinders, early television kinescopes, silent movies on nitrate film, maps on vellum, photographic negatives on glass plates the size of tabletops — and, of course, computer code on tape, floppy disks, and hard drives.

To tell the story of how the Republic was born, for example, we displayed the Rough Draft of the Declaration of Independence, crafted over a few days in June 1776 by Thomas Jefferson and edited by Benjamin Franklin, John Adams, Roger Sherman, and Robert Livingston. It is written in Jefferson’s eminently legible hand. Yet several passages are boldly struck through with lines of heavy black ink and emended with the changes made by Adams and Franklin. The sight of Jefferson’s venerated text so vividly edited always draws people up short. They are startled to see that the most famous phrase in this most famous document — “we hold these truths to be self-evident, that all men are created equal” — is not what Jefferson wrote. He wrote that the truths are “sacred and undeniable.” The words we know so well today are in fact a correction suggested by Benjamin Franklin. The jarring yet oddly familiar sight of the Declaration of Independence in full Track Changes mode makes self-evident the disagreements among the Founders and the compromises they reached. The original document renders the past strangely new — the events dramatic, the motives of the actors complicated, the conclusion unpredictable.

As a historian, I was familiar with the excitement of working with original documents, and I knew how stirring — at times emotional — it is to work directly with them. A physical connection between present and past is wondrously forged through the medium of time-stained paper. Yet what I remember most vividly is the impact of the Rough Draft on tourists. Many of the visitors had stopped by the library simply as one more station on a whirlwind circuit of the capital. They were often tired and hot and not keen on history in the best of circumstances. But this was different. They would grow quiet as they approached the exhibit case, lower their heads toward the glass, focus on the lines of struck-through text to make out the words scribbled between them, and begin to grasp what they were looking at. Their reactions were visceral. Even dimly lit and safely encased in bulletproof glass, the Rough Draft emanates an aura of the “sacred and undeniable.”

It was then that I started to think seriously about the future of memory in the digital age. What would my successor show in 200 years’ time — or even 50 years? How would people feel that distinctive visceral connection with people from the past if the past had no undeniable physical presence? At that time, web pages lasted an average of 44 days before changing or disappearing altogether. We seemed to be moving at breakneck speed from a knowledge economy of relative scarcity to one of limitless abundance. By the latest count, in 2015, the Library of Congress had well over 160 million items, already a startling increase over the 100 million it counted in 1997. But relative to what circulates on the web, its collections could be described as, if not scarce, at least tractable. One data-storage company estimates that the volume of data worldwide jumped from 2.7 billion terabytes in 2012 to 8 billion terabytes in 2015.

How are we to keep from drowning in the data deluge? In the past, the costs of writing materials, of the human labor of copying, and of disseminating and providing access to books, atlases, photographs, films, and recorded sound were very high. Maintaining vast and redundant stores of physical artifacts was expensive, so collecting them and investing in their long-term access demanded hard choices. The question had always been: “What can we afford to save?”

Now, suddenly, those filters are gone and information travels at the speed of electrons, virtually free of friction. Now everyone with a computer can publish their own book, release their own movie, stream their own music, and distribute what is on their hard drive or smartphone across the globe instantaneously. The question today is: “What can we afford to lose?”

Though this seems a daunting question, we know a great deal about how people made such choices in earlier periods of information inflation — and there have been many; they routinely follow every innovation in recording technology. It happened when the Sumerians first invented writing to store information about grain harvests and found themselves puzzled over where to put so many clay tablets. It happened when Europeans invented printing and the marketplace filled up with competing and contradictory versions of canonical texts, such as the Bible. It happened again when we created audio recordings on platters that would break if handled roughly. Each innovation prompted a rethinking of how to use these astonishing new powers of communication. And each advance required a costly retooling of the information infrastructure already in place. Creators, publishers, librarians, and archivists all scrambled to catch up. But it was always worth the price, no matter how high it seemed at the time, because we gained the freedom to reimagine our collective memory, confident that we could capture so much more of the human experience.

Over generations, we perfected the technologies of recording and created more resilient and compact media to hold our knowledge. Yet quite abruptly, at the beginning of the 21st century, we are replacing books, maps, and audiovisual recordings with computer code that is less stable than human memory itself. Code is rapidly overwritten or rendered obsolete by new code. How do we guarantee that this uncontrolled experiment with human memory will turn out well?

Culture evolves in fits and starts. History is studded with false promises and dead ends, experiments that work for a while and then prove unfit as circumstances change. But there are also moments of rapid change, inflection points when forces coalesce to accelerate and alter the trajectory of events. The digital era is merely the latest installment in the unfolding saga of our desire to know more about the world and ourselves.

But the computer is not an accurate model of the brain. Scientists now understand that natural memory — the kind that hedgehogs and humans have, as opposed to the artificial kind we use for storing information, like books and silicon chips — is the primary mechanism animals rely on to adapt to their environment. Memory is the entire repertoire of knowledge an animal acquires in its lifetime for the purpose of survival in an ever-changing world — essentially everything it knows that does not come preprogrammed in its DNA. Like a traveler packing for a week who must squeeze all necessities into an overnight bag, the brain compacts big loads of information into small spaces by combining and compressing similar information through elaborate networks of association.

We keep our mental model of the world up to date by learning new things. Fortunately, our memory is seldom fixed and unchangeable. As we take on new roles and responsibilities in life, such as parent, partner, worker, or citizen, we shed old ones — child, student, or dependent. Like muscles, memories weaken with time when they are not used. Just as the art of packing depends as much on what we leave out as on what we put in the bag, the art of memory relies on the art of forgetting.

What this means for the digital age is that data is not knowledge, and data storage is not memory. When distracted — for example, by too many bright shiny things and noisy bleeping devices — we are not able to learn or develop strong reusable memories. We fail to build the vital repertoire of knowledge and experience that may be of use to us in the future. And it is the future that is at stake. For memory is not about the past. It is about the future.

Human memory is unique because, from the information stored in our brains, we can summon not only things that did or do exist, but also things that might exist. From the contents of our past we can generate visions of the future. We know there is a past, a present, and a future, and in our heads we travel freely among these time zones. This temporal depth perception is unique to us.

Collective memory — the full scope of human learning, a shared body of knowledge and know-how to which each of us contributes and from which each of us draws sustenance — is the creation of multiple generations across vastly diverse cultures. Digital networks make our collective memory accessible across political and linguistic boundaries. Everyone with access to the internet can turn personal memory and learning into shared knowledge, ensuring that the collective memory of humanity continues to be culturally diverse as it grows exponentially.

The past as well as the future of this collective memory is being fundamentally reshaped by digital technology. What happens is in our hands. We face critical decisions as a society and as individuals about how to rebuild memory systems and practices to suit an economy of information abundance. It is rare that any generation is called upon to shape so much of the world that future generations will inherit.

We are now several decades into this uncontrolled (and uncontrollable) experiment, and have yet to catch our breath. We are moving in opposing directions — quickly adapting to and domesticating the digital world while at the same time expanding into unknown territories. The faster we move, the less predictable our path becomes. In 1997, when I saw that we would not have libraries and archives full of hard-copy “rough drafts” of present-day history, it seemed we could not adapt quickly enough to avoid the loss or corruption of the past. But since 1997, the power of our machines to extrapolate a wealth of information from even fragments of the past — from bird specimens and glass plate negatives to broken lacquer discs and ships’ logs — has told a different story. We are beginning to learn how much we can afford to lose and still come to know our own history.

Today, we see books as natural facts. We do not see them as memory machines with lives of their own, though that is exactly what they are. As soon as we began to print our thoughts in those hard-copy memory machines, they began circulating and pursuing their own destinies. Over time we learned how to manage them, share them, and ensure they carried humanity’s conversations to future generations.

In a beautiful poem, Czesław Miłosz wrote: “I imagine the earth when I am no more:/Nothing happens, no loss, it’s still a strange pageant/Women’s dresses, dewy lilacs, a song in the valley./Yet the books will be there on the shelves, well born,/Derived from people, but also from radiance, heights.”

Now we can — we must — develop the same skills we acquired over generations to manage and take responsibility for digital memory machines so that they too outlive us. Whether we do or not is now in our hands.

From When We Are No More: How Digital Memory Is Shaping Our Future, by Abby Smith Rumsey, Copyright © 2016, published by Bloomsbury Press. Reprinted with permission. 

Abby Smith Rumsey is a writer and historian focusing on the creation, preservation, and use of the cultural record in all media.

This article is featured in the September/October 2017 issue of The Saturday Evening Post.