July 19, 2017
Making Sense of Rapid Technological Change
Technology is changing our world at an astonishing pace. In the span of a few short years, the internet, mobile devices, and social media have transformed how we communicate and get information about the world. This has opened up new vectors for the spread of information, real and fake, and added new voices to society’s collective discourse. The colonization of the physical world by the internet, with billions of devices coming online and joining the “Internet of Things,” has similarly created new opportunities for productivity and entertainment, as well as new vectors for attack against cyber-physical systems.

Even as we adapt to a world that is more interconnected and transparent than ever before, we must also anticipate the changes that technology may bring. Rapid advances in artificial intelligence foreshadow a world in which purpose-built machines can accomplish a wide range of human tasks, displacing some forms of human labor. The result could be tremendous gains in productivity, but also major economic and societal disruption. Improvements in genomics and synthetic biology hold the promise of even more radical transformations, with advances in human performance, the eradication of diseases, and even human life extension.
How should we think about the future? Amidst such dizzying changes, anything can seem possible. On one level, this awareness of the array of possible futures can be valuable, freeing us from pre-existing assumptions about what the future may hold and allowing us to think more creatively. Opening our minds to new ideas can allow us to seize opportunities or notice trends we might have missed, and prepare us mentally for the surprises that are sure to come. On another level, however, the sheer vastness of possible futures can be paralyzing. How can we discern which futures are likely? How should we hedge against a set of possibilities that seem infinite?
This challenge is particularly acute in the defense field, where militaries make billion-dollar investments on decades-long time horizons and therefore must think critically about the future. The U.S. Navy is currently building the aircraft carrier USS John F. Kennedy, which will be commissioned in 2020 and is expected to remain in service until 2070. How will warfare evolve over this carrier’s half-century lifespan? The scale of these investments ($13 billion for a new carrier) means that the question of what the future will look like has very real consequences for decisions today.
Beyond “Game-Changers”
Informed by the knowledge that warfare can sometimes change in profound and disruptive ways that upend traditional ways of fighting, the defense futures field is awash in imagined “game-changers.” But almost anything can be a game-changer under the right circumstances. As an intellectual exercise, pondering how warfare might change can be incredibly valuable, but defense planners need to weigh the likelihood of these possible futures to inform investments today. No one can predict the future, of course, but every year the United States government bets on the future of warfare to the tune of hundreds of billions of dollars in defense investments. Buying an aircraft carrier, a fighter jet, or some other piece of equipment constitutes an implicit prediction about the future of warfare.
Three years ago, then-CNAS CEO Bob Work (now Deputy Secretary of Defense) launched a project at CNAS to examine how technology could change the future of warfare. That work suggests three perspectives, or lenses, we can apply when looking at the future. None is a crystal ball, but each can help planners think through possible futures in a more rigorous way.
1. Proliferation of the Now
William Gibson, the science fiction author who coined the term “cyberspace,” has said, “The future is already here – it’s just not very evenly distributed.” Some of the most important changes in the future will come not from a new technology, but from a larger number of people gaining access to something that already exists.
Imagine looking at an early automobile at the turn of the 20th century. The immediate implication was clear: faster transit. One could travel to a neighboring town for lunch and be back the same day! But it would have been harder to predict the second- and third-order effects that would come from everyone having automobiles: superhighways, suburbs, gridlock, smog, road rage, and climate change. Similarly, there can be great value in trying to anticipate the complications that will ensue when everyone has access to a technology. In some cases, like nuclear weapons, anticipating these risks can help motivate efforts to stem proliferation. In other cases, such as drones, the technology may be too diffuse to halt proliferation, but anticipating these challenges can help prepare for the changes that are coming.
Over the next several decades, billions more people will come online, gaining access to the sprawling, evolving, raucous social experiment that is the internet and social media. The inclusion of the rest of humanity into the digital age will have profound economic and political consequences, empowering individuals and accelerating productivity and innovation. Medical treatments will also become available to a wider segment of the population as costs decrease over time. Some technologies benefit from scale in a way that can lead to qualitatively different effects when deployed en masse. Social media is of little value, for example, if only a few people have it. Data, in particular, has tremendous value when aggregated and analyzed. Sequencing a patient’s genome has medical benefits today, but as the price drops the number of human genomes sequenced will rise, creating an unparalleled dataset on human genetics that can be analyzed for broad patterns of disease and health. These are just a few examples of ways in which the proliferation of existing technology can lead to significant societal changes.
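To make the aggregation point concrete, here is a minimal sketch of why data that is nearly useless record-by-record becomes valuable in bulk. It is a toy model, not drawn from any real dataset: the TRUE_SIGNAL and NOISE_SD parameters are purely hypothetical, standing in for a faint population-level pattern buried in noisy individual records.

```python
# Toy model (illustrative, not from any real dataset): the value of
# aggregated data. Each record is a noisy measurement of an underlying
# population signal; any single record says little, but the aggregate
# estimate sharpens as the dataset grows.
import random
import statistics

random.seed(42)
TRUE_SIGNAL = 0.3   # hypothetical population-level effect we want to detect
NOISE_SD = 2.0      # individual-level noise that swamps the signal

def estimate(n_records: int) -> float:
    """Average n noisy individual records to estimate the true signal."""
    samples = [random.gauss(TRUE_SIGNAL, NOISE_SD) for _ in range(n_records)]
    return statistics.fmean(samples)

for n in (10, 1_000, 100_000):
    print(f"{n:>7} records -> estimate {estimate(n):+.3f} "
          f"(truth {TRUE_SIGNAL:+.3f})")
```

With ten records the estimate is mostly noise; with a hundred thousand it closely tracks the true signal, which is the statistical logic behind aggregating genomes, medical records, or any other individually noisy data.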
2. Trends
The pace of change matters a great deal. Looking at underlying trends in technology, demographics, resources, climate change, and other areas can help inform the likelihood of certain futures. Surprises can and will occur, but trends can point to which possible futures are consistent with the path we are on and which futures would require a discontinuity.
The most rapid technology changes today are happening in areas based on information technology. While the exponential pace of computer chip advancement (Moore’s Law) has begun to taper off, new computing techniques like deep neural networks have shown rapid gains, and engineers are experimenting with a range of new approaches to chip design to go beyond Moore’s Law. Similar exponential growth curves can be seen in internet bandwidth and in data creation and storage.
Other technology areas are advancing, but not at the same rate. Battery energy density, for example, is improving but gains are incremental, not exponential – there is no Moore’s Law for batteries. (This is why your phone and laptop are vastly more powerful than a decade ago, but you still need to plug them in every night.) Similarly, laser technology is improving, but slowly. Looking at trends can help planners discern which technological leaps are likely to pan out and which are likely to remain outside our grasp decades from now. Swarming robots, directed energy, hypersonics, and cyber weapons might all be “game-changers,” but they are not all equally likely to occur.
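A bit of compounding arithmetic makes the gap vivid. The sketch below uses illustrative round numbers rather than measured figures: a doubling every two years for computing (a Moore’s-Law-style pace) and an assumed 5 percent annual gain for battery energy density, played out over a 50-year platform lifespan.

```python
# Illustrative arithmetic (assumed rates, not measured figures): compounding
# makes "exponential" and "incremental" trends diverge enormously over a
# platform's service life.
YEARS = 50                   # roughly an aircraft carrier's service life
CHIP_DOUBLING_PERIOD = 2.0   # assumed years per doubling of computing power
BATTERY_ANNUAL_GAIN = 0.05   # assumed 5% energy-density improvement per year

chip_multiple = 2 ** (YEARS / CHIP_DOUBLING_PERIOD)
battery_multiple = (1 + BATTERY_ANNUAL_GAIN) ** YEARS

print(f"Computing after {YEARS} years:  ~{chip_multiple:,.0f}x")
print(f"Batteries after {YEARS} years: ~{battery_multiple:.1f}x")
# ~33,554,432x versus ~11.5x: the same half-century, wildly different paths.
```

Even if the assumed rates are off by a wide margin, the qualitative lesson holds: bets that depend on exponential trends and bets that depend on incremental ones are not equally likely to pay off.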
There is value in investing in a diverse array of technologies to hedge against surprise, but when planners bet big on how the future of conflict is likely to unfold, they should pay attention to underlying technology trends. Current trends point to a world of increasingly intelligent and networked machines, but machines that are still limited in physical attributes such as speed, range, and power. The most disruptive changes are likely to occur in areas driven by information technology – robotics, networking, artificial intelligence, data analytics, cyber operations, and electronic warfare.
3. What If?
The downside to looking at trends is that they can blind us to the disruptive surprises that inevitably occur. Mid-20th-century science fiction like The Jetsons and Star Trek is amusing today precisely because it naively extrapolated trends in space travel but missed the surprise of the internet. The Star Wars universe has faster-than-light travel, but no decent way to store data.
Examining “what if” certain developments were to occur, even if their timing cannot be predicted now, has tremendous value in hedging against surprise. What if military exoskeletons had sufficient power to be operationally viable? What if quantum computers suddenly undermined existing encryption methods? What if synthetic pathogens could be cheaply developed? What if artificial intelligence were capable of general-purpose problem solving? These questions are most valuable, however, when placed in the context of other trends, possible reactions to these developments, and an understanding of the technological leaps required to make such disruptive shifts occur.
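One simple way to make “what if” analysis systematic is to treat each uncertainty as a dimension and enumerate the combinations, a bare-bones version of scenario analysis. The sketch below is illustrative only; the dimensions and outcome values are placeholders drawn from the questions above, and a real exercise would weigh each combination against the trends discussed earlier.

```python
# A minimal sketch of structured "what if" analysis: enumerate combinations
# of discrete uncertainties so each candidate future can be weighed against
# trends instead of being imagined ad hoc. All dimensions and values below
# are illustrative placeholders.
from itertools import product

uncertainties = {
    "exoskeleton power":   ["insufficient", "operationally viable"],
    "quantum computing":   ["no impact", "breaks current encryption"],
    "synthetic pathogens": ["costly, rare", "cheap, widespread"],
    "general-purpose AI":  ["narrow only", "broad problem solving"],
}

dims = list(uncertainties)
for combo in product(*uncertainties.values()):
    scenario = ", ".join(f"{d}: {v}" for d, v in zip(dims, combo))
    print(scenario)   # 2**4 = 16 candidate futures to rank by plausibility
```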
How Society Adapts
One of the biggest pitfalls in exploring future scenarios is assuming that the political, economic, and social institutions that will cope with future challenges are the same as today. People and institutions adapt. They may not always be quick to change, but pressures will inevitably drive adaptation – perhaps in good ways, perhaps in bad ways. Much of the uncertainty about the future is not about the technology itself, but how we respond to it. How will we react to fatalities from autonomous vehicles? How will we respond to climate change? How will we cope with a world rife with disinformation and propaganda?
Understanding the incentives that shape how people and institutions adapt requires a multi-disciplinary approach, connecting engineers, policymakers, ethicists, lawyers, and other experts. The key variable in understanding the future is rarely technology alone, but how humans use it, perceive it, and adapt to it. This is why bringing together a diverse array of stakeholders to anticipate change and create solutions is so essential. It takes a range of communities across society working together to cope with the changes to come.