December 01, 2011
U.S. Government Turns to Crowdsourcing for Intelligence
The Pentagon and U.S. intelligence community spend billions of dollars each year trying, with mild success at best, to predict the future.
They organize elaborate wargames, develop computer algorithms to digest information and rely on old-fashioned aggregation of professional opinion.
Past intelligence failures have been costly and damaging to U.S. national security. Trying to avoid previous pitfalls, agencies are on a constant treasure hunt for new technologies that might give them an edge.
The Intelligence Advanced Research Projects Activity in February solicited industry proposals for how to improve the accuracy of intelligence forecasting. Under the auspices of the Office of the Director of National Intelligence, IARPA invests in research programs that provide an “overwhelming intelligence advantage over future adversaries.”
Applied Research Associates, a New Mexico-based firm, has launched a program it hopes will improve upon the traditional methods of gathering expert opinion by using computer software that could make better-informed predictions. The system chooses the best sources of information from a huge pool of participants.
ARA won the bid and started working on its Aggregative Contingent Estimation System, or ACES, in May.
The firm’s southeast division, headquartered in Raleigh, N.C., has teamed with seven universities to devise a method of farming out global intelligence questions to the general public through the Internet. It began collecting crowdsourced opinions in early July.
Crowdsourcing is a method of problem solving where a task is doled out to an undefined group of people through an open call to participate.
“Anyone can sign up; the more the merrier,” said Dirk Warnaar, ARA’s principal investigator for the ACES project. Participants interested in a range of topics, including politics, military, economics, science, technology and social affairs, are invited to register at www.forecastingACE.com.
“You can look at the crowd as people who are on the ground in real-life situations who have the best information,” Warnaar said. “Think of it like a large group of foot soldiers providing feedback.”
The crowd should eventually be able to provide more accurate predictions on global conflict in a time of increased uncertainty, Warnaar said.
“We don’t want to rule anyone out,” Warnaar said in an interview. “The term ‘expert’ is very poorly defined. Some research has shown that some experts are so close to the subject matter at hand that they can form biases and may not be good forecasters.”
Aggregated results will be tested over four years, with ACES predictions continually measured against real-world events, Warnaar said.
The process is not limited to national security or global conflict; questions about social and economic futures are also posed to the roughly 1,800 participants currently signed up to offer insight. So far about 50 questions have gone through the process, 14 of which have produced predictions that IARPA will evaluate and measure.
But the program’s capability is limited by the questions it is asked, Warnaar said. The system is not well suited to events such as the earthquake that hit Japan in March, which caused a tsunami and subsequent nuclear meltdown.
“In a case like that, we probably wouldn’t have thought to ask whether an earthquake would hit Japan. But there are still a lot of questions out there we could ask based on current events and extensions of those” that could impact national security.
The goal is to demonstrate, by the end of the four-year experiment, better accuracy in predicting near- and mid-term events than an opinion poll. In the first year, Warnaar is seeking a 20 percent improvement over traditional polling methods. If its predictions turn out to be more accurate, the program will be made available to government decision makers.
Questions from informed policy makers could then be fed into ACES and predictions would be based on weighted answers from program participants.
Two of the scenarios tested through the new program: Would the military repeal Don’t Ask, Don’t Tell, its policy barring homosexuals from serving openly, before the end of September 2011? Would Ali Abdullah Saleh step down as Yemen’s president by that same deadline? Gays can now serve openly in the military, while Saleh is still clinging to power. ACES correctly predicted both results, the first with 66 percent confidence and the second with 80 percent.
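The article does not say how IARPA grades such forecasts, but a standard yardstick for probabilistic predictions is the Brier score: the squared error between the forecast probability and the outcome, coded 1 if the event occurred and 0 if it did not, with lower scores being better. A minimal sketch in Python, assuming the 66 and 80 percent figures were ACES’ final probabilities for the outcomes that came to pass:

```python
def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and a 0/1 outcome.

    0.0 is a perfect forecast; a hedged coin-flip forecast of 0.5 scores
    0.25 no matter what happens; 1.0 is maximally wrong.
    """
    return (forecast - outcome) ** 2

# Hypothetical reading of the two resolved questions above: ACES put
# 66 percent on the Don't Ask, Don't Tell repeal (it happened) and
# 80 percent on Saleh keeping power through September (he did).
print(round(brier_score(0.66, 1), 4))  # 0.1156
print(round(brier_score(0.80, 1), 4))  # 0.04
```

Averaging such scores across all resolved questions is one way a claim of beating an opinion poll could be tested.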
Some of the ongoing trials are quantitative: How many U.S. troops will be in Iraq on New Year’s Eve? Others are questions of probability: Will Israel launch strikes against an Iranian nuclear facility within the year?
“In the past, the way we have approached problems is putting a group of people together and asking them how to solve it,” Warnaar said. “With the power of the Internet, we can ask a much larger group of people about the problem and how to solve it.”
Crowdsourcing has proved well suited to answering quantitative questions, Warnaar said. But he acknowledged that predicting probabilities, particularly for events shaped by complex social and economic conditions, “tends to be much more difficult.”
The system should become more accurate as more questions are fed into it, he said. Using each participant’s written conclusions, the program automatically weights individual predictions by their demonstrated accuracy. As predictions are resolved against real events, the system learns whose opinions to take more seriously and gives their future contributions greater authority.
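ARA has not published the details of that weighting scheme, so the Python fragment below is only an illustrative sketch of the general idea; the class name and the inverse-Brier weighting rule are assumptions, not ARA’s actual method. Each participant’s weight grows with the accuracy of his or her resolved forecasts, and the crowd estimate is the weighted average of current predictions:

```python
from collections import defaultdict

class WeightedCrowd:
    """Illustrative accuracy-weighted aggregator (not ARA's algorithm).

    A participant's weight is 1 minus the mean Brier score of his or her
    resolved forecasts, so historically accurate forecasters pull the
    aggregate estimate toward their current predictions.
    """

    def __init__(self):
        # participant -> Brier scores from that person's resolved questions
        self.scores = defaultdict(list)

    def record_outcome(self, participant: str, forecast: float, outcome: int):
        """Score a resolved forecast against its 0/1 outcome."""
        self.scores[participant].append((forecast - outcome) ** 2)

    def weight(self, participant: str) -> float:
        history = self.scores[participant]
        if not history:
            return 0.5  # neutral weight until a track record exists
        return 1.0 - sum(history) / len(history)

    def aggregate(self, forecasts: dict[str, float]) -> float:
        """Weighted average of the current forecasts, one per participant."""
        total = sum(self.weight(p) for p in forecasts)
        return sum(self.weight(p) * f for p, f in forecasts.items()) / total

crowd = WeightedCrowd()
crowd.record_outcome("alice", 0.9, 1)  # alice was nearly right last time
crowd.record_outcome("bob", 0.2, 1)    # bob was badly wrong
print(crowd.aggregate({"alice": 0.7, "bob": 0.4}))  # ~0.62, pulled toward alice
```

A production system would need safeguards this sketch omits, such as handling a question for which no respondent yet has a track record.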
The program won’t be out of beta testing for another three years, but Warnaar said he hopes it will eventually help military planners and decision makers in Congress foresee security challenges.
Some experts are skeptical. Mark Herman, a war-games planner with 30 years’ experience and an executive vice president at Booz Allen Hamilton, puts little stock in computer modeling. The ACES program is a new twist on an old theory, he said.
“I have very little confidence that any computer model can predict a conflict,” Herman told National Defense. “Models are fairly good at calculating things forward, but they’re not good at forecasting discontinuity. It’s almost impossible to accurately predict a future war unless you’re starting it.”
Normalizing opinion and weighting probabilities is known as the Delphi technique, named for the ancient Greek oracle.
“It’s exactly what has not worked on Wall Street,” Herman said. “It sounds like another form of trend analysis.”
With some 50 ongoing conflicts in the world, the question becomes whether to get involved in any of them and, if so, which ones, Herman said. The military also has to defend itself from foreign aggressors. Balance, according to Herman, is more important than clairvoyance in designing and equipping a fighting force.
“You want to build and retain a range of equipment to handle a bunch of things that could happen,” he said. “If you prepare only for irregular warfare, that equipment will be woefully unprepared for a regular war.
“You’re looking for proportionality within our available budget, in what we can afford while balancing our national interest and foreign-policy goals.”
Richard Danzig, former Navy secretary and chairman of the Center for a New American Security, a think tank in Washington, D.C., said the Pentagon should not even attempt to predict the future. In a recent CNAS study, Danzig posits that past failures should inform forecasts.
Among other global events throughout the nation’s history, defense planners failed to foresee the breakup of the Soviet Union in 1991. Nor were the rapid rise of China, the 9/11 attacks or the long-term involvement in Afghanistan and Iraq on anyone’s radar, Danzig wrote in the study.
“Based on both the department’s track record and social science research, we should expect frequent error in decisions premised on long-term predictions about the future,” Danzig wrote. “This high rate of error is unavoidable. Indeed, it is inherent in predictions about complex, tightly intertwined technological, political and adversarial activities and environments in a volatile world.”
Danzig proposes that the nation “prepare to be unprepared.”
“DoD leaders do need to make assumptions about how the world works, but they also can do a better job of coping with the likelihood that many of their assumptions will prove wrong,” Danzig wrote.
The U.S. military’s preparation for future conflicts — weapon design and investment, training and strategy — relies too heavily on getting it right, which increases the risk it will be unprepared, he said.
Take for instance the M1 Abrams main battle tank, of which about 9,000 were built for the Army and Marine Corps. The Abrams was first fielded in the 1980s when the nation’s primary enemy was still the Soviet Union. It was designed for the possibility of fighting a ground war in Europe, but the Soviet Union collapsed.
The United States eventually found itself fighting two wars in the arid Middle East, where tanks, designed to deliver firepower while shielding advancing infantry across open ground, were ill suited to counterinsurgency operations. They were also starved for fuel in countries without adequate infrastructure to deliver it to the battlefield, Danzig wrote. The introduction of improvised explosive devices caused heavy casualties among troops riding in other vehicles too lightly armored to withstand the blasts.
But while the effectiveness of IEDs reached a lethal climax in 2004, defense officials waited more than two years to rush mine-resistant ambush-protected vehicles, or MRAPs, into battle, according to the report.
Because it cannot predict the future with certainty, the Defense Department should become faster and more flexible in planning for conflicts in the short term, Danzig wrote.
Warnaar feels the ACES program will provide a more potent predictive tool, not a crystal ball.
Wargames and modeling can also provide low-cost forecasting tools, Herman said.
He gave the example of China’s territorial claims to the South China Sea, where it is contesting maritime borders with neighbors Taiwan and Vietnam.
“The question here is not whether there is the potential for conflict, because there already is; it’s whether [China is] going to do anything about it,” Herman said. Deciding whether the U.S. military should intervene in a possible conflict in Southeast Asia “is where intelligence and modeling really come into play.”
“By doing these things, you get insights into whether you are prepared to fight a war predicted in the exercises.”
Wargames and modeling do sometimes result in surprisingly accurate predictions, as did Desert Crossing in 1999, which tested a theoretical toppling of Saddam Hussein and the aftermath of regime change in Iraq. But decision makers have to be listening for them to make a difference.
According to Desert Crossing, the U.S. military would quickly topple Hussein’s regime with a resource-heavy conventional strike, which it did in 2003, Herman said. It also predicted that the vacuum left by Hussein could devolve into sectarian violence and that Iran would be the ultimate beneficiary of regional instability. The former certainly came to pass, while it remains to be seen whether Iran will benefit after U.S. forces finally make their exit.
“If we had made better decisions after rolling on Saddam based on that exercise, maybe things would have turned out differently,” Herman said.