In June 2015, the U.S. Army Research Laboratory released a report entitled Visualizing the Tactical Ground Battlefield in the Year 2050: Workshop Report. In it, the ARL identified seven interrelated capabilities its authors thought would set the future battlefield apart from the current one: augmented humans; automated decision-making and autonomous processes; misinformation as a weapon; micro-targeting; large-scale self-organization and collective decision-making; cognitive modeling of the opponent; and the ability to understand and cope in a contested, imperfect information environment. Two things these seven items have in common are the steady march toward Artificial Intelligence (AI) and automation, and the desire to integrate a “human-in-the-loop” with each of these emerging technologies.

With greater intelligence and autonomy comes the need to discuss and debate how these new technologies may alter the way we, as human beings, think about the future of war and security. Purdue University is celebrating its sesquicentennial in 2019 with a year-long Ideas Festival focused on major topics at the intersection of technology and society. Discovery Park is contributing to the Ideas Festival, and tackling these important questions of our time, by hosting thought-provoking conferences such as our Ethics, Technology, and the Future of War and Security Symposium.

On May 14, as part of Purdue’s 150th Anniversary Ideas Festival, Discovery Park had the honor of welcoming experts from across the country, drawn from industry, academia, and government, to discuss the ethical, legal, and social implications of warfare driven by cyber operations, autonomous technologies, and AI. Through wide-ranging panel discussions punctuated by thought-provoking keynote addresses, the sphere of future warfare and security was dissected and debated. The general tone was one of recognition that advancement is inevitable, tempered by the conviction that the seeds of ethics and morality must be sown deep into the soil of development. Speakers and panelists shared the hope that these advanced technologies might be used to make future wars more humane, and even preventable. Inherent in this vision is the knowledge that a truly global conversation and robust debate are essential to define now the rules and boundaries of future wars. How will different cultures interpret the social, ethical, and legal issues that arise from a fully autonomous, AI-driven society capable of waging war and compromising an adversary’s security by these means?

As we have discussed often in the pages of this blog, the United States is engaged in what is arguably one of the most difficult and consequential technology races in its history. Great power competitors and other state and non-state actors alike are making great use of today’s global technology race to advance their economic and national security interests, sometimes in ways that liberal democracies the world over would consider abhorrent.

An overriding theme was that the problems we’re facing aren’t new, but technology is forcing us to look at them through a different, modern lens. The rules of war haven’t changed, but the tools of war have, and that requires an in-depth global conversation about ethics, morality, and law.

Patrick Wolfe, Frederick Hovde Dean of the College of Science at Purdue, opened the symposium by reminding the audience that AI-related algorithms have been around for over forty years. It is only in the last decade, however, that the fusion of these algorithms with low-cost, high-performance commodity computing and massive amounts of data from an ever-growing number of ubiquitous sensors and internet-connected devices has made the current exponential expansion in the application of AI to societal concerns a reality. He warned that the current set of algorithms is largely evolutionary relative to those of the 1970s, remains rather opaque, and is indeed still rather primitive. Much progress is needed if AI is to be trusted to augment decision-making in consequential human endeavors beyond picking a book or a movie online, such as waging war.

Paul Scharre, author of Army of None, was on hand to trace the rise of autonomous weapons and posed two critical questions: Are autonomous weapons illegal? Are autonomous weapons ultimately destabilizing? The answers, as you can imagine, are not cut and dried. Scharre pointed to the historical precedent for banning certain weapons of war and made a compelling case that autonomous weapons undermine stability, noting that runaway autonomous weapons have the capacity to push nations to the brink. Ultimately, it boils down to policy-making and rule-setting, and we are already behind the curve.

Author and strategist P.W. Singer had a different take on security, providing insight into the cyber threat and drawing on material from his book LikeWar. He stated, “We live in a world where virality trumps veracity,” explaining that in this modern, digital age, where currency is the number of likes and shares instead of dollars and cents, the truth is often buried beneath layer upon layer of lies. “Coughing in everyone’s face,” that is, spreading untruths through shares, knowingly or unknowingly, is the modern plague.

“The best way to prevent a great power war is to be prepared to win one,” stated Robert Work, former Deputy Secretary of Defense. While asserting that the United States will not back fully autonomous weapons, but will instead keep a human in the loop in the decision-making process, he reiterated the importance of robotics, AI, and autonomy. He noted that increasingly sophisticated narrow AI systems incorporated into a Third Offset Strategy will be vital to the defense of the nation, and that the U.S. must lead the way without hesitation. Advanced AI as an augmentation tool, not as a human replacement, is a critical element in both the preparation for and the deterrence of war.

Heather Roff, from the Johns Hopkins Applied Physics Laboratory and a Nonresident Fellow for Security and Strategy at the Brookings Institution, asserted that using AI in warfare, and even outside the warfare arena, raises many wide-ranging ethical problems. She shared an example from a military personnel point of view: AI could analyze a constant stream of soldiers’ physical and mental health data to promote, deploy, or otherwise discriminate among categories of individuals in a manner that could easily be inherently biased. These potential biases and their effects on individuals should be carefully investigated and considered.

The capstone of the conference was the keynote speech from Director of National Intelligence Dan Coats. Director Coats reiterated the need for research and advancement in AI, but cautioned that the United States is not the only player on the field: our greatest adversaries are making tremendous strides in advancing these technologies and focusing foreign influence campaigns against our liberal democracy and the American people. They seek to undermine the freedoms of U.S. citizens, but the U.S. intelligence community is well positioned to counter these attacks. One of six initiatives created by the Office of the Director of National Intelligence is the hiring of “a right, trusted, and agile workforce” to ensure the U.S. intelligence community remains the best in the world. He also stated that an unprecedented outreach to private industry and academia is currently underway to partner on crucial intelligence issues. Purdue University is stepping up to be a critical contributor to this mission, and to provide the cross-disciplinary expertise required by the U.S. intelligence community.

Purdue has long been a proponent of the kind of interdisciplinary research that drives this discussion of science, engineering, and society. The mission of the Purdue Policy Research Institute (PPRI), which co-hosted this event, is to foster high-impact, interdisciplinary research to solve wicked problems. As a part of this symposium, the PPRI hosted an open-call essay competition to give students from across all disciplines an opportunity to consider the topic at hand and offer a creative take on their own perspectives. The winners of the competition were Jessica Eise, a Purdue University Communication student, who submitted a creative writing piece describing a potential outcome of a crucial programming error; Kevin Schieman, a University of Notre Dame Philosophy student, who submitted an opinion piece on embracing values when developing autonomous weapons policy; and Muriel Eaton, a Purdue University Medicinal Chemistry and Molecular Pharmacology student, who provided a fictional podcast made to sound like a recording for a time capsule.

Interestingly, the Organisation for Economic Co-operation and Development (OECD) recently unveiled principles for the innovative and trustworthy development and application of artificial intelligence. For nearly a year, the Trump administration worked closely with OECD partners, and member democracies have now set standards that are practical, flexible, and value-based. The OECD’s current recommendations on privacy, digital security, and responsible business conduct are now buoyed by these new standards in this rapidly evolving field. This commitment from the United States and the OECD member countries to priorities such as removing barriers to innovation and discovery, prioritizing long-term R&D, building the AI workforce, and fostering public trust is consistent with the discussions and findings of the conference, and is precisely what is needed to advance responsibly, and rapidly, in this critical arena.

Please stay tuned for more information: we will soon be posting a video link to our Symposium on Ethics, Technology, and the Future of War and Security, and we encourage you to hear for yourself what the experts have to say. As Director Coats, Secretary Work, and others said, the United States must ensure that we have the most qualified and best-trained workforce to serve our national security, and that we retain technological superiority over our adversaries at all times. Purdue is leading the way!