Pen#6 Algorithmic Risk and Military Intelligence: An Australian Perspective

By Theo Squires

Kodak, which once sold 90% of all film used in the US and even invented the digital camera in 1975, filed for bankruptcy in 2012 because it did not recognise the power of the digital age [1: Rupert Neate, “Kodak falls in the creative destruction of the digital age”, January 20, 2012]. Analysing the disruptive potential of new technologies is essential to avoiding failures of imagination and ‘Kodak moments’ of creative destruction. A similar fate awaits other organisations unable to adapt quickly enough. According to the World Economic Forum, the “staggering confluence of new technologies” comprising the Fourth Industrial Revolution—artificial intelligence (AI), nanotechnology and quantum computing, among others—will disrupt almost every human endeavour [2: Klaus Schwab, The Fourth Industrial Revolution (Geneva: World Economic Forum, 2016), 7]. Militaries invest heavily in developing new capabilities to retain their advantage—rarely a smooth process. Incorporating military AI into kill chains, for example, raises concerns over ‘killer bots’ [3: Daisuke Wakabayashi and Scott Shane, “Google Will Not Renew Pentagon Contract That Upset Employees”, The New York Times, June 1, …]—just one way in which rapid technological progress has overtaken ethical, legal and technical frameworks.

War has not been exempt from any of the previous industrial revolutions—and failing to innovate in the enterprise of war could mean defeat, death and ruination rather than a loss of market share, job lay-offs and bankruptcy. The rapid rate of technological progress, the dual use of many technologies, and the eroding Western technological advantage in the face of China’s ascendancy present new strategic challenges. Good military intelligence at all levels, from tactical to strategic, is necessary to support decision-making and avoid succumbing to disruption. Yet the same disruptive technologies will be increasingly influential in the practice of military intelligence itself. The US is already incorporating AI into intelligence, surveillance and reconnaissance (ISR) and command-and-control (C2) systems through the Joint Enterprise Defense Infrastructure (JEDI) [4: Sydney Freedberg Jr, “Big data for big wars: JEDI vs China & Russia”, Breaking Defense, August 12, …] and Project Maven [5: Rahul Kalvapalle, “What is Project Maven?”, Global News, April 5, 2018]. Proponents of AI in ISR predict that “intelligence preparation of the battlefield can be nearly instantaneous” [6: Christian Heller, “The Future Navy—Near-term Applications of Artificial Intelligence”, Naval War College Review, Vol. 72, No. 4, …]. Offloading some of the cognitive burden of collecting and analysing information is vital as the velocity of information accelerates. China recognises this and intends to lead the world in AI, including its applications to ISR, by 2030 [7: Fabian Westerheide, “China—The first artificial intelligence superpower”, Forbes, January 14, …]—essential to gaining the edge in the algorithmic warfare of the future [8: Peter Layton, Algorithmic Warfare: Applying Artificial Intelligence to Warfighting (Canberra: Air Power Development Centre, 2018), 47].
Australia and her allies cannot afford to cede advantage by failing to anticipate, develop and adopt new technologies, and to build the workforce that supports them, throughout the military enterprise. However, adopting AI into ISR poses unique challenges and risks for the ADF.

This paper will assess the potential of AI in military intelligence. First, it will consider which aspects of military intelligence are suitable for automation. Second, it will address the fundamental risks, including the problem of programmed bias and the difficulties that inscrutable algorithms, known as black boxes, pose to intelligence analysis. Finally, this paper will recommend how Australia should approach AI within ISR to mitigate algorithmic risk by finding the areas most suited to automation. In particular, the author will argue that Australia and the ADF lack the depth of AI talent to support a single-service approach to adopting AI within ISR: the solution must be Joint.

What elements of ISR can AI perform?

Why automate? Because automation is efficient, and the military must seek every efficiency that translates into war-fighting advantage, particularly in the age of hypersonic missiles, autonomous weapons systems and cyberwarfare. Of course, militaries are not the only enterprises seeking the efficiency of machines. Robotics and computing have massively increased the productivity of labour. Manual, repetitive labour in manufacturing has long been automated. From 1990 to 2010, the number of industrial robots in use in the US increased from approximately 50,000 to 100,000; meanwhile, manufacturing output per person doubled over the same period [9: Kathleen Bolter, “What Manufacturing Can Teach Us About How Automation Impacts Jobs”, Kentuckiana Works, June 18, …]. Manufacturing is suitable for automation because robots are good at tightly defined, repetitive tasks. Robots of this variety do not require advanced AI because there is little dynamism in their inputs and outputs. Similar automation is already widespread in military applications. For example, intelligent combat systems such as Aegis direct missiles [10: Paul Scharre, “Are AI-powered Killer Robots Inevitable”, Wired, May 19, 2020], and industrial control systems on modern warships safely automate what once required dozens of sailors. In military ISR, sensors often automatically collect data from the operational environment, removing much of the cognition from the collection process. These are all examples of automating routine manual labour.

What about cognitive labour? Researchers at the University of Oxford found in 2013 that even cognitive labour, such as that handled by associate lawyers and tax auditors, is likely to be automated due to advances in machine learning [11: Carl Frey and Michael Osborne, The Future of Employment: How susceptible are jobs to computerisation? (Oxford: Oxford University Press, …)]. Sometimes machine learning can completely automate an entire process, as in content-identification applications such as Shazam, reverse image search and facial recognition [12: Arvind Narayanan, “How to recognise AI snake oil”, Princeton University, November 18, 2019]. This work is routine and cognitive. AI already outperforms expert doctors at diagnosing certain medical conditions from x-rays and magnetic resonance imaging [McKinsey Global Institute, Skill Shift: Automation and the Future of the Workforce (Boston: McKinsey Global Institute, 2018)]. AI also shows promise in detecting spam, copyrighted material and hate speech, and in automated essay grading and content recommendation. In the military, AI that can interpret satellite imagery to identify aircraft sitting on runways or ships at sea will augment traditional imagery intelligence analysts [13: Patrick Tucker, “US Government to Restrict Sale of AI for Satellite Image Analysis”, Defense One, January 6, …]. In 2017, BuzzFeed reported that it had trained an AI on publicly available flight data to identify ‘spy planes’ operated by government agencies and contractors in the US. These surveillance aircraft were not registered to a recognised agency or on publicly declared missions [14: “BuzzFeed News Trained A Computer To Search For Hidden Spy Planes. This Is What We Found.”, BuzzFeed GitHub, last modified August 8, …]. From four months of data covering all air traffic in and over the US, the AI identified 69 planes based on characteristics such as speed, altitude and transponder detections.
The journalists then conducted additional research on those planes to confirm the AI’s predictions [15: Peter Aldhous, “We Trained A Computer To Search For Hidden Spy Planes. This Is What It Found”, BuzzFeed News, August 8, …].
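The approach can be sketched in miniature. The fragment below is a hypothetical illustration, not BuzzFeed's actual code or data: it flags aircraft whose speed, altitude and transponder behaviour deviate sharply from ordinary traffic, using simple standard-deviation scores. All feature values and tail numbers are invented.

```python
# Hypothetical sketch inspired by the BuzzFeed example (not their actual
# method): flag aircraft whose flight characteristics are statistically
# unusual compared with normal traffic. All values here are invented.
from statistics import mean, stdev

# (speed_kts, altitude_ft, transponder_detections) per aircraft
normal_traffic = [
    (450, 35000, 120), (460, 36000, 110), (440, 34000, 130),
    (455, 35500, 125), (445, 34500, 115), (470, 37000, 105),
]
candidates = {
    "N12345": (455, 35200, 118),   # looks like ordinary traffic
    "N99999": (140, 9000, 800),    # slow, low, loitering: suspicious
}

def z_scores(sample, population):
    """How many standard deviations each feature sits from 'normal'."""
    cols = list(zip(*population))
    return [abs(x - mean(c)) / stdev(c) for x, c in zip(sample, cols)]

def is_unusual(sample, population, threshold=3.0):
    """Flag the aircraft if any feature is a statistical outlier."""
    return any(z > threshold for z in z_scores(sample, population))

for tail, features in candidates.items():
    verdict = "unusual" if is_unusual(features, normal_traffic) else "normal"
    print(tail, verdict)
```

The real system reportedly used a more sophisticated classifier trained on known surveillance aircraft, but the principle is the same: characterise normal traffic statistically, then surface the outliers for a human to investigate.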

Figure 1: The Intelligence Cycle

Is AI suited to interpreting events, normally within a complex and unique geo-strategic environment, to provide predictions about the future—the essential role of intelligence? This is non-routine cognitive labour. Intelligence goes beyond merely processing and identifying information, as shown in Figure 1 above. Professional analysts find it difficult at the best of times, because the right information is hard to obtain and because of fundamental foibles and biases in human cognition identified by researchers such as Kahneman [16: Daniel Kahneman, Thinking, Fast and Slow (London: Penguin Random House UK, 2012)] and Heuer [17: Richards Heuer, The Psychology of Intelligence Analysis (Langley: Center for the Study of Intelligence, CIA, 1999)]. In theory, AI, with more computing power than humans and immunity to flaws such as fatigue, boredom and stress, could perform this function more effectively. Computers can beat humans at chess and Go, which—as any player will attest—are complex, non-routine cognitive tasks.

There are immediate challenges to applying AI to military intelligence problems. Machine learning requires large, quality datasets and clear rules. AI is good at chess because it can learn from tens of thousands of matches, all bound by the same rules. Go, with a larger board and greater scope for play than chess, including some 2×10¹⁷⁰ legal positions, took far longer for AI to master; nonetheless, AI trained successfully on a large dataset, partly because there were fixed rules and a clear definition of victory. Contrast military intelligence at all levels. Data are never complete—the fog of war—and are subject to deception. They come from many sources, each limited, and are usually stale by the time they are analysed. There are almost always unseen factors and relationships at play between strategic, operational and tactical forces that defy modelling—unpredictability that is inherent in any human system. The same situation never happens twice—unlike chess or Go, which begin from the same start state every time and have a defined end point. Finally, it is often difficult to determine success in intelligence. An analyst assessing the odds of war starting between China and India in 2021 may put it at 30%. If there is no war (or, indeed, if there is), was the analyst’s assessment sound and correct? Intelligence analysis defies reduction to the predictive algorithms that power YouTube and Google.

“It is often difficult to determine success in intelligence. An analyst assessing the odds of war starting between China and India in 2021 may put it at 30%. If there is no war (or, indeed, if there is), was the analyst’s assessment sound and correct? Intelligence analysis defies reduction to the predictive algorithms that power YouTube and Google.”

AI struggles even when given large, high-quality datasets and objective metrics of success. For example, AI has been used to predict criminal recidivism, job performance and risk of terrorism, but AI experts are justifiably cautious [18: Narayanan, “AI Snake Oil”]. In one study of the ability of AI to predict social outcomes for at-risk children, researchers trained AI on the dataset from Princeton’s Fragile Families and Child Wellbeing Study. The dataset included 4,242 families with 12,942 discrete variables tracked over 20 years [19: “About”, Fragile Families, Princeton University, accessed January 21, 2020]. Hundreds of machine learning experts from around the world competed to develop the best AI for making predictions based on the data. The best models turned out to be hardly superior to a simple four-variable linear regression [20: Narayanan, “AI Snake Oil”]. The problem with predicting the future, it appears, is not just access to data or the quality of data, but something more fundamental.

One reason may be that access to more information, beyond a certain minimum, generally does not improve the accuracy of an analyst’s estimates, although it can make analysts over-confident in their assessments [21: Richards Heuer, The Psychology of Intelligence Analysis (Langley: Center for the Study of Intelligence, CIA, 1999)]. The Pareto Principle holds that 80% of effects come from 20% of causes; analysts must therefore identify the ‘vital few’ rather than the trivial many [22: Jim Chappelow, “Pareto Principle”, Investopedia, last modified August 29, 2019]. While AI can help collect and present the vital few pieces of information to analysts, it is folly to think that if advanced machines can collect and process enough big data, intelligence can predict the future. Black swan events, such as 9/11, defy prediction, and the lack of relevant past examples in datasets makes training AI difficult [23: Ben Dickson, “How the coronavirus pandemic is breaking artificial intelligence and how to fix it”, last modified July 30, …]. Yet these black swans, or strategic surprises, are usually the most important to militaries.

Figure 2: Anomaly detection through tracking AI model error [24: “Artificial Intelligence and the Covid-19 Black Swan”, Kantify, accessed September 15, 2020]

Although AI may not predict black swans, it may help detect the anomalies that usually precede them. This anomaly-detection function can be programmed even without black swans in the dataset. For example, Microsoft is developing ‘Premonition’, an AI that collects and processes information from insects to provide warning of emerging pathogens [25: “Project Premonition”, Microsoft, accessed September 28, 2020]. One of the main mechanisms of machine learning is backpropagation, whereby an AI refines its own model by continuously comparing its predictions to reality in a dataset and propagating the resulting error backwards to adjust the model’s parameters [26: Simeon Kostadinov, “Understanding Backpropagation Algorithm”, Towards Data Science, last modified August 8, 2019]. The AI learns by reducing error. Once trained to predict within normal parameters, AI can detect anomalies by monitoring for sudden increases in model error, per Figure 2. This may alert analysts, mitigating the creeping-normalcy bias. However, such AI must be trained carefully to minimise bias built into programming and datasets.
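The error-monitoring idea can be sketched very simply. The fragment below is a toy illustration with invented values, not any fielded system: it uses a naive moving-average 'model' and flags any observation whose prediction error suddenly spikes, which is the mechanism behind Figure 2.

```python
# Toy sketch of anomaly detection via model error (invented values):
# a trained model predicts within normal error bounds, so a sudden
# jump in error signals a departure from 'normal'.

def predict_next(history, window=5):
    """Naive model: predict the mean of the last `window` observations."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def detect_anomalies(series, window=5, threshold=3.0):
    """Flag indices where absolute prediction error exceeds `threshold`."""
    anomalies = []
    for i in range(window, len(series)):
        error = abs(series[i] - predict_next(series[:i], window))
        if error > threshold:
            anomalies.append(i)
    return anomalies

# A stable signal with one sudden departure from 'normal' at index 12.
signal = [10, 11, 10, 9, 10, 11, 10, 10, 9, 10, 11, 10, 25, 10, 11]
print(detect_anomalies(signal))  # prints [12]
```

A production system would use a far richer model and a statistically derived threshold, but the alerting logic is the same: the analyst is notified not when the model succeeds, but when it suddenly fails.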

The myth of objectivity: programmed and learned bias

Many organisations pursue ‘data-driven’ decision-making (DDDM) and data-centric cultures as an antidote to subjectivity [27: “The advantages of data-driven decision-making”, Harvard Business School, last modified August 26, 2019]. AI enables DDDM by collecting and processing ‘big data’. This approach, championed by tech giants such as Amazon and Netflix, has clear benefits and appeals to military decision-makers frustrated by byzantine information management systems. The US Army began developing the ‘Army Leader Dashboard’ in 2017 to incorporate almost a thousand unique data sources—primarily logistic, human resources, financial management and training—into a unified, consumable format for commanders [28: Ellen Summey, “Creating Insight-Driven Decisions”, Army AL&T, Summer 2019]. The project exposed systemic problems in how the US military (and, equally, the ADF) produces and stores data, but presents a promising model for future digitisation within the military. Similar projects are underway within ISR. Bringing diverse data sources into one system is an early step in incorporating more AI into ISR and applying DDDM to military intelligence and operations, as it streamlines the flow of data and allows analysts to focus on analysis rather than research. Collating, processing and linking data all rely on advanced algorithms, playing to AI’s strength in content identification. However, while it shares some technical challenges with the Army Leader Dashboard, such as how data are siloed within the military, applying big data to ISR carries subtler risks.

Datasets are critical to machine learning: bad datasets make bad AI. Microsoft introduced the AI chatbot ‘Tay’ to Twitter in 2016. Tay, intended to interact with the world as a normal American teenage girl, had Twitter as its dataset. Within 16 hours, Microsoft removed the chatbot after it became a racist, misogynistic troll [29: Hope Reese, “Why Microsoft’s Tay AI bot went wrong”, Tech Republic, March 24, 2016]. Google had a similar problem in 2015, when Google Photos mistakenly labelled some black people as gorillas—because the dataset it learned from was overwhelmingly white [30: Conor Dougherty, “Google Photos Mistakenly Labels Black People ‘Gorillas’”, New York Times, July 1, …]. Programmed bias might have been detected in code review, but this was learned bias, and the problem was only identified after the algorithms went operational. What about AI in a military intelligence context, where the dataset is subject not only to inadvertent compromise but also to intentional denial and deception? Or what if analytical biases (such as interpreting adversary actions through an own-force perspective) have crept into the programming or the dataset? AI within ISR will be further challenged by the uneven availability of certain types of data. For example, signals and imagery intelligence is relatively abundant, automated and machine-readable compared to human intelligence. This could result in algorithms weighting data sources inappropriately, a phenomenon known as availability bias [31: Richards Heuer, The Psychology of Intelligence Analysis (Langley: Center for the Study of Intelligence, CIA, 1999), 148].
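The mechanism of learned bias is easy to demonstrate. In the toy sketch below (invented labels, not the Google Photos or Tay datasets), a model trained on a 95/5 class skew looks highly accurate overall while misclassifying every under-represented example—exactly the failure that headline accuracy figures can conceal.

```python
# Illustrative sketch of learned bias from a skewed dataset (all data
# invented). A 'model' that simply learns the majority training label
# scores well overall while failing every under-represented case.

training_labels = ["common"] * 95 + ["rare"] * 5   # 95% one class

# Naive model learned from this dataset: always predict the most
# frequent training label.
majority = max(set(training_labels), key=training_labels.count)

held_out = ["common"] * 95 + ["rare"] * 5          # same skew as training
overall_accuracy = sum(1 for y in held_out if y == majority) / len(held_out)
rare_recall = sum(1 for y in held_out
                  if y == "rare" and majority == "rare") / 5

print(overall_accuracy)  # 0.95 -- looks impressive
print(rare_recall)       # 0.0 -- the minority class is always misclassified
```

In an ISR setting the 'rare' class might be precisely the adversary behaviour of interest, so evaluating only aggregate accuracy would certify a model that never detects it.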

The limitations of AI partly stem from human limitations. The very human flaws that AI is intended to alleviate—particularly bias and fallacious reasoning—are also inherent in the creation, training and use of AI. This is problematic if AI merely appears objective: intelligence is not objective. Recklessly applied, AI may significantly degrade the ability of intelligence to discern truth from chaos and, worse, may make commanders over-confident in a process that has always been as much art as science. Yet objectivity is not the sole determinant of value; analysts and their customers simply must know, and be able to challenge, AI’s inputs to the intelligence process. Unfortunately, AI presents significant obstacles to that kind of interrogation.

Black boxes in intelligence

ISR is being transformed by digital technologies. Sensors collect more data, more persistently. The bandwidth required to transmit this volume and complexity of raw data has driven innovation in intelligent sensors that collect, process and fuse data prior to transmission. Ultimately, this will result in networks of intelligent UAVs, satellites and ground-based sensors that provide high-quality, fused information to C2 nodes for further analysis. Further, these sensors will be adaptive: for example, if one satellite in a constellation is lost, the network will autonomously make up for shortfalls in coverage [32: Nathan Strout, “Hypergiant is building a reprogrammable satellite constellation with the Air Force”, C4ISRNET, July 27, …].

A risk arises in pushing the interpretation and fusion of information out to the sensors. While delivering fused information directly to analysts is efficient, both cognitively and in terms of ICT bandwidth, it creates ‘black boxes’ in the intelligence process. A black box is a system with opaque internal workings. Intelligence analysts must assess the veracity and reliability of information, both in their own analysis and in peer review. If data are collected and processed by black-box intelligent sensors, the analyst must also assess the black box itself. But the nature of machine learning is opaque and deeply technical—often beyond the capacity of even AI experts to explain fully [33: Théo Szymkowiak, “The Artificial Intelligence Black Box Problem & Ethics”, Medium, last modified November 2, …]. Short courses on AI or data science are insufficient. From the Australian perspective, the ADF does not specifically sponsor intelligence professionals to pursue advanced degrees in data analytics or computer science. The problem of AI black boxes is not unique to the military: a Deloitte study found that “the methods for monitoring and troubleshooting [AI] lag adoption” [34: “Managing the black box of artificial intelligence (AI): future of risk in the digital era”, Deloitte, accessed September 14, …]. For example, neuroscience depends on functional magnetic resonance imaging (fMRI), which assesses brain activity by measuring differences in oxygenated haemoglobin in the blood. The measurements are interpreted algorithmically, forming the basis of thousands of neuroscience studies—however, the neuroscientists themselves rarely have insight into the machine intelligence that interprets the fMRI scans. A study in 2016 found that these algorithms produced false positives up to 70% of the time, well beyond the 5% factored into the studies. A fundamental algorithmic flaw in the black box cast doubt on up to 40,000 published studies going back years [35: “Computer says: Oops”, The Economist, last modified July 16, 2016].

This risk is compounded in the military, where black boxes are likely to be classified and intelligence analysis does not always undergo the valuable peer-review process of academia. There are also additional ethical questions if AI-generated intelligence feeds a kill web, particularly if the kill web is automated, as may occur with UAVs or naval combat systems. Algorithmic risk cannot be eliminated, but it must be considered and mitigated in all phases of the intelligence process. Intelligence professionals will therefore need technical skill sets and a more robust peer-review process involving data scientists in addition to traditional subject matter experts. Adopting the latest technologies is thus not as simple as acquiring advanced hardware and software—a supporting workforce and organisational structure are essential.

Managing AI within ADF ISR

Australia lacks its own Silicon Valley and is unlikely to trailblaze in AI. Although well educated [36: “Educational attainment of 25-64 year-olds, 2019”, Stats, OECD, accessed January 27, 2020], Australians are significantly less likely to study science, technology, engineering or mathematics than students in other OECD countries, including the US [37: “Share of population by field of study, 2019”, Stats, OECD, accessed January 27, 2020]. Australia’s AI talent is thus limited and in high demand, requiring innovative approaches from the ADF.

Defence Science and Technology’s Strategy 2030 emphasises partnering more, nationally and internationally; this is necessary, but will prove difficult [38: Department of Defence, More, together: Defence Science and Technology Strategy 2030 (Canberra: Department of Defence, 2020), 4]. So far, the US has been cautious about involving even close allies in sensitive AI projects [39: Patrick Tucker, “US Government to Restrict Sale of AI for Satellite Image Analysis”, Defense One, January 6, …]. Australia needs deeper integration, both to benefit from American expertise and to scrutinise the black boxes we import. Public-private partnerships are also needed; otherwise, private-sector demand for labour will crowd out the military. In 2015, Uber hired 40 experts from Carnegie Mellon University’s National Robotics Engineering Center. This significantly strained the lab’s research, resulting in cancelled contracts with the US Department of Defense [40: Mike Ramsey and Douglas MacMillan, “Carnegie Mellon Reels After Uber Lures Away Researchers”, The Wall Street Journal, May 31, …]. However, public-private partnerships can be difficult to execute. Project Maven, which used AI to process UAV footage for strike targeting [41: Rahul Kalvapalle, “What is Project Maven?”, Global News, April 5, 2018], made headlines in 2018 when Google employees petitioned against it—Google then exited the project [42: Daisuke Wakabayashi and Scott Shane, “Google Will Not Renew Pentagon Contract That Upset Employees”, The New York Times, June 1, …]. Trust is critical and could be built through more dynamic personnel management, such as allowing mid-level and senior transitions between private, public and uniformed service for niche capabilities. Maximising the benefit of specialist reservists will also help. But it is unlikely that the ADF, or any single service, will organically possess the expertise to manage AI.
As with many niche capabilities, to enjoy economies of scale, maximise our scarce AI-literate workforce, and prevent single-service bias, the ADF must adopt a Joint approach to AI talent management. 


Predictions about technological advances frequently prove wrong. In ‘Why People Still Matter’ (2004), Levy and Murnane suggested that driving in traffic was not susceptible to automation, as “executing a turn against oncoming traffic involves so many factors that it is hard to imagine discovering a set of rules that can replicate driver’s behaviour” [43: Frank Levy and Richard J. Murnane, The New Division of Labor: How Computers Are Creating the Next Job Market (New York: Princeton University Press, 2004), 13-30]. Within six years, Google had modified a Toyota Prius to drive fully autonomously. Likewise, AI will become influential and integrated within ISR regardless of any technological pessimism. Perfection is unnecessary if AI’s limitations are understood.

So, what are the likely applications of AI to military intelligence? Supplementing, not substituting for, humans. Computer-human teaming that automates the routine elements of ISR will allow analysts to ask the harder, non-routine questions of why, how and what next. Mitigating algorithmic risk will demand specialised data and computer scientists within the ISR enterprise; these are not traditional skill sets within the ADF. The ADF must not be tempted by skilled manpower shortages to leave this critical task to generalists or to make do with cursory courses: a national, international and Joint approach to ISR is vital. Intelligence customers must also be AI-informed. Military intelligence will never be a machine, such as ‘Lieutenant Commander Data’ in Star Trek or ‘INTELLIGENCE’ in Team America, that can autonomously and objectively ingest data, analyse it and produce fused intelligence products. Assuming otherwise imperils the very essence of the ISR enterprise.

Theo Squires has previously presented at the Defence Entrepreneurs Forum and the Sea Power Conference on the technological and economic factors of war.
