An (offbeat?) look at Artificial Intelligence and Defense/Security

Artificial intelligence carries a large number of myths and fantasies, particularly when it comes to defense: “artificial intelligence will surpass humans”; “killer robots are just around the corner”; and so on.

AI is anthropomorphized (yes, really) to excess. The military is frightening. Robotics is frightening. Artificial intelligence is frightening. And so killer robots, pardon, SALA (the defense world loves acronyms; here, systèmes d’armes létaux autonomes, or lethal autonomous weapon systems) pollute the debate and prevent us from approaching AI as a field of science and engineering.

We are very, very far from the artificial superconsciousness that many authors (notably the transhumanist movement led by the celebrated Ray Kurzweil) wave around like a red flag. AI is certainly a booming field, but it is not the revolution of humanity we are being asked to believe in.

Let us stay reasonable and take a rational look at the reasons why AI is back in fashion.

Note first that some concepts reappear and are presented as new when they are merely reborn under a new name, such as deep learning, which “reinvents” the neural networks of the 1950s.

AI is back in fashion thanks to the convergence of very large available computing power, a considerable (and constantly, rapidly growing) quantity of data, and algorithms that can now be combined. Consider that 2 billion terabytes of data are created every day, while the entire collection of the BNF amounts to “only” 10 billion terabytes! These big data (characterized by the four Vs: volume, veracity, velocity, variety) raise societal questions, but also require a minimum of education to understand and use them.

In the civilian world, commercial AI is currently developing through connected objects, which must adapt to the user, facilitate interactions, and manage interactivity automatically. In this area, France holds a certain lead, thanks in particular to its strength in mathematics, algorithmics, and engineering. Note also that, leaving aside activities tied to digital technology (signal processing), the GAFA companies are now buying up “AI”. Their investment is driven, among other things, by autonomous vehicles and robotics (computer vision), interaction with the user (semantic analysis, personal assistants, speech-to-speech, and so on), and data analysis in the broad sense, including big data.

The real stakes of AI in the world of defense and security lie mainly in support robotics, simulation in support of operations, “smart” sensors and signal analysis, and not, as a priority, in weapon systems.

Support robotics covers, in particular, load carrying (often loads of more than 60 kg in the field), route clearing (in hostile, even mined, terrain), surveillance, and so on. These are all tasks that require capabilities for environment analysis, autonomy, and vision, and for which AI techniques made “embeddable” are useful.

AI also makes it possible to model the behavior of human entities in the field. It thus becomes possible to simulate enemy entities, play out the course of an operation, infer one or more courses of action from it, and detect deviations from the predictions, all using simulated or recorded data from various sensors mounted on various systems (ground vehicles, aircraft, drones, even satellites). This is notably the domain of constructive simulation.

Giving autonomy and intelligence to sensors in the field, that is, performing processing, analysis, and classification inside the sensor itself, makes it possible to transmit qualified, relevant information to the operator or decision-maker, and thus to saturate neither the transmission networks nor the analysts.

To take one example, there are today smart sensors, such as a miniaturized, low-power thermal camera embedding an artificial intelligence that processes images automatically and in real time. It thus becomes possible to perform detection and tracking in images, gesture or motion detection, or to extract high-level features enabling automatic identification of a target of interest and processing of the corresponding image. Most of the work can thus be done within the sensor itself: all operations are performed locally, without overloading the network’s bandwidth or having to transmit information for analysis on a remote server. The result is savings in time and gains in security and efficiency. One can thus imagine an aerial drone capable of interpreting the images it captures automatically and immediately, without needing a link to a ground segment.
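To make the “process at the sensor, transmit only the result” idea concrete, here is a deliberately minimal sketch in Python. It is purely illustrative: the function name, the fixed intensity threshold, and the single-box output are all invented assumptions; a real smart camera would run a trained detector and segment connected components, not threshold pixels.

```python
import numpy as np

def detect_hotspots(frame, threshold=200):
    """Toy on-sensor processing: threshold a thermal frame and return
    only compact detection metadata, never the raw pixel stream."""
    ys, xs = np.nonzero(frame >= threshold)
    if len(xs) == 0:
        return []  # nothing worth transmitting at all
    # One bounding box around all hot pixels (a real sensor would
    # segment connected components and classify each one).
    return [{
        "bbox": (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())),
        "peak": int(frame[ys, xs].max()),
        "pixels": int(len(xs)),
    }]

# Synthetic 64x64 "thermal" frame with one hot 4x4 patch.
frame = np.zeros((64, 64), dtype=np.uint8)
frame[10:14, 20:24] = 250

detections = detect_hotspots(frame)
print(detections)
```

The point of the sketch is the shape of the output: a few dozen bytes of qualified metadata leave the sensor, instead of the full frame, which is exactly what keeps the transmission network and the analysts from being saturated.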

Finally (though this list is far from exhaustive), AI can be used for intelligence purposes: “intelligent” analysis, in particular of weak signals, using “big data” techniques (think of the monstrous quantity of data available on social networks, as well as in electronic transmission of information).
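As a toy illustration of what “weak signal” detection can mean, here is a minimal sketch that flags points in a data stream deviating strongly from a rolling baseline. Everything here is an assumption for illustration (the window size, the z-score threshold, the function name); real intelligence pipelines are vastly more sophisticated.

```python
from statistics import mean, stdev

def weak_signal_alerts(stream, window=20, z_threshold=3.0):
    """Flag indices whose value deviates from a rolling baseline by more
    than z_threshold standard deviations (a toy 'weak signal' detector)."""
    alerts = []
    for i in range(window, len(stream)):
        baseline = stream[i - window:i]
        m, s = mean(baseline), stdev(baseline)
        if s > 0 and abs(stream[i] - m) / s > z_threshold:
            alerts.append(i)
    return alerts

# A mostly periodic series with one small but anomalous jump at index 30.
series = [10 + 0.1 * ((2 * i) % 5 - 2) for i in range(50)]
series[30] += 2.0

print(weak_signal_alerts(series))  # the jump stands out from the baseline
```

The jump at index 30 is small in absolute terms, which is the whole point: a deviation that looks negligible to the eye becomes visible once it is measured against the statistics of the surrounding stream.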

AI is not the “superpower in the making” regularly announced in the media. Beyond the ethical and legal issues, it still faces technical challenges before reaching full maturity: unsupervised learning and the embeddability of the technology, to name only two. In the field of defense, there is no doubt that it will drive capability improvements across the board. Far from the image of the Terminator and transhumanist clichés, AI turns out to be a tool, a “Swiss Army knife” in the service of operations. Like any tool, one must learn how to use it…

Interested readers should not hesitate to visit the VMF214 blog, dedicated to defense technology innovation: https://blogvmf214.wordpress.com.

This article first appeared in the August 2016 issue of Grand Angle, the magazine of the CGE (Conférence des Grandes Ecoles), in a special feature on artificial intelligence and defense. It was written in collaboration with Emmanuel Chiva, deputy managing director of Agueris, studies advisor at the IHEDN, and head of the R&T commission of the GICAT.



3 thoughts on “An (offbeat?) look at Artificial Intelligence and Defense/Security”

  1. No one can really conceive, at this point, how far AI will be able to go, or how far humans will decide to take these new tools, whether for the defense and security of our countries or in the other areas of our lives.
    And that is where the danger lies! Not in the tool itself.

    If humans do not limit their use while there is still time, the interactions among all these algorithms will become so numerous, with algorithms of algorithms making any number of decisions ahead of the final, by-then scientifically simplified decision that humans will be able to make, and believe they are making (without even touching here on the subject of dependence). Who will dominate whom? And is that not already something of a reality?
    Obviously far more insidious than an army of Star Wars-style droids on a battlefield.

    The prospect of immense profits from AI research, and the stakes this creates, are worrying given human weakness in the face of new toys, money, and power.

    And yet…

    Below is a passage from Malcolm Gladwell’s “La force de l’intuition” (“Blink”), which your article brought back to mind. Unfortunately I do not have the French translation available, but I do not think that will be a handicap for your blog.

    “In the spring of 2000, Van Riper was approached by a group of senior Pentagon officials. He was retired at that point, after a long and distinguished career. The Pentagon was in the earliest stages of planning for a war game that they were calling Millennium Challenge ’02. It was the largest and most expensive war game thus far in history. By the time the exercise was finally staged—in July and early August of 2002, two and a half years later—it would end up costing a quarter of a billion dollars, which is more than some countries spend on their entire defense budget. According to the Millennium Challenge scenario, a rogue military commander had broken away from his government somewhere in the Persian Gulf and was threatening to engulf the entire region in war. He had a considerable power base from strong religious and ethnic loyalties, and he was harboring and sponsoring four different terrorist organizations. He was virulently anti-American. In Millennium Challenge—in what would turn out to be an inspired (or, depending on your perspective, disastrous) piece of casting—Paul Van Riper was asked to play the rogue commander.

    The group that runs war games for the U.S. military is called the Joint Forces Command, or, as it is better known, JFCOM. JFCOM occupies two rather nondescript low-slung concrete buildings at the end of a curving driveway in Suffolk, Virginia, a few hours’ drive south and east of Washington, D.C. …
    JFCOM is where the Pentagon tests new ideas about military organization and experiments with new military strategies.
    Planning for the war game began in earnest in the summer of 2000. JFCOM brought together hundreds of military analysts and specialists and software experts. In war game parlance, the United States and its allies are always known as Blue Team, and the enemy is always known as Red Team, and JFCOM generated comprehensive portfolios for each team, covering everything they would be expected to know about their own forces and their adversary’s forces. For several weeks leading up to the game, the Red and Blue forces took part in a series of “spiral” exercises that set the stage for the showdown. The rogue commander was getting more and more belligerent, the United States more and more concerned.
    In late July, both sides came to Suffolk and set up shop in the huge, windowless rooms known as test bays on the first floor of the main JFCOM building. Marine Corps, air force, army, and navy units at various military bases around the country stood by to enact the commands of Red and Blue Team brass. Sometimes when Blue Team fired a missile or launched a plane, a missile actually fired or a plane actually took off, and whenever it didn’t, one of forty-two separate computer models simulated each of those actions so precisely that the people in the war room often couldn’t tell it wasn’t real. The game lasted for two and a half weeks. For future analysis, a team of JFCOM specialists monitored and recorded every conversation, and a computer kept track of every bullet fired and missile launched and tank deployed. This was more than an experiment. As became clear less than a year later—when the United States invaded a Middle Eastern state with a rogue commander who had a strong ethnic power base and was thought to be harboring terrorists—this was a full dress rehearsal for war.
    The stated purpose of Millennium Challenge was for the Pentagon to test a set of new and quite radical ideas about how to go to battle.

    In Operation Desert Storm in 1991, the United States had routed the forces of Saddam Hussein in Kuwait. But that was an utterly conventional kind of war: two heavily armed and organized forces meeting and fighting in an open battlefield. In the wake of Desert Storm, the Pentagon became convinced that that kind of warfare would soon be an anachronism: no one would be foolish enough to challenge the United States head-to-head in pure military combat. Conflict in the future would be diffuse. It would take place in cities as often as on battlefields, be fueled by ideas as much as by weapons, and engage cultures and economies as much as armies. As one JFCOM analyst puts it: “The next war is not just going to be military on military. The deciding factor is not going to be how many tanks you kill, how many ships you sink, and how many planes you shoot down. The decisive factor is how you take apart your adversary’s system. Instead of going after war-fighting capability, we have to go after war-making capability. The military is connected to the economic system, which is connected to their cultural system, to their personal relationships. We have to understand the links between all those systems.”

    With Millennium Challenge, then, Blue Team was given greater intellectual resources than perhaps any army in history. JFCOM devised something called the Operational Net Assessment, which was a formal decision-making tool that broke the enemy down into a series of systems—military, economic, social, political—and created a matrix showing how all those systems were interrelated and which of the links among the systems were the most vulnerable. Blue Team’s commanders were also given a tool called Effects-Based Operations, which directed them to think beyond the conventional military method of targeting and destroying an adversary’s military assets. They were given a comprehensive, real-time map of the combat situation called the Common Relevant Operational Picture (CROP). They were given a tool for joint interactive planning. They were given an unprecedented amount of information and intelligence from every corner of the U.S. government and a methodology that was logical and systematic and rational and rigorous. They had every toy in the Pentagon’s arsenal.
    “We looked at the full array of what we could do to affect our adversary’s environment—political, military, economic, societal, cultural, institutional. All those things we looked at very comprehensively,” the commander of JFCOM, General William F. Kernan, told reporters in a Pentagon press briefing after the war game was over. “There are things that the agencies have right now that can interrupt people’s capabilities. There are things that you can do to disrupt their ability to communicate, to provide power to their people, to influence their national will…to take out power grids.” Two centuries ago, Napoleon wrote that “a general never knows anything with certainty, never sees his enemy clearly, and never knows positively where he is.” War was shrouded in fog. The point of Millennium Challenge was to show that, with the full benefit of high-powered satellites and sensors and supercomputers, that fog could be lifted.
    This is why, in many ways, the choice of Paul Van Riper to head the opposing Red Team was so inspired, because if Van Riper stood for anything, it was the antithesis of that position. Van Riper didn’t believe you could lift the fog of war. His library on the second floor of his house in Virginia is lined with rows upon rows of works on complexity theory and military strategy. From his own experiences in Vietnam and his reading of the German military theorist Carl von Clausewitz, Van Riper became convinced that war was inherently unpredictable and messy and nonlinear. In the 1980s, Van Riper would often take part in training exercises, and, according to military doctrine, he would be required to perform versions of the kind of analytical, systematic decision making that JFCOM was testing in Millennium Challenge. He hated it. It took far too long. “I remember once,” he says, “we were in the middle of the exercise. The division commander said, ‘Stop. Let’s see where the enemy is.’ We’d been at it for eight or nine hours, and they were already behind us. The thing we were planning for had changed.” It wasn’t that Van Riper hated all rational analysis. It’s that he thought it was inappropriate in the midst of battle, where the uncertainties of war and the pressures of time made it impossible to compare options carefully and calmly.
    ……

    Millennium Challenge, in other words, was not just a battle between two armies. It was a battle between two perfectly opposed military philosophies. Blue Team had their databases and matrixes and methodologies for systematically understanding the intentions and capabilities of the enemy. Red Team was commanded by a man who looked at a long-haired, unkempt, seat-of-the pants commodities trader yelling and pushing and making a thousand instant decisions an hour and saw in him a soul mate.

    On the opening day of the war game, Blue Team poured tens of thousands of troops into the Persian Gulf. They parked an aircraft carrier battle group just offshore of Red Team’s home country. There, with the full weight of its military power in evidence, Blue Team issued an eight-point ultimatum to Van Riper, the eighth point being the demand to surrender. They acted with utter confidence, because their Operational Net Assessment matrixes told them where Red Team’s vulnerabilities were, what Red Team’s next move was likely to be, and what Red Team’s range of options was. But Paul Van Riper did not behave as the computers predicted.
    Blue Team knocked out his microwave towers and cut his fiber-optics lines on the assumption that Red Team would now have to use satellite communications and cell phones and they could monitor his communications.
    “They said that Red Team would be surprised by that,” Van Riper remembers. “Surprised? Any moderately informed person would know enough not to count on those technologies. That’s a Blue Team mind-set. Who would use cell phones and satellites after what happened to Osama bin Laden in Afghanistan? We communicated with couriers on motorcycles, and messages hidden inside prayers. They said, ‘How did you get your airplanes off the airfield without the normal chatter between pilots and the tower?’ I said, ‘Does anyone remember World War Two? We’ll use lighting systems.’”
    Suddenly the enemy that Blue Team thought could be read like an open book was a bit more mysterious. What was Red Team doing? Van Riper was supposed to be cowed and overwhelmed in the face of a larger foe. But he was too much of a gunslinger for that. On the second day of the war, he put a fleet of small boats in the Persian Gulf to track the ships of the invading Blue Team navy. Then, without warning, he bombarded them in an hour-long assault with a fusillade of cruise missiles. When Red Team’s surprise attack was over, sixteen American ships lay at the bottom of the Persian Gulf. Had Millennium Challenge been a real war instead of just an exercise, twenty thousand American servicemen and women would have been killed before their own army had even fired a shot.
    “As the Red force commander, I’m sitting there and I realize that Blue Team had said that they were going to adopt a strategy of preemption,” Van Riper says. “So I struck first. We’d done all the calculations on how many cruise missiles their ships could handle, so we simply launched more than that, from many different directions, from offshore and onshore, from air, from sea. We probably got half of their ships. We picked the ones we wanted. The aircraft carrier. The biggest cruisers. There were six amphibious ships. We knocked out five of them.”
    In the weeks and months that followed, there were numerous explanations from the analysts at JFCOM about exactly what happened that day in July. Some would say that it was an artifact of the particular way war games are run. Others would say that in real life, the ships would never have been as vulnerable as they were in the game. But none of the explanations change the fact that Blue Team suffered a catastrophic failure. The rogue commander did what rogue commanders do. He fought back, yet somehow this fact caught Blue Team by surprise. … they had conducted a thoroughly rational and rigorous analysis that covered every conceivable contingency, yet that analysis somehow missed a truth that should have been picked up instinctively. In that moment in the Gulf, Red Team’s powers of rapid cognition were intact—and Blue Team’s were not. How did that happen?”

    Excerpt From: Malcolm Gladwell. “Blink.” iBooks. https://itun.es/us/wByuv.l