THE COMING SWARM (Part 2)

 

4. THE HUMAN ELEMENT IN ROBOTIC WARFARE

Paul Scharre     

War on the Rocks

March 11, 2015

 

Editor’s note: This is the fourth article in a six-part series, “The Coming Swarm,” on military robotics and automation as a part of the joint War on the Rocks-Center for a New American Security Beyond Offset Initiative. Read the first, second, and third entries in this series.

 

 

The first rule of unmanned aircraft is, don’t call them unmanned aircraft. And whatever you do, don’t call them drones.

 

The U.S. Air Force prefers the term “remotely piloted aircraft” to refer to its Predators, Reapers, and Global Hawks. And for Predators and Reapers, that undoubtedly is a reflection of their reality today. They are flown by stick and rudder by pilots who just happen to not be onboard the plane (and sometimes are on the other side of the globe).

 

For aircraft like the Global Hawk, which is largely automated and is directed by a person via keyboard and mouse rather than flown with stick and rudder, the question of whether it is “remotely piloted” is murkier.

 

Is piloting about a physical act – controlling the aircraft directly via inputs to the flight controls – or about commanding the aircraft and being responsible for the mission, for airspace deconfliction, and for making decisions about where the aircraft should go?

 

Historically, the answer has been both. But automation is changing that. It’s changing what it means to be a pilot. A person no longer has to be physically on board the aircraft to be considered a “pilot.” Nor do they need to be physically controlling the aircraft’s flight controls directly. The day will soon come when, because of automation, a person can “pilot” multiple aircraft at the same time. It is already technically possible today. The cultural presumption that a person can only command one aircraft at a time is stalling implementation of multi-aircraft control.

 

But that will change.

 

Automation has long been colonizing jobs once performed by humans across a range of industries, from driving forklifts to writing newspaper stories. The changes to military operations will be no less profound. While pilots may be the first to grapple with this paradigm shift, autonomous systems will raise the same issues across many military positions, from truck drivers to tankers. Autonomous systems will inevitably change how some military duties are performed and may eliminate some job specialties entirely. Physical prowess at tasks like piloting an aircraft, driving a vehicle, or firing a rifle will matter less in a world where aircraft fly themselves, vehicles drive on their own, and smart rifles correct for wind, humidity, elevation, and the shooter’s movements all on their own.

 

For some military communities, the shift will be quite significant. Sometimes this can lead to a reluctance to embrace robotic systems, out of fear that they are replacing humans. This is unfortunate, because it could not be further from the truth. Autonomous systems will not replace warfighters any more than previous innovations like firearms, steam-powered ships, or tanks replaced combatants. These innovations did, however, change how militaries fight. Today’s infantrymen, sailors, and cavalrymen no longer fight with edged weapons, work the sails and rigging of ships, or ride horses, but the ethos embodied in their job specialties lives on, even as the specific ways in which warfighters carry out those duties have changed. Similarly, the duties of tomorrow’s “pilots,” “tank drivers,” and “snipers” will look far different from today’s, but the ethos embodied in these job specialties will not change. Human judgment will always be required in combat.

 

The Human Element

 

Terminology that refers to robotic systems as “unmanned” can feed false perceptions about the roles that human beings will or will not play. The Air Force is right to push back against the term “unmanned.” (Note: I often use it myself in writing because it has become common currency, but I prefer “uninhabited vehicle,” which is more accurate.) “Unmanned” implies a person is not involved. But robotic systems will not roll off the assembly line and report for combat duty. Humans will still be involved in warfare and still in command, but at the mission level rather than manually performing every task. Uninhabited and autonomous systems can help but also have shortcomings, and will not be appropriate for every task. The future is not unmanned, but one of human-machine teaming.

 

Militaries will want a blend of autonomous systems and human decision-making. Autonomous systems will be able to perform many military tasks better than humans, and will particularly be useful in situations where speed and precision are required or where repetitive tasks are to be performed in relatively structured environments. At the same time, barring major advances in novel computing methods that aim to develop computers that work like human brains, such as neural networks or neuromorphic computing, autonomous systems will have significant limitations. While machines exceed human cognitive capacities in some areas, particularly speed, they lack robust general intelligence that is flexible across a range of situations. Machine intelligence is “brittle.” That is, autonomous systems can often outperform humans in narrow tasks, such as chess or driving, but if pushed outside their programmed parameters they fail, and often badly. Human intelligence, on the other hand, is very robust to changes in the environment and is capable of adapting and handling ambiguity. As a result, some decisions, particularly those requiring judgment or creativity, will be inappropriate for autonomous systems. The best cognitive systems, therefore, are neither human nor machine alone, but rather human and machine intelligences working together.

 

Militaries looking to best harness the advantages of autonomous systems should take a cue from the field of “advanced chess,” where human and machine players cooperate in hybrid, or “centaur,” teams. After world chess champion Garry Kasparov lost a game to IBM’s chess-playing computer Deep Blue in 1996, and then the full match in their 1997 rematch, he founded the field of advanced chess, which is now the cutting edge of chess competition. In advanced chess, human players play in cooperation with a computer chess program, using the program to evaluate possible moves and try out alternative sequences. The result is a superior game of chess, more sophisticated than would be possible with either humans or machines playing alone.

  

Human-machine teaming raises new challenges, and militaries will need to experiment to find the optimum mix of human and machine cognition. Determining which tasks should be done by machines and which by people will be an important consideration, and one made continually challenging as machines continue to advance in cognitive abilities. Human-machine interfaces and training for human operators to understand autonomous systems will be equally important. Human operators will need to know the strengths and limitations of autonomous systems, and in which situations autonomous systems are likely to lead to superior results and when they are likely to fail. As autonomous systems become incorporated into military forces, the tasks required of humans will change, not only with respect to what functions they will no longer perform, but also which new tasks they will be required to learn. Human operators will need to be able to understand, supervise, and control complex autonomous systems in combat. This places new burdens on the selection, training, and education of military personnel, and potentially raises additional policy concerns. Cognitive human performance enhancement may help, and in fact may be essential to managing the data overload and increased operations tempo of future warfare, but it has its own set of legal, ethical, policy, and social challenges.

 

How militaries incorporate autonomous systems into their forces will be shaped in part by strategic need and available technology, but also in large part by military bureaucracy and culture. Humans may be unwilling to pass control of some tasks over to machines. Debates over autonomous cars are an instructive example. Human beings are horrible drivers, killing more than 30,000 people a year in the United States alone, or roughly the equivalent of a 9/11 attack every month. Self-driving cars, on the other hand, have already driven nearly 700,000 miles, including on crowded city streets, without a single accident. Autonomous cars have the potential to save literally tens of thousands of lives every year, yet rather than rushing to put self-driving cars on the streets as quickly as possible, adoption is moving forward cautiously. Given the state of the technology today, even if autonomous cars are far better than human drivers overall, there would inevitably be situations where the autonomy fails and where humans, who are better at adapting to novel and ambiguous circumstances, would have done better. Even if, in aggregate, thousands of lives could be saved with more autonomy, humans tend to focus on the few instances where the autonomy could fail and humans would have performed better. Transferring human control to automation requires trust, which is not easily given.

 

War is a Human Endeavor

 

Many of the tasks humans perform in warfare will change, but humans will remain central to war, for good or ill. The introduction of increasingly capable uninhabited and autonomous systems on the battlefield will not lead to bloodless wars of robots fighting robots, with humans sitting safely on the sidelines. Death and violence will remain an inescapable component of war, if for no other reason than that wars will not end without real human costs. Nor will humans be removed from the battlefield entirely, telecommuting to combat from thousands of miles away. Remote operations will have a role, as they already do in uninhabited aircraft operations today, but humans will also be needed forward in the battlespace, particularly for command and control when long-range communications are degraded.

 

Even as autonomous systems play an increasing role on the battlefield, it is still humans who will fight wars, only with different weapons. Combatants are people, not machines. Technology will aid humans in fighting, as it has since the invention of the sling, the spear, and the bow and arrow. Better technology can give combatants an edge in terms of standoff, survivability, or lethality, advantages that combatants have sought since the first time a human picked up a club to extend his reach against an enemy. But technology alone is nothing without insight into the new uses it unlocks. The tank, radio, and airplane were critical components of the blitzkrieg, but the blitzkrieg also required new doctrine, organization, concepts of operation, experimentation, and training to be developed successfully. It was people who developed those concepts, drafted requirements for the technology, restructured organizations, rewrote doctrine, and ultimately fought. In the future, it will be no different.

 

War will remain a clash of wills. To the extent that autonomous systems allow more effective battlefield operations, they can be a major advantage. Those who master a new technology and its associated concepts of operation first can gain game-changing advantages on the battlefield, allowing decisive victory over those who lag behind. But technological innovation in war can be a double-edged sword. If this advantage erodes a nation’s willingness to squarely face the burden of war, it can be a detriment. The illusion that such advantages can lead to quick, easy wars can be seductive, and those who succumb to it may find their illusions shattered by the unpleasant and bloody realities of war. Uninhabited and autonomous systems can lead to advantages over one’s enemy, but the millennia-long evolution of weapons and countermeasures suggests that such weapons will proliferate: No innovation leaves its user invulnerable for very long. In particular, increasing automation has the potential to accelerate the pace of warfare, but not necessarily in ways that are conducive to the cause of peace. An accelerated tempo of operations may lead to combat that is more chaotic, but not more controllable. Wars that start quickly may not end quickly.

 

The introduction of robotic systems on the battlefield raises challenging operational, strategic, and policy issues, the full scope of which cannot yet be seen. The nations and militaries that see furthest into a dim and uncertain future to anticipate these challenges and prepare for them now will be best poised to succeed in the warfighting regime to come.

 

5. COMMANDING THE SWARM

Paul Scharre

March 25, 2015

 

Editor’s note: This is the fifth article in a six-part series, The Coming Swarm, on military robotics and automation as a part of the joint War on the Rocks-Center for a New American Security Beyond Offset Initiative. Read the first, second, third, and fourth entries in this series.

 

 

Photo credit: Mehmet Karatay

 

Today’s uninhabited vehicles are largely tele-operated, with a person piloting or driving the vehicle remotely, but tomorrow’s won’t be. They will incorporate increasing autonomy, with human command at the mission level. This will enable one person to control multiple vehicles simultaneously, bringing greater combat power to the fight with the same number of personnel. Scaling up to large swarms, however, will require even more fundamental shifts in the command and control paradigm.

 

The Naval Postgraduate School is working toward a 50-on-50 swarm-vs.-swarm aerial dogfight, and researchers at Harvard have built a swarm of over a thousand simple robots that work together to assemble into formations. As the number of elements in a swarm increases, human control must shift increasingly to the swarm as a whole, rather than micromanaging individual elements.

 

How to exercise effective command and control over a swarm is an area of nascent and important research. How does one control a swarm? What commands can be issued to a swarm? How does one balance competing aims, such as optimality, predictability, speed, and robustness to disruption?

 

Possible swarm command and control models, ordered from more centralized to increasingly decentralized control, include:

 

Centralized control, where swarm elements feed information back to a central planner that then tasks each element individually.

 

Hierarchical control, where individual swarm elements are controlled by “squad” level agents, which are in turn controlled by higher-level controllers, and so on.

 

Coordination by consensus, where swarm elements communicate to one another and converge on a solution through voting or auction-based methods.

 

Emergent coordination, where coordination arises naturally by individual swarm elements reacting to others, like in animal swarms.

 

 

Swarm Command and Control Models
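To make the most decentralized of these models concrete, here is a minimal Python sketch of emergent coordination: each element repeatedly averages its heading with those of nearby elements (pure co-observation), and a globally aligned swarm emerges with no central controller ever issuing a command. The parameters and the order-parameter metric are illustrative assumptions, not drawn from the article.

```python
import math
import random

def simulate_alignment(n=30, radius=0.5, steps=50, seed=1):
    """Toy emergent coordination: agents align headings with neighbors only."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    heading = [rng.uniform(-math.pi, math.pi) for _ in range(n)]
    for _ in range(steps):
        new = []
        for i in range(n):
            sx = sy = 0.0
            for j in range(n):  # neighbors within sensing radius, incl. self
                dx = pos[i][0] - pos[j][0]
                dy = pos[i][1] - pos[j][1]
                if dx * dx + dy * dy <= radius * radius:
                    sx += math.cos(heading[j])
                    sy += math.sin(heading[j])
            new.append(math.atan2(sy, sx))
        heading = new
    # Order parameter: ~1.0 means the swarm is aligned, ~0 means disordered.
    ox = sum(math.cos(h) for h in heading) / n
    oy = sum(math.sin(h) for h in heading) / n
    return math.hypot(ox, oy)
```

Starting from random headings, repeated local averaging drives the order parameter toward 1, even though no element ever receives a global instruction.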

 

Each of these models has different advantages and may be preferred depending on the situation. Completely decentralized swarms can find optimal solutions to complex problems, much as ant colonies converge on the shortest route for carrying food back to the nest, but converging on the optimal solution may take multiple iterations, and therefore time. Centralized or hierarchical planning may allow swarms to converge on optimal, or at least “good enough,” solutions more quickly, but requires higher bandwidth to transmit data to a central source that then sends instructions back out to the swarm. Coordination by consensus, through voting or auction mechanisms, could be used when only low-bandwidth communication links exist between swarm elements. And when no direct communication is possible at all, swarm elements could still rely on indirect communication to arrive at emergent coordination. This can occur through co-observation, as when animals flock or herd, or through stigmergic communication, in which elements alter the environment itself, much as termites do when building their complex mounds. Indeed, the term “stigmergy” was coined in the 1950s by the French zoologist Pierre-Paul Grassé in his research on termites.
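The auction mechanism mentioned above can be sketched in a few lines: each task is put up for bid, every free element bids based on its distance to the task, and the closest bidder wins. This greedy single-item auction is a toy stand-in for real consensus-based allocation algorithms; the function and its inputs are invented for illustration.

```python
def auction_assign(agents, tasks):
    """agents, tasks: lists of (x, y) positions. Returns {task_index: agent_index}."""
    free = set(range(len(agents)))
    assignment = {}
    for t, (tx, ty) in enumerate(tasks):
        if not free:
            break
        # Each free agent bids the negative of its squared distance to the
        # task; the highest bidder (the closest agent) wins the task.
        winner = max(free, key=lambda a: -((agents[a][0] - tx) ** 2 +
                                           (agents[a][1] - ty) ** 2))
        assignment[t] = winner
        free.remove(winner)
    return assignment
```

For example, with agents at (0, 0) and (5, 5) bidding on tasks at (4, 4) and (1, 0), the second agent wins the first task and the first agent takes the second.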

 

Decentralized swarms are inherently robust and adaptive

 

Centralized control is not always optimal even if high-bandwidth communications exist, since detailed plans and overly specific direction can prove brittle in a fast-changing battlefield environment. Decentralized control – either through localized “squad commanders,” voting-based consensus mechanisms, or emergent coordination – has the advantage of pushing decision-making closer to the battlefield’s edge. This can both accelerate the speed of immediate reaction and make a swarm more resilient to communications disruptions. Swarms of individual elements reacting to their surroundings in accordance with higher-level commander’s intent represent the ultimate in decentralized execution. With no central controller to rely upon, the swarm cannot be crippled or hijacked in toto, although elements of it could be. What a decentralized swarm might sacrifice in terms of optimality, it could buy back in faster speed of reaction. And swarms that communicate indirectly through stigmergy or co-observation, like flocks or herds, are immune to direct communication jamming.

 

Hordes of simple, autonomous agents operating cooperatively under centralized commander’s intent but decentralized execution can be devilishly hard to defeat. The scattered airdrop of paratroopers over Normandy during the D-Day invasion wrecked detailed Allied plans, but had the unintended effect of making it nearly impossible for the Germans to counter the “little groups of paratroopers” dispersed around, behind, and inside their formations. Simple guidance like “run to the sound of gunfire and shoot anyone not dressed like you” can be an effective method of conveying commander’s intent while leaving the door open to adaptive solutions based on the situation on the ground. The downside to an entirely decentralized swarm is that it could be more difficult to control, since specific actions would not necessarily be predictable in advance.

 

Command and control models must balance competing objectives

 

Choices about command and control models for swarms may therefore depend upon the balance of competing desired attributes, such as speed of reaction, optimality, predictability, robustness to disruption, and communications vulnerability. The optimal command and control model for any given situation will depend on a variety of factors, including:

 

- Level of intelligence of swarm elements relative to complexity of the tasks being performed;

 

- Amount of information known about the task and environment before the mission begins;

 

- Degree to which the environment changes during the mission, or even the mission itself changes;

 

- Speed of reaction required to adapt to changing events or threats;

 

- Extent to which cooperation among swarm elements is required in order to accomplish the task;

 

- Connectivity, both among swarm elements and between the swarm and human controllers, in terms of bandwidth, latency, and reliability; and

 

- Risk, in terms of both probability and consequences, of suboptimal solutions, or outright failure.
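One way to reason about these factors is a simple decision matrix that scores each command and control model against the attributes a given mission weights most heavily. The trait scores and weights below are entirely hypothetical, meant only to show the shape of the trade-off, not actual doctrine.

```python
# Hypothetical trait scores (0-1, higher is better) for the four C2 models.
MODEL_TRAITS = {
    "centralized":  {"optimality": 0.9, "speed": 0.4, "robustness": 0.2, "bandwidth_thrift": 0.1},
    "hierarchical": {"optimality": 0.8, "speed": 0.6, "robustness": 0.5, "bandwidth_thrift": 0.4},
    "consensus":    {"optimality": 0.6, "speed": 0.7, "robustness": 0.7, "bandwidth_thrift": 0.7},
    "emergent":     {"optimality": 0.4, "speed": 0.9, "robustness": 0.9, "bandwidth_thrift": 1.0},
}

def pick_model(weights):
    """weights: dict attribute -> importance (0-1). Returns the best-scoring model."""
    def score(model):
        traits = MODEL_TRAITS[model]
        return sum(weights.get(attr, 0.0) * val for attr, val in traits.items())
    return max(MODEL_TRAITS, key=score)
```

Under these toy numbers, a mission in contested communications (robustness and bandwidth thrift weighted heavily) favors emergent coordination, while a mission that prizes optimality above all favors centralized control.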

 

The best swarm would be able to adapt its command and control paradigm to changing circumstances on the ground, such as using bandwidth when it is available but adapting to decentralized decision-making when it is not. In addition, the command and control model could change during different phases of an operation, and different models could be used for certain types of decisions.

 

Human control can take many forms

 

Human control over a swarm can take many forms. Human commanders might develop a detailed plan and then put a swarm into action, allowing it to adapt to changing circumstances on the ground.

Alternatively, human commanders might establish only higher-level tasks, such as “find enemy targets,” and allow the swarm to determine the optimal solution through centralized or decentralized coordination. Or human controllers might simply change swarm goals or agent preferences to induce certain behaviors. If the cognitive load of controlling a swarm exceeds what one person can manage, the work could be divided by breaking the swarm into smaller elements or by splitting tasks based on function. For example, one human controller could monitor the health of vehicles, another could set high-level goals, and yet another could approve specific high-risk actions, like the use of force.

 

Ultimately, a mix of control mechanisms may be desirable, with different models used for different tasks or situations. For example, researchers exploring the use of intelligent agents for real-time strategy games developed a hierarchical model of multiple centralized control agents. Squad-based agents controlled tactics and coordination between individual elements. Operational-level agents controlled the maneuver and tasking of multiple squads. And strategy-level agents controlled overarching game planning, such as when to attack. In principle, cooperation at each of these levels could be performed via different models in terms of centralized vs. decentralized decision-making or human vs. machine control. For example, tactical coordination could be performed through emergent coordination, centralized agents could perform operational-level coordination, and human controllers could make higher-level strategic decisions.
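The hierarchical arrangement described here can be sketched structurally, with each level fanning orders out to the level below. Class and method names are illustrative assumptions, not taken from the real-time strategy research.

```python
class Squad:
    """Tactical level: coordinates individual swarm elements."""
    def __init__(self, element_ids):
        self.element_ids = element_ids
    def execute(self, order):
        # Fan the order out to each individual element in the squad.
        return [(eid, order) for eid in self.element_ids]

class OperationalAgent:
    """Operational level: maneuvers and tasks multiple squads."""
    def __init__(self, squads):
        self.squads = squads
    def maneuver(self, order):
        tasked = []
        for squad in self.squads:
            tasked.extend(squad.execute(order))
        return tasked

class StrategyAgent:
    """Strategic level: one high-level decision drives the whole tree."""
    def __init__(self, operational_agents):
        self.subordinates = operational_agents
    def campaign(self, order):
        tasked = []
        for op in self.subordinates:
            tasked.extend(op.maneuver(order))
        return tasked
```

A single strategic order ("advance," say) propagates down through operational agents and squads until every element is tasked, mirroring the squad/operational/strategy layering described above.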

 

In order to optimize their use of swarms, human controllers will need training to understand the behavior and limits of swarm automation in real-world environments, particularly if the swarm exhibits emergent behaviors. Human controllers will need to know when to intervene to correct autonomous systems, and when such intervention will introduce suboptimal outcomes.

 

Basic research on robotic swarms is underway in academia, government, and industry. In addition to better understanding swarming behavior itself, more research is needed on human-machine integration with swarms. How does one convey to human operators the state of a swarm simply and without cognitive overload? What information is critical for human operators and what is irrelevant? What are the controls or orders humans might give to a swarm? For example, a human controller might direct a swarm to disperse, coalesce, encircle, attack, evade, etc. Or a human might control a swarm simply by using simulated “pheromones” on the battlefield, for example by making targets attractive and threats repellent. To harness the power of swarms, militaries will not only need to experiment and develop new technology, but also ultimately modify training, doctrine, and organizational structures to adapt to a new technological paradigm.
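The “pheromone” idea can be illustrated with a toy potential field: operator-marked targets lower the potential around themselves (attractive) and threats raise it (repellent), and each element simply steps to the lowest-potential neighboring cell. The field shape and step rule are assumptions for illustration only.

```python
import math

def potential(point, targets, threats):
    """Lower values are more attractive; targets pull, threats push harder."""
    px, py = point
    value = 0.0
    for tx, ty in targets:
        value -= 1.0 / (1.0 + math.hypot(px - tx, py - ty))  # attract
    for hx, hy in threats:
        value += 2.0 / (1.0 + math.hypot(px - hx, py - hy))  # repel
    return value

def step(point, targets, threats):
    """Move one grid step to the neighboring cell with the lowest potential."""
    px, py = point
    candidates = [(px + dx, py + dy)
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
    return min(candidates, key=lambda p: potential(p, targets, threats))
```

An element starting at the origin with a target marked at (5, 0) walks straight to the target and then holds position there, with the human exerting control only by placing the marks.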

 

6. COUNTER-SWARM: A GUIDE TO DEFEATING ROBOTIC SWARMS

Paul Scharre

War on the Rocks

March 31, 2015

 

Editor’s note: This is the last article in a six-part series, The Coming Swarm, on military robotics and automation as a part of the joint War on the Rocks-Center for a New American Security Beyond Offset Initiative. Read the first, second, third, fourth, and fifth entries in this series.

 

 

Photo credit: Official U.S. Navy Imagery

 

Swarming with a large number of low-cost autonomous systems can be useful for a wide range of applications in warfare, and the U.S. military should move to harness the advantages of this approach. But so will others. While swarming provides numerous opportunities to expand U.S. combat effectiveness by enabling greater range, persistence, daring, mass, coordination, intelligence, and speed on the battlefield, it may be enemy swarms that are the real game-changer.

 

Many of the innovations that enable swarming – low-cost uninhabited systems, autonomy, and networking – are driven by the commercial sector, and thus will be widely available. Moreover, many states and non-state groups may be more eager to embrace them than the U.S. military, which is heavily invested in existing operational paradigms and the expensive and exquisite platforms they rely on. Swarms are more likely to be embraced by those who lack the institutional and cultural hurdles to their adoption that exist in the U.S. military.

 

Strategists should not be deceived by the cheap, unsophisticated drones currently in the hands of non-state groups like Hamas, Hezbollah, or the Islamic State. Fully autonomous GPS-programmable drones can be purchased online today for only a few hundred dollars. Large numbers of them could be used to field an autonomous swarm carrying explosives or even crude chemical or biological agents. Just as cheap improvised explosive devices wreaked havoc on U.S. forces operating in Iraq and Afghanistan, low-cost drones could similarly be extremely disruptive and cost-imposing – airborne improvised explosive devices that, instead of lying in wait, seek out U.S. forces. Think about the panic recently caused by one wayward drone mistakenly flown onto the White House lawn. Now imagine hundreds carrying explosives intentionally directed at the windows of the Oval Office, an open-air public event, or the deck of a carrier conducting freedom of navigation exercises. This is possible with today’s commercially available technology. The U.S. military and law enforcement must begin to think now about how to counter these threats and in cost-effective ways.

 

Swarms can be devilishly hard to target because they distribute combat capabilities across a large number of dispersed assets and because they leverage mass to saturate and overwhelm defenses. That is, after all, one of their advantages. At the same time, nothing is invulnerable. Individual elements of a swarm can still be attacked directly, although cost-effective means are critical. Shooting down a thousand-dollar drone with a million-dollar missile is not a cost-effective strategy. Whole swarms can be targeted, however, through electronic warfare, high-powered microwaves, or cyber attacks. Furthermore, the cooperative nature of swarms can be used against them: by jamming a swarm to force it to “collapse” into individual uncoordinated elements, by out-maneuvering a swarm to force it into a disadvantageous position, or even conceivably by hijacking a swarm to take control of it.

 

Swarm warfare is only in the very earliest stages of development, but below are some of the innovative ideas being explored on how to counter enemy swarms.

 

Destroy the Swarm

 

Individual swarm elements are still vulnerable to destruction, although militaries will want to find cost-effective means of doing so. Possible approaches include low cost-per-shot weapons, counter-swarms, and wide area electronic attacks.

 

Low cost-per-shot weapons

 

Low cost-per-shot weapons consist of exotic technologies like lasers and electromagnetic rail guns as well as more traditional technologies like machine guns. The Navy is currently developing laser weapons and rail guns. It successfully tested a laser weapon at sea last year and will test a rail gun at sea in 2016. Lasers and rail guns are appealing counter-swarm weapons because they are electrically powered and therefore have relatively low costs for each shot – significantly lower than a missile – assuming power sources are available. The Navy has already demonstrated the ability of a laser to shoot down an enemy drone, although defeating an entire swarm of drones is a more significant challenge. Machine guns, such as the sea-based Phalanx and land-based counter-rocket, artillery and mortar (C-RAM) system, are also potentially cost-effective ways of defeating incoming projectiles or smart drones, provided the radars can successfully identify and track small, slow, low-flying objects.

 

Counter-swarm

 

One method of taking out a swarm could be with another swarm. As long as the counter-swarm is cheaper and/or more effective than the enemy swarm, it can be a relatively low-cost way to defend against enemy swarm attacks. The Naval Postgraduate School is currently researching swarm-on-swarm warfare tactics, with the intent of testing a 50-on-50 aerial swarm dogfight. Basic research in swarming tactics will be critical, as winning in swarm combat may depend upon having the best algorithms to enable better coordination and faster reaction times, rather than simply the best platforms.
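The cost-effectiveness argument running through this section can be made concrete with back-of-the-envelope arithmetic: a defense is only cost-effective if the cost of killing one attacker is below the attacker’s unit cost. The prices below are illustrative assumptions, not real program figures.

```python
def cost_per_kill(shot_cost, shots_per_kill):
    """Defender's expected cost to destroy one attacking element."""
    return shot_cost * shots_per_kill

def exchange_ratio(attacker_unit_cost, shot_cost, shots_per_kill):
    """> 1.0 means the defender spends more per kill than the attacker spent
    building the element it just destroyed -- a losing proposition at scale."""
    return cost_per_kill(shot_cost, shots_per_kill) / attacker_unit_cost

# A $1,000,000 missile against a $1,000 drone: ratio of 1000, ruinous.
# A $1 laser shot needing 3 shots per kill: ratio of 0.003, sustainable.
```

This is why electrically powered weapons with low per-shot costs, and counter-swarms built from cheap platforms, dominate the counter-swarm discussion: they push the exchange ratio below 1.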

 

High-powered microwaves

 

Low cost-per-shot weapons and counter-swarms are appealing, but still require targeting individual swarm elements. High-powered microwaves, on the other hand, could potentially blanket a wide area with electromagnetic energy to disrupt or destroy electronics, thus taking out an entire swarm in one move. While high-powered microwaves currently have limited range, they could be effective for terminal defense against some types of swarm attacks, or could be mounted forward on platforms that intercept and knock out swarms further away from the assets being defended.

 

Collapse the Swarm

 

Communications jamming can also be an effective means of disrupting a swarm by preventing coordination among individual elements, “collapsing” the swarm so that it disintegrates into many disparate, uncoordinated elements. Swarms that rely upon implicit communication techniques such as co-observation – as flocks of birds, schools of fish, or herds of animals do – are inherently resilient against direct communications jamming. Nevertheless, co-observation can still be jammed through the use of obscurants or other measures that create “noise” in whatever spectrum swarm elements use to observe one another. While such jamming wouldn’t destroy individual swarm elements, it would prevent them from fighting cooperatively, potentially making individual elements easier to target and eliminate.
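The “collapse” idea can be illustrated by modeling swarm coordination as a communications graph: jamming that shrinks the effective link range fragments one connected swarm into many uncoordinated pieces. The positions and radii below are invented for illustration.

```python
def comm_components(positions, radius):
    """Count connected components of the within-radius communication graph,
    using a simple union-find. One component = one coordinated swarm."""
    n = len(positions)
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    for i in range(n):
        for j in range(i + 1, n):
            dx = positions[i][0] - positions[j][0]
            dy = positions[i][1] - positions[j][1]
            if dx * dx + dy * dy <= radius * radius:
                parent[find(i)] = find(j)  # link elements in comms range
    return len({find(i) for i in range(n)})
```

Four elements spaced one unit apart form a single coordinated swarm at a link range of 1.5, but jamming that cuts the effective range to 0.5 collapses them into four isolated, uncoordinated elements.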

 

Trap the Swarm

 

Conversely, one could adopt the opposite approach and leave swarm coordination intact, but leverage it to one’s own advantage to force the swarm into a disadvantageous position. This could be done by trapping a swarm, canalizing it into unfavorable terrain, encircling it with another swarm, or otherwise compressing or dispersing the swarm to force it to fight in a way such that many of its advantages are negated. Native Americans, for example, used to leverage the herding behavior of buffalo to drive them off a cliff, killing whole herds at a time in a “buffalo jump.”

 

Hijack the Swarm

 

The ultimate tactic in counter-swarm warfare is to hijack the enemy’s swarm for one’s own purposes. There are many examples in nature of animals repurposing swarms for their own use, sometimes in ways that are not harmful to the swarm, such as hiding within it for protection, and in other cases in ways that are outright destructive.

 

In a form of spoofing, or false-data, attack, the West African rubber frog secretes a pheromone that prevents the normally aggressive stinging ant Paltothyreus tarsatus from attacking it. The frog then lives inside the ant colony during the dry season, reaping the benefits of the nest’s humidity and its protection from predators.

 

The slave-making ant Polyergus breviceps, on the other hand, is less benign when it hijacks an entire colony of a rival ant species. The Polyergus queen infiltrates a rival colony, kills the queen, and assumes control of the colony as its new queen. Her own offspring are then raised by the hijacked colony and its workers, thus taking control of an entire swarm and using it for her own purposes.

 

Similarly, in the military context, swarms could be hijacked by spoofing incoming data, by generating signals in the environment to induce certain swarm behaviors, or by direct communications hacking.

 

Counter-Countermeasures

 

As these examples demonstrate, swarm security will be essential in military operations. The increased degree of autonomy in swarming systems introduces novel risks. While autonomous systems may not be more susceptible to spoofing or cyber attacks, the consequences if an enemy were to gain control of a highly autonomous system – or an entire swarm – could be much greater. An enemy might be able to ground a human-inhabited aircraft through a cyber vulnerability, but if it took control of an uninhabited system, no human would be physically present to disable it. Actually controlling a remotely piloted aircraft today would require replicating the control data and would be quite onerous for even a single aircraft, much less a large number of them. But as uninhabited systems incorporate greater autonomy, an enemy, once in, could conceivably redirect them to turn on friendly forces with a few lines of code.

 

Decentralizing swarm command-and-control architectures can be one method to enhance resiliency. Perhaps paradoxically, requiring human authorization for some functions can also enhance resiliency against some forms of attacks. Armed with “common sense” and the ability to adapt to unanticipated situations, humans may be better at responding to some forms of spoofing attacks. Furthermore, requiring a human “in the loop” for some functions at least requires an adversary to replicate that command and control function in order to take control of a swarm, rather than simply being able to send false data or insert malware and allow automation to do the rest. “Human firewalls” can help build resilience against cyber attacks.
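The “human firewall” idea can be illustrated with a minimal sketch (the command names, categories, and API below are hypothetical, invented purely for illustration): automation handles routine commands, but high-consequence actions are held until a human operator explicitly authorizes them, so spoofed or injected commands cannot ride automation to completion.

```python
# Hypothetical "human firewall" gating swarm commands. All identifiers
# here are illustrative assumptions, not a real swarm control API.

HIGH_CONSEQUENCE = {"engage_target", "redirect_swarm"}

def dispatch(command, human_approves):
    """Execute a command only if it is low-consequence, or if a human
    operator explicitly authorizes it.

    An attacker who injects a high-consequence command still has to
    defeat (or replicate) the human authorization step; automation
    alone will not carry the command through.
    """
    if command in HIGH_CONSEQUENCE and not human_approves(command):
        return "blocked: awaiting human authorization"
    return f"executed: {command}"

# A spoofed 'engage_target' injected by an attacker is held at the gate:
print(dispatch("engage_target", human_approves=lambda c: False))
# Routine navigation updates flow through automation unimpeded:
print(dispatch("update_waypoint", human_approves=lambda c: False))
```

The design trade-off mirrors the text: the gate buys resilience against spoofing and malware at the cost of automation speed for the gated actions, which is why it is reserved for high-consequence functions like the use of force.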

 

Rejecting all automation and relying entirely on human operators would give up the advantages of not only swarming but even basic vehicle autonomy for communications-degraded operations. Therefore, the best cognitive architecture will inevitably be a blend of human and machine decision-making. Militaries will want to harness automation for some tasks but keep humans “in the loop” for others, particularly high-consequence actions like the use of force.

 

The Enemy Gets a Vote on the Future of Swarming

 

Faced with some of the challenges in building and operating swarms – and keeping them operational in contested environments – it might be tempting to move slowly in experimenting with swarming concepts. But the enemy gets a vote. Cheap drones are proliferating worldwide, and they will incorporate increasing amounts of autonomy, enabling swarming with commercially available technology. An ongoing revolution in information technology is generating a technology space that is flat and accelerating. New innovations are widely available to friend and foe alike, and the pace of innovation far outstrips the lumbering bureaucracies of governments and defense institutions. Moreover, because the technology behind increasing autonomy is software- not hardware-based, it can be easily copied, modified, and proliferated. It may be enemy swarms that first force the U.S. military to confront the challenges of swarm warfare, but the United States would be foolish not to also exploit its opportunities.

 

Swarming will require new concepts of operation and several paradigm shifts for the U.S. military. We must move from a paradigm of driving and piloting vehicles to mission-level command of a swarm. We must shift from waiting for “full autonomy” to seizing the opportunities afforded by operationally-relevant autonomy today. Instead of focusing on capability, we must focus on capability per dollar, and the advantages that mass may bring. We must shift our focus from survivability of individual platforms to resiliency of the swarm as a whole. We must think of not only payloads over platforms, but also software over payloads. Finally, instead of thinking about “unmanned” systems replacing people, we should think about robotic swarms as yet another tool to help warfighters adapt to a changing world to do their jobs – to fight and win.

 

Paul Scharre

 

*  *  *

 

Author’s biographical sketch:

 

 

1. Biography on War on the Rocks:

Paul Scharre (portrait above) is a fellow and Director of the 20YY Warfare Initiative at the Center for a New American Security (CNAS) and author of CNAS’ recent report, “Robotics on the Battlefield Part II: The Coming Swarm.” He is a former infantryman in the 75th Ranger Regiment and has served multiple tours in Iraq and Afghanistan. (War on the Rocks)

 

2. Biography on CNAS:

Paul Scharre is a Fellow and Director of the 20YY Warfare Initiative at the Center for a New American Security.

From 2008-2013, Mr. Scharre worked in the Office of the Secretary of  Defense (OSD) where he played a leading role in establishing policies on unmanned and autonomous systems and emerging weapons technologies. Mr. Scharre led the DoD working group that drafted DoD Directive 3000.09, establishing the Department’s policies on autonomy in weapon systems. Mr. Scharre also led DoD efforts to establish policies on intelligence, surveillance, and reconnaissance (ISR) programs and directed energy technologies. Mr. Scharre was involved in the drafting of policy guidance in the 2012 Defense Strategic Guidance, 2010 Quadrennial Defense Review, and Secretary-level planning guidance. His most recent position was Special Assistant to the Under Secretary of Defense for Policy.

Prior to joining OSD, Mr. Scharre served as a special operations reconnaissance team leader in the Army’s 3rd Ranger Battalion and completed multiple tours to Iraq and Afghanistan. He is a graduate of the Army’s Airborne, Ranger, and Sniper Schools and Honor Graduate of the 75th Ranger Regiment’s Ranger Indoctrination Program.

Mr. Scharre has published articles in Proceedings, Armed Forces Journal, Joint Force Quarterly, Military Review, and in academic technical journals. He has presented at National Defense University and other defense-related conferences on defense institution building, ISR, autonomous and unmanned systems, hybrid warfare, and the Iraq war. Mr. Scharre holds an M.A. in Political Economy and Public Policy and a B.S. in Physics, cum laude, both from Washington University in St. Louis.

 

*  *  *

 
