Think about the concept of “AI ethics.” Where does your mind wander? Do you think about Optimus Prime from Transformers?

In case you weren’t able to paint a vivid picture in your head, we’ll help you…

Start the Reconnaissance!

An autonomous robot powered by a DARPA-funded AI system begins its search of a village for a suspected terrorist. It rounds the corners of homes and navigates its way around obstacles, scanning the faces of those it encounters, looking for a match.

This robot knows exactly who it is looking for, with pre-programmed knowledge of the suspect’s facial structure and build. It identifies the correct person 9,999 out of 10,000 times, and just now, it’s found a match.

A remote missile system immediately launches a warhead at the exact coordinates reported. This time, however, it was never given permission to fire.

AI Ethics

The above situation was recently replicated as part of a proof of concept at a Marine Corps base in Quantico, Virginia. The demonstration was designed to simulate the capabilities, and the responsibilities, that AI systems are now being trusted with.

How should the robot react when it identifies its target? Situations like this make the need for AI ethics apparent.
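Consider the accuracy figure from the scenario above. Even an AI that is right 9,999 times out of 10,000 makes mistakes at scale. Here's a minimal back-of-the-envelope sketch in Python; the 10,000-scan deployment size is our assumption, purely for illustration:

```python
# Back-of-the-envelope check on the "9,999 out of 10,000" accuracy claim.
# The 10,000-scan deployment size is our assumption for illustration,
# not a figure from the Quantico demonstration.

error_rate = 1 / 10_000   # robot misidentifies 1 person in 10,000
scans = 10_000            # hypothetical number of faces scanned

# Probability that at least one scan produces a misidentification.
p_any_error = 1 - (1 - error_rate) ** scans
print(f"P(at least one misidentification) = {p_any_error:.1%}")
# ~63.2%: at this scale, a mistake is more likely than not.
```

When a misidentification is more likely than not over the life of a deployment, the question of who authorizes the strike stops being hypothetical.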

The Terminator Conundrum

AI ethics is not a small-scale issue. Within the Pentagon, it has been dubbed the “Terminator Conundrum.” Put simply: should we allow autonomous AI systems to make decisions regarding human lives?

AI may still be in its infancy, but it is quickly revealing itself to be the biggest revolution in warfare since nuclear technology.

Right now, the military is betting heavily on AI's future importance in maintaining an advantage on the battlefield. Need some proof? The Pentagon currently spends roughly $3 billion a year on autonomous systems projects, and that budget funds a slew of military applications already in the works.

The Pentagon’s Military Projects

Troop Support

Some of these projects, like the troop support project, are less controversial than our earlier example. Boston Dynamics describes its LS3 as a:

“Rough-terrain robot designed to go anywhere Marines and Soldiers go on foot, helping carry their load.”

Support robots already exist on the battlefield, making human operations easier and more efficient. Robots like the LS3 are able to carry massive loads for miles over uneven terrain and scout out dangerous areas before we send in troops to further investigate.

A Boston Dynamics support robot assists soldiers during a training exercise.

While the troop support project may not be controversial, other projects the Pentagon funds raise some serious questions related to AI ethics.

Perdix

Perdix, a product of MIT’s Lincoln Laboratory, is a swarm-based mini drone project designed for low-altitude intelligence, surveillance, and reconnaissance missions.

Cheap and expendable, these mini drones fly in swarms, communicating with one another thousands of times a second. They operate with a shared brain: every decision is made on the fly by the entire group.

Two Perdix mini drones

The applications for technology like Perdix are just as eerie as they are exciting.

Will Roper, Director of the Strategic Capabilities Office, describes how a swarm of Perdix mini drones would chase a fleeing suspect:

“There’s several different roads they could have gone down. And you don’t know which one to search. You can tell them, ‘Go search all the roads,’ and tell them what to search for and let them sort out the best way to do it.”
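To make that concrete, here's a minimal sketch of collective task allocation in Python. The greedy nearest-road assignment below is our illustrative assumption, not Perdix's actual coordination logic:

```python
# Minimal sketch of collective task allocation, loosely inspired by
# Roper's "search all the roads" example. The greedy nearest-assignment
# strategy here is an illustrative assumption, not Perdix's actual logic.
import math

def assign_roads(drones, roads):
    """Assign each road to the closest still-unassigned drone.

    drones: dict of drone_id -> (x, y) position
    roads:  dict of road_id  -> (x, y) of the road's entry point
    Returns road_id -> drone_id.
    """
    assignments = {}
    available = dict(drones)
    for road_id, road_pos in roads.items():
        if not available:
            break  # more roads than drones; leftovers go unsearched
        closest = min(
            available,
            key=lambda d: math.dist(available[d], road_pos),
        )
        assignments[road_id] = closest
        del available[closest]  # one drone per road
    return assignments

swarm = {"perdix-1": (0, 0), "perdix-2": (5, 5), "perdix-3": (10, 0)}
forks = {"north-road": (1, 1), "east-road": (9, 1), "alley": (5, 6)}
print(assign_roads(swarm, forks))
# {'north-road': 'perdix-1', 'east-road': 'perdix-3', 'alley': 'perdix-2'}
```

In a real swarm, assignments like these would be renegotiated continuously as drones move, fail, or drop out; the point is that no human operator picks the routes.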

Is there a potential future in which law enforcement utilizes technologies like Perdix? A reality where swarms of mini drones scour urban areas like locusts, ganging up in numbers to chase and track down criminals without any human intervention?

Okay, so maybe Perdix still didn’t convince you of the importance of AI ethics. Don’t worry, we’ve got another example for you.

AI Ethics of Elimination

Yes, elimination is a euphemism for killing on the battlefield.

The current generation of aerial AI systems is already remarkably accurate at identifying hostile targets, perhaps even more so than humans. These systems can follow cars, pick out hidden enemies, and even discern weapons from mundane objects such as cameras.
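That capability is exactly where the ethics get sharp. One common design answer is a human-in-the-loop gate, where the classifier can recommend but never authorize. The sketch below is entirely hypothetical; the threshold, labels, and approval flow are our assumptions, not a description of any fielded system:

```python
# Hypothetical human-in-the-loop gate for lethal engagement decisions.
# The threshold, labels, and approval flow are illustrative assumptions,
# not a description of any fielded system.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.999  # assumed policy threshold

@dataclass
class Detection:
    label: str         # e.g. "rifle" vs. "camera"
    confidence: float  # classifier's confidence in that label

def engagement_decision(det: Detection, human_approved: bool) -> str:
    """The system may recommend, but only a human can authorize."""
    if det.label != "rifle" or det.confidence < CONFIDENCE_THRESHOLD:
        return "hold: object is ambiguous or benign"
    if not human_approved:
        return "hold: recommending engagement, awaiting human sign-off"
    return "engage: authorized by human operator"

# A camera is never escalated, no matter how the operator responds.
print(engagement_decision(Detection("camera", 0.97), human_approved=True))
# Even a near-certain detection holds without human approval.
print(engagement_decision(Detection("rifle", 0.9999), human_approved=False))
```

The three questions below are, in effect, about whether that `human_approved` flag should exist at all.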

We’ll leave you with these three questions:

  1. Should humans continue to take on these tasks because of their ability to account for emotional factors?
  2. If AI systems are more reliable and methodical, at what point do we consider switching over to an autonomous military?
  3. Or are AI systems just cold machines following the black-and-white protocols written into their programming?

The future of AI and AI ethics is ever-changing and fascinating. For now, we'll stay open to, but wary of, new autonomous developments in the military sector.

Enjoyed this Article?

We’ve got more where that came from, only on the Seamgen blog.

Automated Vehicles: The Future of Transportation

A Crash Course on Smart Cities

Google DeepMind is Revolutionizing AI