
Brainstorm - Military Electronics

Mon, 05/04/2009 - 9:46am
Edited by Jason Lomberg

This Month's Question: Will autonomous devices supplant remote-controlled in the area of military applications?

Dr. Kelvin Nilsen, Aonix, www.aonix.com

One critical and largely solvable problem is the common failure to give high-integrity development proper attention. Most autonomous military robots are involved in safety-critical activities, such as aiming and firing weapons or gathering and reporting intelligence that influences weapon aiming and firing.

As a company with a long tradition of supporting deployment of safety-critical software, we are frequently consulted by U.S. defense subcontractors. Far too often, the software is mostly complete and the team is only then ready to begin the long and arduous task of achieving safety certification. Unfortunately, “now” is not the time to begin this effort. Essential to safety certification is an audit of the software process, spanning requirements capture, architecture, design, test plans, code, tests, and analysis of test results. Traceability from requirements to end results must be demonstrated. Analysis of test results must demonstrate full test coverage of every condition associated with every branch along all possible execution paths through every procedure.

Most successfully certified software systems have been developed under stringent guidelines designed to enable safety certification. Developers restrict their use of libraries and language features, select operating systems and tool chains designed for safety-critical development, avoid complex algorithms and information flows, and instrument their software to enable comprehensive testing and code coverage analysis. It is extremely difficult, often cost prohibitive, to certify code that was not developed in accordance with certification guidelines.
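
As a rough illustration of the instrumentation such guidelines call for, the C sketch below marks each branch outcome as it executes and reports any outcome the test suite never reached. The macro, flag array, and example decision are hypothetical and not drawn from any particular certified tool chain.

#include <stdio.h>
#include <stdbool.h>

#define NUM_BRANCH_OUTCOMES 4
static bool branch_hit[NUM_BRANCH_OUTCOMES];     /* one flag per branch outcome */
#define MARK_BRANCH(id) (branch_hit[(id)] = true)

/* Example decision with two conditions; certification guidelines expect
   evidence that every outcome of every condition has been exercised. */
static bool weapon_release_permitted(bool target_confirmed, bool safety_off)
{
    if (target_confirmed) { MARK_BRANCH(0); } else { MARK_BRANCH(1); }
    if (safety_off)       { MARK_BRANCH(2); } else { MARK_BRANCH(3); }
    return target_confirmed && safety_off;
}

/* After the test suite runs, report any branch outcome never exercised. */
static void report_coverage(void)
{
    for (int i = 0; i < NUM_BRANCH_OUTCOMES; i++) {
        if (!branch_hit[i]) {
            printf("branch outcome %d was never exercised\n", i);
        }
    }
}

int main(void)
{
    /* A deliberately incomplete test: only one input combination is tried,
       so the report below flags the outcomes the tests missed. */
    (void)weapon_release_permitted(true, false);
    report_coverage();
    return 0;
}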

Part of the fault lies with government technology procurement offices that demand early milestones focused on capability demonstrations under idealized operating scenarios. Safety and security certifications were not mentioned in the original calls for proposals and are sometimes only addressed with change orders issued after program funds are nearly depleted. No wonder so many government-sponsored defense programs are over budget.

David Moore, Avnet Electronics Marketing, www.em.avnet.com

Several reports indicate that the US military has more than 5,000 semi-autonomous robots deployed in Iraq and Afghanistan. The majority of these intelligent machines are unmanned aerial/ground vehicles with autonomous navigation capabilities, used primarily for enemy surveillance. Drones such as the Predator are equipped with Hellfire missiles and actively used to destroy well-defined targets, but missile strikes are conducted exclusively by a remote soldier with confirmed and approved “eyes on target.” The technology to deploy completely autonomous military robots (those that operate in a real-world environment without any form of external control) that would actively participate in combat situations is near, but the artificial intelligence required to distinguish between a combatant and a non-combatant has simply not been developed and tested to a degree of certainty acceptable to the American government. The potential for friendly fire within Allied forces, or for civilian rather than insurgent casualties, is still too great. Although most research will continue to be classified, military developments in the deliberative layer of intelligence will increase robot sophistication in planning, learning, and meticulously defined human-interaction scenarios.

Since the voting American public is already sensitive to the perceived risks posed by robots, thanks to Hollywood blockbusters like “I, Robot” and “Terminator,” expect field deployment to happen in remote foreign areas, similar to how the South Koreans protect their northern border, long before the U.S. military engages completely autonomous robots in an urban warfare or battlefield environment.

Chris Minter, Components Corp, www.componentscorp.com
     
As autonomous robots emerge in the military sector, the most pressing issue to consider is the capability of these robots to perform their programmed tasks. Currently, autonomous robots are in use as armed border sentries in Israel and guarding the border between North and South Korea. Reports state that by the year 2015, the Pentagon wants one third of its forces to be robotic, either remote-controlled or autonomous.
     
The robots presently in use by the US military are remote-controlled, with decisions to use lethal force made by human fighters at distant locations. The key in this type of robot deployment is just that: humans make the decisions.
     
With developments in technology to create autonomous robots programmed to destroy particular targets without direct human control, the question is raised as to where the ethical component fits in. Should a robot be equipped with artificial intelligence to make decisions about human termination? A US Navy document suggests the critical issue is for autonomous systems to be able to identify the legality of targets: “Let men target men” and “let machines target machines.” The robot would then not in fact target a human holding a weapon, but the weapon itself. But how would the robot differentiate between an enemy military operative and an innocent civilian?
     
In a report funded by the US Office of Naval Research, “a fast approaching era where robots are smart enough to make battlefield decisions that are at present the preserve of humans” is envisioned. …
     
The report states: “There is a common misconception that robots will do only what we have programmed them to do…modern programs included millions of lines of code and were written by teams of programmers, none of which knew the entire program: accordingly, no individual could accurately predict how the various portions of large programs would interact without extensive testing in the field.” Is this field-level testing option even available? A report compiled by the Ethics and Emerging Sciences Group at Cal Poly cautions against complacency or shortcuts as military robot designers rush to market and the pace of advances in artificial intelligence quickens, risking the deployment of robots with design or programming flaws.
     
As artificial intelligence becomes a marketable reality, we need to be mindful of the ethical ramifications.

John Jovalusky, Qspeed Semiconductor, www.qspeed.com

Apart from the ethical and legal issues, such as the Laws of War and the Rules of Engagement, which may initially limit the use of autonomous military robots, there are technical issues that must be considered, since they will affect practical deployability. The primary one that comes to mind is security, both of communications to and from the robotic unit and of access to its programs by unauthorized personnel, particularly enemy interests.

The latest malware episode, the Conficker worm, reminds us that even though the internet has been around for some time now and internet security has been greatly improved, the dangers posed by hackers and unwanted programs are still very relevant. Given that reality, ensuring that military robots are sufficiently protected from being hacked and possibly hijacked, or having their communications eavesdropped upon, is no small task. Advanced weaponry ceases to be an advantage if it can easily be rendered useless or, worse yet, confiscated and used against the forces of its original deployers.

To that end, great care and forethought in the design phase will be essential, as will thorough testing, by friendly hackers staging mock assaults, to ensure that the program code of military robotic units is not accessible to anyone but authorized persons and that two-way communication with the robots cannot be readily intercepted, decoded, or altered. Lastly, it would seem prudent that if unauthorized access to a unit is detected, the machine report the security breach and disable itself, if necessary, rather than risk the possibility of being controlled by hostile forces.
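
As a rough illustration of those last two points, the C sketch below shows the shape of an authenticated command link with a breach-and-disable response. Every name, structure, and threshold is hypothetical, and the tag computation is only a placeholder for a vetted MAC (such as an HMAC) from a certified cryptographic library.

#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <stdbool.h>

#define MAX_AUTH_FAILURES 3

struct command_frame {
    uint32_t sequence;      /* monotonically increasing, rejects replays */
    uint8_t  opcode;
    uint8_t  payload[16];
    uint8_t  tag[8];        /* authentication tag computed by the operator side */
};

static unsigned auth_failures;
static uint32_t last_sequence;

/* Placeholder only: XOR-folding is NOT secure, it just marks where a
   vetted MAC primitive belongs in the data flow. */
static void compute_tag(const struct command_frame *f, const uint8_t key[16],
                        uint8_t out[8])
{
    const uint8_t *bytes = (const uint8_t *)f;
    size_t body = sizeof(*f) - sizeof(f->tag);
    memset(out, 0, 8);
    for (size_t i = 0; i < body; i++)
        out[i % 8] ^= bytes[i] ^ key[i % 16];
}

static void report_breach_and_disable(void)
{
    /* Platform-specific: report the breach, then shut the unit down. */
}

/* Accept a frame only if its tag verifies and its sequence number advances;
   repeated failures trigger the breach-and-disable response. */
bool handle_command(const struct command_frame *f, const uint8_t key[16])
{
    uint8_t expected[8];
    compute_tag(f, key, expected);

    if (memcmp(expected, f->tag, sizeof expected) != 0 ||
        f->sequence <= last_sequence) {
        if (++auth_failures >= MAX_AUTH_FAILURES)
            report_breach_and_disable();
        return false;
    }
    auth_failures = 0;
    last_sequence = f->sequence;
    /* ... dispatch f->opcode to the vehicle's command handler ... */
    return true;
}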

Jeff VanZwol, Micro Power, www.micro-power.com

For autonomous military robots, increased power from the robot's power plant remains one of the most critical areas needing improvement. Increased power output enables an unmanned vehicle to run longer missions and improves maneuverability in hostile terrain (e.g., climbing stairs in an urban setting).

Smaller unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs) rely on battery power plants. Two trends in battery technology have extended the range of operation for these unmanned vehicles. First, the capacity of cobalt-oxide lithium-ion cells, such as the 18650 (18 mm diameter, 65 mm length), has consistently increased over the last few years. 2.2 Ah cells were introduced in the early 2000s, 2.6 Ah cells are now commonplace, and 2.9 and 3.0 Ah cells are currently available in limited production quantities. Although not progressing at the rate of Moore's Law, cell manufacturers have continued to improve cell capacity by packing more and more active material into the cells.
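
A back-of-the-envelope C sketch of why that capacity trend matters for mission length follows; the pack configuration and load current are made-up example figures, not data for any particular vehicle.

#include <stdio.h>

int main(void)
{
    const double cell_capacity_ah[] = {2.2, 2.6, 3.0};   /* 18650 generations    */
    const int    cells_in_parallel  = 4;                  /* hypothetical 4p group */
    const double avg_load_amps      = 6.0;                /* hypothetical UGV draw */

    for (int i = 0; i < 3; i++) {
        double pack_ah        = cell_capacity_ah[i] * cells_in_parallel;
        double runtime_hours  = pack_ah / avg_load_amps;
        printf("%.1f Ah cells -> %.1f Ah pack -> %.2f h runtime\n",
               cell_capacity_ah[i], pack_ah, runtime_hours);
    }
    return 0;
}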

Second, companies like A123 Systems, E-One Moli, and LG Chemical have introduced high-rate lithium iron phosphate cells. These cells provide five to ten times the rate capability of the cobalt-oxide cells mentioned earlier, and some of them can support pulses of nearly 100 A. This higher rate capability substantially expands the range of operations and capabilities of unmanned vehicles.
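
The difference in rate capability can be seen in a similarly rough sizing check: whether a small pack can cover a hypothetical 100 A pulse. The cell capacities and C-ratings below are illustrative assumptions, not vendor specifications.

#include <stdio.h>
#include <stdbool.h>

/* Maximum pulse current of a parallel group is roughly capacity (Ah)
   times pulse C-rating times the number of cells in parallel. */
static bool pack_supports_pulse(double cell_ah, double pulse_c_rating,
                                int cells_in_parallel, double required_amps)
{
    double max_pulse_amps = cell_ah * pulse_c_rating * cells_in_parallel;
    return max_pulse_amps >= required_amps;
}

int main(void)
{
    /* Assumed figures: cobalt-oxide 18650 (~2C pulse) vs. high-rate
       lithium iron phosphate (~30C pulse), four cells in parallel. */
    printf("cobalt-oxide 4p pack: %s for a 100 A pulse\n",
           pack_supports_pulse(2.6, 2.0, 4, 100.0) ? "sufficient" : "insufficient");
    printf("LiFePO4 4p pack:      %s for a 100 A pulse\n",
           pack_supports_pulse(2.3, 30.0, 4, 100.0) ? "sufficient" : "insufficient");
    return 0;
}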

 

Lindsay Powell, 3M, www.3minterconnects.com

In time, autonomous devices will take over from remote-controlled ones. It's just a matter of when. In carrying out its mission, the military needs the best tools for the job, both in its role as war fighter and as peacekeeper. The history of warfare proves the point. Whether it was the Assyrians using their battering rams against the king of Hamath's fortified palace gates in 835 BCE, or the Confederate States using the submarine H.L. Hunley in their struggle to break the Union forces' blockade of Charleston in 1864 CE, military commanders demand ever more innovative devices to give them the edge. The twentieth century saw the emergence of technologies that began to replace the combat fighter with a remote operator. In the current 'war on terror,' remote-controlled vehicles (RCVs) are being usefully deployed to detonate bombs on the ground while unmanned aerial vehicles (UAVs) are proving their value for air-to-ground surveillance and attack. Not so long ago these tasks were performed by frontline troops who risked life and limb. Now RCVs place the hardware in harm's way while leaving command, control, communications, computers, intelligence, surveillance and reconnaissance (C4ISR) and judgment to the human operator.

Robots are getting smarter by the day and beginning, little by little, to break free of their human creators. The Grand Challenge sponsored by DARPA, the Defense Advanced Research Projects Agency, has successfully produced cars that can drive themselves in both desert and urban environments. The competing teams exploited a variety of hardware (data acquisition and processing, GPS, machine vision, laser range finders, sensors) and software combinations for interpreting sensor data, planning, and execution. While several key enabling technologies are already available and maturing, others are still very early in development.
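
The software side of those vehicles is often described as a sense-plan-act loop; the C skeleton below sketches only that structure, with every type and function a hypothetical placeholder rather than code from any competing team.

#include <stdbool.h>

struct world_model { double obstacle_map[64][64]; double pos_x, pos_y; };
struct plan        { double waypoint_x, waypoint_y; };
struct actuation   { double steering, throttle; };

static void sense(struct world_model *wm)                            { (void)wm;            /* fuse GPS, lidar, vision */ }
static void plan_path(const struct world_model *wm, struct plan *p)  { (void)wm; (void)p;   /* route around obstacles  */ }
static void act(const struct plan *p, struct actuation *out)         { (void)p;  (void)out; /* steer toward waypoint   */ }
static bool mission_complete(void)                                   { return false; }

void autonomy_loop(void)
{
    struct world_model wm  = {0};
    struct plan        p   = {0};
    struct actuation   cmd = {0};

    while (!mission_complete()) {
        sense(&wm);          /* interpret sensor data */
        plan_path(&wm, &p);  /* planning              */
        act(&p, &cmd);       /* execution             */
    }
}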

In time, it seems highly likely to me that fully autonomous devices will find their place on the battlefield alongside RCVs (quite literally). Initially these might be deployed to bring tactical advantage to war fighters in very high-risk situations, and to bring 'shock and awe' to the enemy, much as tanks did in World War I. Once the technology is proven and hardened, the devices built on it will become more widely deployed. This may be fifty or a hundred years away, but as surely as mechanized armored vehicles replaced the horse in cavalry units, fully autonomous devices will come. Here the issue is the extent to which a machine can be built with sufficient artificial intelligence (AI) to allow it to make 'life and death' decisions. Designers of such AI devices will face complex programming issues to avoid the problems of renegade armed robots as anticipated by the writers of Terminator and RoboCop.

Of course, companies like 3M will be watching these developments with interest as we consider the nanotechnology and form-factors enabling this brave new world.

If, as Carl von Clausewitz contended, “war is politics by other means,” then the decision about which is the 'good guy' and which is the 'bad guy' is not usually made by the military top brass, but by politicians. Let us hope that they ever remain human.

Michael Toland, International Rectifier, www.irf.com

The war in Iraq has shown us that today's battleground is complex and the enemy comes in many forms. The threat from roadside bombs, hidden mines, improvised explosive devices (IEDs), and hazardous materials and gases is far greater than that from direct human combatants. Consequently, the role of robots in military missions is becoming increasingly important. Whether remote-controlled or autonomous, robots are playing an important role in protecting our troops in modern warfare. Analysts predict that by 2020, some 30% of the military's ground forces will be robots. Both remote-controlled and autonomous robots/vehicles will be crucial in accomplishing the job without putting a human operator in harm's way.

With the cost of advanced digital computing and imaging falling quickly, and processors gaining speed with an increasing degree of reliability, both remote-controlled and autonomous systems are becoming practical and feasible for a variety of defense and homeland security applications. The advantages of remote-controlled systems for military tasks like defusing roadside bombs, detecting and clearing mines, removing or disarming IEDs, or sensing gases and other poisonous substances are obvious: these systems offer excellent mobility, control, and precision, and can perform complex tasks in challenging terrain with no risk of injury or fatality to a human operator.

Alternatively, autonomous vehicles lend themselves very well to military applications that require real-time threat assessment and action, reconnaissance, independence from control signals, operation in tough terrain, and immunity to interference and jamming.

Thus, both of these systems are vital to dangerous military missions. That means both have to perform with utmost reliability, as there is no room for failure or below-par performance. Robotic systems must be designed with the highest-reliability components, parts that meet stringent military standards for hermeticity, wide temperature range, and resistance to environmental stress, shock, and vibration. From discrete power semiconductors to fully integrated power systems, IR has established a Hi-Rel process that subjects its products to rigorous JANS or JANTX/JANTXV testing and screening to eliminate failures and guarantee the highest reliability in such critical military missions.

David Pursley, Kontron, www.kontron.com

The movement from remote-controlled to autonomous has already been realized in several military areas. I would suggest that the real question is not “will autonomous devices supplant remote-controlled in the area of military applications?” or even “when will they?” Instead, the question is “in what fashion?”

For example, Kontron hardware is providing the necessary horsepower for autonomous navigation of unmanned aerial and naval vehicles typically used for reconnaissance purposes. In these applications, a hardware or software fault would not cause lives to be lost, so it is not surprising that these were some of the first application areas for truly autonomous devices.

Even the more complex task of navigating a ground vehicle through urban settings has been proven possible with today’s technologies. Software algorithms, multi-core hardware, sensors, and communications have evolved to the point where real-time intelligent decisions can be made in the complex and somewhat random environment of a populated city. Autonomous navigation capabilities will be used throughout the Future Combat Systems (FCS) program.

But more widespread today are the autonomous systems that protect U.S. and Allied troops from aerial assault. These typically integrate radar, tracking and fire control systems to provide a three-dimensional shield against threats such as artillery, mortar, rockets and cruise missiles. Depending on the tactical situation and prevailing policies, these systems can run autonomously or semi-autonomously. When used semi-autonomously, the warfighter must trigger the use of counter-measures such as missiles, Gatling guns or high energy lasers.
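
A minimal C sketch of that authorization gate follows: in semi-autonomous mode a tracked threat is engaged only after a human operator confirms. All names, types, and stub behaviors here are hypothetical illustrations, not code from any fielded system.

#include <stdbool.h>
#include <stdio.h>

enum engagement_mode { MODE_AUTONOMOUS, MODE_SEMI_AUTONOMOUS };

struct track { int id; double time_to_impact_s; };

/* Stand-ins for the surrounding (hypothetical) radar, tracking, and
   fire-control components. */
static bool rules_of_engagement_allow(const struct track *t) { (void)t; return true; }
static bool operator_has_confirmed(int track_id)             { (void)track_id; return false; }
static void fire_countermeasure(const struct track *t)
{
    printf("engaging track %d\n", t->id);   /* missile, gun, or laser */
}

static void evaluate_track(const struct track *t, enum engagement_mode mode)
{
    if (!rules_of_engagement_allow(t))
        return;                                   /* policy gate comes first */

    if (mode == MODE_SEMI_AUTONOMOUS && !operator_has_confirmed(t->id))
        return;                                   /* warfighter stays in the loop */

    fire_countermeasure(t);
}

int main(void)
{
    struct track incoming = { .id = 7, .time_to_impact_s = 4.2 };
    evaluate_track(&incoming, MODE_SEMI_AUTONOMOUS);  /* no engagement: operator silent  */
    evaluate_track(&incoming, MODE_AUTONOMOUS);       /* engages under autonomous policy */
    return 0;
}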

Despite advances in technology, it is extremely unlikely that the warfighter’s decision-making capabilities will ever be completely removed from the equation when it comes to aggressive actions. Just because technology exists to support the capability doesn’t mean it is prudent or ethical to do so.  Human involvement is absolutely necessary in some way, shape or form to ensure the success of any mission.

Blaine R. Bateman, Laird Technologies, www.lairdtech.com

I think we are a long way from completely autonomous devices making up the majority of applications, as opposed to devices that retain some remote-control capability.

In reality, most applications are already a hybrid of some degree of autonomous processing and remote control. It is easy to see the reasons for this: you can build a system with an embedded GPS receiver, a processor, memory, terrain and other data, tell it where you want the device to go, and it will try to go there. However, you might not know what the system will encounter along the way and, more importantly, what you will find when it gets to the desired destination. This is where remote control, even at a minimal level, is invaluable. At the other end of the spectrum are remote-controlled aircraft, where a lot of the value of the system lies in allowing human adaptability in real time without exposing the human operator to risk. Sure, there are times when a cruise missile is the right choice, but even then you might employ a remote abort capability.
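
A minimal C sketch of such a hybrid scheme follows: the vehicle follows preloaded waypoints on its own, but a remote operator can redirect or abort at any time. The waypoint route, link polling, and guidance hooks are all illustrative stand-ins rather than any real platform's interfaces.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct waypoint { double lat, lon; };

enum override_kind { NO_OVERRIDE, NEW_TARGET, ABORT };
struct override_msg { enum override_kind kind; struct waypoint target; };

/* Stand-ins for the platform: the RF link and the onboard guidance. */
static bool poll_override(struct override_msg *msg) { msg->kind = NO_OVERRIDE; return false; }
static void steer_toward(const struct waypoint *wp) { printf("steering to %.4f, %.4f\n", wp->lat, wp->lon); }
static bool reached(const struct waypoint *wp)      { (void)wp; return true; }
static void safe_stop(void)                         { printf("aborting and stopping\n"); }

/* Follow the preloaded route autonomously, but let the operator redirect
   or abort at any time; the remote link always takes priority. */
static void mission_loop(const struct waypoint *route, size_t count)
{
    struct override_msg msg;
    size_t next = 0;

    while (next < count) {
        if (poll_override(&msg)) {
            if (msg.kind == ABORT)      { safe_stop(); return; }
            if (msg.kind == NEW_TARGET) { steer_toward(&msg.target); continue; }
        }
        steer_toward(&route[next]);     /* otherwise proceed along the plan */
        if (reached(&route[next]))
            next++;
    }
}

int main(void)
{
    const struct waypoint route[] = { {34.05, -117.20}, {34.06, -117.18} };
    mission_loop(route, sizeof route / sizeof route[0]);
    return 0;
}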

To have remote capability generally means a wireless link. This is true for a lot of autonomous applications as well: they gather critical data that must be sent back for analysis. These capabilities generally require sensors (video, position/location, IR, radar, RF, etc.) on the device that gather the necessary information and generate data that can be used on-board or sent back to the operator. That is where RF technology comes in, and it is a major part of the military RF components and systems industry today. I expect we will be supporting these remote links for a long time to come.
