Tag Archives: robotic

Petman, you scary.

Am I the only one seriously freaked out about where robotics is heading and who’s funding the research?

*shivers*


Filed under news, sci-fi, technology

The Kamikaze Drone: The Weaponized RC Plane that Fits in your Backpack.

From the L.A. Times:

Seeking to reduce civilian casualties and collateral damage, the Pentagon will soon deploy a new generation of drones the size of model planes, packing tiny explosive warheads that can be delivered with pinpoint accuracy.

Reducing civilian casualties? Sounds great until an enemy gets a hold of one of these – or better yet, orders one from an online hobby shop and makes their own. And we thought I.E.D.s were bad…

Errant drone strikes have been blamed for killing and injuring scores of civilians throughout Pakistan and Afghanistan, giving the U.S. government a black eye as it targets elusive terrorist groups. The Predator and Reaper drones deployed in these regions typically carry 100-pound laser-guided Hellfire missiles or 500-pound GPS-guided smart bombs that can reduce buildings to smoldering rubble.

The new Switchblade drone, by comparison, weighs less than 6 pounds and can take out a sniper on a rooftop without blasting the building to bits. It also enables soldiers in the field to identify and destroy targets much more quickly by eliminating the need to call in a strike from large drones that may be hundreds of miles away.

“This is a precision strike weapon that causes as minimal collateral damage as possible,” said William I. Nichols, who led the Army’s testing effort of the Switchblades at Redstone Arsenal near Huntsville, Ala.

The 2-foot-long Switchblade is so named because its wings fold into the fuselage for transport and spring out after launch. It is designed to fit into a soldier’s rucksack and is fired from a mortar-like tube. Once airborne, it begins sending back live video and GPS coordinates to a hand-held control set clutched by the soldier who launched it.

When soldiers identify and lock on a target, they send a command for the drone to nose-dive into it and detonate on impact. Because of the way it operates, the Switchblade has been dubbed the “kamikaze drone.”

I wish I could believe that the military was only thinking of reducing civilian casualties when developing this technology, and maybe in the short term that is the case, but the long-term implications of these little death toys coming stateside and getting into the wrong hands are the stuff of nightmares.

Arms-control advocates also have concerns. As these small robotic weapons proliferate, they worry about what could happen if the drones end up in the hands of terrorists or other hostile forces.

The Switchblade “is symptomatic of a larger problem that U.S. military and aerospace companies are generating, which is producing various more exotic designs,” said Daryl Kimball, executive director of the Arms Control Assn. “This technology is not always going to be in the sole possession of the U.S. and its allies. We need to think about the rules of the road for when and how these should be used so we can mitigate against unintended consequences.”

(emphasis mine)

You can read the full article here.


Filed under news, technology, Uncategorized

Autonomous Killing Machines

I just read an excellent article at io9.com about the future of robotic warfare – especially the autonomous kind. Believe it or not, the technology exists and is being used right now.

From io9.com:

Autonomous killing machines aren’t anything new. We already have various levels of autonomy in a number of weapons systems, including cruise and Patriot missiles. The Aegis Combat System, which is found aboard naval ships, has an autonomous mode in which it uses powerful computers and radars to track and guide weapons to destroy enemy targets.

But these are largely defensive systems with limited intelligence. They typically find their way to a target, or take certain action without human oversight – but only after being launched or triggered by a human.

As time passes, however, these systems are getting more sophisticated, and their potential for increased autonomy is growing. Take Samsung Techwin’s remote-operated sentry bot, for example, that works in tandem with cameras and radar systems. Working in the Korean DMZ, the device can detect intruders with heat and motion sensors and confront them with audio and video communications. They can also fire on their targets with machine guns and grenade launchers. As of right now, the robots cannot automatically fire on targets, requiring human permission to attack. But a simple change to engagement policy could override all that.

Another example is the packbots used by the U.S. military. These devices have an attachment called REDOWL, which uses sensors and software to detect the location of a sniper. Once it detects the threat, it shines a red laser light on it, indicating its presence to human soldiers, who can then choose to take it out. It wouldn’t take much to modify this system such that the REDOWL could act on its own – and with its own weapons.

Other than the obvious reasons why this technology is terrifying, there are others – namely how cheap and easy it is for anyone to use. It isn’t like the Cold War, where there was a significant technological barrier to entry for making nuclear weapons – anyone with basic know-how can weaponize a robot. For example, right now I can go into most hobby shops and buy a robot kit for my son.

Here’s more from io9.com:

Wallach told io9 that these systems aren’t very complicated and that virtually any country has the potential to develop their own versions. “The larger question,” asks Wallach, “is whether or not the U.S. military is producing such weapons – and other countries.” He suspects that there are more than 40 countries now involved in developing unmanned vehicle programs similar to the ones deployed by the United States, including drones.

Complicating the issue are ever-increasing levels of autonomy in military machines. The U.S. Air Force is starting to change the language surrounding its engagements, shifting from describing systems as “in the loop” to “on the loop” to characterize the level of future human involvement. By being “on the loop”, humans are largely outside of the process, but can intervene if the weapons system is about to do something inappropriate. Trouble is, says Wallach, the speed of modern warfare may preclude human involvement. “It’s dubious to think that a human can always react in time,” he says.

And take REDOWL, for example. Once the system points out an enemy sniper, the question emerges: Who is in whose loop? Is the soldier in the REDOWL’s loop, or vice-versa?

“A common concern among some military pundits is that it lowers the barriers to starting new wars,” says Wallach, “that it presents the illusion of a quick victory and without much loss of force – particularly human losses.” It’s also feared that these machines would escalate ongoing conflicts and use indiscriminate force in the absence of human review. There’s also the potential for devastating friendly fire.

And once developed, the systems are likely to proliferate widely. The fear is that their presence would introduce a serious, unpredictable element in future conflicts. Just because, say, the United States adheres to international laws and restraints doesn’t mean that other state actors and interests will, too. It could very well instigate an arms race.

What is also missing from the discussion is who the new casualties are in this kind of warfare. How do you win a war where both sides are hiding in bunkers and controlling, or NOT controlling, autonomous robots? How do they determine the winner? By innocent civilian casualties?

Wallach is alarmed at how little this issue is being discussed, which is something that he’s hoping to change. “There are various policy makers, military thinkers, and academics who suggest that autonomous killing machines are science fiction and that no one is moving in that direction,” he notes. Wallach cites the work of Werner Dahm, chief scientist of the Air Force, who he feels is not taking the issue seriously enough – and even potentially downplaying the threat.

Quite understandably, some military thinkers see the tremendous advantage that these systems could bring. Unmanned smart weapons could increase capabilities, reduce collateral damage through greater precision, decrease loss of personnel, lower manpower costs, and enable the projection of lethal force in a future where manpower resources will be far more limited.

And for better or worse, these rationales point to a future in which wars are fought by robots pitted against each other. “This is not just the concern of futurists or nay-sayers,” says Wallach, “but also from both retired generals and active military leaders who are very concerned that this could lead to a robust lack of control and undermine the human levels of engagement.”

And we haven’t even discussed the potential for these systems to be hacked.  Remember, this is no longer sci-fi.  It’s happening right now.

In his conversation with io9, Wallach seemed frustrated that some people see this issue as something that’s too futuristic to care about. “We’re at a potential inflection point in the development of autonomous weaponry,” he said. “That inflection point won’t last for a long period of time, and if we wait too long, other vested interests will take over that prospect.”

For the full article go here.

Cheers,

R.


Filed under news, technology