Black Hawk flies, lands and avoids threats - all without pilots (al.com)
59 points by showerst on Dec 6, 2012 | hide | past | favorite | 45 comments


Just keep in mind, this is a science project. It's going to be a long, long time (or never) before this is safe from enemy exploitation. I can think of all kinds of wacky ways to crash one of these things.

Autonomy is fine for high-flying, loitering aircraft, not so fine for low-altitude flight relying on easily-exploitable terrain avoidance & positioning technologies.


Already fielded: http://en.wikipedia.org/wiki/Kaman_K-MAX#Unmanned_remote_con... I presume the difference is that the K-Max is not doing terrain avoidance using laser ranging (undoubtedly it has a preprogrammed 3D flight profile).

"In December 2010 NAVAIR awarded a $46 million contract to Kaman for two aircraft, and in 2011 they completed a five-day Quick Reaction Assessment.

In December 2011 an unmanned K-Max was reported to be at work in Afghanistan. [...]

On July 31, 2012, Lockheed announced a second service extension for the K-MAX in Afghanistan for the Marines. This extends operation time to the end of March 2013, with the option to extend through to the end of September 2013. As of July 31, 2012, Lockheed reports the K-MAX has completed 485 sorties, 525 flight hours, and delivered over 1.6 million pounds of cargo since its deployment in November 2011"


The Army uses tons of transport helicopters in non-combat situations though, right?

The military (having a significantly lower standard for 'safe enough' operation than, say, airlines) will be an interesting inroad for autonomous transport vehicles in society.


Sure, and there may be applications there, but they aren't testing with a transport helicopter (such as a Chinook, which flies quite differently than a Black Hawk or other traditional helo design); they're testing Black Hawks in terrain-avoidance environments. Two wholly different things.


What's important here is how you're using "transport" and what it encapsulates; imo that is critical to the point you're wanting to make, and to whether or not I disagree with it.


The military is usually the inroad for widely used civilian technologies.


Either you're saying most civilian technologies were military in origin, or you're saying most military technologies end up with civilian applications.

I doubt the latter is true, and I'm not sure about the former either. The military R+D budget is big, but these days (maybe always?) it pales compared to public nonmilitary and commercial R+D activity.


I don't know, I think the second one could be true. I imagine it's quite likely that someone involved with this is also a hobby drone builder that also contributes to open source. If they've created some new algorithms or made some existing ones more robust it would be great to get those to the community.


You're right, and I think where this will end up is with 'super aware' pilots, where the plane does just about everything automatically but there's a human inside that basically 'thinks' while the vehicle 'does'. Our brains are already better at strategy than the most advanced computers, so it makes sense that the vehicle takes care of the easy stuff (like not hitting walls and landing) while the human focuses on the enemy.


I agree that this will (and should) end up with 'super aware' pilots. There is a lot of work that can be and should be automated so that a pilot only does what is absolutely necessary -- tactical strategy and anticipation of human responses. Computers and their programming can be made mathematically perfect. Flying, evading, tracking, and even self-preservation are perfect for this.

The one thing that a human can do is make real mistakes, anticipate such errors, and make seemingly illogical choices -- illogical to a computer. True deceit. A computer may be able to one day fake deceit, but there will be a logical probability factor to it. Detection of such deceit will likely be more difficult for a computer algorithm than faking deceit.


Vaguely reminds me of Snake Eyes, by Tom Maddox, though not quite as extreme.

http://www.dthomasmaddox.com/snakeeyes.htm


"Our brains are already better at strategy than the most advanced computers"

--- for how much longer? and when taken at scale, is the human race really all that great at strategy? We are too much at the mercy of emotions to ever be truly great strategists. What if, for example, given food production and water usage etc., the math dictated that the population must reduce -- do you really think this strategy would be adopted? Of course not.

I don't really see the point of self-flying helicopters, it's not like there's a shortage of bright young men and women who want to fly these things. The real challenge isn't flying helicopters, but finding ways to resolve situations without blowing shit up. As fun as that might be...


"I don't really see the point of self-flying helicopters, it's not like there's a shortage of bright young men and women who want to fly these things."

Fewer caskets arriving at Dover AFB for repatriation.


While that may be an issue at higher political levels (deaths make wars less popular), young people aren't put off by the risk of death, as armies have known for thousands of years. And given the description of the US military as being America's social welfare system, is this for the greater good, or not?


How about where troops are pinned down in an extremely hot firefight and running low on ammo? The commanders aren't going to risk letting a human-piloted Black Hawk land because it's just too damn risky, but a drone might get the go-ahead.

It also makes sense from a purely rational point of view, because the major costs of that capability are pilot training (millions) and maintenance. It doesn't hurt nearly as much to lose a chopper when you don't lose those millions spent on pilot training along with it, so you can take much greater risks with it.


> --- for how much longer? and when taken at scale, is the human race really all that great at strategy?

We still mop the floor with Go programs and AI in RTS games.


When the hoard plays nasty in Left for Dead, I can't say I can beat AI.


In most games, the deck is stacked in favor of the AI in a variety of ways. (BTW, it's "horde," unless it's a deliberate use of a homophone by the game designers.) If the AI's units can overwhelm humans with superior reaction times or superior numbers, then it's not the AI's strategy that's winning. There are real-world situations and games still complex enough that it's difficult to compute tactics and strategy properly.

Here's the difference between humans and AI in a nutshell: Competent humans can usually tell when someone is using an exploit against them. AIs almost never have a clue that's happening.


Two good points in there, and as a zombie fan, I'm ashamed of my horde failure. And the comment on exploits is sharp: has an AI ever noticed exploits/cheating? How? I know how I start to suspect and then confirm suspicions, but it's not much more than recognition of suspicious patterns.
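That "recognition of suspicious patterns" can be sketched mechanically. A minimal, purely illustrative example (hypothetical, not taken from any real anti-cheat system): flag reaction times that sit implausibly far below the human baseline.

```python
import statistics

def flag_suspicious(reaction_times_ms, threshold=2.5):
    """Flag samples more than `threshold` population standard deviations
    *below* the mean -- reactions far faster than the human baseline."""
    mean = statistics.mean(reaction_times_ms)
    stdev = statistics.pstdev(reaction_times_ms)
    if stdev == 0:
        return []
    return [t for t in reaction_times_ms if (mean - t) / stdev > threshold]

# A 100 ms reaction among ~250 ms human baselines stands out:
times = [240, 255, 260, 248, 252, 245, 100, 250, 258, 251]
print(flag_suspicious(times))  # [100]
```

Real detection is much harder, of course -- skilled humans and crude bots overlap -- but the shape of the problem is the same: build a baseline, then flag outliers.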


Where the plane does just about everything automatically, but there's a human inside that basically 'thinks' and the vehicle does.

"...must think in Russian"


I think initially this could be more effectively used for transporting the helicopters from one place to another, such as another base.

You are right though; aircraft like the Global Hawk operate at altitudes that are safe from enemies, so operationally they are easier to reason about for autonomous use.


I think the Army's self-flying helicopter trumps Google's self-driving car. This is the kind of competition where everyone wins!

Edit: For the people who are still alive...


> This is the kind of competition where everyone wins!

Except the people that the autonomous attack vehicle is killing.


> the kind of competition where everyone wins!

I want to make a snarky comment about the dead (possibly innocent civilian) people not being winners, but I guess a drone is less likely to kill innocent people than shock & awe style bombing and cluster bombing, which tend to harm a lot of children.

It's a weapon of war and so the US is unlikely to release any information for many years. That's a shame, because they've probably got some neat tech which could be used for self driving cars.


No autonomous vehicle made by the US has attack/strike capabilities yet. The J-UCAS/J-UCAV was proposed and demo vehicles were made, but they are not even close to operational. I would imagine the Black Hawk will be in a similar situation, where it starts out as a transport vehicle for the foreseeable future.


Tomahawk cruise missiles are autonomous vehicles, as are stand-off anti-tank weapons that loiter on parachutes after deployment and choose their own target tanks.


Israel has built an anti-radar missile with cruise capability, allowing it to loiter above a theatre waiting for something to make an active radar lock. I think (but am not sure) that it can even land safely for reuse if unused after a set time frame.


Would it be easier to build one of these systems for a Comanche or an Osprey? I ask because I imagine that control systems are heavily utilized in those two aircraft, meaning that all the necessary feedback loops / transfer functions / frequency responses are already known. Thus, taking those systems a step further into controlling position may be more efficient than starting from the mechanical controls on a Black Hawk.


Well, the Comanche was canceled a while ago and never entered service. The Black Hawk is an ideal testbed for this tech because there are so many of them in service and its handling and controls are well understood. It's a pretty conventional design, and would benefit more from being unmanned due to cruising lower and slower (more vulnerable) than the Osprey.


Almost an hour and still no references to SkyNet. Am I just that old? :-)

http://en.wikipedia.org/wiki/Skynet_(Terminator)


Is it just me, or is SkyNet's rational intelligence less scary than the people who will actually end up controlling these machines?

edit: I mean no disrespect to soldiers - I refer to the politicians.


No disrespect to soldiers? Seen the Abu Ghraib photos? It isn't the fact it was soldiers so much as the fact the people keep doing things like that, year in year out. The problem as I see it is that people will make the machines better at committing atrocities.


It depends. If Skynet's morality were all about doing what benefits itself, it would basically be a sociopath. That's not very reassuring.


The more relevant and current that comparison becomes, the less of a joke it is.


I can think of all kinds of wacky ways to crash one of these things.

Please do tell. Thanks.


Instead, think of ways to hijack it like the Iranians (supposedly) did. After all, you could sell it to the Chinese or Russians for a LOT of money. Go ahead, I dare you.

As a countermeasure, we would have a second-channel kill switch and self-destruct mechanism. If you try to hijack it back to an unfriendly location, we blow it up when you approach it with resulting casualties. If you try to use it as an offensive weapon we use the kill switch and/or self-destruct in an area of our choosing.

To overcome the possibility of an external self destruct signal reaching the drone when you hijack it, you'll probably come up with tricks like bathing it in strong spread-spectrum interference or dropping it into a Faraday cage. Either can be detected and trigger the self destruct internally.

Perhaps you can disable the drone by hitting it with an EMP weapon but still recover it. Good luck.
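The dead-man logic described above -- jamming or a Faraday cage simply starves the drone of heartbeats, so the destruct timer trips internally with no external signal needed -- can be sketched roughly. This is a hypothetical toy, not any real system's design:

```python
import time

class DeadMansSwitch:
    """Toy sketch: if authenticated heartbeats stop arriving -- whether
    from jamming, a Faraday cage, or plain link loss -- an internal
    timer trips the destruct logic."""

    def __init__(self, timeout_s=30.0, clock=time.monotonic):
        self.timeout_s = timeout_s
        self.clock = clock
        self.last_heartbeat = clock()
        self.tripped = False

    def heartbeat(self, authenticated):
        # Only heartbeats passing cryptographic verification reset the
        # timer; a spoofed or replayed signal should not keep it alive.
        if authenticated:
            self.last_heartbeat = self.clock()

    def check(self):
        if self.clock() - self.last_heartbeat > self.timeout_s:
            self.tripped = True
        return self.tripped
```

The injected `clock` is just there to make the timeout logic testable without waiting in real time.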


> To overcome the possibility of an external self destruct signal reaching the drone when you hijack it, you'll probably come up with tricks like bathing it in strong spread-spectrum interference or dropping it into a Faraday cage. Either can be detected and trigger the self destruct internally.

That seems counterproductive. If you create a self destruct system that can auto-enable based on conditions, the enemy just needs to create those conditions. They may be deprived of the weapon system, but so are you.


After the last drone fiasco, how about GPS signal overloading?

Terra-forming (moving landmarks or painting hills with reflective paint of sorts) and cloud salting could also yield some interesting results, at least introducing noise into the data-input systems. Enough noise and you'd probably down something.


You can definitely figure out position in space without the aid of GPS.
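For instance, inertial dead reckoning: double-integrate accelerometer readings to track position with no external fix. A toy 2-D sketch (real INS hardware fuses gyros, barometers, and terrain data to fight the drift this accumulates):

```python
def dead_reckon(accels, dt, v0=(0.0, 0.0), p0=(0.0, 0.0)):
    """Integrate 2-D accelerometer samples twice (semi-implicit Euler)
    to estimate position from a known starting state."""
    vx, vy = v0
    px, py = p0
    for ax, ay in accels:
        vx += ax * dt   # acceleration -> velocity
        vy += ay * dt
        px += vx * dt   # velocity -> position
        py += vy * dt
    return (px, py)

# Constant 1 m/s^2 along x for 10 steps of 0.1 s covers ~0.55 m:
print(dead_reckon([(1.0, 0.0)] * 10, 0.1))
```

The catch, and the reason GPS matters anyway, is that sensor noise compounds quadratically in position, so unaided dead reckoning drifts badly over long flights.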


Fully Automated Killing, FTW!


I hope you respond this same way on every post about Google's autonomous car, since this is also a demo of an autonomous transport vehicle.


Scary thought; it's also scary when armies can't think for themselves.


In America, you think for the Army. In Soviet Russia...

But seriously, in the 80's the Soviet brass still treated tank battalions somewhat like fire and forget weapons. Basically, they'd just mow down everything in front of them until they broke.


I've seen this episode of Mythbusters...


he he....RASCAL...he he



