
How bad does it have to get before the automated systems refuse to deal and a human has to land the plane? An engine gone? Two engines gone? Multiple engines gone and a really stiff crosswind? How good is it at landing in a blizzard on an icy runway? I know it never happens. Planes never land on the Hudson River, either. There's obvious value in making the happy path as safe as possible, and I'm certain full automation is the way to do that, but what are the current limits of automation?


Today's autopilots are designed to be as simple as possible.

The intention is that a pilot should be able to fully understand the complete autopilot algorithm and know exactly how it will act in any situation.

Today's autopilots can't handle any emergencies at all; they are designed to disconnect when they detect something weird. They don't even understand aerodynamics.
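To illustrate the point, here is a toy sketch of that disconnect-on-anomaly behaviour. Everything in it (the function, thresholds, and gain) is hypothetical, not any real autopilot's logic:

```python
# Toy sketch of an autopilot that disconnects rather than handling
# anomalies. All names and thresholds are hypothetical.

def autopilot_step(sensed_pitch_deg, commanded_pitch_deg, max_error_deg=10.0):
    """Return a control correction, or None to signal disconnect."""
    error = commanded_pitch_deg - sensed_pitch_deg
    if abs(error) > max_error_deg:
        # A deviation the simple control law can't explain:
        # hand the aircraft back to the pilot.
        return None
    # Plain proportional correction; no aerodynamic model involved.
    return 0.5 * error

print(autopilot_step(2.0, 3.0))   # small error: returns a correction
print(autopilot_step(2.0, 20.0))  # large error: returns None (disconnect)
```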

I don't know where Airbus' new system lies, but if we actually designed a fully autonomous plane using modern traditional AI techniques (so no machine learning or neural networks; just decision trees, adaptive aerodynamic models, and predictions), then it could handle a wide range of emergencies.

It should be able to handle bad weather and engine out emergencies just fine, and should even be able to land in the Hudson River or anywhere else it has detailed maps.
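One concrete check such a planner would make after total engine failure is whether a mapped landing site is within glide range. A back-of-envelope sketch, with purely illustrative numbers (an A320-class glide ratio of roughly 17:1 is often quoted, but don't treat these as real performance figures):

```python
# Hypothetical engine-out planning check: can we glide to this site?
# Numbers are illustrative, not real aircraft performance data.

def within_glide_range(altitude_m, distance_m, glide_ratio=17.0):
    """True if the site is reachable in a straight-line glide."""
    return distance_m <= altitude_m * glide_ratio

# Roughly US-1549-like numbers: ~900 m altitude, river a few km away.
print(within_glide_range(900, 8_000))    # reachable
print(within_glide_range(900, 30_000))   # too far
```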

These are all documented flight emergencies that any programmer of a fully autonomous plane would know about and test their software against in simulations.

The main areas where such fully autonomous planes would have issues are avionics failures, misleading sensors, emergency landings in areas with substandard mapping, and unexpected aerodynamic failures (beyond what their adaptive aerodynamic model can handle).


I suspect that with the current electrical system on Airbuses, the emergency electrical configuration wouldn't have enough power to run all those image-processing computers, at the very least (a dual engine failure would cause this). However, that could be resolved with additional battery capacity.
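The arithmetic behind that concern is just a power budget. A sketch with invented wattages (none of these are Airbus figures):

```python
# Back-of-envelope power budget for an emergency electrical bus.
# All wattages are invented placeholders, not Airbus data.

def bus_has_headroom(supply_w, loads_w):
    """True if the bus can carry the sum of the listed loads."""
    return sum(loads_w) <= supply_w

essential_loads = [800, 400, 300]   # hypothetical essential avionics
vision_computers = [600, 600]       # hypothetical image-processing boxes

print(bus_has_headroom(2000, essential_loads))                     # fits
print(bus_has_headroom(2000, essential_loads + vision_computers))  # doesn't
```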

A more likely cause of automation failure would be failure or disagreement in the various sensors on the plane. These are typically triple redundant, but there have been cases where, for example, all the pitot tubes ice over and airspeed indications are lost. There are ways to deal with this but they're not currently programmed into the autopilot.
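Triple redundancy is typically resolved by mid-value (median) voting, which masks one bad sensor but, as described above, fails when all three probes ice over and agree on a wrong value. A minimal sketch of that failure mode, with hypothetical airspeed readings:

```python
# Triplex mid-value voting, and the common-mode failure it can't catch.
# Values are hypothetical airspeeds in knots.

def mid_value_select(a, b, c):
    """Classic triplex vote: take the median, masking one bad sensor."""
    return sorted([a, b, c])[1]

print(mid_value_select(250, 251, 40))  # one iced probe is outvoted
print(mid_value_select(40, 41, 40))    # all iced: vote "succeeds" on bad data
```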

Other possible causes include failure of major flight controls requiring workarounds - eg, dealing with a stuck rudder with deliberate asymmetric thrust.
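The asymmetric-thrust workaround could be sketched as mapping a yaw command onto a thrust split between the engines. This is a hypothetical illustration only; the gain, limits, and sign convention are made up:

```python
# Sketch of yaw control via asymmetric thrust with a stuck rudder.
# Gains, limits, and sign convention are hypothetical.

def thrust_for_yaw(base_thrust, yaw_cmd, gain=0.2, max_split=0.3):
    """Return (left, right) thrust fractions for a yaw command.

    Positive yaw_cmd means yaw right, which needs more left-engine thrust.
    """
    split = max(-max_split, min(max_split, gain * yaw_cmd))
    return base_thrust * (1 + split), base_thrust * (1 - split)

left, right = thrust_for_yaw(0.8, 1.0)  # yaw right: left engine spooled up
print(left, right)
```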

That being said, flight control issues can probably be dealt with if necessary. The real problem is dealing with ambiguous situations where you have to weigh risks.

Consider: your radios are out. Per standard procedure you should proceed on your flight plan to the final fix and hold until your scheduled arrival time. However, fuel consumption is high - you're not sure, but you might have a slow fuel leak, and your destination has dicey weather. Do you divert without clearance or risk that you might be marginal on fuel at your scheduled arrival time?
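You could frame that dilemma as an expected-cost comparison, but the answer hinges entirely on probabilities and costs nobody can state with confidence, which is exactly why it's hard to automate. A sketch with completely made-up inputs:

```python
# Expected-cost framing of the radio-out divert decision.
# All probabilities and costs are made-up illustrations.

def expected_cost(p_bad_outcome, cost_bad, cost_baseline):
    """Probability-weighted cost of an option."""
    return p_bad_outcome * cost_bad + (1 - p_bad_outcome) * cost_baseline

# Continue to destination: guessed 5% chance of arriving fuel-critical
# in bad weather (a very costly outcome).
continue_cost = expected_cost(0.05, 1000.0, 1.0)
# Divert without clearance: near-certain procedural hassle, low safety risk.
divert_cost = expected_cost(0.001, 1000.0, 10.0)

print(continue_cost, divert_cost)  # the "right" answer depends on the guesses
```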


Probably the limit is that in the Miracle on the Hudson, Sully wouldn't have had a co-pilot.


I wonder how a computer would have reacted to US Airways Flight 1549...



