Ask HN: Why is supervised AI used in aerospace, healthcare or military?
2 points by semidror on Jan 27, 2024 | hide | past | favorite
Is the use of AI/ML just a gimmick in aerospace, healthcare, the military, and other fields where safety and reliability are of paramount importance?

I don't get the hype around *(semi-/self-)supervised* AI/ML being framed as superior to the "classical" way of solving problems (classical = not learned from data, i.e. human-designed algorithms, often optimization-based), especially in fields like aerospace, healthcare, and the military, where there are stringent requirements on safety and reliability, and where a single mistake introduced, e.g., by a poorly trained (semi-/self-)supervised NN (neural network) could result in injury.

I see NNs as function approximators, but with (semi-/self-)supervised NNs, how can we ensure that the trained NN is a faithful approximation of the (sometimes intractable) ground-truth function *on the whole input domain*? I suspect this can only be checked by brute force.
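For a low-dimensional input domain, the brute-force check can at least be sketched: sweep a dense grid, compare the learned approximation against the ground-truth function, and record the worst-case error. Everything below is a hypothetical toy (a truncated Taylor series stands in for a trained NN), just to illustrate why checking one range says nothing about the whole domain:

```python
import math

def ground_truth(x):
    """The reference function the network is supposed to approximate."""
    return math.sin(x)

def approx(x):
    """Stand-in for a trained NN: 5th-order Taylor expansion of sin(x)."""
    return x - x**3 / 6 + x**5 / 120

def max_error_on_grid(lo, hi, steps):
    """Brute-force sweep over [lo, hi]; returns the worst-case absolute error."""
    worst = 0.0
    for i in range(steps + 1):
        x = lo + (hi - lo) * i / steps
        worst = max(worst, abs(approx(x) - ground_truth(x)))
    return worst

# On a narrow domain the approximation looks excellent...
narrow = max_error_on_grid(-1.0, 1.0, 1000)
# ...but widen the domain and the worst-case error explodes, which
# a test set drawn only from the narrow range would never reveal.
wide = max_error_on_grid(-4.0, 4.0, 1000)
```

The catch, of course, is that this only works when the input domain is small enough to enumerate and a ground truth is available to compare against, which is exactly what high-dimensional perception problems lack.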

I can only see the benefit of using (semi-/self-)supervised NNs in safety-critical fields such as healthcare when we want to "bootstrap" classical methods: e.g., an NN could suggest a better initial guess for a classical optimization-based algorithm, letting the classical method converge faster. If the NN outputs an inaccurate or invalid guess, the algorithm degrades gracefully: it still solves the problem, just potentially much slower.
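The warm-start pattern above can be sketched in a few lines. This is a minimal illustration under made-up assumptions (a 1-D quadratic objective, plain gradient descent as the "classical" solver, a sanity check that just rejects non-finite guesses); a real system would validate the guess against problem constraints:

```python
import math

def classical_solver(grad, x0, lr=0.1, tol=1e-8, max_iter=10000):
    """Plain gradient descent: the 'classical', human-designed method."""
    x = x0
    for it in range(max_iter):
        g = grad(x)
        if abs(g) < tol:
            return x, it
        x -= lr * g
    return x, max_iter

def solve_with_warm_start(grad, nn_guess, default_x0=0.0):
    """Use the NN's guess only if it passes a sanity check; otherwise
    degrade gracefully to the default starting point. Either way the
    classical solver does the actual work, so correctness never
    depends on the NN being right."""
    ok = nn_guess is not None and math.isfinite(nn_guess)
    return classical_solver(grad, nn_guess if ok else default_x0)

# Toy objective: f(x) = (x - 3)^2, minimized at x = 3.
grad = lambda x: 2.0 * (x - 3.0)

# A good NN guess converges in fewer iterations...
x_fast, it_fast = solve_with_warm_start(grad, nn_guess=2.9)
# ...a garbage guess (NaN) falls back to the default start:
# slower, but the answer is still correct.
x_slow, it_slow = solve_with_warm_start(grad, nn_guess=float("nan"))
```

The key design property is that the NN sits outside the correctness argument: it can only affect runtime, never the result.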

Another benefit of (semi-/self-)supervised NNs I see is when there is no classical method for solving a given problem to begin with, like classifying objects in images. But how can we trust (or rather, detect) the case when an NN produces an incorrect output, so that we can prevent a disaster during heart surgery, missile interception, or a rocket landing?
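One partial answer (my addition, not something claimed in the post) is a runtime monitor: accept the NN's output only when its confidence clears a threshold, and otherwise defer to a fallback path (a human, a conservative default, an abort). A toy softmax-threshold sketch with made-up logits:

```python
import math

def softmax(logits):
    """Convert raw scores to probabilities (numerically stable)."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def classify_or_reject(logits, threshold=0.9):
    """Accept the top class only if its probability clears the threshold;
    return None to signal 'defer to fallback'. Note what this does and
    does not buy: it catches *uncertain* outputs, but a miscalibrated NN
    can be confidently wrong, so a threshold alone is not a safety case."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=lambda i: probs[i])
    return best if probs[best] >= threshold else None

confident = classify_or_reject([8.0, 0.5, 0.1])  # clear winner: accepted
unsure = classify_or_reject([1.1, 1.0, 0.9])     # ambiguous: rejected
```

Variations on this idea (out-of-distribution detectors, ensembles, redundant independent checkers) all share the same architecture: the NN proposes, something simpler and more auditable disposes.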

Sure, even classical methods can fail when confronted with input data their author did not anticipate, but with (semi-/self-)supervised NNs the risk seems greater: there is no way of knowing whether our training set contains enough examples to cover *all* of the input space, or whether the model generalizes over the whole input domain despite a limited dataset.

Are there any successful, *safe*, and reliable uses of (semi-/self-)supervised NNs in the aerospace, healthcare, and defense industries? How can engineers working in those fields sleep well if they use (semi-/self-)supervised NNs in their codebase? Do they "just" use enormous training datasets and hope for the best (which does not solve the problem, as Tesla's Autopilot incidents show)?

P.S. I do not mean these questions in a bad way; I just want to understand how NNs fit into the worldview/philosophy of engineers working in those demanding fields, where human lives are at stake.
