Hacker News

> it's likely this will result in innovations that would drive down costs, improve accuracy, and produce a much larger corpus of data with which to guide diagnosis and reduce false positives.

Why is it likely? We already have a lot of MRI data. There are already a lot of incidental findings. It might also be an issue of the MRI not being able to produce enough information to discriminate.

> To use a software analogy, if your downtime detection system kept producing false negatives, would your solution be to just turn it off? You'd get some better nights' sleep, but you'd pay for it when the system really went down and you had no idea.

The analogy is rather something like this: your downtime detector is not just a "ping" but a full web browser that tests everything and it sometimes flags things that are not actually issues. So you don't turn it off, but you only use it when you have another signal that indicates that something might be going wrong.
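To make the gating idea concrete, here's a toy sketch (all names and signals are hypothetical, not a real monitoring API): a cheap, reliable probe gates whether you run and trust the noisy, expensive check at all.

```python
# Toy sketch of gated monitoring: a cheap, low-false-positive probe
# decides whether the expensive, occasionally-wrong full check runs.

def cheap_probe(host_responding: bool) -> bool:
    """Cheap signal: coarse, but almost never wrong."""
    return host_responding

def full_browser_check(page_renders: bool) -> bool:
    """Expensive check: thorough, but sometimes flags non-issues."""
    return page_renders

def should_alert(host_responding: bool, page_renders: bool) -> bool:
    # Only consult the noisy check when the cheap probe already
    # suggests something might be wrong.
    if cheap_probe(host_responding):
        return False  # no corroborating signal: don't wake anyone
    return not full_browser_check(page_renders)
```

Same structure as the MRI argument: the scan isn't turned off, it's just only invoked when another signal justifies it.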



> Why is it likely? We already have a lot of MRI data. There are already a lot of incidental findings. It might also be an issue of the MRI not being able to produce enough information to discriminate.

This is the main reason, or technically its inverse, but it amounts to the same thing. MRIs are extremely high fidelity nowadays, and as a result they're really, really hard to read. Every person is different, with a lot of variation and weird quirks, and you get all the data rather than clearly identified problem areas like you would with, say, a CT with contrast.

That's actually exactly why it's important to have MRIs more frequently to be able to establish baselines and identify trends as they develop.


> That's actually exactly why it's important to have MRIs more frequently to be able to establish baselines and identify trends as they develop.

How? How do you establish baselines? How do you build a classification of incidental findings? It's very possible that you'll find a lot of types and not a lot of representatives of each type. And then you have to correlate that to actual clinical results, but the population will be so heterogeneous that it'll be really hard to find an actual result.

It's not just "let's throw more data at the problem".


When I say establish baselines what I mean is to establish baselines for the individual.

If you have records of the locations and sizes of various atypical structures and forms throughout the body going back for years and all of a sudden one of them starts changing in size at a rate disproportionate to its history, that's probably cause to dig a little deeper.

It's certainly not "throw more data at the problem". Instead it's about giving the data a time axis with some decent fidelity.
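The per-individual baseline idea can be sketched in a few lines (purely illustrative, with made-up numbers and a made-up threshold, not a clinical rule): track a feature's measured size across successive scans and flag it only when the latest change is far outside that feature's own historical scan-to-scan variation.

```python
# Sketch: flag a feature when its latest size change is disproportionate
# to its OWN history of scan-to-scan changes (the individual baseline).

from statistics import mean, stdev

def disproportionate_growth(sizes_mm: list[float], z_threshold: float = 3.0) -> bool:
    """True if the most recent change deviates from this feature's
    historical change distribution by more than z_threshold sigmas."""
    deltas = [b - a for a, b in zip(sizes_mm, sizes_mm[1:])]
    if len(deltas) < 3:
        return False  # not enough history to define a baseline yet
    history, latest = deltas[:-1], deltas[-1]
    sigma = stdev(history) or 1e-9  # guard against a perfectly flat history
    return abs(latest - mean(history)) / sigma > z_threshold

# A structure that jittered around 5 mm for years vs. one that jumps:
stable = [5.0, 5.1, 4.9, 5.0, 5.1]
jump = [5.0, 5.1, 4.9, 5.0, 9.0]
```

Here `disproportionate_growth(stable)` stays quiet while `disproportionate_growth(jump)` fires, which is the "dig a little deeper" trigger described above. The hard parts the thread debates (measurement noise, border features, scan frequency) are exactly what this toy version ignores.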


> and all of a sudden one of them starts changing in size at a rate disproportionate to its history, that's probably cause to dig a little deeper.

That sentence is doing a lot of heavy lifting.

- What's "disproportionate to its history"? Obviously something going from 1mm to 10cm is worth checking out, but what about something going from 1mm to 2mm? Might be a tumor, might be that the position is just slightly different.

- What about other less measurable factors? For example, border features. Those are harder to measure, and things like movement or different machines can change how the borders of a feature look. How do you know what's baseline and what's not?

- How frequently do you run these scans? It's likely that if something "starts changing in size" suddenly, it will start causing symptoms before your next scheduled scan.
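To put a rough number on the first point (the noise figure here is an assumption for illustration, not a real scanner spec): if scan-to-scan reproducibility is on the order of a millimeter, a single 1mm-to-2mm reading is within measurement noise, while 1mm-to-10cm obviously isn't.

```python
# Illustrative only: ASSUMED scan-to-scan reproducibility, not a real spec.
MEASUREMENT_NOISE_MM = 1.0

def change_exceeds_noise(prev_mm: float, curr_mm: float) -> bool:
    # Require the change to clear a couple of noise widths before
    # treating it as real growth rather than positioning jitter.
    return abs(curr_mm - prev_mm) > 2 * MEASUREMENT_NOISE_MM
```

Under that assumption, `change_exceeds_noise(1.0, 2.0)` is still noise and `change_exceeds_noise(10.0, 100.0)` is not, which is the ambiguity the bullet is pointing at.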

> It's certainly not "throw more data at the problem". Instead it's about giving the data a time axis with some decent fidelity.

It's definitely throwing more data at the problem, and you're assuming that it's viable to give "a time axis with decent fidelity". MRIs are much more complicated to interpret than people think, and screening is a much harder problem too. There are a lot of studies testing MRI imaging as a screening technique (among other techniques) and they don't always show an increase in survival rates.


We do not have a lot of MRI data. The average person probably gets a couple MRIs in their lifetime, and this is biased because we wait until something is clearly wrong to get the MRI. If you want to find an MRI scan of an early stage asymptomatic cancer, the only data on that will be the exceedingly rare case that someone has something else unrelated wrong with them in the same general area and gets an MRI for that, and then just by chance also has the early stage cancer at the same time.


> we wait until something is clearly wrong to get the MRI. If you want to find an MRI scan of an early stage asymptomatic cancer, the only data on that will be the exceedingly rare case that someone has something else unrelated wrong

Not always. There are a bunch of studies of MRI screening in high-risk populations for specific cancers. There are scoring systems for many of them based on imaging features, and they do find asymptomatic cancers.

In fact, if you add low-risk populations to the studies used to design imaging scores, you might end up adding more noise and making the study more difficult and the scoring less accurate.


> We already have a lot of MRI data.

That's true but not in a useful way for improving MRI screening.

What we have is lots of data from people who were sent to get an MRI because they had a complaint.

That's a very different group than people doing screening.


And the fact that they have a complaint (or have known risks) makes it easier to classify, compare and understand the data.



