
Interesting work.

Given the recent noise around this paper https://arxiv.org/pdf/2407.07218 about "weak baselines" in ML x CFD work, I wonder how it resonates with this specific work.

I am not super familiar with DEM, but I know that other particle-based models such as SPH benefit immensely from GPU acceleration. Does it make sense to compare with a CPU implementation?

Besides, the output of NeuralDEM seems to be rather coarse fields, correct? In that sense (and again, I'm not an expert in granular models, so I might be entirely wrong), does it make sense to compare with a method that operates under a very different set of constraints? Could we instead think of a numerical model that computes the same quantities in a much more efficient way?



Regarding your questions: yes, DEM also benefits a lot from GPU acceleration. So you can compare it to a CPU-based code, but obviously there's an order of magnitude to be gained via GPU.

Usually you are not interested in the fine fields anyway. Think of some fine powder in a big process, where there are trillions of real particles inside. You can't, and don't want to, simulate all of them. Mostly you are interested in these coarse quantities and in statistical data, so there's no need for the fine resolution.
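To make "coarse quantities" concrete: the kind of cell-averaged fields meant here can be obtained from particle data by binning onto a coarse grid. A minimal NumPy sketch (hypothetical illustration with random particle data, not code from the paper):

```python
import numpy as np

# Hypothetical coarse-graining example: bin 2D particle positions onto
# a coarse grid and compute per-cell statistics (density, mean velocity),
# i.e. the kind of field a surrogate model would predict directly.
rng = np.random.default_rng(0)
n = 100_000                        # number of particles (made up)
pos = rng.random((n, 2))           # positions in the unit square
vel = rng.normal(size=(n, 2))      # per-particle velocities

bins = 16                          # coarse grid resolution
ix = np.minimum((pos[:, 0] * bins).astype(int), bins - 1)
iy = np.minimum((pos[:, 1] * bins).astype(int), bins - 1)
cell = ix * bins + iy              # flat cell index per particle

# Per-cell particle counts and mean x-velocity: the coarse fields.
counts = np.bincount(cell, minlength=bins * bins)
vx_sum = np.bincount(cell, weights=vel[:, 0], minlength=bins * bins)
mean_vx = np.where(counts > 0, vx_sum / np.maximum(counts, 1), 0.0)

density = counts.reshape(bins, bins)   # particles per cell
mean_vx = mean_vx.reshape(bins, bins)  # cell-averaged x-velocity
```

Here a 16x16 field summarizes 100k particles; the per-particle detail is deliberately thrown away, which is why comparing a coarse surrogate against a fully resolved DEM run is not entirely apples to apples.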

Regarding a numerical model that could compute these things more efficiently: such models don't always exist. With large numbers of particles you can sometimes move to continuum models, but they may not behave like the real material, as it is genuinely difficult to find governing equations for granular media.


I hadn't heard of this paper; very interesting read! Thank you for bringing it up here. It resonates very well with the (little) experience I have from playing around with CNN-based surrogate models years ago.




