
I agree that randomly sampling the surfaces of 3D meshes seems like a reasonable way to generate synthetic data for mesh-to-point-cloud tasks.
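For what it's worth, the standard way to do that sampling uniformly is to pick triangles with probability proportional to their area, then draw uniform barycentric coordinates within each chosen triangle. A minimal sketch (the function name and the toy two-triangle mesh are my own, just for illustration):

```python
import numpy as np

def sample_surface(vertices, faces, n_points, seed=None):
    """Uniformly sample n_points from the surface of a triangle mesh."""
    rng = np.random.default_rng(seed)
    tris = vertices[faces]                      # (F, 3, 3) triangle corners
    # Triangle areas via the cross product of two edge vectors.
    cross = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    # Choose faces with probability proportional to area.
    idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    # Uniform barycentric coordinates via the square-root trick.
    r1, r2 = rng.random(n_points), rng.random(n_points)
    s = np.sqrt(r1)
    u, v = 1.0 - s, s * r2
    w = 1.0 - u - v
    t = tris[idx]
    return u[:, None] * t[:, 0] + v[:, None] * t[:, 1] + w[:, None] * t[:, 2]

# Toy mesh: a unit square in the z=0 plane, split into two triangles.
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], float)
faces = np.array([[0, 1, 2], [0, 2, 3]])
cloud = sample_surface(verts, faces, 1000, seed=0)
```

Note the square-root trick on the first barycentric coordinate: sampling `u, v` uniformly without it would bias points toward one corner of each triangle.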

Without knowing a dang thing about AI, it feels like the problem lies more in:

1. Math related to topology: vertices, faces, edges, tri vs. quad meshes, etc.

2. Different topologies for the same object are better for different use cases. Rendering, skinning, morphing, physics, etc. all have different optimal topologies, and the definition of "optimal" varies with the workflow, the scene specifics, or even the individual artist, whose skills may be built around certain topological preferences. In other words, I'm not sure how much of 3D workflows is standardized at all -- getting topological data for workflows is no easy task, and the output isn't super usable until the model can plug right into a workflow and the existing DCC ecosystem.

text2img generates a static asset; text2mesh is far more interesting beyond just the static rendering part, and that's exactly where mesh topology becomes a big sticking point.


