
Ok, my fault: I was writing from the perspective of an ML engineer, not a researcher. (I've been using Julia for 1.5 years now, and my fellow researchers prefer pure-Julia solutions because they're easier to write: you can use symbols, skip OOP, etc.)

But for production-ready models, PyTorch and TF are miles ahead. First of all, the package ecosystems for building NLP, audio, and vision models (attention layers, vocoders, etc.). Then you have the option to compile models with XLA and run on TPUs (about 2-3x cheaper than GPUs for most of our models [audio and NLP]).

Next, inference performance (dunno about now, maybe this has changed, but ~8 months ago Flux was about 15-20% slower [tested on VGG and ResNets] than PyTorch 1.0 without XLA).

Time to production: sure, writing a model from scratch can take a bit longer in PyTorch than in Flux (if you're not using the built-in torch layers), but getting it into production is a lot faster. First of all, you can compile the model (something not possible in Flux), and you can deploy it anywhere, from Azure and AWS to GCP and Alibaba Cloud, wrap it in a REST API with Flask/FastAPI etc., or just export it to ONNX.

Don't get me wrong, I love Julia and Flux, but there is still a LONG way to go before most people can even consider using Flux in a production environment rather than for research or MVP work.



I have no special insight into ML or Julia (though I love it), but one thing I can confirm from experience is that there is a huge difference between getting a model to work once in an academic or research setting, and having something work reliably and scalably in production day after day. Mind-boggling, totally different challenges.





