You can stick to TensorFlow for training your networks, but if you want to deploy a trained network to iOS or macOS devices (and your network is expressible in terms of Apple's primitives), you'd be doing your users a disservice not to use the fastest and most energy efficient backend to do the actual inference.
I'll add that at this point, there isn't much 'lock-in' between different frameworks. Once you've trained, and if your primitives are available in the target framework, porting is just a matter of getting your weights and topology into the right format. Not too hard compared to the nitty-gritty of gathering data, designing a network, and doing training and hyperparameter optimization.
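To make that concrete, here's a minimal sketch of the receiving side of such a port, assuming the training framework dumped a layer's weights as a flat blob of little-endian Float32 values (the file name and export step are hypothetical):

    import Foundation

    // Hypothetical file produced on the training side, e.g. with numpy:
    //   weights.astype(np.float32).tofile("conv1_weights.bin")
    let url = URL(fileURLWithPath: "conv1_weights.bin")

    do {
        let blob = try Data(contentsOf: url)
        // Reinterpret the raw bytes as [Float], ready to hand to whatever
        // inference primitive the target framework expects (an MPSCNN or
        // BNNS convolution's weight buffer, for instance).
        let weights: [Float] = blob.withUnsafeBytes { (raw: UnsafeRawBufferPointer) -> [Float] in
            Array(raw.bindMemory(to: Float.self))
        }
        print("loaded \(weights.count) weights")
    } catch {
        print("failed to read weights: \(error)")
    }

The topology itself (layer order, shapes, strides) still has to be re-declared against the target framework's API; the point is just that nothing about the trained numbers ties you to the original framework.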
For what it's worth, we're hoping to integrate these APIs into our iOS version of the TensorFlow runtime, so you can maintain graph portability but still get the benefits of the optimized implementation on the platform.
That's great to hear. Can't wait for TensorFlow to fully support Windows; that's the main thing stopping us from using TF instead of MXNet as a backend. Any news on that?
Windows support is definitely being worked on and lots of progress has been made, so it will eventually arrive -- just lots of little details to work out, but we're optimistic it'll come soon.
Thank you for your thoughtful comment, you've convinced me to take a moment and think more carefully about this.
The energy efficiency is an excellent point.
Ultimately I am still extremely leery of the Apple lock-in factor in general and their arbitrary rulings of what is and is not okay within the garden.
I am still pretty upset and maybe even somewhat traumatized about all the previous times they've fucked me over in scenarios like this. It starts out great and then gets ruined.
Edit: It looks like I've hit the HN rate limiter, so I'm merging my reply:
Unfortunately I can't go into detail about these scenarios because I don't want to get into trouble with my employer. Suffice it to say that I no longer place trust in Apple keeping anything of value "open".
> I am still pretty upset and maybe even somewhat traumatized about all the previous times they've fucked me over in scenarios like this. It starts out great and then gets ruined.
I haven't done much development using Apple frameworks. I'm curious, where has this happened to you before?
The threading model is poorly designed and very hard to set up. Keeping legacy model data and writing transformers for said data is painful (a person upgrades the app, the schema changed). There are lots of crashes we've seen at scale. It's also often really poorly performant, and the IO is completely synchronous. Most apps do not need this wrapper around SQLite (or SQLite at all) and in fact should just use simple file writes. Easier to debug, maintain, and scale, with fewer bugs.
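For what it's worth, here's a rough sketch (in Swift, with made-up type and file names) of what the "simple file writes" approach can look like: a versioned value type encoded to JSON and written atomically off the main thread, so the synchronous IO never blocks the UI.

    import Foundation

    // Hypothetical model type; schemaVersion lets you migrate old files on read
    // instead of maintaining mapping models for every schema change.
    struct Note: Codable {
        var id: UUID
        var text: String
        var schemaVersion: Int
    }

    let notes = [Note(id: UUID(), text: "hello", schemaVersion: 2)]
    let docs = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    let fileURL = docs.appendingPathComponent("notes.json")

    // Keep the (synchronous) IO off the main thread so it never blocks the UI.
    DispatchQueue.global(qos: .utility).async {
        do {
            let data = try JSONEncoder().encode(notes)
            try data.write(to: fileURL, options: .atomic)  // atomic write: no torn files
        } catch {
            print("save failed: \(error)")
        }
    }

Obviously this only covers data small enough to fit in memory, but that describes most of the apps in question.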
Didn't see the original comment, but why wouldn't Apple do this? It could be as simple as more complex visual effects (blurring, stereoscopic views) and more features in general causing low-RAM devices to suffer.
It doesn't necessarily have to be evil or crazy that they do this. In fact it would be strange if they worried excessively about preserving the performance of all legacy devices.
I think there's a difference between malice and new expectations. Recently on the desktop side, Apple's legacy retention is great: you have iMacs from 2007, I think, running El Capitan, albeit a mildly neutered version of it, and treating it like any other update. I've worked with a lot of now-legacy machines that spec-wise are fit for purpose (write a paper, read some stuff on Facebook) but, due to software restrictions (individual browsers), were formerly unable to do so simply because the browser was no longer supported on OS X 10.7 or lower. Now these older machines have a new life.
iOS is slightly different: each time Apple pushes out a major iteration of iOS while working to retain legacy devices, performance on older devices does suffer a little, but even with the most recent iOS release they started focusing on slicing down unnecessary parts of apps to save space.
Apple has really been doing decent work on the preservation leg of their lineup.
I'm not disputing that newer OS's run slower on older devices.
I'm disputing that this is done intentionally to degrade performance and make it more attractive to upgrade - which is what the original comment stated.
They actually disable most of the new effects on older hardware which indicates that they want the software to perform acceptably.
And more to the point, supporting older hardware at all extends the useful life of that hardware by allowing it to run modern applications and have access to new features.
I don't think Apple is intentionally trying to create lock-in, but I do think there is a valid fragmentation concern in the field of deep learning right now. For example, look at CUDA vs OpenCL: CUDA has clearly become the winner there. Anyone building a system for deep learning would be crazy not to buy Nvidia hardware. And while some projects support both CUDA and OpenCL (e.g. OpenCV), you can usually count on the CUDA implementation being more tested and performant. Metal is just going to throw one more wrench into the mix :)