Current applications are just ML/adtech, and the only parts of my job that have really used any PDE skills have been understanding the phase space of autoscaling and optimizing for a set of parameters that minimizes cost and doesn't wake the team up at night. There have been some other problems where my math background was helpful, but not in a PDE sense. Most of my current job doesn't use anything even halfway tangential to PINNs, despite my being an ML engineer. I mostly do infrastructure work and make the machines go brr. I'm not positive yet, but I might be blogging about some of those things soon.
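(If you're curious what that optimization looks like in miniature, here's a toy sketch. Everything in it is invented for illustration, including the cost model and the parameter names; the real version is driven by load traces and billing data. But the shape is the same: sweep the parameter space and penalize pages much more heavily than dollars.)

```python
# Illustrative only: a toy grid search over autoscaler parameters,
# trading instance cost against the risk of paging someone.
# The cost model here is made up; a real one would come from
# replaying load traces against billing data.
import itertools

def simulate(cpu_target, cooldown_s, peak_load=100.0):
    # Hypothetical stand-in for replaying real traffic:
    # lower CPU targets mean more headroom (more instances),
    # higher targets plus long cooldowns mean slower reactions
    # to spikes, and hence more page risk.
    instances = peak_load / cpu_target
    hourly_cost = instances * 0.10  # $/instance-hour, made up
    page_risk = max(0.0, (cpu_target - 60) / 40) * (cooldown_s / 300)
    return hourly_cost, page_risk

best = None
for cpu_target, cooldown_s in itertools.product(
    [50, 60, 70, 80], [60, 180, 300, 600]
):
    cost, risk = simulate(cpu_target, cooldown_s)
    # Weight pages heavily: sleep is worth more than a few dollars.
    objective = cost + 50.0 * risk
    if best is None or objective < best[0]:
        best = (objective, cpu_target, cooldown_s)

print(f"best: cpu_target={best[1]}%, cooldown={best[2]}s")
```

The interesting part in practice isn't the search (which can be brute force, as above), it's getting the simulator and the page-penalty weight right.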
At my last job, one of the big problems was acquiring more MRI data in less time. A crucial step is throwing away the assumptions that make the problem easy (like enormous field strengths and relaxation times long enough that you can treat anything nonlinear as Gaussian noise). Satisfying those assumptions costs time and money per unit of data; if you instead just blast blue noise at the patient and can model the physics well enough, you can gather much more data in much less time. The trick is in interpreting it. PINNs were very useful for speeding up the classical solvers (in that case, entirely by choosing "good" initializations). For some applications (like quasi-real-time plotting), you could even skip the classical solving step.
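(For a flavor of the "good initializations" trick, divorced from any actual MRI machinery: the toy below trains a tiny network against the residual of a 1D Poisson problem, then hands its prediction to a plain Jacobi solver as the starting guess. Every detail here is a stand-in I picked for illustration, not the reconstruction pipeline I worked on.)

```python
# Toy version of "PINN as initializer": fit a small network to
# u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0, then seed a classical
# finite-difference Jacobi solver with its prediction. A warm start
# from the network typically cuts the sweep count substantially
# compared to starting from zeros.
import torch

torch.manual_seed(0)
# Manufactured problem: f chosen so the true solution is u = sin(pi x).
f = lambda x: -(torch.pi ** 2) * torch.sin(torch.pi * x)

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Short, sloppy training is fine: we only need a warm start.
for _ in range(2000):
    x = torch.rand(128, 1, requires_grad=True)
    u = net(x) * x * (1 - x)  # hard-codes the boundary conditions
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    loss = ((d2u - f(x)) ** 2).mean()  # PDE residual as the loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# Classical Jacobi sweeps on a uniform grid, seeded by the network.
n = 101
xs = torch.linspace(0, 1, n).unsqueeze(1)
h2 = (1.0 / (n - 1)) ** 2
with torch.no_grad():
    u = (net(xs) * xs * (1 - xs)).squeeze().clone()
fv = f(xs).squeeze()
for sweep in range(5000):
    u_new = u.clone()
    u_new[1:-1] = 0.5 * (u[:-2] + u[2:] - h2 * fv[1:-1])
    if (u_new - u).abs().max() < 1e-6:
        break
    u = u_new
print(f"stopped after {sweep} sweeps")
```

The real systems were messier in every direction, but the division of labor was the same: the network gets you close cheaply, and the classical solver supplies the guarantees (or gets skipped entirely when you just need a picture fast).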
I've done a lot of things over the years. Back in school it was genomics and quantum chemistry. In between, I've had a lot of ideas (most of them bad, but no matter what anyone tells you, I think the bad ideas are even more useful pedagogically), and I tend to throw the whole gamut of techniques I've learned at them as I explore. It's somewhere between "extremely wasteful", a "fun hobby", and "crucial to my professional learning and development". I'm not sure yet where the balance is, but I like how my career is progressing, so I keep studying things in detail.
If I had to guess, that "deep level of knowledge" you're referencing might come from my propensity for being a bit cocky and self-aggrandizing. Otherwise, it might come from having built multiple versions of every optimization technique, ML framework, or other piece of software I've ever written about, and having studied what made them work and what made them fail. I like to think it's more of the latter (enough so that I encourage other people to build things from first principles even when they just want to call an API and make a thing happen), but there's probably some truth to the former too.
It seems like you've had quite a career thus far. I worked on PDEs (analysis and numerics), both academically and professionally, before moving on to ML, so it's rare that I get a chance to talk to an expert in my own niche field. Thanks again for your expert answers and for contributing so much to this community. I look forward to reading your blog.