Interesting, thank you for the link. I had a hunch this should be possible, but I wasn't aware it had already been proven. I used a similar trick for image recognition: turn an image into a single 32-bit word by heavy pixelation and then look up a matching description. It's interesting how often that works once you feed it enough data. After all, that gives you 4 billion inputs mapped onto 4 billion descriptions, and plenty of those will contain the Eiffel Tower against various cloudy backgrounds, apparently recognized perfectly.
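A minimal sketch of that pixelate-and-look-up cheat, assuming a grayscale image given as a list of rows of 0-255 values; the names `tiny_hash`, `LABELS`, and `recognize` are made up for illustration:

```python
def tiny_hash(img):
    """Pixelate an image down to a 4x4 grid of 2-bit intensities,
    packed into a single 32-bit word (4 * 4 * 2 = 32 bits)."""
    h, w = len(img), len(img[0])
    word = 0
    for gy in range(4):
        for gx in range(4):
            # average the pixels falling into this grid cell
            ys = range(gy * h // 4, (gy + 1) * h // 4)
            xs = range(gx * w // 4, (gx + 1) * w // 4)
            total = sum(img[y][x] for y in ys for x in xs)
            avg = total // (len(ys) * len(xs))
            word = (word << 2) | (avg >> 6)  # quantize 0-255 down to 2 bits
    return word  # fits in 32 bits

# hypothetical lookup table: 32-bit hash -> description seen before
LABELS = {}

def recognize(img, default="unknown"):
    return LABELS.get(tiny_hash(img), default)
```

The cheat is that recognition is pure table lookup: any image whose heavily pixelated 4x4 thumbnail happens to collide with a labelled one "recognizes" as that label, which is exactly why it looks eerily good once the table is big enough.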
It's a total cheat but it is funny how close that can get you to something that might be actually useful.
I wonder if you could use adaptive optimal kernels, AOK[0]? I used those in work on multiphase flow recognition from electrical capacitance tomography, ECT, as a proxy for void fraction. We wanted to tinker with time-frequency representations.
Yes, that is cool. I had just come back from an internship in Wireline at Schlumberger, where I was exposed to tools like one that did nuclear magnetic resonance, NMR, thousands of metres down. Pretty sweet tech. I transitioned to ECT for that project, then to ECG for anomaly detection on anonymized hospital patient data. I will never underestimate the effect hair and sweat have on data. That was a cool year, with lessons that served me well later.