Dumb question, but... are AI and ML the same thing? I ask because I'm seeing a lot of my ML peers claim to be AI experts with years of experience, even though I never really heard the term AI used until OpenAI got big.
AI as a field has been around since the 1950s and has gone through multiple paradigms, but modern AI (since its resurgence around 2010) is effectively all machine learning.
What "AI" means has always been shifting; right now when most people hear it they assume it means generative AI and deep learning, but it's really a very broad category.
Early uses of the term cover what is now sometimes called Good Old Fashioned AI (GOFAI): expert systems, ontological classification systems, and the like. These still technically fall under the umbrella of "AI". After GOFAI came other approaches, among them the precursors of our current deep learning models: much simpler and smaller neural networks. Again, these are still "AI", even if that's not what the public pictures when they hear the term.
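For anyone who hasn't seen GOFAI up close: an expert system is basically hand-written if-then rules plus an inference loop, with no learning involved. A minimal sketch of forward chaining (the animal-classification rules here are invented for illustration; real systems had thousands of rules, but the mechanism is the same):

```python
# Toy forward-chaining inference engine in the spirit of a GOFAI expert system.
# Rules are (premises, conclusion) pairs written by hand; the engine just
# keeps firing any rule whose premises are all known until nothing new appears.

RULES = [
    ({"has_fur", "gives_milk"}, "mammal"),
    ({"mammal", "eats_meat"}, "carnivore"),
    ({"carnivore", "has_stripes"}, "tiger"),
]

def forward_chain(facts, rules):
    """Derive every fact reachable from the initial facts via the rules."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"has_fur", "gives_milk", "eats_meat", "has_stripes"}, RULES)
print("tiger" in derived)  # → True: the rules chain mammal → carnivore → tiger
```

Note there's no training data anywhere: all the "knowledge" was typed in by a human, which is exactly why this flavor of AI scaled so poorly.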
I'm not a programmer, but my understanding is that back in the '80s investors got into a frenzy when the idea of AI was big. They threw money at it and... it was decades away. Then came the "e-" phase that became the dot-com boom... also overhyped. This pattern has occurred several times in software.
Technology has kept progressing, and the definition of AI is loose at best whenever marketing is nearby. Consider how people describe being unemployed on LinkedIn: it gets glossed over and hyped with marketing language that is hard to verify but at least adjacent to reality. ML researchers classifying themselves as AI researchers feels similar to me. HOWEVER, there may be a lot more overlap than simply "adjacent", so someone please correct me if "machine learning" or "AI" are regulated terms in public advertising.
Ok, I hear a lot of "AI is a superset of ML" definitions, and yes, that's historically true, but today AI is shorthand for LLMs and their relatives (image generation, embeddings, DL-based time series models). I'd draw the line at systems using emergent phenomena that aren't realistically reducible to understandable rules.
Should we say "this uses AI" for a Prolog/expert-system/A*/symbolic-reasoning/planning/optimization system today? I dunno; I had scruples even about calling classical and Bayesian statistical models "machine learning", reserving that term for models that prioritize computational properties over probabilistic interpretation.
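To make the "is this AI?" question concrete, here's roughly what one of those systems looks like: a minimal A* sketch (the grid, unit step costs, and Manhattan heuristic are my own illustrative choices). It's pure deterministic search over explicit rules, with nothing learned, yet it sits squarely in every AI textbook:

```python
import heapq

def a_star(grid, start, goal):
    """A* shortest path on a 4-connected grid; '#' cells are walls.
    Returns the path length in steps, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(p):
        # Manhattan distance: admissible (never overestimates) on a grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]  # (f = g + h, g, position)
    best = {start: 0}
    while open_heap:
        f, g, pos = heapq.heappop(open_heap)
        if pos == goal:
            return g
        if g > best.get(pos, float("inf")):
            continue  # stale queue entry
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#":
                ng = g + 1
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

grid = ["..#.",
        "..#.",
        "...."]
print(a_star(grid, (0, 0), (0, 3)))  # → 7: the wall forces a detour through row 2
```

Whatever you call it, nothing here resembles the statistical fitting people now picture when they hear "AI", which is the commenter's point about where to draw the line.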
In my experience they mean basically the same thing in practice most of the time. The industry used "AI" for a long time (up through the 2000s, iirc), but it developed a bit of a reputation for going nowhere. Then there was a revival in the 2010s "The Social Network"-era funding environment, when "machine learning" was the preferred term, I think to shake off some of the dust associated with the old name. Now the pendulum seems to be swinging back to "AI", driven by several high-profile and well-funded thought leaders who have rewarmed the old AGI narrative thanks to some major leaps forward in models for certain applications.
Eh, a lot of it’s more down to marketing. ‘AI’ as a term has periodically come in and out of fashion; for instance, is OCR AI? Well, not 20 years ago, certainly, the term being unfashionable at the time, but now: https://learn.microsoft.com/en-us/azure/ai-services/computer...
(Computer vision, in particular, is basically always classified as AI today, but the term was mostly avoided in the industry until quite recently.)