These are good practices to keep in mind when setting up GenAI solutions, but I'm not convinced that this part of the job will allow "data scientist" as a profession to thrive. Here's my pessimistic take.
Data scientists were appreciated largely because of their ability to create models that unlock business value. Model creation was a dark magic that you needed strong mathematical skills to perform - or at least that's the image, even if in reality you just slap XGBoost on a problem and call it a day. Data scientists were enablers and value creators.
With GenAI, value creation is apparently done by the LLM provider and whoever in your company calls the API, which could really be any engineering team. Coaxing the right behavior out of the LLM is a bit of black magic in itself, but it's not something that requires deep mathematical knowledge. Knowing how gradients are calculated in a decoder-only transformer doesn't really help you make the LLM follow instructions. In fact, all your business stakeholders are constantly prompting chatbots themselves, so even if you provide some expertise here they will just see you as someone doing the same thing they do when they summarize an email.
So that leaves the part the OP discusses: evaluation and monitoring. These are not sexy tasks and from the point of view of business stakeholders they are not the primary value add. In fact, they are barriers that get in the way of taking the POC someone slapped together in Copilot (it works!) and putting that solution in production. They're not even strictly necessary if you just want to move fast and break things. Appreciation for this kind of work is most present in large risk-averse companies, but even there it can be tricky to convince management that this is a job that needs to be done by a highly paid statistician with a graduate degree.
What's the way forward? Convince management that people with the job title "data scientist" should be allowed to gatekeep building LLM solutions? Maybe I'm overestimating how good the average AI-aware software engineer is at this stuff, but I don't see the professional moat.
I don't really see why evals are assumed to be exclusively in the domain of data scientists. In my experience, SWEs-turned-AI-Engineers are much better suited to building agents. Some struggle more than others, but "evals as automated tests" is, imo, such an obvious mental model, and one that good SWEs pick up so readily, that data scientists have no real role on many "agent" projects.
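To make that concrete, here's roughly the shape I mean, as a sketch (run_agent and the cases are made-up stand-ins for your actual agent and dataset):

    # "Evals as automated tests": a minimal pytest sketch.
    # run_agent and CASES are placeholders, not a real agent.
    import pytest

    def run_agent(prompt: str) -> str:
        # Stand-in: a real implementation would call your agent.
        canned = {
            "What is our refund window?": "Refunds are accepted within 30 days.",
            "Cancel my subscription": "I can help you with cancellation.",
        }
        return canned[prompt]

    CASES = [
        ("What is our refund window?", "30 days"),
        ("Cancel my subscription", "cancellation"),
    ]

    @pytest.mark.parametrize("prompt,expected_substring", CASES)
    def test_agent_answer_contains(prompt, expected_substring):
        assert expected_substring.lower() in run_agent(prompt).lower()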
I'm not saying this is good or bad, just that it's what I'm observing in practice.
For context, I'm a SWE-turned-AI Engineer, so I may be biased :)
I think there's a lot of methodological expertise that goes into collecting good eval data. For example, in many cases you need human labelers with the right expertise, well-designed tasks, and well-defined constructs, and you need to hit interrater agreement targets and troubleshoot when you don't. Good label data is a prerequisite to the stuff that can probably be automated by the AI agent (improving the system to optimize a metric measured against ground-truth labels). Data scientists and research scientists are more likely to have this skillset, and it takes time to pick up and learn the nuances.
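For example, a first-pass agreement check is only a few lines, but knowing what to do when it fails is the skill (the ratings and the 0.7 target below are illustrative):

    # Check interrater agreement before trusting labels as ground truth.
    from sklearn.metrics import cohen_kappa_score

    rater_a = [1, 0, 1, 1, 0, 1, 0, 0]
    rater_b = [1, 0, 1, 0, 0, 1, 0, 1]

    kappa = cohen_kappa_score(rater_a, rater_b)
    print(f"Cohen's kappa: {kappa:.2f}")
    if kappa < 0.7:  # a common rule-of-thumb target; yours may differ
        print("Too low: revisit task design, rater training, or the construct")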
I agree with your take that there isn't a lot of specialist work for data scientists to do with off-the-shelf LLMs that can't be done by an engineer. As an AI-aware software engineer myself... this stuff wasn't that hard to pick up. Even a lot of the work on the evals side (creating an LLM judge etc.) isn't that hard and doesn't require serious ML or stats.
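For illustration, a bare-bones LLM judge is about this much code (the model name and rubric are placeholders; this assumes the OpenAI Python client):

    # Minimal LLM-judge sketch; model name and rubric are placeholders.
    from openai import OpenAI

    client = OpenAI()

    def judge(question: str, answer: str) -> int:
        """Score an answer 1-5 for accuracy and completeness."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder
            messages=[{"role": "user", "content": (
                "Rate the answer to the question on a 1-5 scale for "
                "factual accuracy and completeness. Reply with the digit only.\n\n"
                f"Q: {question}\nA: {answer}"
            )}],
        )
        return int(resp.choices[0].message.content.strip())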
But aren't there still plenty of opportunities for building ML models beyond LLMs, albeit a bit less sexy now? It's not like you can run a business process like (say) AirBnB's search rankings or Uber's driver matching algorithms on an LLM; you need to build a custom model for that. Or am I missing something here? Or is the point that those opportunities are still there, but the pond has shrunk because so much new work is now LLM-related? I buy that.
> I agree with your take that there isn't a lot of specialist work for data scientists to do with off-the-shelf LLMs that can't be done by an engineer.
Conversely, data scientists are doing software engineering, including webdev. It’s an interesting time. I think it’s less about the job title demarcation now, and more about output.
I think most use-cases will still use simpler models like XGBoost etc. rather than LLMs. Customer segmentation is a really common use-case with no need for an LLM. Same for revenue/LTV forecasting.
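For example, the kind of non-LLM segmentation I mean is a few lines of scikit-learn (feature names, data, and cluster count are all made up):

    # Customer segmentation sketch: k-means on RFM-style features.
    import pandas as pd
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    df = pd.DataFrame({
        "recency_days": [5, 40, 200, 3, 90],
        "frequency":    [20, 4, 1, 35, 2],
        "monetary":     [900.0, 150.0, 30.0, 2400.0, 60.0],
    })
    X = StandardScaler().fit_transform(df)
    df["segment"] = KMeans(n_clusters=3, n_init=10).fit_predict(X)
    print(df)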
Perhaps they can use the LLM to write and deploy these models without needing a Data Scientist but that seems risky to say the least.
In my company, the most Data Scientist-adjacent people are the Data Analysts but they tend not to have programming experience beyond SQL and basic Python and they aren't used to using the terminal etc.
Do those use cases need LLMs? Probably not. But if good results can be had with a day of prompting (in addition to the stuff mentioned in the article, which you have to do anyway), and a smaller model like Haiku is good enough, why would you build a classifier before you have literally millions of customers?
The LLM solution will be much more flexible because prompts can change more easily than training data and input tokens are cheap.
I don't disagree that very numerical tasks like revenue forecasting are not a good fit for LLMs. But then, a lot of data scientists didn't concern themselves with such things anyway (compared to business analysts and the like). Software to achieve this has been commoditized.
I agree. It is difficult to convince leadership to do this work at all ("it works on my example, ship it"), and in my experience most DS don't even want to do it.
One of the key values is that it forces some thinking about what task you want to solve in the first place. In many cases, evaluating it properly is difficult if not impossible, which implies the underlying product should not be built at all. But nobody wants to hear that.
Doing evals only makes sense if making the product better impacts something the business cares about, and establishing that link is very difficult in practice.
I don't actually even know what people are hinting at when they say that LLMs replace the need for building custom models. Regression models? People are using LLMs instead of, say, building a Bayesian hierarchical model? That's not possible. Time series modeling with an LLM? Also ridiculous. Recommender systems? OK, maybe, but still utterly ridiculous and abysmally slow.
For anything NLP, sure, it definitely wins. However, I've just recently used some big fancy OpenAI model to label thousands of text examples for me, just so I could build a classifier with CatBoost. Guess what: inference is a guaranteed sub-100ms and costs $0 in tokens. The "AI Engineer" solution here would be to just run every classification request through an LLM.
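The pattern, sketched (model name, label set, and data are all illustrative; this assumes the OpenAI Python client and CatBoost): pay for LLM labels once, then train and serve a cheap local model.

    # Label once with an LLM, then train a fast local classifier.
    import pandas as pd
    from catboost import CatBoostClassifier, Pool
    from openai import OpenAI

    client = OpenAI()
    LABELS = ["billing", "bug", "feature_request"]  # made-up label set

    def llm_label(text: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[{"role": "user", "content":
                f"Classify into one of {LABELS}. Reply with the label only.\n\n{text}"}],
        )
        return resp.choices[0].message.content.strip()

    df = pd.DataFrame({"text": [
        "I was charged twice this month",
        "App crashes when I open settings",
        "Please add a dark mode",
        "Refund never arrived",
    ]})
    df["label"] = df["text"].map(llm_label)  # tokens spent once, offline

    train = Pool(df[["text"]], label=df["label"], text_features=["text"])
    model = CatBoostClassifier(iterations=200, verbose=False).fit(train)
    # From here on, classification is local, fast, and token-free.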
AI Engineering is going to have the same problem we had when Data Science as a term arrived and you had every Statistician saying they’re just re-inventing everything that exists in statistics, poorly.
You're right. For years the real impediment to "AI" products at many companies was the sheer crappiness of ML frameworks which were built by and for grad students, not professional engineers.
When LLMs appeared, it was just so much easier to use them as an uber-model and leave behind the training and inference infrastructure (if you can even call it that).
Now that LLMs can code I expect we'll be coding up custom model pipelines more and more... but only when we stop subsidizing LLMs.
One thing data scientists brought to the table was statistical rigor in the models, but that seems to have left the building at this point with LLM-based solutions.
As an AI-aware software engineer currently creating systems that integrate with LLM provider APIs for my company (and who has no idea what an eval is or how a data scientist thinks about RAG), I honestly don't see what value a data scientist would bring to the table for my team. Maybe someone would care to enlighten me?
You recognize that you haven't really needed strong mathematical (or coding) skills to create models for some time. Data Scientists add value by knowing how to translate business speak into an XGBoost-type model, and interesting XGBoost model results back into business speak. And, frankly, often by being some of the smartest people in the room. The math is occasionally helpful for speaking the language of the XGBoost model, and picking only people who are decent at math (and coding) helps ensure the smart factor. How much of that will really change with AI?

I've also seen Business stakeholders try to use the chatbot to bypass the Data Scientist. Typically it's not long before there is a design decision or an interesting result the Business stakeholders don't understand.

That's why I think there will be demand for Data Scientists. Not exactly evaluation and monitoring, and definitely not gatekeeping the building of LLM solutions. Often the opposite: called in to explain and debug the Business stakeholders' slop.
> You recognize that you haven't really needed strong mathematical (or coding) skills to create models for some time.
And then something like this comes along [1], where researchers failed to correct for multiple comparisons: "In this particular setting, emergent abilities claims are possibly infected by a failure to control for multiple comparisons. In BIG-Bench alone, there are ≥220 tasks, ∼40 metrics per task, ∼10 model families, for a total of ∼10^6 task-metric-model family triplets, meaning probability that no task-metric-model family triplet exhibits an emergent ability by random chance might be small."
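A back-of-envelope version of that argument (the per-triplet false-positive rate is an assumption for illustration, not taken from the paper):

    # With ~1e6 task-metric-model-family triplets, even a tiny
    # per-triplet false-positive rate makes spurious "emergence"
    # nearly certain somewhere.
    p = 1e-5           # assumed chance a single triplet looks emergent by luck
    n = 10**6          # triplet count quoted above
    prob_none = (1 - p) ** n
    print(prob_none)   # ~4.5e-5, i.e. almost surely at least one false hit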
Then just post your opinions rather than the text the LLM dreamed around your opinions. Short posts and tweets tend to be well-liked on HN, there is no need to puff it up to a big blog post.
This is true beyond software. It used to be that the proof of the thinking process was in the resulting artifact. No longer can you infer, from the existence of a piece of text and the level of polish behind it, that the apparent author put at least a reasonable amount of thought into it. This applies to comments, blogs, and emails, and, most troublingly, I've seen it happen at my job with things like requirement specs. Now the veneer of quality makes it much harder to know how much skepticism to judge the contents with, and it's too tiring to be maximally skeptical about everything.
What exactly are you claiming here? That a handful of theorems about the limits of mathematics and provability somehow combine to show that the current LLM-based AI developments will inevitably live up to what is expected of them? And that this is obvious to a select few? That all seems unlikely, to say the least.
The notion that the brain uses less energy than an incandescent lightbulb and can store less data than YouTube does not mean we have had the compute and data needed to make AGI "for a very long time".
The human brain is not a 20-watt computer ("100 watts per day" is not right) that learns from scratch on 2 petabytes of data. State manipulations performed in the brain can be more efficient than what we do in silicon. More importantly, its internal workings are the result of billions of years of evolution, and continue to change over the course of our lives. The learning a human does over its lifetime is assisted greatly by the reality of the physical body and the ability to interact with the real world to the extent that our body allows. Even then, we do not learn from scratch. We go through a curriculum that has been refined over millennia, building on knowledge and skills that were cultivated by our ancestors.
An upper bound of compute needed to develop AGI that we can take from the human brain is not 20 watts and 2 petabytes of data, it is 4 billion years of evolution in a big and complex environment at molecular-level fidelity. Finding a tighter upper bound is left as an exercise for the reader.
> it is 4 billion years of evolution in a big and complex environment at molecular-level fidelity. Finding a tighter upper bound is left as an exercise for the reader.
You have great points there and I agree; the only issue I take is with your remark above. Surely, by your own definition, this is not true: evolution by natural selection is not a deterministic process, so 4 billion years is just one of many possible periods of time needed, not necessarily the longest or the shortest.
Also, re "The human brain is not a 20-watt computer ("100 watts per day" is not right)": I was merely saying that there exists an intelligence that runs on about 20 watts, so it is possible to run an intelligence on that much power. This and the compute bit refer not to the training costs but to the running costs; after all, it will be useless to hit AGI if we do not have enough energy or compute to run it for longer than half a millisecond, or the means to increase the running time.
Obviously, the path to designing and training AGI is going to take much more than that, just as it did for the human brain. But given the inherent randomness of evolution by natural selection, the path to the emergence of the human brain was not the most efficient one, so there is no need to pretend that all the circumstances around its development apply to us: our process isn't random at all, nor is it parallel at a global scale.
> Evolution by natural selection is not a deterministic process so 4 billion years is just one of many possible periods of time needed but not necessarily the longest or the shortest.
That's why I say that is an upper bound - we know that it _has_ happened under those circumstances, so the minimum time needed is not more than that. If we reran the simulation it could indeed very well be much faster.
I agree that 20 watts can be enough to support intelligence and if we can figure out how to get there, it will take us much less time than a billion years. I also think that on the compute side for developing the AGI we should count all the PhD brains churning away at it right now :)
"The key is ensuring that any future cuts at NASA are not indiscriminate. If and when Jared Isaacman is confirmed by the US Senate as the next NASA administrator, it will be up to him and his team to make the programmatic decisions about which parts of the agency are carrying their weight and which are being carried, which investments carry NASA into the future, and which ones drag it into the past. If these future cuts are smart and position NASA for the future, this could all be worth it. If not, then the beloved agency that dares to explore may never recover."
You pay an annual % tax on the value of your investments less debt as of January 1st. This means you still pay taxes if your assets lose value, too. It's a wealth tax that pretends to be a capital gains tax.
It doesn't pretend to be a capital gains tax at all. It's a tax on income from assets, which is in practice more or less a 'wealth tax', which is also why it goes by the Dutch word for 'wealth tax' in the first place.
It is a tax on an assumed return on assets, determined as a set percentage of wealth. "Vermogensrendementsheffing" means a "tax on return on wealth", not on the wealth itself. In name it is not a wealth tax, but in reality it is, since the assumed return that is taxed has no relation to the true return. This relates to the recent decisions declaring this partially unlawful, see e.g. https://www.tilburguniversity.edu/magazine/supreme-court-net...
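A worked example with illustrative numbers (the actual Dutch percentages vary by year and asset class, so treat these as placeholders):

    # Illustrative only: the percentages are assumptions, not current law.
    assets, debt = 500_000, 100_000
    deemed_return_rate = 0.06   # return the state assumes you earned
    tax_rate = 0.36             # rate applied to that deemed return

    deemed_income = deemed_return_rate * (assets - debt)  # 24,000
    tax_owed = tax_rate * deemed_income                   # 8,640
    print(tax_owed)  # owed even if the portfolio actually lost money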