Microsoft working on 'far larger' in-house AI model (pymnts.com)
46 points by orenaluf on May 7, 2024 | hide | past | favorite | 61 comments


Surely they all are. Surely for the next while, no one takes the lead by more than an inch for more than a minute.


Branding names under consideration...

.NET for Copilot

Pilot for Copilot

Clippy 2


Copilot Subsystem for Windows. Or is it Windows Subsystem for Copilot?


Bob As A Service


Been trying to sell people on this for over 10 years. No luck so far though.


Copilot 360

XCopilot X

XCopilot Series X


.GPT


Why does Microsoft spend resources on their own model when they can freely use OpenAI models? The model must have some specific characteristics.


Embrace (done), extend (in progress), ...


Maybe they want to explore or iterate or have control over the models in ways different than what OpenAI is doing.

Also, they only own something like 49% of OpenAI. So they don't have full control, and would have an interest in models they do fully control.


“<CEO’s sector of ascension> all the things”, so currently “Azure all the things”


So it is 7x larger than a 70B parameter model?


nice try, extremely human user shrubble


How about finishing Azure first


What do you think that Microsoft is doing with all of that telemetry?


[flagged]


"Hackers" became an incredibly diluted term, and I wouldn't describe the readers of this site as such.


We're crackers.


I would not be surprised if AGI is already here, hidden away in some lab


The best trillion-dollar companies can do is a token generator that defies logic. And they have personified these generators to make them seem intelligent. If AGI ever exists, I doubt it will have anything to do with LLMs.


Intelligence is next-token prediction.

See: active inference, predictive processing.
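As a toy illustration of that framing (not any production model, just a hypothetical sketch): the simplest possible "next-token predictor" is a bigram model that counts which token tends to follow which.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count how often each token follows each other token."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the most frequent follower of `token`, or None if unseen."""
    followers = counts.get(token)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept".split()
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once
```

Whether you call scaling this idea up by twelve orders of magnitude "intelligence" is, of course, the entire argument.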


Maybe, but on the other hand, I imagine the stock price of the first company that can go public with having AGI would go up by a lot. It would be an expensive secret to maintain.


I would be very surprised and question my existence.


Since the Cold War days and analog computers were turned into a psyop


If AGI were already here, we'd probably all be dead, IMHO.


I doubt anything good will come out of it, probably some corporate enterprise thingie


That's a very plausible strategy: after all, Google Gemini, regardless of its technical merits, is probably making money from GSuite corporate customers. (I say this as someone in a large but fairly conservative Google shop with policies that initially allowed both OpenAI and Gemini, but are now limited to Gemini.)


> AI models are used in almost every one of our products, services and operating processes at Microsoft

Oh boy. Any insight from Microsoft people on this apparent hell-hole?


I'm really looking forward to the Trough-of-Disillusionment phase of the LLM hype cycle. This insistence on shoehorning it into everything is getting beyond stupid.


This reminds me of when they were shoehorning voice assistants into everything. “Alexa can play music through my smoke alarm? Alexa can start my microwave for me instead of me pushing two buttons? Why not”


This reminds me of when everybody and his dog was shoehorning blockchain into everything. Blockchain-based pet platforms, pet owners earning tokens for participating in the community, pet care services fueled by smart contracts, and the like.


When the metric for success starts and stops at "engagement"!


The big problem with these things is that the people responsible for the misery they cause will not feel any of the consequences.


In fact, they'll likely be rewarded for delivering results.


What misery? The hyperbole here is astounding.


Misery might be a bit hyperbolic, but I'm referring to the larger scale model of engagement being a tier 0 metric for success. Instagram & cohorts stealing people's attention spans is something I'd describe as negative and almost evil, with the larger scale problem of the smartest people in our industry having been at ad companies for the past 2 decades.

In the context of LLMs I think that they're useful tools, and if "we" play it right they can be a great boon. Short-term they'll lead to the enshittification of the internet even more though imo.


What do you expect, exactly? I'm sure every big tech company has had AI in its products for a while now: Who do you think filters the spam in your Gmail, if not their AI Bots? Or picks the music suggestions in your Spotify?

Why do you think Microsoft would be a hellhole for doing the same? Especially considering all the productivity use cases they've shown for the Office suite.

I swear HN needs to hate everything Microsoft is doing just because.


I think there are two ways to go about implementing AI.

The low-key implementations that assist are the most elegant ways to implement AI functionality. If I can use a product and not realize AI is behind it, the product has successfully utilized it. Spam filters fall into this case. Automatic “radio” stations from streaming services fall into this too.

The worst forms of AI implementation are the kind that spend more screen real estate advertising AI as if the product has something to prove. This hinders my experience as a user because I do not care about AI if it isn’t seamlessly fitting in my workflow.

I’m not sure what’s happening at Microsoft, but their insistence on AI in very unusual places doesn’t give me confidence they want to embrace AI in a manner that’s helpful. It feels like someone’s resume boosting exercise. It gives me the feeling they are desperate.


>Who do you think filters the spam in your Gmail, if not their AI Bots?

I would hope it's a purpose-built ML model and not an LLM that was cajoled into doing spam filtering.


So exactly what I just said.

Trained ML Models have been in use long before LLMs came out.


ML models are AI too. And you don't even need statistical models to call your system AI, just a set of conditional blocks behind a decision


You really think a bunch of conditions can be labeled as AI? Do you work in marketing?

https://miro.medium.com/v2/1*gXZeYDjqLBWqbnGvlr_gyQ.png


It was an exaggeration, but most pre-ML AI is just a set of fairly rigid, general rules. The approaches mostly differ in how the rules get added: by different means (human vs. machine) or at different times (during the "training" process, or on the fly while being used).
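A caricature of that pre-ML, "rules added by a human" style (a hypothetical toy, with made-up features):

```python
def classify_animal(has_feathers, can_fly, lays_eggs):
    """Expert-system-style decision: hand-written conditionals, no training step.

    Every rule here was chosen by a human; the 'knowledge' lives in the code.
    """
    if has_feathers:
        if can_fly:
            return "bird"
        return "flightless bird"
    if lays_eggs:
        return "reptile (probably)"
    return "mammal (probably)"

print(classify_animal(has_feathers=True, can_fly=False, lays_eggs=True))
# flightless bird
```

An ML system arrives at a similar decision procedure, except the machine derives the thresholds and branches from data instead of a programmer typing them in.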


Is your contention that AI can't be implemented on a Turing machine?


It's true I'll hate on Microsoft for just about anything, but they didn't say "some products", "some services", "some processes"; they said "EVERY SINGLE THING!!!". See the difference?


>they said "EVERY SINGLE THING!!!", see the difference?

Where did they say this? I read "almost every".


Does it make a difference in context whether or not they've successfully shoehorned it into only 99%?


Given the quote is referring to what they've done "for years and years and years [...] in our product groups" outside of the OpenAI arrangement, the fact that a large number of their products have come to make some use of AI models without much fanfare (search, spell-check, spam filtering, voice dictation, language translation, recommendation systems, ...) is not inherently due to the more recent LLM shoehorning. Machine learning is just the best choice for a good number of tasks.


I'm not talking about LLMs in particular. I guess this is a company-wide mandate to grow knowledge of how to do this stuff well; I mean, that makes sense. But in the trenches (aka the hell-hole) it means a lot of bad, bad stuff is being relied on, and it generates calcification of business segments and kafkaesque anti-patterns for the uninitiated. This doesn't only apply to "AI"; it's a generic feature of shoehornings. The problem with the shoehorn is that it's politically costly to resist, even when resisting makes good business sense at the micro level.


I'd agree that "We're going to shove a chatbot in every single one of our products", like the recent Copilot integrations, would reek of shoehorning and a possible company-wide mandate.

But remarks more along the lines of "Looking back over the past decade, we've made use of ML models in some part of almost all of our products" seems fairly reasonable to me, not necessarily indicative of much other than machine learning being the best tool for an increasing number of tasks. If they weren't using ML-based echo cancellation in Teams calls for instance, they would have a worse product than competitors that do.


Teams doesn't even do copy-paste, or open Microsoft-based formats in under 10 seconds (IME)... I rest my case.


I don't claim that Microsoft products are perfect, just that it seems a reasonable use of machine learning. The things I've seen them use ML models for are genuinely useful, and mostly added without too much fanfare years prior to the recent generative AI hype.


I'm saying that forcing particular technology leads to worse products in many cases.


I understand, but think that in many relevant cases it'd now be non-ML approaches that would be "forced". Machine learning is just the easiest and best way to accomplish a large range of tasks.


I guess any general application of statistics and control is going to be called ML, then? If that's the world we're living in, then I wonder how the missing fraction is being governed. Pure malice?


Not all applications of statistics/control are ML - I don't know what I said to give that impression.

A spam filter based on regex or manually-selected criteria and thresholds would not be ML for instance, whereas modern effective spam filters typically do make use of machine learning.
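To make that distinction concrete, here is a toy sketch (hypothetical words and messages, nothing from any real filter) contrasting a hand-written rule with a "learned" filter whose word weights come from labeled examples:

```python
import re
from collections import Counter

# Hand-written rule: a regex chosen by a human. Criteria are fixed -- not ML.
def rule_filter(msg):
    return bool(re.search(r"free money|lottery|prize", msg, re.I))

# Tiny learned filter: the "rules" (word counts) come from labeled data,
# standing in for training. A crude naive-Bayes-flavoured score, not a
# production algorithm.
def train(examples):
    spam, ham = Counter(), Counter()
    for msg, is_spam in examples:
        (spam if is_spam else ham).update(msg.lower().split())
    return spam, ham

def learned_filter(model, msg):
    spam, ham = model
    words = msg.lower().split()
    spam_score = sum(spam[w] for w in words)
    ham_score = sum(ham[w] for w in words)
    return spam_score > ham_score

examples = [
    ("win free money now", True),
    ("claim your lottery prize", True),
    ("lunch meeting tomorrow", False),
    ("project status report", False),
]
model = train(examples)
print(rule_filter("WIN FREE MONEY"))        # True: regex matches
print(learned_filter(model, "free prize"))  # True: spam-weighted words win
```

The practical difference: to handle new spam patterns, the first approach needs a human to edit the regex, while the second just needs more labeled examples.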


I hope they are not drawing inspiration from Steve Ballmer's brain.


Now we can take this LLM and paste it right into Windows Write!


It shouldn't be a very large one. Lots of empty space and dead synapses leading nowhere in the source material.


Developers! Developers! Developers! Developers!

becomes

CoPilots! CoPilots! CoPilots! CoPilots!


Co-copilots! Co-copilots! Co-copilots!


I can write that in 2 lines of BASIC with PRINT and GOTO. It also has the word "Developers" a few times.



