Indeed. I think a GPT-4o class model, properly prompted, would work just fine today. The trick is that, unlike a human, the computer is free to just say "no" without consequences. The model could be aggressively prompted to detect and refuse weird orders. Having to escalate to a human supervisor (who is conveniently always busy doing other things and will come to you in a minute or three) should be sufficient to discourage pranksters and fraudsters, while not being annoying enough to deter normal customers.
(I say model, but for this problem I'd consider a pipeline where the powerful model just parses orders and formulates replies, while being sanity-checked by a cheaper model and some old-school logic that detects excessive quantities or unusual combinations. I'd also consider using an "open source" model in place of GPT-4o, as open models allow doing "alignment" shenanigans in the latent space, instead of just in the prompts.)
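The "old-school logic" layer of such a pipeline could be as simple as a rule table. A minimal sketch, where the item names, limits, and escalation criteria are all invented for illustration (not from any real system):

```python
# Hypothetical rule-based sanity check sitting behind the order-parsing model.
# Limits and the menu are illustrative assumptions.
MAX_ITEM_QTY = 10
MAX_TOTAL_ITEMS = 30
KNOWN_ITEMS = {"burger", "fries", "shake", "soda", "nuggets"}

def sanity_check(order: dict[str, int]) -> list[str]:
    """Return reasons to escalate to a human; an empty list means the order passes."""
    issues = []
    for item, qty in order.items():
        if item not in KNOWN_ITEMS:
            issues.append(f"unknown item: {item}")
        if qty > MAX_ITEM_QTY:
            issues.append(f"excessive quantity of {item}: {qty}")
    if sum(order.values()) > MAX_TOTAL_ITEMS:
        issues.append("order too large overall")
    return issues
```

The point of keeping this layer outside the model is that a prompt injection can talk the LLM into anything, but it can't talk a hard-coded threshold into selling 500 burgers.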
Francois' (the creator of the ARC-AGI benchmark) whole point was that while they look the same, they're not. Coding is solving a familiar pattern in the same way (and it fails when it's NOT doing that; it just looks like that doesn't happen because it's seen SO MANY patterns in code). But the point of ARC-AGI is to make each problem require generalizing in some new way.
I have ChatGPT4, and I have no idea what arrow you are talking about. Could you be more specific? I see no arrow on any of my previous messages or current ones.
By George, ItsMattyG is right! After editing a question (with the "stylus"/pen icon), the revision number counter that appears (e.g. "1 / 2") has arrows next to it that allow forward and backward navigation through the new branches.
This was surprisingly undiscoverable. I wonder if it's documented. I couldn't find anything from a quick look at help.openai.com .
Careful what you trust on help.openai.com. You used to be able to share conversations; now shared links are behind a login wall, and the docs don't reflect this. (If someone can recommend a frontend with this functionality, for quick sharing of conversations with others via a link, I'm taking recommendations. Thank you in advance.)
Yeah, right now most human-written content isn't that good, either. Quality writing has largely been abandoned in favor of writing that is verbose and formal, offends no one, and often lacks substance or a human touch. I'm guessing AI content will be about as flavorless. But time will tell.
Yeah. It is difficult to see AI supplanting humans for the things you go outside for, but any human involvement on the internet has always just been an implementation detail.
Agreed; by some definitions of creativity, specifically associating unrelated things, models are already creative.
Hallucinations are highly creative as well. But unless the technology changes, large language models will need human-made training substrate data for a long time to operate.
This is literally what the whole article was about. Not only does the quote itself contain that context "and the lego group", but the very next paragraph is "And then… nothing. The Tintin votes dried up, and Lego rejected both his fan-favorite Avatar and Polar Express ideas. The company never says why it rejects an Ideas submission, only that deciding factors include everything from “playability” and “brand fit” to the difficulties in licensing another company’s IP."
oh hmm, when I first saw the penguin/giraffe one I was like "that looks like an upside-down penguin, where's the giraffe?", whereas with others I immediately saw what they were trying to be.