Must be a fun time to work on open problems. I published my graduate research close to a decade ago, and I often find myself fantasizing about tackling open problems with Claude.
I went *browserless on my device and it has solved my screen compulsion issues with very little downside. It has been the most effective step I've ever taken. I realized I really love msg'ing friends, having access to maps and navigation, banking, just a handful of apps (no google apps), and that all along the problem was the browser.
*iOS doesn't let you delete Safari, so I set a 10-minute timer on it, and I don't have any adblock or content filtering enabled, so it's essentially only good for brief checks (auto-shop phone number, quick news check, etc.) but useless for anything beyond that.
I have been thinking the exact same thing! That while I "need" a smartphone for secure communications, banking and nav while traveling, it's Instagram and compulsive news/reddit consumption that keeps me instinctively checking my phone. I think I need to do the same thing on gOS.
Yes! Also shopping is an insidious thing. Now my phone is no longer a shopping tool. For me a common flow would be like, 1) think about some thing 2) instantly I'm scroll shopping somewhere 3) remember I'm too frugal to buy things 4) 10 minutes lost to the void. It's been really nice for me to break that particular cycle.
There's a great discussion with Stephen Wolfram on the Sean Carroll podcast. Listening to it made me think very highly of Wolfram. He's a free-thinking, eccentric mathematician and scientist who got started doing serious work at a very young age. He still has a youthful, creative approach to thought and science. I hope LLMs do pair well with his tools.
I'm a fan of his work and person too. Not at a fanatic or evangelical level, but I do think he's one of the more historically relevant computer scientists and philosophers working today. I can overlook his occasional arrogance and recognize that there's a genuine and original thinker who's been pursuing truth and knowledge for decades.
Same here. I've found the "me me me" a bit off-putting over the years, but can't deny that he is a genuinely smart, interesting, and forward thinking person. I especially enjoyed his writings on measuring every aspect of his life [1].
Also, Wolfram (person and company) don't seem to be stodgy and stuck in old ways. At least as an outside observer (I'm not a mathematician, nor do I use Wolfram's main tools), they seem to handle new trends with their own unique contributions that augment those trends:
Wolfram Alpha was a genuinely useful and good tool, perfect for the times.
These tools will actually further supercharge LLMs in certain use cases. They've provided multiple ways to adopt them.
Looking forward to see what people will do with this stuff.
He's been in AI-land forever, the whole idea of Wolfram Alpha circa 2009 was to transform natural language into algorithms. I met him briefly in New York when he was on a panel on AI ethics in 2016, and ya, dude is sharp.
He seems to think his time is better spent on software than science. I take it he didn't really crack anything of worth on the physics side, then?
Recently I went back to The Ecstasy of Communication by Jean Baudrillard which I couldn't get through back in the day when I first picked it up. I used Haiku to walk me through the first chapter, and Haiku would not state anything verbatim due to copyright, but if I referenced a sentence it knew it exactly.
If you tell your doctor that a parent had polyps removed (say, recently), that will give you your best chance of getting a colonoscopy. Most likely, if you're in an even remotely progressive area, your doc wants you to have one, but their hands are tied by the insurance company. Afaik you don't have to provide any proof of your claim re parental polyps.
> but their hands are tied by the insurance company.
Doctors' ability to prescribe or refer is never restricted by an insurance company. If they think a patient should get a given treatment, they are free to say so.
Is the intended meaning that health insurance should pay for anything and everything? Even systems where the government pays directly like the UK have parameters under which the government will pay for a procedure or medicine.
Not at all. Patients are free to pay out of pocket for procedures not covered by insurance. An extra colonoscopy (one not classified as medically necessary), while expensive, is within the financial means of most middle-class adults.
In CA, my doctor can refer me to get a Cologuard. But it's private pay, and they want payment up front, since insurance companies don't restrict a doctor's ability, only reimbursement.
So they may not be willing (even though they are able) to perform a procedure/test if they aren't confident they'll get paid.
Unfortunately, one of the struggles in old high tech (that's the only thing I know; are you also experiencing this?) is that the C-level people don't look at AI and say LLMs can make an individual 10x more productive, therefore (and this is the part they miss) we can make our tool 10x better. They think: therefore we can lay off 9 people.
(In the semiconductor industry) We experienced brutal layoffs arguably due to over-investment in AI products that produce no revenue. So we've had brutal job loss due to AI, just not in the way people expected.
Having said that, it's hard to imagine jobs like mine (working on NP-complete problems) existing if the LLMs continue advancing at the current rate, and it's hard to imagine they won't continue to accelerate since they're writing themselves now, so the limitations of human ability are no longer a bottleneck.
Maybe I'm being naive here, but for AI (heck, for any good algorithm) to work well, you need at least loosely defined objectives. I assume it's much more straightforward in semi, but there are many industries where, once you get into the details, all kinds of incentives start to misalign, and I doubt AI could understand all those nuances.
E.g. once I was tasked to build a new matching algorithm for a trading platform, and upon fully understanding the specs I realized it could be formulated as a mixed integer programming problem; the idea got shot down right away because the PM didn't understand it. There are all kinds of limiting factors once you get into the details.
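To make the MIP framing concrete, here's a hypothetical toy version (the names, quantities, and matching rules are all made up; the real platform's rules were more involved). It just shows the shape of the formulation: binary decision variables, at-most-once constraints, and an objective, solved here by brute force since the instance is tiny:

```python
from itertools import product

# Toy matching: assign incoming orders to resting quotes, maximizing
# matched quantity; each order and each quote matched at most once.
orders = {"o1": 5, "o2": 3}   # order -> quantity (hypothetical data)
quotes = {"q1": 4, "q2": 6}   # quote -> available quantity

pairs = [(o, q) for o in orders for q in quotes]

def feasible(assign):
    # assign maps each (order, quote) pair to a 0/1 decision variable.
    # Constraints: every order and every quote used at most once.
    for o in orders:
        if sum(assign[(o, q)] for q in quotes) > 1:
            return False
    for q in quotes:
        if sum(assign[(o, q)] for o in orders) > 1:
            return False
    return True

def objective(assign):
    # Matched quantity: for each chosen pair, the smaller of the two sizes.
    return sum(min(orders[o], quotes[q]) * x
               for (o, q), x in assign.items())

# Enumerate all 0/1 assignments; infeasible ones score -1 so they lose.
best = max(
    (dict(zip(pairs, bits)) for bits in product([0, 1], repeat=len(pairs))),
    key=lambda a: objective(a) if feasible(a) else -1,
)
print(objective(best))  # 8: o1->q2 matches 5, o2->q1 matches 3
```

In practice you'd hand this same formulation to a real solver instead of enumerating; the point is only that once the constraints are written down explicitly, the problem becomes something standard tooling can attack.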
Well, like I said, there are hidden incentives behind the scenes; in my case, the hidden incentive is that the requester/client is one of the company's subpar brokers, and the PM probably decided to just offer an average level of commitment, not going above and beyond. Hence the plan was to do exactly what the broker wanted, even though that was messy and inferior. You can't write down that kind of motivation on paper anywhere.
---
I said it because I did the analysis, and realized that if I implement the original version, which is basically a crazy way to iteratively solve the MIP problem, it's much harder to reason about internally and much harder to code correctly. But obviously it keeps the broker happy ("the developer is doing exactly what I said").
I think I'm finally realizing that my job probably won't exist in 3-5 years. Things are moving so fast now that the LLMs are basically writing themselves. I think the earlier iterations moved slower because they were limited by human ability and productivity.