Hacker News | entech's comments

I’ve picked up an old discarded dining table, bought some varnish and sandpaper, and for a few bucks was able to turn it into a table that would have cost me hundreds to buy brand new - the furniture makers are now history!

I can’t wait to see what car mechanics could do with discarded cars and some wrenches - the possibilities are endless.


How do you know that it’s a source worth trusting?

I think the expectation of AI being perfect all the time is probably driven by the hype and marketing of “1 million PhDs in your pocket”.

If you compare AI to an average person, or to a random website you’d come across on Google, I would wager that AI is more likely to be accurate in almost every scenario.

For hyper-specific areas, niche domains, and rapidly evolving data that is not being published - a lot less so.


It seems that it’s useful if it’s better than what you would have done yourself.

Although the poster had a bus-company business plan, complete with actuarial analysis, in his head and some spreadsheets, so that bar appears to be sufficiently high.


Well said. I feel that every technology has a benefit-to-harm ratio, and we as a society put guardrails around use of the technology to reduce and mitigate the harmful outcomes.

Unfortunately, a lot of the guardrails are developed (or rather refined) only in response to the bad outcomes actually occurring.

I feel that we are much better at dealing with obvious physical harms, like planes being hijacked or viruses causing damage to your data, than with higher-level impacts that are not clearly felt and where you cannot hold someone accountable.

The fact that there are not even basic regulations around AI is truly mind-boggling. We simply acknowledge the phenomenon of AI-induced psychosis and suicidal ideation and move on. We ban drugs and install barriers on bridges, but somehow if AI causes it, it’s seemingly ok. Let’s just hope Sam and Dario care enough to fix it.


And when I see the never-ending discussions on Tesla/FSD crashes, there is always a defender saying "don't look at this, compare the rates of AI/FSD vs humans, humans are worse". As if my potential death is ok just because it would be a statistical anomaly!


The humans will still own the business (unless you are proposing some alternative version of AI ownership), so in effect there will always be a human who is concerned about their business’s well-being.

I doubt that we would get into a world where a company would be allowed to run without human involvement (AI directors and AI management) as you will have nobody to hold accountable.


Well, wasn’t this what all these blockchain DAO entities were supposed to be for? :D


Yes, I was just about to bring this up as well. One could argue that they were simply too early. It will be interesting to watch things like ERC-8004.


The prevailing view of government incompetence and inability to act has reached such high levels that people do not even factor in any sort of meaningful intervention anymore.

Does everyone really think that the world’s governments would allow any level of job loss that would create panic before shutting this whole thing down within the area of their control?

It’s probably a Western cultural bias - people in the UK or US have not seen or experienced a big enough government intervention. US citizens are probably feeling a bit of that change now.


The original post did wind me up and I was hoping to see a good rebuttal from someone. Unfortunately this is just as bad going the other way. Using expletives and highly emotional language ('don't talk to me about my kids' etc.) and making some unsubstantiated claims in responses as well just devolves it into 'AI good' vs 'AI bad'.

With the barrage of pro-AI content, I like to add some opposing views to my watch/read queue. Ed comes up a lot with comments on the other side, but after watching him once or twice I have lost any interest in his view, as he seems to basically just be AI-bashing rather than providing good counterarguments to the more bombastic points.

It's a shame that middle-of-the-road, reasonable takes don't seem to cut through to the public's attention. I would love to see someone popular enough and sensible enough advocating for a measured approach to the rollout of new tech and an approach to manage the risks and capture the opportunities.

Is AI transformational and can it impact 'most' white-collar jobs? YES. Is it going to leave us all without jobs? 'Likely' NO, but it's worth assessing and preparing for if it does...

I truly feel like our system of laws and government is failing us in providing a rapid response and guardrails to safeguard the public from new and rapidly advancing tech. The advancement of tech seems to be accelerating while our approach to responding to it properly has not really kept up. Things like microtransactions, BNPL, AI, ridesharing, prediction markets etc. have all been able to perform a form of regulatory arbitrage and have been a vast net negative to some segments of society (mostly those that need the most help and support), yet it takes years to implement the most basic of protections.


To me, this is highlighting the fall of modern media - people have lost trust in MSM and have flocked to 'influencers' and 'thought leaders'. There is no checking for credibility, veracity of claims, or any relevant expertise. This is basically vibe news, sentiment peddling.

I could say AI is here - we don't even need to do research anymore. All I need to say is: "Claude, cure cancer", go away for 4 hours to drink my coffee, come back and boom - cancer cured. Perfect research ready for funding and trials.

People would call me crazy. But what if I say 'PHARMACEUTICAL CEO', 'PhD', 'MULTIPLE COMMERCIAL DRUG SUCCESSES' - people will eat it up.

Let's see what you can find out about Matt Shumer, the AI CEO, from his public profiles:

- No technical background - looks like a business 'entrepreneurial' degree from what looks like a middle-of-the-road school
- No experience working anywhere other than the companies he founded
- No notable exits or commercial success from the companies he founded

And if you dig deeper, it appears that:

- His latest startup is a scam: https://news.ycombinator.com/item?id=41484981
- He is the CEO of a startup
- He's trend-hopping on the 'new thing' (all in with VR in 2019, now AI) - incredible that there is no crypto in there

The post offers no concrete evidence for its claims and is peddling fear and sentiment, yet somehow respected publications write opinion pieces about his article, credible people retweet it, and it goes MEGA viral.

The only credit I am willing to give here is that he managed to accurately reflect the vibes that resonate with people, which is really a shame because this is what people actually think.

What's worse is that now he probably has legitimate people seeking his views and opinions on technical matters because he's got it 'so right' and he is so 'knowledgeable' about it.

Hopefully someone can succeed in online reputation management for websites, content, and people, and help us separate credible from grift.


Absolutely agreed - seeing this post continue to circulate across all forms of media with seemingly zero critical thinking or evaluation of its author’s credibility reminds me of the Zoolander “I feel like I’m taking crazy pills” meme.

I remembered this guy from his “Reflection 70b” scam in 2024. That should have basically put his credibility at zero, but clearly it has not.

I found this interaction in the HN comments from the time of that minor scandal to be prescient:

>> It's amazing what people will do for clout. His whole reputation is ruined. What was Schumer's endgame?

> But does reputation work? Will people google "Matt Shumer scam", "HyperWrite scam", "OthersideAI scam", "Sahil Chaudhary scam", "Glaive AI scam" before using their products? He wasted everyone's time, but what's the downside for him? Lots of influencers did fraud, and they do just fine.

https://news.ycombinator.com/item?id=41485180


While this is true, I believe AI (and other technological advances) erodes the trust embedded in this 'facade'. And that’s how I interpreted the author’s sentiment.

When you watch a video or use a service that requires significant effort and value to create, you inherently trust that the creators have invested diligence and care to protect their investment. Creators risk losing customers through bad reviews or, worse, being sued for damages.

In an age where it's reasonably straightforward to create something that appears to match the quality and effort of what was previously difficult to accomplish, it becomes harder for users to distinguish genuinely high quality.

I think we'll go through a period where many users will get burned by poor services (lost data, security breaches, etc.) and will need to find new ways of verifying product and service credibility.

I suspect the market for simple consumer apps charging $5+ monthly for basic functions (like todo lists) will disappear, and possibly the same for low-to-moderate complexity enterprise apps (like Jira). This is probably better for consumers. Many of these apps and tech businesses can charge so much for fairly basic functionality because the barrier to building alternatives is too great. There was simply no option if you wanted a particular set of features. It's 'value-based pricing' that extracts benefits from consumers unable to negotiate the price.


I am with you. I think that it is more likely to be related to the Japanese carry-trade unwind starting to worry the banks, while the mainstream news continues to drive the “AI disrupts everything” narrative.

I might not be across the detail, but to me the legal plugin seems like it mostly adds some fairly basic skills (prompts) that any technically minded person could write, and it’s not enough of an improvement for completely non-technical people to adopt.

