Hacker News: mgraczyk's comments

Still plenty of signal. You'd be surprised at how badly most people do at very simple questions.

I am quite passionate about algos, do lots of katas on codewars for fun, and have done plenty of leetcode problems.

Then I had a technical interview where I was asked to implement a simple algo for the tris game (aka tic-tac-toe) and my mind went completely blank.

I was tired; I'm in the EU and this was for a San Francisco startup interviewing me at their lunch time, which is very late in Italy.

And I generally don't like to be interviewed/tasked.

Of course the solution is beyond simple, but I struggled even at brute forcing it.

I can easily do these kind of exercises (and much harder ones obviously) for fun, but not when interviewed.
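For context, the check in question really is tiny once the pressure is off. A minimal Python sketch (the function name and board encoding are my own, purely illustrative):

```python
def winner(board):
    """Return 'X' or 'O' if someone has three in a row on a 3x3 board
    (a flat list of 9 cells, '' for empty), else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

# Example: X wins on the top row.
print(winner(["X", "X", "X", "O", "O", "", "", "", ""]))  # → X
```

Eight hard-coded lines and one loop, which is exactly why blanking on it in an interview says more about the setting than the candidate.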

I struggled with the same thing in university. I graduated with 104/110 even though I was consistently among the most prepared; I learned in order to learn, not to pass exams (plenty of stellar performers didn't remember anything a few weeks after exams).

Once I asked a professor why he graded me 27/30 even though I had spent an hour answering everything in detail, including the hardest questions.

"Because you never appear convinced when you answer".

I get nervous, and I don't like proving my knowledge this way. I constantly rethink what I'm saying, or even how I sound.

I forget how to type braces or back ticks.

I did not have any issues when not interviewed, or in written exams, or during my research period when I published 3 papers that have been highly cited.

But I am just not a fan of these types of interviews; they tell absolutely nothing about the candidate.

If you interview me, you'll get a very wrong impression if you ask me to live code or whiteboard.

Meanwhile I've seen leetcode black belts spend most of their time logged into Tekken 7 on Discord, consistently creating work and providing negative value while somehow always selling their high skills.

I have found much more value in seeing personal projects, and OSS contributions.

I've never asked a single one of these bs questions and never failed at hiring anyone. Not once.


> I am just not a fan of these types of interviews; they tell absolutely nothing about the candidate.

Unfortunately this is wrong, and I have seen tons of data at 5 companies showing otherwise. These kinds of interviews really do correlate well with job performance.

There is noise, but large companies in particular need a scalable process, and this one works pretty well.

Startups shouldn't do this, though, but the reason is the opposite of what you're complaining about: it's too easy to accidentally waste your time on somebody who is good at leetcode.


I have never seen correlation.

The most important thing about a candidate is whether he's going to be professional and hard-working.

Technical interviews tell me nothing about it.

Of course I can see large companies, which have a high number of candidates relative to positions, needing such methods; they can afford false negatives.

But outside those cases personal projects and OSS contributions say much more.


How many people have you interviewed and hired? I have interviewed around 400 and hired around 20, and I've seen data compiled on over 100,000 interviews. I have never worried about a false negative, except DEI stuff pre-2021

Maybe this is a Europe vs US thing?


I've hired around 10 and interviewed around 50–60.

Half the people I hired, I hired without any technical interview; I met them in coding communities and saw their OSS contributions and skills every day.

In any case I'm not arguing that your method is wrong; I'm just saying there are individuals like me who don't do well in these kinds of situations/exams and can easily be false negatives.

I'm also saying that this isn't rocket science, and in general trying to understand whether the person is honest and hard-working is much more important than his coding skills.

I've seen way too many stellar graduates and leetcode ninjas be just terrible at their job or completely uninterested.

And in Europe it's hard to fire them.


Yes, understanding algos is valuable, and implementing them off the top of one's head is a nice party trick, but in the end we're paid to solve problems, and it's much faster and easier to identify existing solutions and adapt them than to reinvent the wheel.

Knowing when and what to optimize is vital.


>Once I asked a professor why did he grade me 27/30 even though I spent one hour answering with details on everything, including the hardest questions.

>"Because you never appear convinced when you answer".

Sounds like pure anti-INTP discrimination ;)


> I forget how to type braces or back ticks.

US layout and compose key on AltGr. You'll never look back.


Yeah, this.

In my experience, it’s the relatively basic questions that have the highest value — both because they’re what you run into programming most often, and because they’re less likely to overwhelm candidates in a high-stress setting.

The goal, at least from my point of view, isn’t to see if they can come up with the perfect algorithm, but about how they construct an algorithm, how they communicate about the decisions they’re making, how they respond to challenges about edge-cases, etc.

I’m also strongly in favour of picking out questions that are reflective of the actual codebase they’re being hired for — find something with some basic algorithmic complexity which has a relatively simple and easy to explain input and output, and use that as the problem.

In general, I think the best problems are those which any competent senior engineer could design a good solution for almost off the top of their head with little difficulty.


I learn a lot faster now with LLMs.

You could learn the Windows APIs much faster if you wanted to learn them.


Is this maybe more about the quality of the documentation? I say this because my thinking is that reading is reading; it takes the same time to read the information either way.


How is this faster than just reading the documentation? Given that LLMs hallucinate, you have to double-check everything they say against the docs anyway.


I learn fastest from the examples, from application of the skill/knowledge - with explanations.

AIs allowed me to get going with Python MUCH faster than I was doing on my own, and to understand more of the arcane secrets of jq in 6 months than I was able to in a few years before.

And AI's mistakes are a brilliant opportunity to debug, to analyse, and to go back to it saying "I beg your pardon, wth is this" :) pointing at the elementary mistakes you now see because you understand the flow better.

Recently I had a fantastic back and forth with Claude about one of my precious tools written in Python; I was trying to understand the specifics of a particular function's behaviour, discussing typing, arguing about trade-offs and portability. The thing I really like is that I always get pushback or things to consider if I come up with something stupid.

It's a tailored team exercise and I'm enjoying it.


The Windows API docs for older Win32 stuff are extremely barebones. WinRT is better, but can still be confusing.

I think AI is really great for getting started with systems programming, as you can tailor the responses to your level, ask it to solve specific build issues, and so on. You can also ask more obscure questions and it will at least point you in the right direction.

Apple's docs are also not the best for learning, so I think AI is great as a documentation browser that auto-generates examples.


Human teachers make mistakes too. If you aren't consuming information with a skeptical eye you're not learning as effectively as you could be no matter what the source is.

The trick to learning with LLMs is to treat them as one of multiple sources of information, and work with those sources to build your own robust mental model of how things work.

If you exclusively rely on official documentation you'll miss out on things that the documentation doesn't cover.


If I have to treat LLMs as a fallible source of information, why wouldn't I just go right to the source, though? Having an extra step between me and the actual truth seems pointless.

WinAPI docs are pretty accurate and up to date


Because it's faster.

If the WinAPI docs are solid, you can do things like copy and paste pages of them into Claude and ask a question, rather than manually scanning through them looking for the answer yourself.

Apple's developer documentation is mostly awful - try finding out how to use the sips or sandbox-exec CLI tools for example. LLMs have unlocked those for me.


But you have to check the answer against the documentation anyway, to validate that it's actually correct!

Unless you're just taking the LLM answers at face value?


For most code stuff you don't check the answer against the documentation - you write the code and run it and see if it works.

That's always a better signal than anything that official documentation might tell you.


That seems like a serious error: you have no idea if it works or if it just happens to work.


If you're good at programming you can usually tell exactly why it worked or didn't work. That's how we've all worked before coding agents came along too - you don't blindly assume the snippet you pasted off StackOverflow will work, you try it and poke at it and use it to build a firm mental model of whether it's the right thing or not.


Sure. A big part of how I'd know that the function I'm calling does what I think it does, is by reading the source documentation associated with it

Does it have any threading preconditions? Any weird quirks? Any strange UB? That's stuff you can't find out just by testing. You can ask the LLM, but then you have to read the docs anyway to check its answer


I envy you for the universally high quality of documentation that the code you are working with has!


Because it will take you years to read all the information you can get funneled through an LLM in a day


Except you have no idea if what the LLM is telling you is true

I do a lot of astrophysics. Universally, LLMs are wrong about nearly every astrophysics question I've asked them, even the basic ones, in every model I've ever tested. It's terrifying that people take these at face value.

For research at a PhD level, they have absolutely no idea what's going on. They just make up plausible sounding rubbish


Astrophysicist David Kipping had a podcast episode a month ago reporting that LLMs are working shockingly well for him, as well as for the faculty at the IAS.[1]

It's curious how different people come to very different conclusions about the usefulness of LLMs.

https://youtu.be/PctlBxRh0p4


The problem with these long videos is that what I really want to see is which questions were asked and how accurate the results were.

When I ask LLMs questions I know the answers to, the results are incomplete, inaccurate, or just flat-out wrong much of the time.

The idea that AI is an order of magnitude superior to coders is flat-out wrong as well. I don't know who he's talking to.


Somehow we went from writing software apps and reading API docs to research level astrophysics

Sure, it's not there yet. Give it a few months.


It doesn't even work for basic astrophysics

I asked chatgpt the other day:

"Where did elements heavier than iron come from?"

The answer it gave was totally wrong. It's not a hard question. I asked it the same question again today, and some of it was right (!). This is such a low bar for basic questions.


Yes, you have to be careful, but the LLM will read and process code and documentation literally millions of times faster than you, so it's worth it.


Why does it matter? We have tables of contents, indexes, and references for books and other content. That's a lot of navigational aid. They also help by providing a general overview of the domain.


I mean, is it really that hard to find information in the docs?

Like, if I want to find out what, I don't know, "GetQueuedCompletionStatus" does. I google

GetQueuedCompletionStatus

Find this page:

https://learn.microsoft.com/en-us/windows/win32/api/ioapiset...

Bam, that's the single source of truth right there. Microsoft's docs are pretty great

If I use an LLM, I have to ask it for the documentation about "GetQueuedCompletionStatus". Then I have to double-check its output, because LLMs hallucinate.

Double-checking its output involves googling "GetQueuedCompletionStatus", finding this page:

https://learn.microsoft.com/en-us/windows/win32/api/ioapiset...

And then reading the docs to validate whether or not what it's told me is correct. How does this save me any time?


How about we do the following.

I have not done Win32 programming in 12 years. Maybe you've done it more recently. I'll use an LLM and you look things up manually. We can see who can build a Win32 admin UI that shows a realtime view of every open file by process, with sorting, filtering, and search on both the files and process/command names.

I estimate this will take me 5 minutes. Would you like to race?


This mentality is fundamentally why I think AI is not that useful; it underscores everything that's wrong with software engineering and what makes a very poor-quality senior developer.

I'll write an application without AI that has to be maintained for 5 years with an ever-evolving feature set, and you can write your own with AI, and we'll see which codebase is easiest to maintain, the most productive to add new features to, and has the fewest bugs and best performance.


Sure, let's do it. I am pretty confident mine will be more maintainable, because I am an extremely good software engineer, AI is a powerful tool, and I use AI very effectively.

I would literally claim that with AI I can work faster and produce higher quality output than any other software engineer who is not using AI. Soon that will be true for all software engineers using AI.


I'm curious, have you ever worked on a single software project for more than 5 years?


Slack is $45/user/month

Soon you'll be able to write, host, and maintain a fully customizable version for probably 20k/month

If you have a lot of employees this makes sense


If people wanted to do this they'd be self-hosting XMPP servers already. No one wants to write and maintain the code and infra for things like this; you are grossly underestimating the effort involved here.


No no it makes sense. Hypothetical scenario: I, a high-level employee at a company just convinced my boss (or did we convince each other?) to spend $30k/year on Claude/Codex enterprise licenses. So far, the productivity gains have not been there and we're starting to sweat. So, I propose to my boss to build an internal version of $SaaS and call it a win. Galaxy brain.

Now some IC somewhere in the company who is at the end of his rope and sees the company as a dead end, sees an opportunity. Why not advocate for this project, get real experience building something greenfield in a brand new domain, strengthen their own resume, and finally have a way out of their rut? It's not like they're gonna stick around maintaining what they built.


Most people using Slack, Teams etc. and especially those making purchase decisions have no idea what XMPP is and what it's capable of. Heck, even Facebook used to federate XMPP until they decided to go proprietary. Not in the interest of their users, but because it makes the most money for its shareholders.


No, they wouldn't have. Nobody will write this; AI will write the entire thing. You don't need many people to maintain it.


We've had XMPP for decades; the issue is that companies don't want to be responsible for it, not that they can't do it.


What features are you using that the $18/user/month plan doesn't cover?


I don't pay for Slack any more; I just picked the price of their enterprise plan. Large users probably get big discounts, but it doesn't matter: the cutoff where this makes sense financially is probably around 4000 employees, even at $10/seat.
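The cutoff arithmetic is worth making explicit. A quick back-of-envelope sketch, where the $20k/month figure and seat prices are the thread's illustrative assumptions, not real pricing data:

```python
def break_even_seats(monthly_run_cost: float, price_per_seat: float) -> float:
    """Headcount at which a flat monthly cost for a self-built tool
    equals the per-seat SaaS bill."""
    return monthly_run_cost / price_per_seat

# Assumed $20k/month to build and host vs Slack-style per-seat pricing:
print(break_even_seats(20_000, 45))  # ~444 seats at $45/user/month
print(break_even_seats(20_000, 10))  # 2000 seats at a discounted $10
```

Note the parent's 4000-employee cutoff at $10/seat would correspond to assuming roughly $40k/month in total build-and-maintain cost, about double the hosting figure quoted earlier in the thread.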


The article mentions some sort of legal-audit features that the author believes any reasonably sized company needs. Those features are apparently only on the expensive plan.


The part about pay is wrong, it's not comparing apples to apples.

I've been a staff engineer at Google and other companies, I have been an EM and a very senior IC at big and small companies.

If you're a very good IC, you can make a lot at a small number of good companies

If you're a relatively worse manager you can make a similar amount at many other companies

So the decision tree I would use is (focusing exclusively on compensation): if you're a very good IC, go somewhere willing to pay you >$1M/year. If you can't get that, you should be a manager.


And yet they didn't do that!

Really makes you think about what makes products good


Anthropic doesn't forbid the DoW from using the models for foreign surveillance. It's not about harming others; it's about doing what is best for humanity in the long run, all things considered. I personally do not believe that foreign surveillance is automatically harmful, and I'm fine with our military doing it.


If we are talking about what's best for humanity in the long run, thinking about human values in general: what makes American citizens uniquely deserving of privacy rights in a way that citizens of other countries are not?

Snowden revealed that every single call in the Bahamas was being monitored by the NSA [1]. That was in 2013. How would this be any worse if it were US citizens instead?

(Note, I myself am not an US citizen)

Anyway, regardless of that, the established practice is for the five eyes countries to spy on each other and share their results. This means that the UK can spy on US citizens, the US can spy on UK citizens, and through intelligence sharing they effectively spy on their own citizens. That's what supporting "foreign surveillance" will buy you. That was also revealed in 2013 by Snowden [2]

[1] https://theintercept.com/2014/05/19/data-pirates-caribbean-n...

[2] https://www.theguardian.com/world/2013/dec/02/nsa-files-spyi...


This isn't about privacy rights, it's about war

I'm not suggesting that Anthropics models should be used by foreign governments for domestic surveillance

I'm not worried about foreign governments spying on Americans, as long as the US government is aligned. I'm worried about my own government becoming misaligned


But the US doesn't perform mass surveillance on foreign people only when it's at war. Nor does it perform mass surveillance only on adversarial nations it could potentially be at war with.

This absolutely is about privacy.

> I'm not worried about foreign governments spying on Americans, as long as the US government is aligned. I'm worried about my own government becoming misaligned

Those foreign governments are spying on Americans and then sharing the results with the US government because the US government is misaligned with the interests of its own people


The United States gets to spy on countries when it's in the interest of the United States to do so. This isn't complicated. We get to spy on quite literally whoever we want abroad, within various legal and well-established parameters, at the risk of offending the governments of the spied-on. "It's only okay for the United States to spy on foreigners when they're in a shooting war with them" is silly.


So you are saying it's OK to spy on others because the US says it's fine?

Maybe the others here are not happy that this company is supporting a fascist government in committing international aggression against other countries, which has been condemned by the majority of countries around the world.


[flagged]


That is great, and I know this is not some crappy Marvel comic. I'm talking as a European who will be spied upon with this tooling, because we are not domestic. He seems perfectly fine with that, as well as with using it in other military conflicts that have been caused by this government's greed.


The contract included the agreement, and the government is now trying to change the contract, hence the disagreement


When did he say this?


This is a lie. I was there and this is not at all what happened.


I appreciate the voice of experience but if you're going to post a comment like this, could you please share some of that experience so we know at least some of what did happen?

Otherwise it comes across as a drive-by swipe, which is a human reaction when you know that something on the internet is wrong, but which degrades the threads, partly because of the example it sets for others. The life of this community depends on knowledgeable people sharing some of what they know, so the rest of us can learn.

https://news.ycombinator.com/newsguidelines.html


Fair and sorry about that

Specifically what happened, and I think this is all public now, is that prior to 2016 journalists and news organizations argued that Facebook was demoting news for various reasons. In reality it wasn't very engaging, so it was automatically demoted. Facebook promised to boost news more in early 2016, but largely as a result of worse engagement and negative experiences (arguing in comments), it started ranking news worse than other content. This all happened in 2016, months before the general election.

And while Russia did run ads, they were mostly not political, and the political content they did run had very little engagement. Russia mostly focused on conspiracy theories and undermining American institutions. Facebook was aware of this in 2016 and certainly did not contribute to it intentionally, and I don't believe even by accident through some kind of misguided A/B testing.

The reason Facebook got worse for younger people is because younger people stopped posting.


Thank you! Both for the kind response and the informative reply :)


What's the best case scenario for the positive impact of this regulation?

