> Dumb question, why do “sensitive” spots on the body need more nerves? Couldn’t you just have the normal touch-sensing nerves and map signals from specific spots on the body to stronger/pleasurable qualia in the brain?
Think of a television. What gives you a better picture, quadrupling the number of pixels or making the existing pixels 4x as intense?
> Historically, what makes people with capital able to turn things into more capital is its ability to buy someone's time and labor.
You forgot to include resources:
What makes people with capital able to turn things into more capital is their ability to buy labor and resources. If people with more capital can generate capital faster than people with less capital, then (unless they are constrained, for example, by law or conscience) the people with the most capital will eventually own effectively all scarce resources, such as land. And that's likely to be a problem for everyone else.
AI doesn't change the equation; it makes the equation more brutal for people who don't have capital.
If you don't have capital, the only way to get it is by trading resources or labor for it. Most poor people don't have resources, but they do have the ability to do labor that's valued. But AI is a substitute for labor. And as AI gets better, the value of many kinds of labor will go towards zero.
If it was hard for poor people to escape poverty in the past, it's going to be even harder with AI. Unless we change something about the structure of society to ensure that the benefits of AI are shared with poor people.
Ok, I'm following you. You're saying because labor gets cheaper it will be harder to make a living providing labor. Not disagreeing, but I wonder how much weight to give this argument. History shows a precedent of productivity revolutions changing the workforce, but not eliminating it, and lifting the quality of life of the population overall (though it does also create problems). Mixed bag with the arc bending towards betterment for all. You could argue that this moment is unprecedented in history, but unless the human spirit changes, for better or worse, we will adapt as we always have, rich and poor alike.
If the value of many kinds of labor goes towards zero, those benefits also go to the poor. ChatGPT has a free tier. The method of escaping poverty will still be the same. Grow yourself. Provide value to your community.
Entire classes of workers have been put in the poorhouse on a near-permanent basis due to technological changes, many times during the past two centuries of industrial civilization. Without systemic structural changes to support the workforce, this will happen (and is already happening) with AI.
This is exactly it. If you don't connect it, it's a dumb stove like any other.
I was extremely dubious about connecting it, but I decided to do it anyway and see whether it's worth it. So far I've noticed two things:
* It sets the clock with NTP and follows daylight saving time. This actually might be worth it; I'm one of those people who otherwise just lives with clocks set an hour wrong for half the year. The odd thing, though, is that this isn't default behavior; I had to install an add-on in the mobile app.
* It gives me a mobile notification when the oven gets to temp. Not really compelling.
So depending on how you feel about clocks, feel free to skip the wifi setup.
The actual study (1) is observational and makes no causal claim, only that there exists a statistical association between caffeine consumption and dementia. Nevertheless, people are apt to misinterpret the finding as “caffeine consumption prevents dementia”:
Caffeine -> Dementia
However, the two variables would be correlated if the causal arrow were reversed and dementia influenced the propensity to consume caffeine:
Caffeine <- Dementia
And we would also observe the correlation if a person's general health influenced both the propensity to consume caffeine and dementia risk:
Caffeine <- General Health -> Dementia
Since caffeine is a stressor, we would expect to see reduced consumption among people with reduced general health. But we would also expect increased dementia among that same group. So the relationships in the diagram immediately above are plausible and would give rise to a spurious correlation between caffeine consumption and dementia risk.
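A tiny simulation can make this concrete. This is purely illustrative (the variable names and effect sizes are made up, not taken from the study): there is no direct causal link between caffeine and dementia in the model, yet the common cause alone produces a strong correlation.

```python
import random
import statistics

random.seed(0)

# Structure: Caffeine <- Health -> Dementia, with NO direct
# Caffeine -> Dementia edge.
n = 10_000
health = [random.gauss(0, 1) for _ in range(n)]
# Worse health -> less caffeine (a stressor people avoid when unwell).
caffeine = [h + random.gauss(0, 1) for h in health]
# Worse health -> higher dementia risk.
dementia = [-h + random.gauss(0, 1) for h in health]

def corr(x, y):
    """Pearson correlation using only the standard library."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (statistics.pstdev(x) * statistics.pstdev(y))

r = corr(caffeine, dementia)
print(f"correlation: {r:.2f}")  # strongly negative despite no direct causal link
```

A naive reading of this data would conclude "caffeine protects against dementia," when the entire association flows through general health.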
While studies can try to “control for confounding factors,” it’s easy to overlook or misunderstand the true causal relationships in play, causing spurious correlations. In other words, you can create false “causal” relationships through imperfect identification and control of confounding variables.
In short, take this article’s claims with a suitable dose of suspicion.
And "However, the two variables would be correlated if the causal arrow were reversed" is missing "also", almost suggesting that the article gets it wrong and the two variables are not correlated because of the placement of the causal arrow...
>we would expect to see reduced consumption among people with reduced general health.
I would not expect this at all, as it goes against my real-world observations of people with poor general health consuming caffeine in just as high doses. Some of the same causal factors behind poor general health, like long work hours and long commutes, can lead to increased caffeine consumption.
> Hell, with commercial printers from the likes of Konica Minolta, the print quality for text is better than offset print.
Can you link to some high-resolution comparison scans to support this claim? I find it hard to believe that any toner-based process is going to result in a cleaner page and crisper text than a properly made-ready offset press run.
Because it was. Was probably this person's 10th interview of the day. They probably only need the simplest of infractions to weed someone out given the absurd volume of applications they receive.
> Because [arriving 15 minutes late to a 30-minute interview] was [nothing].
I'd expect over 95% of both interviewers and interviewees to say that arriving 15 minutes late to a 30-minute interview is very much not nothing; it's a serious breach of what is expected – on both sides of the interview.
If you show up late for an interview, no matter which side of the table you're on, you ought to apologize and, if you're more than a few minutes late, have a good explanation. To do anything less signals that you are an unreliable person. And, when you are representing a company, it makes the company look like it's run by people who don't even understand how to do something as simple as show up on time. It suggests that one of the company's unspoken core values is Dysfunction.
If I had shown up fifteen minutes late for the interview, they likely wouldn't have made an offer, and they likely would have called it out during the interview. No one seems to call out companies when they do this shit.
They wouldn’t care if I had a really bad day beforehand, and they certainly wouldn’t assume that I had a good excuse for it.
I agree with the sentiment that gratuitous happy-talk adds noise to what ought to be clear, bottom-line-up-front engineering communications. But the recipients of those communications are people, and most people have feelings. So a good engineer ought to optimize those communications for overall success, and that means treating the intended recipients as if they matter. Some human-level communication is usually beneficial.
So, to use an example from the original post:
> "I hope this is okay to bring up and sorry for the long message, I just wanted to flag that I've been looking at the latency numbers and I'm not totally sure but it seems like there might be an issue with the caching layer?
There’s a lot of noise in this message. It’s noise because it doesn’t communicate useful engineering information, nor does it show you actually care about the recipients.
Here’s the original post’s suggested rewrite:
> The caching layer is causing a 400ms overhead on cold requests. Here's the trace.
This version communicates some of the essential engineering information, but it loses the important information about uncertainty in the diagnosis. It also lacks any useful human-to-human information.
I’d suggest something like this:
> *Heads up:* *It looks like* the caching layer is causing a 400ms overhead on cold requests. Here's the trace. *Let me know how I can help. Thanks!*
My changes are in italics. Breaking them down:
“Heads up” provides engineering context and human-to-human information: You are trying to help the recipients by alerting them to something they care about.
“It looks like” concisely signals that you have a good faith belief in your diagnosis but are not certain.
“Let me know how I can help” makes clear that you share the recipients’ interest in solving the problem and are not just dumping it at their feet and turning your back on them. You and they are on the same team.
“Thanks!” shows your sincere appreciation to the recipients for looking into the issue. It’s a tiny contribution of emotional fuel from you to them to give them a boost after receiving what might be disappointing news.
In sum, strip the noise and concisely communicate what is important, both engineering information and human information.
I agree with your point about human level communication and treating the recipients like they matter. I generally tend to prefer communication that is more on the blunt/direct side, but if there's one thing about communication that I've learned throughout my career, it is that the people who do best are adept at communicating well with a wide variety of people with different communication styles and preferences.
The people who try to force everyone else to fit into a specific bucket of communication style, or who refuse to deviate from their own strict communication preferences no matter the audience, those are the people I see struggle to find success relative to their peers.
I agree it makes sense to indicate that it is not certain, by adding "it looks like" (or "it seems like", or other wording that isn't too long; as another comment mentions, "looks" can sometimes be wrong). The other stuff might be unnecessary, although it may depend on whether it is implied or expected in context (in many contexts I would expect it to be unnecessary; another comment mentions how it can even be wrong sometimes).
(Your message is better than the one with a lot of noise, though.)
> “It looks like” concisely signals that you have a good faith belief in your diagnosis but are not certain.
A lot of people never get past this level of sureness, so the signal is lost (or at least compressed). You can ask them for a number from a digital display and they’ll say it “looks like 54”.
One way to reconcile the idea that these messages have signal (which I agree with) with what the article says is that the article is declaring bankruptcy on additional context. The extra text has so little value that it's worth removing as a rule.
"seems to be causing" is also an excellent alternative to "it looks like" that doesn't hinge on visual-sensory primacy, and tends to translate slightly less ambiguously across language-familiarity boundaries due to 'seems' having more precise meaning re: uncertainty than 'looks', 'feels', 'sounds'. Or you could abbreviate to "could be" / "may be" / "might be" (non-high certainty), "is probably" (high certainty) if that sort of nuance is your thing. Noteworthy point: it is neurotypical to treat "is" as 100% certain rather than 99.9% certain when someone says it confidently, but as 80% certain rather than 99.9% certain when someone says it uncertainly, based solely on non-verbal nuance; this can be infuriating and I tend to recommend saying "I am certain" at 99.9% in combination with courteous handling of the slight but eternal possibility of being wrong.
"Let me know how I can help" should not be taken for granted as a thing to be offered, though. Some teams have very strict divisions of labor. Some workers (especially anyone whose duties are 'monitor and report' rather than 'creatively solve') are not overtime-exempt and cannot volunteer their time. Some workers (especially anyone who's reached a high-capability tech position from the ground up) are flooded with opportunities to do less of their own job and more of everyone else's and must not preemptively commit their time to an open-ended offer of 'help'. A more focused phrase such as "Let me know if you have questions, need more evidence, etc." provides a layer of defense against that without implicitly refusing help if it is requested.
"Thanks!" is one of the most mocked request-terminators I've seen in twenty years of business. It is widely abused as "have fun storming the castle, i'm out micdrop" rather than as a sincere expression of gratitude that contains any actual statement of why you're grateful. "Thank you for doing the job the company paid you to do" sounds ridiculous when you say it out loud, even to neurotypicals. Tell people thank you with more than one word if you mean it, and tell them what you're thanking them for, and consider thanking them for what they did rather than lobbing it like a grenade strapped to a problem. If you hand them a problem and they say "got it, I'll look into it", saying "Thanks." to that is completely fine; it serves the exact purpose of courtesy described, and also doubles as a positive-handoff "your plane" reply concluding the problem handoff, so that you can safely mark it as delegated, they can safely assume you didn't miss their message and are continuing to work it, etc.
I feel like this is why the communication medium matters so much to how things are perceived. There is this extra layer of nuance and detail that is critical in email/chat and must be accounted for. Like the "Thanks!" thing. It's darn near impossible to hear the tone of someone's voice in email. So for me, the "Thanks!" ending kinda defaults to sounding like "Ha ha! It's your problem now!" in my head. Which may or may not be completely wrong.
I don't think this is a good interview question, but I do think it is interesting as a thought exercise.
I bet that perhaps 25% of candidates could actually answer the question correctly, even if they didn't know anything about Schelling points. It could also lead to some nice discussions about how to solve an open-ended problem, probability distributions, strategies for maximizing payoffs when making decisions in the face of uncertainty, and so on. The question is so bad, it's actually kind of good.
I'm sorry you had such a bad interviewing experience. You asked for feedback in your blog post, and since your blog doesn't allow comments, I hope you won't mind my responding here.
You wrote something that I think is untrue of most tech companies, so I'd like to discuss it:
> [As I and a friend spoke], I realised something: Three technical interviews went well, I was feeling confident going into the behavioural interview... This means that I'm heading into behavioural and HR contract stages with confidence in my performance thus far and my ability to excel at the role. And it means that I have the upper hand in salary and benefit negotiation. This is horrible for them. THEY NEED to shut me down and bring me down a few rungs before this step. And to edge me for 2 weeks (and counting...) after the supposed final round before I hear anything back.
I suspect that approximately 0% of top tech firms are trying to tank your interview as a comp-negotiating tactic. For most of these firms, the biggest problem is finding people they want to hire. To find qualified people, they need to measure what applicants, like you, can actually do. And they can't get a good measurement when they sabotage your performance. Further, if they decide to hire you, they need you to feel good about the company, not hate it because of how you were maltreated. They want you to say yes to their offer, not rage quit the hiring pipeline.
I'm not saying that there aren't bad companies or bad interviewers out there. Nor am I saying that you can't get into an interview where the other person is actually out to get you. It happens. Maybe it happened to you.
What I'm trying to say is that if your mental model of the hiring process is that the company is probably going to sabotage your end-game interviews, you're probably going to be wrong most of the time and make some bad decisions.
> What do you think? Was that a normal interview that I should have expected? I am in the wrong by posting this? Should I nuke my blog?
Here's what I think. If you have a public blog, it's fair game at an interview. If you write mostly about data science stuff but you apply for a software engineering job, you ought to be prepared to explain the contrast. Understand that, for most top firms, hiring good people and getting them to stick is hard. Most employers will want some assurance that you are serious about the position you're applying for. If you send signals that you might want some other position, be prepared to get asked about those signals.
And you got asked about those signals:
> "How do we know we won't hire you and you'll try to transition to a data scientist?"
You ought to be prepared for questions like these. For example, most interviewers would probably be satisfied with answers like these:
That's a great question. Data science is something I do for fun in my spare time. I don't want it to become my day job. I love software engineering and that's what I want to focus my career on.
Or:
That's an important question. Thanks for asking about it. I try to stay abreast of important trends in industry, and when AI and data became important in some of my past work, I put in some personal time to learn more about them. When I learn things, I often write about them on my blog to help me remember. My blog's just a learning tool, a memory aid, right? It's not a barometer of my career interests. If you want to know what my career interests are, let me be clear: I want to write software. Five years from now, I still want to be a software engineer.
> Should I nuke my blog?
I'd say no. But you should read your blog from the perspective of a firm that's considering you for a job and be prepared to explain away anything they might have concerns about.
That's just my two cents. If you find anything in my comment helpful, great. If not, feel free to dismiss everything I've written.
> Here's what I think. If you have a public blog, it's fair game at an interview. If you write mostly about data science stuff but you apply for a software engineering job, you ought to be prepared to explain the contrast. Understand that, for most top firms, hiring good people and getting them to stick is hard. Most employers will want some assurance that you are serious about the position you're applying for. If you send signals that you might want some other position, be prepared to get asked about those signals.
This is kind of absurd. Could you imagine a registered nurse being asked to explain why they have a blog about astronomy and not nursing?
"What do you mean you don't write about dressing wounds in your spare time? How much could you really know about it then?"
"Managing Type 2 Diabetes isn't interesting enough for you to blog about? I'll have you know most of the patients that you would be dealing with at this long-term care facility have T2D. I'm skeptical that you'd be able to care for them."
Why do we allow this kind of BS in the tech industry? When's the last time a nurse did a whiteboard interview?
> Could you imagine a registered nurse being asked to explain why they have a blog about astronomy and not nursing?
That hits pretty close to home... I'm a doctor who has a small blog about the implementation details of the lisp I made.
> Managing Type 2 Diabetes isn't interesting enough for you to blog about?
If someone asked me this point blank I think I'd laugh out loud. It's interesting enough for me to keep up with the latest evidence, thanks.
> When's the last time a nurse did a whiteboard interview?
To be fair, healthcare professionals have some pretty gruelling training and difficult licensing examinations. Some amount of preselection is taking place. Nobody needs a license to write software.
> mental model of the hiring process is that the company is probably going to sabotage your end-game interviews
I definitely agree and it is not a mental model that I carry into any interview, I have good intentions and I'm super friendly! This was only a tiny (disillusioned) post-interview reflection. I would say most interviews especially with engineers have gone well but there has absolutely been a vibe shift in the past year.
You can tell teams are a lot more risk averse when it comes to hiring. The promise of a fabled 10x engineer on the horizon, paired with SWE automation devaluing existing talent, has meant they will make you jump through 10 more hoops, and even then the decision is scrutinised. Understandably, hiring is an expensive process (whether it succeeds or not).
> Most employers will want some assurance that you are serious about the position you're applying for.
This is also a reflection of the job market. If it were balanced, this notion would not exist. It's become a numbers game: automated screening + AI has meant candidates need to send out 100s of applications, often with automation on their end too. On the other side, every job likely receives 1000s of applications, especially with stupid things like "L*nkedIn Easy Apply". Me personally, I would not apply for a role I am not committed to taking, and I especially would not have gone through FOUR stages for fun. The first interview should be plenty of screening for both parties!!! Alas.
I appreciate you taking the time to respond and thank you for your well wishes!
> the first interview should be plenty of screening for both parties
Most good companies will interview you multiple times simply because they understand that individual interviewers can be biased. If five different people all say hire this guy, that's a much more trustworthy signal than if one person says the same thing.
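A quick toy calculation (all numbers assumed for illustration) shows why unanimous approval from several independent interviewers is the stronger signal: false positives shrink geometrically with each round, though true positives shrink too, which is part of the cost of the extra rounds.

```python
# Toy model with assumed numbers: each of five interviewers independently
# approves a bad candidate 20% of the time and a good one 90% of the time.
p_bad_approved = 0.2
p_good_approved = 0.9
rounds = 5

# Requiring unanimous approval across independent interviewers:
p_bad_hired = p_bad_approved ** rounds    # 0.2^5 = 0.00032
p_good_hired = p_good_approved ** rounds  # 0.9^5 is roughly 0.59

print(f"bad candidate hired:  {p_bad_hired:.5f}")
print(f"good candidate hired: {p_good_hired:.2f}")
```

The independence assumption is doing a lot of work here (interviewers who talk to each other, or who all share the same bias, are correlated), but it captures why one enthusiastic interviewer is a much weaker signal than five.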
> Here's what I think. If you have a public blog, it's fair game at an interview. If you write mostly about data science stuff but you apply for a software engineering job, you ought to be prepared to explain the contrast. Understand that, for most top firms, hiring good people and getting them to stick is hard. Most employers will want some assurance that you are serious about the position you're applying for. If you send signals that you might want some other position, be prepared to get asked about those signals.
Great! Let me trawl through all candidates' HN and social media comments, and ask why they spend more time talking about politics, movies, science fiction, than CRUD SW development. They need to justify it!
That's certainly one way of interpreting what I wrote.
My point was that potential employers are not blind to what you put out in the public space. If what you put out would cause a reasonable employer to have questions about your viability as a candidate, you ought to be prepared for those questions. If you're lucky, they'll ask you those questions and you can dispel their concerns.
>> For most of these firms, the biggest problem is finding people they want to hire.
While the firm wants to hire someone, the hiring pipeline/process is made up of individuals that have their own individual preferences on who should get hired. One person can certainly sabotage a candidate, and the further into the process the greater their incentive.
> Getting a lot of applications that don't meet your standard doesn't force you to raise you[r] bar. You still just need someone who meets your standard.
I'm not sure that first sentence is true. Let me play Devil's advocate:
What's the primary cause of not being able to find someone who meets your standard when you already get lots of applications? It's that your hiring process is bogged down by the masses of unwanted candidates you must evaluate to find the few wanted candidates in the crowd of applicants. And what's the fix? It's better screening. Which is raising your bar, isn't it? Even if it's only to add cargo-cult screens to your bar, it's making the bar more selective, isn't it? Fewer people clear it, right?
Arbitrary filtering of candidates doesn't reduce the effort that it takes. Let's say 1 out of 1000 of the candidates you see is what you need. The total amount of effort to find the right candidate is still the same. But throwing out half the resumes just doubles the amount of time until you find the candidate you need (you just spread lower effort over a longer time).
On the other hand, if you "raise your bar" (let's say you do so by some method that makes it twice as expensive to judge a candidate and twice as likely to reject a candidate that would fit what you need, i.e., doubles your false-negative rate, but cuts down the number of applications by 10x, so that now 1 out of 100 candidates is what you need, which isn't that far off the mark for certain kinds of things), you cut down the effort (and time) you need to spend on finding a candidate by a factor of more than two.
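Those numbers can be turned into a back-of-envelope calculation. Everything here is an assumption for illustration (including the 10% baseline false-negative rate, which the comment doesn't specify):

```python
# Back-of-envelope hiring-effort model. Evaluating one candidate costs
# `cost` units; a candidate fits with probability `fit_rate`; a fitting
# candidate is wrongly rejected with probability `false_negative_rate`.

def expected_effort(fit_rate, cost, false_negative_rate):
    # Chance that any given evaluated candidate is actually hired:
    p_hire = fit_rate * (1 - false_negative_rate)
    # Evaluations are independent trials, so on average you look at
    # 1 / p_hire candidates before a hire; total effort is cost per trial
    # times expected number of trials.
    return cost / p_hire

# No extra screen: 1 in 1000 fits, cheap evaluation, assumed 10% false negatives.
baseline = expected_effort(1 / 1000, 1.0, 0.10)
# Stricter screen: evaluation twice as costly, false negatives doubled,
# but the pool shrinks 10x so 1 in 100 of the remainder fits.
screened = expected_effort(1 / 100, 2.0, 0.20)

print(f"baseline effort: {baseline:.0f} units")
print(f"screened effort: {screened:.0f} units")
```

Under these assumptions the stricter screen wins by roughly a factor of four, which is the shape of the argument above: per-candidate cost goes up, but the number of candidates you must wade through falls faster.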
EDIT: On reflection I think we're mainly talking past each other. You are thinking of a scenario where all stages take roughly the same amount of effort/time, whereas tmorel and I are thinking of a scenario where different stages take different amounts of effort/time. If you "raise the bar" on the stages that take less amount of effort/time (assuming that those stages still have some amount of selection usefulness) then you will reduce the overall amount of time/energy spent on hiring someone that meets your final bar.
I wasn't suggesting arbitrarily removing candidates was a good idea, but simply responding to their specific devil's advocate example of applying "cargo-cult screens", which would presumably be arbitrary.
I wasn't suggesting arbitrary filtering. That's a straw-man interpretation of what I wrote. Even if a firm cargo-cult copies the screening practices of the big-tech firms, they are going to be much better at selecting good hires than arbitrary filtering would.