Hacker News | andmarios's comments

This raises the question: Are mass layoffs less frequent than a company's MS administrator account getting hacked?

Question the concept of mass layoffs?

A quick search for their developer support revealed they accept submissions via GitHub issues for their API. Perhaps try there? https://developer.viva.com/get-support/

Sometimes you have to get creative to reach out to a company's engineering department...


The model seems to have some problems; it just failed to create a markdown table with only four rows. The header row had two columns, yet in two of the three data rows, Opus 4.6 tried to add a third column. I had to tell it more than once to get it fixed...

This never happened with Opus 4.5 despite a lot of usage.


Disclaimer: I am part of the Lenses team, but I thought this might be interesting since Kafka replication is a pain point we keep hearing about.


I guess it's latency and data residency.


Correct! Re: latency, as I just noted elsewhere, if you run your prod database using Crunchy Bridge or Supabase or another big provider (which you absolutely should for prod), that typically means that your db will be running within an AWS region. You would, in most cases, need to run your compute in the same region. So yeah, at that point, Hetzner would be out.


It's the other way round: at least some, if not all, of these screens were in KDE before they appeared in Windows. In general, KDE tends to be widely copied. Even macOS has borrowed a lot from KDE.

It has been over 10 years since I stopped being a KDE fanboy and became just a regular fan, but I remember that during my flame-war era, many features from KDE would often appear in Mac OS and Windows and their most popular applications (such as iTunes).

These days I don't care so much; I use KDE and I'm too old to switch.


I would argue it's the other way round. :)

Even GIMP, the one GTK app I would never expect to be surpassed by a KDE app, is being outdone by Krita these days.


To be fair, once your data has been stolen, it doesn't make sense to engage with the hackers. There is no way to guarantee that the stolen data won't be used.

What you must do immediately is notify the affected customers, bring down or lock the affected services, and contact the authorities.


I'm a customer and the first I'm hearing about this is from HN.


There is no guarantee anywhere (strictly speaking, including in the legal market), but that doesn't mean that paying has no effect on the probability of the data being dumped. Notification is an independent requirement.


There is an interesting dynamic/risk in play:

If an attacker makes an extortion threat but then still follows through on the release/damage after being paid, then people are not incentivized to engage with you and will go into attack mode right away, making it riskier for you.

HOWEVER, if the attacker makes the extortion threat, takes payment, honors the agreement, and ends the transaction, then parties are more inclined to just pay to make the problem go away. They know that the upfront price is the full cost of the problem.

I've seen that there are 'ethical attackers' out there that move on after an attack, but you never know what kind you're dealing with :-/ "Never negotiate...."


Then the hacker org spins up a new name (like a shitty construction LLC) and robs the next guy.

Reputation isn't all that useful for extortion.

Running all your crimes as the "Wet Bandits" makes it much easier for law enforcement if they do catch up with you.


There's no way to guarantee that I won't get in a car accident. So I pay for insurance. I may never need it, it may never come in handy, but it still makes sense to carry the policy.


Nginx (and Apache, etc.) is not just a web server; it is also a reverse proxy, a TLS termination proxy, a load balancer, and so on.

The key service here is "TLS termination proxy", so being able to issue certificates automatically was pretty high on the wish list.
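For illustration, a minimal sketch of nginx acting as a TLS termination proxy (the hostname, backend port, and certificate paths are placeholders, not taken from the thread). Note that stock nginx does not obtain the certificates itself; an external tool such as certbot has to issue and renew them, which is exactly why automatic issuance ranks high on the wish list:

```nginx
# Minimal TLS termination sketch (all names and paths are placeholders):
server {
    listen 443 ssl;
    server_name example.com;

    # Stock nginx does not issue these files; an external ACME client
    # (e.g. certbot) has to obtain and renew them:
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        # TLS ends here; the backend sees plain HTTP:
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```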


The magic of bisect is that you rule out half of your remaining commits every time you run it. So even if you have 1000 commits, it takes at most 10 runs. An n-bisect wouldn't be that much faster; it could even be slower, because you will not always be able to rule out half your commits with each test.
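The "at most 10 runs" figure is just ceil(log2(1000)); a toy sketch of the arithmetic (the function name is mine, this is not part of git):

```python
def bisect_runs(n_commits):
    """Worst-case number of test runs git bisect needs:
    each run keeps roughly half of the remaining candidates."""
    runs = 0
    remaining = n_commits
    while remaining > 1:
        remaining = (remaining + 1) // 2  # ceil: half the candidates survive
        runs += 1
    return runs

print(bisect_runs(1000))  # 10 runs, as claimed above
```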


The idea is, suppose I did a trisect, splitting the range [start,end) into [start,A), [A,B), and [B,end). At each step, I test commits A and B in parallel. If both A and B are bad, I continue with [start,A). If A is good and B is bad, I continue with [A,B). If both A and B are good, I continue with [B,end).

This lets me rule out two thirds of the commits, in the same time that an ordinary bisect would have ruled out half. (I'm assuming that the tests don't benefit from having additional cores available.) In general, for an n-sect, you'd test n - 1 commits in parallel, and divide the number of remaining commits by n each time.
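On a strictly linear history, that scheme can be sketched in plain Python (the function name and the monotone `is_bad` predicate are mine, a list of integers stands in for commits, and the n-1 probe tests per round would of course be dispatched in parallel in a real tool):

```python
def nsect(is_bad, commits, n=3):
    """Find the first bad commit in a linear history using n-1 probes
    per round. Invariant: the first bad index lies in [lo, hi]."""
    lo, hi = 0, len(commits) - 1
    rounds = 0
    while lo < hi:
        width = hi - lo
        # n-1 probes splitting [lo, hi] into n roughly equal parts
        probes = sorted({lo + width * k // n for k in range(1, n)})
        rounds += 1
        results = {p: is_bad(commits[p]) for p in probes}  # parallel in practice
        for p in probes:
            if results[p]:
                hi = min(hi, p)      # first bad commit is at or before p
            else:
                lo = max(lo, p + 1)  # first bad commit is after p
    return commits[lo], rounds

# 27 commits, regression introduced at commit 20:
print(nsect(lambda c: c >= 20, list(range(27)), n=3))  # (20, 3)
print(nsect(lambda c: c >= 20, list(range(27)), n=2))  # (20, 4): plain bisect needs more rounds
```

With 27 commits a trisect finishes in ceil(log3(27)) = 3 rounds, versus up to 5 for an ordinary bisect.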


Have you ever seen this implemented somewhere? Interesting idea.


No, unfortunately not. If your history is strictly linear, you could probably hack together something relatively simple on top of git rev-list. But git bisect does all sorts of magic to deal with merge commits and other funny situations, and generalizing that to an n-sect would take a fair bit of work.


Yes, you'd need 4x parallelism for a 2x speedup (16x for 4x, etc). But there's plenty of situations where that would be practical and worthwhile (think a build and test cycle that takes ~1 hour each and can't be meaningfully parallelised further).


Yes, but I could also see the case where you have 10 commits to check and each bisect step takes 20 minutes, so finding the problem takes up to 80 minutes (four steps).

Or 20 minutes flat with a 10-sect, since all nine intermediate commits get tested in one parallel round.

