A quick search for their developer support revealed they accept submissions via GitHub issues for their API. Perhaps try there? https://developer.viva.com/get-support/
Sometimes you have to get creative to reach out to a company's engineering department...
The model seems to have some problems; it just failed to create a markdown table with only 4 rows. The header row had 2 columns, yet in 2 of the 3 data rows, Opus 4.6 tried to add a 3rd column. I had to tell it more than once before it got the table fixed...
This never happened with Opus 4.5 despite a lot of usage.
Correct! Re: latency, as I just noted elsewhere, if you run your prod database using Crunchy Bridge or Supabase or another big provider (which you absolutely should for prod), that typically means that your db will be running within an AWS region. You would, in most cases, need to run your compute in the same region. So yeah, at that point, Hetzner would be out.
It's the other way round: at least some, if not all, of these screens were in KDE before they were released in Windows.
In general, KDE tends to be widely copied. Even macOS has borrowed a lot from KDE.
It has been over 10 years since I stopped being a KDE fanboy and became just a regular fan, but I remember that during my flame-war era, features from KDE would often appear later in Mac OS and Windows and their most popular applications (such as iTunes).
These days I don't care so much; I use KDE and I'm too old to switch.
To be fair, once your data has been stolen, it doesn't make sense to engage with the hackers. There is no way to guarantee that the stolen data won't be used.
What you must do immediately is notify the affected customers, bring down or lock the affected services, and contact the authorities.
There is no guarantee anywhere (strictly speaking, including in the legal market), but that doesn't mean that paying has no effect on the probability of the data being dumped.
Notification is an independent requirement.
If an attacker makes an extortion threat but then still follows through on the release/damage after being paid, then people are not incentivized to engage with them, and will go into attack mode right away, making things riskier for the attacker.
HOWEVER, if the attacker makes the extortion threat, takes payment, honors the agreement, and ends the transaction, then parties are more inclined to just pay to make the problem go away. They know that the upfront price is the full cost of the problem.
I've seen that there are 'ethical attackers' out there that move on after an attack, but you never know what kind you're dealing with :-/ "Never negotiate...."
There's no way to guarantee that I won't get in a car accident. So I pay for insurance. I may never need it, it may never come in handy, but it still makes sense to carry the policy.
The magic of bisect is that you rule out half of your remaining commits every time you run it, so even if you have 1000 commits, it takes at most 10 runs. An n-bisect wouldn't be that much faster; it could even be slower, because you will not always be able to rule out half your commits.
The idea is, suppose I did a trisect, splitting the range [start,end) into [start,A), [A,B), and [B,end). At each step, I test commits A and B in parallel. If both A and B are bad, I continue with [start,A). If A is good and B is bad, I continue with [A,B). If both A and B are good, I continue with [B,end).
This lets me rule out two thirds of the commits, in the same time that an ordinary bisect would have ruled out half. (I'm assuming that the tests don't benefit from having additional cores available.) In general, for an n-sect, you'd test n - 1 commits in parallel, and divide the number of remaining commits by n each time.
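The trisect step above can be sketched over plain commit indices (a hypothetical `trisect` helper with a made-up `is_bad` predicate; the half-open ranges are nudged by one so the commit just tested stays in range, and real git bisect has to do far more than this):

```python
def trisect(start, end, is_bad):
    """Find the first bad commit index in [start, end), assuming history
    is monotone (all good, then all bad) and commit end - 1 is known bad.
    is_bad(i) tests commit i; in a real tool the two tests per round
    would run in parallel.  A sketch, not git's actual algorithm."""
    while end - start > 1:
        span = end - start
        a = start + span // 3
        b = start + (2 * span) // 3
        if is_bad(a):            # first bad commit is in [start, a]
            end = a + 1
        elif is_bad(b):          # A good, B bad -> it's in (a, b]
            start, end = a + 1, b + 1
        else:                    # both good -> it's in (b, end)
            start = b + 1
    return start

# e.g. commits 0..6 good, 7..9 bad:
print(trisect(0, 10, lambda i: i >= 7))  # -> 7
```

Each round here runs two tests but keeps only about a third of the range, which is where the speedup over one-test-per-round bisect comes from.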
No, unfortunately not. If your history is strictly linear, you could probably hack together something relatively simple on top of git rev-list. But git bisect does all sorts of magic to deal with merge commits and other funny situations, and generalizing that to an n-sect would take a fair bit of work.
Yes, you'd need 4x parallelism for a 2x speedup (16x for 4x, etc.). But there are plenty of situations where that would be practical and worthwhile (think a build-and-test cycle that takes ~1 hour each time and can't be meaningfully parallelised further).
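That trade-off is easy to check directly: an n-sect needs about ceil(log_n N) rounds, so going from bisect to 4-sect halves the rounds while tripling the tests per round (a quick back-of-the-envelope script; N = 1000 is just an example):

```python
import math

# Rounds needed to locate one bad commit among N candidates, if each
# round divides the remaining range by n (n = 2 is ordinary bisect,
# and each round runs n - 1 tests in parallel).
N = 1000
for n in (2, 4, 16):
    rounds = math.ceil(math.log(N, n))
    print(f"{n}-sect: {n - 1} parallel tests/round, {rounds} rounds")
```

For N = 1000 this gives 10 rounds for bisect, 5 for a 4-sect, and 3 for a 16-sect, matching the rough "quadruple the parallelism to halve the wall-clock time" rule above.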