Hacker News | EvgeniyZh's comments

Statistically speaking, "the murderer is black" is a sensible assumption in the US [1], but I'd prefer it weren't made.

[1] https://ucr.fbi.gov/crime-in-the-u.s/2019/crime-in-the-u.s.-...


The chance of a random black person being a murderer is substantially lower than the chance of a random cop being racist.

There are two generations and 4.5 years between the A100 and the B200.

A100 has 312 TFLOPS of FP16 for 250W, i.e., 1.25 TFLOPS/W.

B200 has 2250 TFLOPS of FP16 compute for 1000W, i.e., 2.25 TFLOPS/W.

This is ~34% growth per generation and ~14% per year. It's hard to believe it will be 400% per generation this time.


It might be 400% in the one thing everyone is interested in.


You're thinking in FP16. Nobody uses FP16 for inference anymore; the 400% is probably for FP4/INT4 computation.


Tensor core performance is inversely proportional to precision across all generations (i.e., halving the precision doubles OPS). 8-bit precision will give you the same improvement ratio. The A100/H100 didn't support 4-bit, if I remember correctly.

So FP4/INT4 will likely show the same ~30% OPS/W improvement per generation. You could get a separate improvement by reducing precision further, but going to 1-bit for a 4x improvement feels unlikely for now.
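Under that inverse-precision rule, a quick sketch (the FP16 figure is the one quoted above; the FP8/FP4 numbers are extrapolations from the rule, not quoted specs):

```python
# Inverse-precision rule of thumb: OPS scale with 16/bits relative
# to the FP16 baseline. The 2250 TFLOPS FP16 figure is from the
# comment; the 8-bit and 4-bit rows are projections, not datasheet
# numbers.
fp16_tflops = 2250
ops = {bits: fp16_tflops * (16 / bits) for bits in (16, 8, 4)}
print(ops)  # {16: 2250.0, 8: 4500.0, 4: 9000.0}
```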


> Do we have an example of a real quantum computer doing some kind of a computation that is not easily accessible by the regular computer?

Simulations of condensed matter systems performed on QCs (Google's OTOCs, Quantinuum's Hubbard model) are not easily accessible to a regular computer. There are people working hard on simulating these results classically, so it's quite likely they'll be simulated eventually. We're at a point where classical computers are still in the race thanks to immense scale and algorithmic progress, but I think that won't be the case for long.

> something useful in real life?

Usefulness is subjective. There are results that are potentially interesting to some people on Earth (as opposed to RCS, random circuit sampling).


Can you remember any pro-Israeli posts you turned flags off for since the October 7 attack?


I can't remember virtually anything - this is not a joke - having one's brain be sandblasted by the firehose every day turns memory into a dodgy thing (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...). I believe there have been some, though not as many. That's largely a function of the submission feed, i.e. which articles the community submits, upvotes, or flags. All we can do is respond case-by-case, and we try to do that in a principled way. The principles we apply (or try to) have been explained many times and can be found via https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so... and links from those comments.

If you feel like the submission feed and/or the moderation decisions on top of it are biased, all I can tell you is that everyone feels that way, especially on any topic they are passionate about. You needn't look far for examples of commenters complaining that we're suppressing and censoring the Gaza story - there are some in this thread.

What I feel a lot more confident talking about, in terms of balanced moderation, is the comments. We've moderated, warned, and banned many accounts for breaking the site guidelines while posting anti-Israeli (and sometimes even anti-semitic) comments, and we'll continue to do that. That's something we take very seriously, and of course, we do the same the other way round as well.


Thanks for the answer (and for what seems to be unflagging the comment). Having some experience moderating (of course, much smaller) communities, I understand it's impossible to keep everyone satisfied.

I, of course, can't judge the intent or the effort. What I can say is that I read the titles of all 150+ vote submissions, rarely skipping any, and I saw 20+ pro-Palestinian ones and zero pro-Israeli ones. I think this is a fairly objective measure.

At some point I thought it might be intentional, but now I think it's just bias amplification: these submissions are flagged too fast and upvoted too slowly to get anywhere.


Coincidentally, I just used hn.algolia to look up one of your old comments where you describe being sandblasted, and was surprised to find the most recent use of "sandblasted" on HN is by you, linking to an algolia search of you saying "sandblasted".

Thank you sincerely for your sacrifice, Dan. Whenever I have an urge to flame, I picture my impending comment as one more grain of sand speeding towards your cranium, and instead I step away from the keyboard.


Can you point to any pro-Israeli posts on HN since October 7, flagged or not?


They don’t get flagged though.


Yes they do, just like the comment you replied to will be.



It's worth noting that this is "compute-bound optimal", i.e., given fixed compute, the optimal choice is 20:1.

Under the Chinchilla model, the larger model always performs better than the smaller one if trained on the same amount of data. I'm not sure whether that holds empirically, and 1-10B is probably a good guess for how large a model trained on 80B tokens should be.

Similarly, small models continue to improve beyond the 20:1 ratio, and current models are trained on much more data. You could train a better-performing model using the same compute, but it would be larger, which is not always desirable.
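For concreteness, here's what the compute-optimal point looks like under the common approximations C ≈ 6·N·D FLOPs and the ~20:1 token-to-parameter ratio mentioned above (both are rules of thumb, not exact Chinchilla fits):

```python
import math

# Hedged sketch: given a fixed compute budget C, combine
#   C = 6 * N * D   (standard training-FLOPs approximation)
#   D = ratio * N   (compute-optimal token:parameter ratio, ~20:1)
# to get N = sqrt(C / (6 * ratio)).
def compute_optimal(C, ratio=20.0):
    N = math.sqrt(C / (6 * ratio))  # parameters
    return N, ratio * N             # (params, tokens)

# Example: a 1e21 FLOP budget lands near a ~3B model on ~58B tokens.
N, D = compute_optimal(1e21)
print(f"params ~{N:.2e}, tokens ~{D:.2e}")
```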


> your car, TV

Yeah, I hope I won't ever be shown ads on a TV I already paid for.


Unavoidable ads in the UI of the TV itself? I would hope not.

On channels/services that you might choose to access via the TV? That's a separate matter.


How many? What are the top 3 countries scammers flee to?


You didn't answer his question


Was I supposed to? My comment makes it quite clear that I don't have the information to do that.

The author of the question appears to have some information I don't (unless he made it up, of course), so I asked him to share it.


cool it with the anti-semitism


An ASIC for matmul is more or less a systolic array.
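To illustrate the idea, here's a toy cycle-by-cycle simulation of an output-stationary systolic array (a sketch of the dataflow, not of any particular chip's design):

```python
import numpy as np

def systolic_matmul(A, B):
    """Toy simulation of an output-stationary systolic array for C = A @ B.

    PE (i, j) holds C[i, j]. Rows of A stream rightward and columns of
    B stream downward with a one-cycle skew, so at cycle t PE (i, j)
    sees operands A[i, s] and B[s, j] where s = t - i - j.
    """
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    for t in range(n + m + k):          # enough cycles to drain the array
        for i in range(n):
            for j in range(m):
                s = t - i - j           # which operand pair arrives now
                if 0 <= s < k:
                    C[i, j] += A[i, s] * B[s, j]
    return C
```

Each PE just does one multiply-accumulate per cycle on whatever flows past, which is why the structure maps so directly onto fixed-function hardware.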


Quantum volume is a good metric, but it's kind of a one-dimensional take. Hardly any interesting circuit requires all-to-all connectivity, and superconducting QCs are bad at all-to-all connected circuits, so we can have interesting NISQ experiments without a particularly large QV.


It is not a one-dimensional take... it is a stress test of qubit gate fidelity [across all qubits involved in the circuit], state prep and measurement, lifetime (coherence), memory errors, etc.

Now, I agree that there are other great stress tests of quantum computer systems... but most of the industry agreed that quantum volume was a great metric several years ago. As many companies' systems have been unable to hit a decent QV, companies have pivoted away from QV to other metrics... many of which are half baloney.
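For reference, the pass/fail rule behind the metric can be sketched like this (following the commonly cited IBM protocol; this omits the full procedure of sampling random model circuits, transpiling them, and estimating heavy outputs):

```python
import math

# Hedged sketch of the quantum-volume pass criterion: the mean
# heavy-output probability over many random width-n, depth-n model
# circuits must exceed 2/3 with ~2-sigma confidence. A device passing
# at width n is credited with QV = 2**n.
def passes_qv(heavy_probs, z=2.0):
    n = len(heavy_probs)
    mean = sum(heavy_probs) / n
    sigma = math.sqrt(mean * (1.0 - mean) / n)  # binomial-style error bar
    return mean - z * sigma > 2.0 / 3.0

print(passes_qv([0.8] * 100))  # a comfortably passing device -> True
```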


> fidelity [across all qubits involved in the circuit]

I don't see a scenario in which the fidelity of a 2QG between two far-away qubits matters. Stress tests should be somehow related to the real tasks the system is intended to solve.

In the case of quantum computers, the tasks are either NISQ circuits or fault-tolerant computation, and in both cases you can run them just fine without applying 2QGs between far-away qubits, which translate into a large number of swaps.

If you're interested in applying Haar-random unitaries, then surely QV is an amazing metric, and systems with all-to-all connectivity are your best shot (coincidentally, Quantinuum keeps publishing their quantum volume results). It's just not that interesting of a task.

