Hacker News | new | past | comments | ask | show | jobs | submit | devilsavocado's comments

Napkin math: for just one person to intersect, each of those 10,000 people has a 10,000/300,000,000 ≈ 0.0033% chance of getting selected in a given round. Multiply that by a thousand selections and you get about 3.3%. Those selections are refreshed maybe 100 times a year, so you'd expect roughly three intersections per year.

There are billions of Facebook users, so this should be happening all the time by pure chance. But of course it's not pure chance: Facebook will be selecting from a pool much, much smaller than 300 million, and the selections won't be random.
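Redoing the arithmetic as a quick script (the pool size, selection count, and refresh rate are the comment's assumed numbers, not Facebook's real ones):

```python
pool = 300_000_000        # assumed user pool
slots = 10_000            # people selected per round
selections = 1_000        # selections per refresh
refreshes_per_year = 100  # assumed refresh rate

# Chance a specific person is picked in one round.
p = slots / pool

# Expected number of intersections over a year of rounds,
# and the probability of at least one.
trials = selections * refreshes_per_year
expected = p * trials
at_least_once = 1 - (1 - p) ** trials

print(f"per-round chance: {p:.6%}")
print(f"expected intersections/year: {expected:.2f}")
print(f"P(at least one)/year: {at_least_once:.1%}")
```

Under these (made-up) inputs the expected count per person is already above one per year, which supports the "happening all the time by pure chance" point once you multiply by billions of users.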


I have no idea why you are italicizing types of logical fallacies. I can't think of a good reason to do that.

And just pointing out that you believe they are committing a logical fallacy without any explanation is a very lazy discussion technique.


> I have no idea why you are italicizing types of logical fallacies.

If I were to guess, it is because it is a term from a foreign language, in this case Latin, not because it is the name of a logical fallacy.

There exists an orthographic, or perhaps typographic, practice in English of italicizing words and phrases borrowed from other languages, like à la carte, du jour, sine qua non, ...


It's customary in some contexts to italicize "loan words". The terms for the fallacies come from the Latin. Ergo...

As for your second point, I think it's pretty clear from both my comments and their context why I'm pointing out those fallacies. That said, in the case of "non sequitur", I was actually referring more to the literary sense of the term — "Where the hell did that come from?" — than the fallacious "does not follow" sense.


Those terms are used all the time on HN but not usually italicized. That's fair, though; I get your point.

When you respond twice in a row in a comment chain saying that they are using a logical fallacy, and nothing else, you've done nothing to add to the conversation; you're just playing the role of Logic Studies 101 Professor. That's probably what they meant by the "looking back on your writing" comment.


Try out your thought experiment again with a machine that can predict markets with 55% reliability, or 50.1% reliability. This more likely reflects reality.

If someone had a machine that could predict the stock markets with 100% reliability, or even 55% reliability, do you think they would make those predictions known to the public? Or would they start a hedge fund and become one of the wealthiest individuals of all time?
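To make the 55% version concrete: even a small edge compounds quickly. A rough sketch using the Kelly criterion for even-odds binary bets (the 55% accuracy, even-odds payoff, and 250 bets per year are all assumptions of this illustration; real markets have costs and non-binary payoffs):

```python
import math

p = 0.55       # assumed prediction accuracy on even-odds bets
f = 2 * p - 1  # Kelly fraction of bankroll to stake each bet

# Expected log-growth per bet when staking the Kelly fraction.
g = p * math.log(1 + f) + (1 - p) * math.log(1 - f)

# Compound over roughly one bet per trading day for a year.
yearly_factor = math.exp(250 * g)

print(f"Kelly fraction: {f:.2f}")
print(f"log-growth per bet: {g:.4f}")
print(f"bankroll multiple after 250 bets: {yearly_factor:.2f}x")
```

Even at only 55% accuracy the sketch multiplies the bankroll severalfold per year, which is why a real owner of such a machine would trade on it rather than publish it.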


As far as I can see, making the information available to the public isn't even a choice. There is no market information you can make available to the public that will make everyone richer. The only people who get richer are the ones who act before it becomes public knowledge. When they are done, asset prices reflect the now public knowledge, and there is no longer a profit to take.


This is a great resource. I'd add Practical Deep Learning for Coders [0] to the video series list.

[0] http://course.fast.ai/index.html


I added it. Thanks!


I haven't had time to read the entire thing, so I've skimmed a lot of it, but there doesn't seem to be much mention of complexity theory in the sense I believe idlewords is talking about.

I'm very curious: What happens if the algorithms for general artificial intelligence, and the ability for an AI to improve itself are all NP-Hard problems? Is that covered?

It might be similar to the "intelligence combustion" scenario outlined. But that appears to be a scenario where we do not need to fear superintelligence.
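A toy illustration of why NP-hardness would matter here: if each unit of self-improvement required solving a search problem whose brute-force cost doubles with problem size, successive improvements get exponentially more expensive instead of compounding for free (the sizes below are made up for illustration):

```python
# Hypothetical: improvement step k requires searching a space of
# size 2**(n0 + k), i.e., each step adds one bit to the problem.
n0 = 40
costs = [2 ** (n0 + k) for k in range(5)]

# Each further improvement costs twice as much as the previous one.
ratios = [b / a for a, b in zip(costs, costs[1:])]
print(ratios)
```

If self-improvement scaled like this, an AI would hit a wall of diminishing returns rather than "taking off."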


As it says in the paper, the evolution of hominids doesn't look like this; population genetics says that if hominid brains got larger, the marginal fitness returns on neurons went up.


Or in a more intuitive sense, humans didn't seem to need brains 1000x as large to get one unit of improvement in practical effectiveness over chimpanzee cognition. But yes, this stuff is complicated, hence the paper not being two paragraphs long.


Thank you, that is a very intuitive explanation.

You've probably read Scott Aaronson's "Why Philosophers Should Care About Computational Complexity" [0]. This seems like the perfect area to apply a lot of the questions he brings up. That's what I was looking for as I skimmed through the paper. Maybe that's what idlewords was talking about as well.

[0] http://www.scottaaronson.com/papers/philos.pdf


Piketty's book does at least attempt to address this. He proposes a switch from income tax to wealth tax. He shows that most inequality is a result of the growth of existing wealth (most of which was inherited). Taxing wealth instead of income can help reverse, or at least stop, the increasing levels of inequality. A very small percentage of wealth would need to be taxed to accomplish this.
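As a back-of-the-envelope sketch of the point (the rates below are illustrative assumptions, not Piketty's actual figures): if existing wealth returns 5% a year while the broader economy grows at 1.5%, wealth steadily outruns income, and a small annual wealth tax narrows that gap.

```python
years = 30
r, g, wealth_tax = 0.05, 0.015, 0.02  # illustrative rates only

# Growth multiples over 30 years.
wealth_untaxed = (1 + r) ** years
wealth_taxed = ((1 + r) * (1 - wealth_tax)) ** years
income = (1 + g) ** years

print(f"wealth (untaxed):      {wealth_untaxed:.2f}x")
print(f"wealth (2% wealth tax): {wealth_taxed:.2f}x")
print(f"economy-wide income:   {income:.2f}x")
```

Untaxed wealth more than quadruples while incomes grow by about half; even the taxed wealth still outpaces income here, which is why the tax only needs to be small to slow the divergence.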


Such a wealth tax existed in Nordic countries in the late 20th century, at least.

Wealth taxes in Sweden and Finland were eliminated around 2005 because capital gains tax was considered more equitable across wealth classes and more easily administered. The wealth tax laws had become a nest of loopholes that brought in relatively little revenue in the end.

It would be interesting to see a new wealth tax designed for the digital era.


There are still a few countries with wealth taxes [0]. I'm not sure whether they work well for those countries. Piketty doesn't spend too much time on the subject of existing wealth taxes. He is more focused on a global wealth tax, which is much more ambitious. Something that would need to be designed for the digital era indeed.

Amusing find from the linked article: back in 1999, Trump proposed a one-time wealth tax of 14.25% on individuals worth more than $10 million.

[0] https://en.wikipedia.org/wiki/Wealth_tax


Wouldn't this reduce incentives to save among low-income individuals?


Piketty definitely does not ignore wealth inequality. Most of his book is devoted to explaining why the return on existing wealth is the most important factor in inequality. He proposes a global tax on wealth, not income, to help combat inequality.


For a more in-depth, but still funny and entertaining, explanation of quantum computing check out 'Quantum Computing Since Democritus' by Scott Aaronson, the co-author of this comic.


I'm curious if the effects on the brain from running are the same as from meditation [0]. Any long distance runner will tell you that you can definitely get into a meditative-like state. Personally, I've noticed similarities in my mental state from meditation and running for 30+ minutes.

[0] https://www.washingtonpost.com/news/inspired-life/wp/2015/05...


What counts as 'proven skill in communication' to you? Other than how they communicate throughout the interview process, I'm not sure how else to test for this.

