I question the top answer as being a good habit at all. It's like saying "there's probably some magic going on that I'll never understand". There's a reason why you're wrong. You should probably find out and absorb the reason instead of pretending like the world operates in some mysterious way.
I don't think that's what he's saying at all. It's closer to following a scientific approach; what you "know" is simply a more or less wrong model of the reality. You may study something "magical" until you understand it completely, but you still must be prepared to revise your knowledge if suddenly you learn something new that goes against it.
Programming systems can be treated scientifically, but they can also be treated deductively (like math). It's possible to have full knowledge of what's going on if you treat it like a deductive system. If your primary way of learning is through experiment, then yes, you will be wrong a lot of the time. But then I would never claim you had a good reason for believing anything.
I can't view the stackoverflow link right now, but the top answer basically claimed "I feel strongly about my reasoning, but nevertheless, I could be wrong". How could that be unless you made a mistake? The system is logical!
If you really want to understand how things work from the very bottom, you can build a toy computer running on an FPGA; there are some very cheap boards available now.
To make things even more interesting, you can write your own HDL compiler (into a netlist, of course; anything lower level is, unfortunately, too proprietary).
From this point you can build up your abstractions. The shortest route to the high level you're already comfortable with is to implement Forth first, since it's very easy to build a simple stack-oriented machine on an FPGA; see the J1 (http://www.excamera.com/sphinx/fpga-j1.html). Running your own Forth on top of such a CPU is easier than on something like x86.
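To make "simple stack-oriented machine" concrete, here is a toy inner loop in the spirit of such a machine. This is purely illustrative and has nothing to do with the actual J1 instruction encoding:

```python
def run(program):
    """Toy stack machine: anything not a known opcode is pushed as a literal."""
    stack, pc = [], 0
    while pc < len(program):
        op = program[pc]
        pc += 1
        if op == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "dup":            # Forth DUP: duplicate top of stack
            stack.append(stack[-1])
        elif op == "swap":           # Forth SWAP: exchange top two items
            stack[-1], stack[-2] = stack[-2], stack[-1]
        else:                        # literal push
            stack.append(op)
    return stack

print(run([3, "dup", "+", 1, "+"]))  # 3 DUP + 1 +  ->  [7]
```

The real hardware version is barely more complicated: the stack lives in block RAM and the dispatch becomes a case statement over instruction bits.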
Next step is to implement a Scheme compiler and runtime in Forth. Garbage collection can be somewhat tricky, but if you go for something trivial, like stop-and-copy, it won't take long to implement, especially if you have some assistance from your hardware.
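Stop-and-copy really is that trivial; here's a sketch of Cheney's algorithm over a toy heap of cons cells. The cell and pointer representation here is made up purely for illustration:

```python
# Cells are (car, cdr) pairs; a value is either an immediate (int, None)
# or ("ptr", index) referencing another cell in the heap list.

def collect(from_space, roots):
    to_space = []                  # the new heap we copy live cells into
    forward = {}                   # from-space index -> to-space index

    def copy(val):
        if not (isinstance(val, tuple) and val[0] == "ptr"):
            return val             # immediates are copied as-is
        idx = val[1]
        if idx not in forward:     # first visit: evacuate the cell
            forward[idx] = len(to_space)
            to_space.append(from_space[idx])
        return ("ptr", forward[idx])

    new_roots = [copy(r) for r in roots]
    scan = 0                       # Cheney scan pointer: fix up children breadth-first
    while scan < len(to_space):
        car, cdr = to_space[scan]
        to_space[scan] = (copy(car), copy(cdr))
        scan += 1
    return to_space, new_roots
```

Given a heap `[(1, ("ptr", 1)), (2, None), (9, 9)]` with root `("ptr", 0)`, collection copies only the two reachable cells and drops the garbage cell.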
Three totally manageable steps from the very bottom of the abstraction ladder - and you've got Scheme, an extensible, powerful meta-language, from which you can grow anywhere.
I recently wrote this bit on bootstrapping a Lisp-like language using the most trivial interpreter possible: https://combinatorylogic.wordpress.com/2015/01/14/bootstrapp... - you can find this approach useful, I did a similar thing on top of Forth and it was very easy, much easier than trying to implement a "proper" interpreter.
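To give a flavor of what "the most trivial interpreter possible" can look like, here is my own sketch of a minimal Lisp eval (not the code from the linked post):

```python
def evaluate(expr, env):
    if isinstance(expr, str):                 # symbol lookup
        return env[expr]
    if not isinstance(expr, list):            # self-evaluating literal
        return expr
    head = expr[0]
    if head == "quote":
        return expr[1]
    if head == "if":
        return evaluate(expr[2] if evaluate(expr[1], env) else expr[3], env)
    if head == "lambda":                      # (lambda (params...) body)
        params, body = expr[1], expr[2]
        return lambda *args: evaluate(body, {**env, **dict(zip(params, args))})
    fn = evaluate(head, env)                  # application
    return fn(*[evaluate(a, env) for a in expr[1:]])

env = {"+": lambda a, b: a + b, "<": lambda a, b: a < b}
print(evaluate(["if", ["<", 1, 2], ["+", 40, 2], 0], env))  # -> 42
```

Everything else (define, macros, a reader) can be bootstrapped on top of a core like this, which is exactly the point: the bottom layer stays small enough to hold in your head.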
And once you know how to build the whole thing from the bare silicon, there won't be any place for "magic" in your understanding.
> If you really want to understand how things work from the very bottom, you can build a toy computer running on FPGA, there are some very cheap boards available now.
You don't even have to go that far, you can build something resembling the cheap early 80s/late 70s computers from off the shelf chips and 8 bit CPUs that are still sold (z80, i8051, 6502).
It'll just be a bit hard then to go all the way up to the high level - that is, memory-hungry, garbage-collected, with elaborate, computationally intensive compilation.
For this you'd need a somewhat more modern device with enough memory attached. At least something around 16 MB would be nice if you want to implement a decent Lisp. With a Z80 you'd likely be forced to stay at around the C or BASIC level of abstraction.
Pretty sure a 10 MHz Z80 with 64 KB of RAM can handle a small Forth or Scheme dialect just fine (you'll have to program those interpreters in asm or C, though!) :)
It won't be useful/"pragmatic" but I don't think that's what we're after here anyway.
Yes, but this way you'd need some magic - an external C compiler or at least an assembler - instead of implementing everything from scratch, with most of the code developed on the board itself.
"You cannot be an expert in every field and actually follow the evidence to make up your own conclusion (which btw does not guarantee that your interpretation of the evidence is correct)."
It's pretty easy to follow the evidence in most situations, actually. That doesn't make me an expert. That makes me a person who understands the methodology of science, logic, and statistics. You have to know precisely what your assumptions (observations) are and how to make correct conclusions from those assumptions. The only constraints are your time, your determination, and your ability to get the original research (a constraint is not "guaranteeing your interpretation is correct").
Some people are highly opinionated and don't have the understanding to match. That doesn't mean you can't have any sort of understanding as a non-expert.
"As an aspiring physicist I feel helpless every time I'm at the doctor's office. I cannot just "follow the evidence"."
Doctors aren't necessarily better trained at making inferences and recommendations. Consider their education. Some of them are very poor thinkers.
> The only constraints are your time and your determination.
...The combination of which is often a luxury that experts in one field cannot afford, thus the statement that one cannot be an expert in every field. "Cannot" is most likely being used for practicality, not in the literal sense of information being totally unfathomable given infinite time and determination.
You both have good points but they really needn't conflict so harshly.
> You both have good points but they really needn't conflict so harshly.
I disagree, lumberjack's point is essentially an appeal to authority - which is dark-age style thinking.
Just represent everything in a machine readable set of axioms, problem solved. You don't need to be an expert in every field, you just need to have a basic understanding of first order logic.
Right, but someone still has to do the representation (encoding the information into the machine readable format), and how can you ever know that someone is encoding it correctly?
In addition, your assumption is that everything can be encoded in an axiomatic language (probably not true), and that we have enough information to encode it all even if it was possible.
> Right, but someone still has to do the representation...
The same people writing papers now.
> and how can you ever know that someone is encoding it correctly?
Reasoning engine. As new data is entered it is run against prior data, to the end user it would look almost like a spell check.
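To illustrate what such a "spell check" could look like in the simplest possible terms, here is a toy forward-chaining reasoner over propositional facts. All the rule and fact names are made up, and a real system would need far more than Horn rules:

```python
def forward_chain(facts, rules):
    """Apply Horn rules (premises -> conclusion) until a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def check(new_fact, facts, rules):
    """The 'spell check': reject a fact whose negation is already derivable."""
    closure = forward_chain(facts | {new_fact}, rules)
    name, truth = new_fact
    return (name, not truth) not in closure

rules = [({("bird", True)}, ("can_fly", True))]
facts = {("bird", True)}
print(check(("can_fly", False), facts, rules))  # -> False: contradicts the closure
```

In this toy, entering "can_fly is false" gets flagged because the existing facts already derive the opposite, which is roughly the interaction being described.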
> ...your assumption is that everything can be encoded in an axiomatic language (probably not true)...
That is an extremely safe assumption to make, as the problem has been studied for a long time and I'm aware of no evidence that would back up your position.
> ...and that we have enough information to encode it all even if it was possible.
> That is an extremely safe assumption to make, as the problem has been studied for a long time and I'm aware of no evidence that would back up your position.
> Just represent everything in a machine readable set of axioms, problem solved.
We all look forward to the day that you or anyone else can do this.
emptytheory's suggestion: have tons of time and determination to solve the problem. Caveat: the problem has already been solved and applied by many experts many times, where the main criticism is that some of the time these experts' application over repeated attempts is not 100% consistent.
Your suggestion: encode all necessary knowledge to solve the problem so that a computer can solve it for you. Caveat: this is either being done already by domain experts, or you must go out of your way to navigate the problem based on logic and not experience (tedious, but potentially very rewarding) so as to encode this information yourself (taking some unknown amount of time).
Caveat in all cases: You are just as apt to fuck up along the way as anyone else who possesses the same logical faculties as you, which presumably at least some other experts in question would.
lumberjack's argument may not possess the logical upper hand, but it is a valid concern for people who are mortal, employed (or otherwise occupied with their time), and without access to a medical library. Perhaps the best way to put this is, I look forward to the day when you can prove their reasoning wrong in such a reproducible way rather than completely dismissing a conclusion due to some fault along the way.
> I look forward to the day when you can prove their reasoning wrong in such a reproducible way rather than completely dismissing a conclusion due to some fault along the way.
A set of axioms with a reasoner would do both of those things. That will be web 3.0, it is being worked on.
I disagree, this is a "fallacy fallacy" - you named a fallacy which lumberjack (apparently) used, but that doesn't actually make the argument wrong.
And you seem to ignore the fact that getting a degree takes most people multiple years, and that "student" is an occupation. If you follow the evidence, people can't learn everything because it takes way too much time.
I disagree, this is a "fallacy fallacy fallacy". You seem to ignore the fact that I did not suggest that people can learn everything. I suggested that people learn enough logic to use it as a tool to make learning everything else unnecessary.
Also, the scarcity of time being used to justify the economically reasonable appeal to authority, in the context of global warming / scientific method consensus, may be the most unintentionally funny thing ever.
Wow, I see I'm going to have to break this down Barney style in order to reach you:
The silly argument that started this all off was that you have to be an expert in every field in order to examine complex systems or problems that span multiple domains. This is simply not true, because a complex idea depends upon simpler ideas. These ideas can be formalized, where scientific theory occurs at the edge nodes and verification occurs at well connected nodes. This would allow an individual to select a layer of abstraction to work on - not unlike software development.
This isn't very far off from the present system of scientific journals and peer review.
>This is simply not true, because a complex idea depends upon simpler ideas. These ideas can be formalized, where scientific theory occurs at the edge nodes and verification occurs at well connected nodes.
Only this is a very naive reductionistic epistemology, and not enough to cover modern science.
That would only be true if the finest component of information in "modern science" could not be represented in true/false/unknown. I know that back in the day folks working on cybernetics struggled with something kind of like this in neural networks, where they were stumped by nonsteady state output (they were hoping to represent everything in true/false). The solution was to just increase the layer of abstraction in representing the output, leaving enough room on lower layers to describe nonsteady state as another potential output state. Problem solved. If you've got an example demonstrating your concern, that would be helpful.
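For what it's worth, the true/false/unknown scheme mentioned above corresponds to Kleene's strong three-valued logic, which is easy to sketch (using None to stand in for "unknown"):

```python
def k_and(a, b):
    """Kleene AND: false dominates, unknown propagates otherwise."""
    if a is False or b is False:
        return False
    if a is True and b is True:
        return True
    return None    # unknown

def k_or(a, b):
    """Kleene OR: true dominates, unknown propagates otherwise."""
    if a is True or b is True:
        return True
    if a is False and b is False:
        return False
    return None    # unknown

def k_not(a):
    return None if a is None else not a

print(k_and(True, None))   # -> None (unknown)
print(k_or(True, None))    # -> True
```

This is the same move as the open-world assumption: absence of a fact means "unknown", not "false".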
The reductionistic part is in the very belief that there's such a thing as a "finest component of information" in the first place.
>If you've got an example demonstrating your concern, that would be helpful.
What I say is that sufficiently rich theories such as those we have today don't have "finest components" in the sense of being parsable down to some kind of "atoms" that are independent of the overall structure.
The whole intelligence lies in the connections between the components, and verifying that they are individually "correct" doesn't say much.
> The reductionistic part is in the very belief that there's such a thing as a "finest component of information" in the first place.
That seems like a major leap. I've heard people propose that there are limits to human understanding due to complexity, but this is the first time I've heard the suggestion that there is some level of information beyond any possible measurement. The lowest level I can think of is existence vs. nonexistence - and you are essentially suggesting that there is some other state beyond measurement and therefore beyond reasoning. Of course, such a thing would be impossible to prove... so the scientific method would be of no use. So if what you are suggesting is true, then it would have no influence on what I'm proposing anyway. Wait... you aren't religious, are you? I'm not trying to pry or be insulting, but this suggestion would only really make sense in the context of trying to establish a place for religion in science.
As far as the rest, formal logic exists to do exactly what you say can't be done. Your argument sounds more like an appeal to emotion than anything else.
>Just represent everything in a machine readable set of axioms, problem solved
Great idea! Let me just tell my friends Hilbert and Russell about it! Maybe my friend Kurt will like it too!
/s
Not to mention that most of the problems science has to tackle we cannot even begin to have them formulated in some concise "set of axioms" even if that worked in theory.
I can't be sure from such a short reply, but in this glib statement, and in the one you made above, you seem to be unaware of the very real consequences of Gödel's incompleteness theorem. This is what @coldtea is referring to.
In short: even for a relatively easy-to-quantify universe of discourse like Mathematics, this theorem implies that (of necessity) some propositions will not be provably true or false, or that the system will contain a contradiction. You have a choice of either incompleteness or contradiction (incompleteness seems better).
That's for Mathematics. Now, consider physics, biology, or sociology. And more important, consider that the questions present are at the frontiers, so are very much not reducible to codification. It's a real problem. If you'd ever worked on a really complex, multifaceted, end-to-end science problem (say, weather forecasting, or drug design), you'd have a little more humility.
You saw my link to the open world assumption above... so I'm not sure why you'd go on about the incompleteness theorem.
Can you think of an idea that cannot be represented as true, false, or unknown? Now consider a hypothesis. I'm not saying it would be simple or easy, just possible and preferable to the present system.
As far as my apparent lack of humility: in the interest of not wasting your time or my own, I've truncated my correspondence. From now on, just imagine that all my posts are prefixed with a paragraph in which I grovel before the throne of scientific greatness.
>As far as my apparent lack of humility: in the interest of not wasting your time or my own, I've truncated my correspondence. From now on, just imagine that all my posts are prefixed with a paragraph in which I grovel before the throne of scientific greatness.
You should write some code so that when you press the reply button it appends some form of lexical prostration derived from the comment space of the identity you're replying to :P
In all seriousness, it appears that, given all the effort that goes into signaling within academia (and to the external world) about all the "real problems" people are solving, automated approaches to every aspect of how research is conducted will happen, because they are more efficient and consume less energy than, say, a human being worrying about whether their methods paper will be accepted, how to please reviewers, etc…
I mean, the fact that my PI hired me, someone who didn't graduate from undergrad, over all the PhDs who get rejected for volunteer positions, because I can slap some code together, must say something about the direction things are going in this world. But when I tell the postdoc that the reason his spectrograms look the way they do when he downsamples is less constructive interference (while also trying to signal humility, because how dare some non-degreed folk pontificate on such things as a matter of established fact like the rest of the folks around here do, even when asked for help), he has to go ask the sr. research scientist the next day, only to tell me that I was right… that's 24 hours his clunky Matlab script could have run! lol
I'm happy to report that I have zero experience in the postgrad industry. While I'd love to spend most of my time working in pure theory and potentially influencing an entire field, I really don't think I'd be able to put up with some of the antics I've heard about. There is plenty of silliness that occurs in the corporate world, with the information silos and kingdom building, but at the end of the day money talks and bullshit walks - with little delay.
This problem is being worked on, and I'm pretty confident that the solution will be based on the principles of the semantic web. I have a feeling that academia will be pretty late to the party when it comes to implementation, though, if half of the stories I've heard are true.
Hey everyone, let's all point and laugh at the primitive still using first-order logic rather than stochastic type theory! What does he think this is, the 1970s? Wake up, bro: a whole century has passed.
> It's pretty easy to follow the evidence in most situations, actually.
I'm a physicist who has done work on data analysis in cancer genomics. The depth and complexity of cancer biology is enormous, and "following the evidence" is the work of years.
It's easy to follow superficial arguments about specific experiments. It is very, very difficult to get hold of the ambiguous morass that is the true leading edge, which is so far away from the layperson's event horizon they aren't even aware it exists, and so end up believing that it's not that hard to follow the evidence in most situations.
"It is very, very difficult to get hold of the ambiguous morass that is the true leading edge, which is so far away from the layperson's event horizon they aren't even aware it exists, and so end up believing that it's not that hard to follow the evidence in most situations."
What are you talking about here? How does that conflict with what I said? In my original post, one thing I asserted is that you should be aware of your assumptions. I don't see the relevance of saying "knowledge of the leading edge is hard to attain" or "the leading edge is ambiguous".
I don't think I ever denied that there are technical fields where coming to a conclusion takes time. Should I be apologizing for my use of "most"? The only point I was trying to make was that "guaranteeing your interpretations are correct" is not a constraint (see my response to lumberjack). As a result, being able to form correct conclusions is possible as a non-expert. Is this a surprising statement to anyone?
tjradcliffe's post reads essentially as "non-experts are so unaware/ignorant that they form naive beliefs such as yours". What a constructive comment! Maybe I should just shut up and listen to the experts!
"so by implication you're saying it's immoral to enforce laws against all lesser offenses"
Amazingly, the parent didn't actually say or imply that. "worse crimes go unpunished" doesn't imply "enforcing laws against lesser offenses is immoral".
So then, what was the point of that comment? What was it trying to persuade the thread about? Serious question; I'm not asking so that I can rebut you.
>So then, what was the point of that comment? What was it trying to persuade the thread about? Serious question; I'm not asking so that I can rebut you.
Well, for one, the point is that you govern by example, and the example of giving the corrupt elite a "get out of jail free" card is bad for society. So the comment could be read as: punish all the guilty, don't punish selectively.
Another point to be made reading the comment is that there are essential crimes any moral citizen should abhor (such as torture and imprisonment without due process) that are going unpunished because those in power like them, and stuff that has no real merit to be illegal or to be punished that harshly, but that is, because those in power don't like it.
Nobody would read it as advice not to punish theft, domestic abuse, etc. because there's officially sanctioned torture that goes unpunished.
I didn't read it that way, and I don't see why someone would.
In fact, I don't see any critique of your views in this comment. The comment is entirely a rewording and expansion of the original comment.
I think you're confusing the comment reading as if torture should be punished with the comment reading as if you think torture should be unpunished. That is, you're reading a comment that disagrees with you, and assuming that it implies you hold the opposite side.
I don't think you think torture should go unpunished. However, you seem to think that saying the justice system is corrupt because it punishes Brown while not punishing torturers implies that Brown should go unpunished. This is not the case either, and the original commenter need not believe that. The systematic corruption of the justice system is clear no matter what your views on Brown's guilt in this matter happen to be.
Can you say why it is you think the original comment argued in favor of not prosecuting car theft? So far you don't seem to have done so, but responded by asking why anyone wouldn't think so, which doesn't explain why you think so.
You're not wrong to read it this way. I might be a little knee-jerk on threads like this. It is very, very apparent that skepticism about the Wired/Boing Boing narrative on this case is a minority view. Also, my current programming project involves advanced stat and calculus, two things I suck at, so I'm extra procrastinate-y today.
I guess my question is: so, we all agree that failure to prosecute CIA torturers and torture-preneurs is a travesty.
Now what? What does that have to do with Barrett Brown?
Is it just that we should militate for more prosecutions? I'll sign that petition. But I don't feel like that's a fringe issue; I hear that concern everywhere.
>I guess my question is: so, we all agree that failure to prosecute CIA torturers and torture-preneurs is a travesty.
> Now what? What does that have to do with Barrett Brown?
One connection is that what Barrett Brown did before he got punished was basically hurt the same military-industrial-intelligence complex that put those torturers in place.
(Regardless of whether one can find 10 other technical legal violations in his actions to hypocritically punish him for. So yes, I'm saying that the "justice system" cares more about threats of that kind than about what he supposedly was sentenced for. It's not like it's inconceivable for the law to be hypocritical, as some assume.)
I don't even agree that the "CIA torturers" should be prosecuted (well, they should, but only as mere cogs doing their job). It's not like they were some "rotten apples" acting on their own. And it's seeing this as a more general phenomenon that connects it with cases like Barrett Brown's (and Manning's, et al.).
Uh, what? Stratfor is not the military-industrial complex. The military-industrial complex, along with serious foreign policy journalists, made fun of Stratfor. And, of course, neither is its subscriber base.
No, it's just a tool it uses and with ties to governments, foreign diplomacy, policy makers etc. And it's not only about Statfor, it's about sending a message against this kind of journalism and against meddling with this kind of institutions and interests.
That "serious foreign policy journalists made fun of Stratfor" doesn't say much. They also make fun of the CIA as incompetents who couldn't predict things like the fall of the Soviet Union, and so on...
You don't think it's dangerous to suggest that we should decriminalize attacks on private journalism outlets simply because we don't like their perspective on our issues? That's what people who bomb abortion clinics think.
The military-industrial complex did not "use Stratfor as a tool". The military-industrial complex thought Stratfor was a bunch of tools.
Here's ex- Atlantic and WaPo writer Max Fisher (now at Vox) on Stratfor:
I wrote it as a response to how you framed the grandparent's comment, as if it were analogous to advising not to punish lesser crimes because grand crimes like torture go unpunished.
I'm trying to show that his comment doesn't imply that at all, and should be read differently (and obviously, at least to me).
I think the point being made is that the law is enforced selectively and with varying degrees of enthusiasm, to further the interests of specific political groups.
It's striking that someone as intelligent as the comment parent could not see this.
Saying "the US justice system seems only to protect the status quo because it vigorously enforces email leaks out of shady private spying companies while ignoring torture" doesn't imply that car theft should go unpunished at all.
But then, if you're an HN celebrity, you can probably count on being able to say anything and get upvoted, and then having anything said against you downvoted. You could test this by masking the username on a comment until after a vote, but HN would never do that because the people who could do that are all people that benefit from this effect, and the powerful never cede power. (I hope to be proven wrong, but cynicism is rarely wrong in the world where we live.)
I don't know why you're attributing motives for this phenomenon to me, because (a) I mostly agree with you, and (b) find it irritating, not satisfying.
(I mean, I agree that my comments will get upvoted much more quickly than yours no matter what I say, not that my comment above was so incoherent that it could only be upvoted out of bias).
The problem with masking usernames prior to votes is that it would make all comments anonymous, and spur a lot of pointless voting.
I'd much rather an HN flag that simply stopped awarding karma or, for that matter, comment scores above (say) 5 once a commenter crosses (say) 50k karma. Dan wouldn't even have to make it mandatory; he could just make it public whether or not you've opted-in to that "level playing field" feature. Most high-karma commenters would.
The top 10 commenter karma scores on the leaderboard were removed after I lobbied for it for months. I wish they'd extend it all the way down to that 50k threshold; at a certain point, you should just have "karma: lots" (like Slashdot used to) and no gradations between those users.
I'd still get bonus upvotes though, because there are people who more or less "subscribe" to my comments. Same thing with Patrick, George Grellas, and 'mechanical_fish. And, god willing, 'tzs.
Finally, believe me, my real professional circle is mostly made up of people who think very little of HN, and my karma here is not a source of "celebrity" for me, or at least, not in a good sense.
Some people can manage being under a lot of stress. Some people can't. It sucks that you can't take preventative measures--such as eliminating the stressful environment indefinitely--if you suspect serious psychological harm.
I think you misunderstand mental illness. While it is true that mental illness does come out in times of stress and a stressful environment doesn't help, stress doesn't cause it except in select illnesses (and even then, stress doesn't actually cause it).
I do agree with preventative measures, and there are some that can be taken, but this includes things like regular psychological screenings and physicals (since some mental illnesses have biological markers); teaching people warning signs; teaching coping skills early on; and making sure medication and real help are available to those that need it. A society with some empathy helps as well. But those are a long, long way off, and I'm not sure of the effectiveness of such things as preventative measures: these might simply be a tool to catch mental illness early. Still positive, but hardly preventative.
And the stress bit is a little misleading. My ex-husband had schizophrenia and was on disability for it. Some days, it was OK. I worked, he did stuff around the house and took care of the dogs. Some days, the stress of having to shower was too much. It was impossible to eliminate the stress enough, since so much of it was misreactions to very low stress levels. Unmedicated, it was all he could do to get the voices to give him mental peace - this led him to a suicide attempt. Luckily, after that he always had the option to check himself into the hospital and go somewhere safe if it got too much for him (and I had the option to call and get him there if needed). Medicated, the voices were less, but didn't go away. He took his medication out of fear of things most of the time (most...).
I don't deny there's a genetic component and I don't believe stress is the single cause. I'm not well read on mental illness. But in the context of this story, stress was a cause (or trigger, whatever you want to call it) for a certain kind of behavior. It didn't seem like Giulia had any problems before her job.
"Some days, the stress of having to shower was too much. It was impossible to eliminate the stress enough since so much of it was misreactions to very low stress levels."
I had a lot of experience programming in high school. When I practiced on my own, it never seemed like work because it was fun and I had experience telling me that I could accomplish something meaningful. Something made me feel confident in my ability to produce and discover.
When I was an undergraduate, a lot of my peers who didn't have a similar CS background struggled. I experienced this myself when I transferred into the mathematics program. I never had a serious engagement with mathematics until I was in university.
I think reaching the stage where an activity becomes natural requires a serious personal engagement. That is, you have to understand the questions which guide the activity (your interests have to align) and you have to have the freedom to ask and answer your own questions (being able to solve your own problems). The activity has to become personal in some sense.
Statements of preference are different than assertions about the world. The comic is funny, but it does have an antiphilosophical spirit, especially if you think it relates to the present conversation.
I'm sorry. I care. (sorry (sorry)) ((((((sorry))))))
One piece of advice I would give is to be open with your kids. Don't be afraid to talk about certain things, including your own insecurities and dogmas. Speaking from my own experience, a restricted dialogue can lead to frustration, conflict, and anxiety. If a kid isn't comfortable asking questions, he'll guess at the answer and worry about being wrong. What happens when he's wrong? All I'll say is that it's pretty difficult to have a calm resolution when the subject matter cannot be discussed.
I'm making no guarantees about success... but at least you'll have a "healthy" relationship (healthier than mine).