Hacker News
A growing number of philosophers are conducting experiments to test arguments (aeon.co)
105 points by diodorus on May 3, 2018 | 78 comments


> But Gettier suggested some counterexamples to this definition, by telling stories in each of which there’s a true, justified belief that he claimed isn’t a case of knowledge. For example, imagine that at noon you look at a stopped clock that happens to have stopped at noon. Your belief that it’s noon is true, and arguably it’s also justified. The question is: do you thereby know that it’s noon, or do you merely believe it?

I've slowly been converging upon the belief that there's no such thing as "truth". There are only models and predictions, and some models lead to better predictions than other models. (What then is a mathematical truth?)

For the clock example, "knowing" whether it's noon or not is useless in the framework of prediction. You instead develop a model with two (probability-weighted) possibilities: 1) the clock is broken and thus can't be relied upon as a correlate with other events or 2) the clock is functioning and this information can be utilized in some way to improve future predictions. From a utilitarian perspective, what value does knowing the absolute "truth" about the state of the clock add?
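
To make that concrete, here's a toy sketch of such a probability-weighted model in Python. Every number in it (the prior on the clock being broken, the likelihoods for a quick "does the second hand move?" check) is invented purely for illustration:

    # Toy sketch: weigh "clock broken" vs "clock working" and update on a
    # quick observation (watching whether the second hand moves).
    # All numbers are made up for illustration.

    p_broken = 0.05               # prior: most clocks you encounter are running
    p_working = 1 - p_broken

    # Observation: you watch for a few seconds and the second hand does not move.
    p_obs_given_broken = 0.99     # a stopped clock almost surely shows no movement
    p_obs_given_working = 0.01    # a running clock rarely looks frozen

    # Bayes' rule: P(broken | observation)
    evidence = p_obs_given_broken * p_broken + p_obs_given_working * p_working
    p_broken_posterior = p_obs_given_broken * p_broken / evidence

    print(f"P(broken | no movement) = {p_broken_posterior:.2f}")  # ~0.84

The point being: the question "is the clock broken?" only matters to the extent that it changes how much weight the clock's reading gets in your predictions.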


>What then is a mathematical truth?

In the West, Aristotle put it as follows: "To say of what is that it is not, or of what is not that it is, is false, while to say of what is that it is, and of what is not that it is not, is true."

Some 2 millennia later, Tarski, a Polish logician, more or less formalised just that. This result, among other things, spawned a brand new branch of mathematics: Model Theory. More on Tarski's notion of truth: https://plato.stanford.edu/entries/tarski-truth/. A few decades later, some analytic philosophers like Davidson drew on Tarski's work and applied it to natural language too, which resulted in truth-conditional semantics: https://en.wikipedia.org/wiki/Truth-conditional_semantics. But notwithstanding Tarski's work, there is still significant controversy in philosophical circles surrounding both "truth in natural language" and the logico-mathematical truth predicate. Here's an article that covers axiomatic theories of truth in general: https://plato.stanford.edu/entries/truth-axiomatic/.
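
To give a flavour of what Tarski's formalisation looks like (this is the standard textbook, model-theoretic restatement, not Tarski's original notation): the definition proceeds by recursion on the structure of sentences, and the "material adequacy" condition, Convention T, requires the theory to entail every instance of the T-schema.

    % Convention T: for every sentence \varphi of the object language,
    % the truth theory must entail the corresponding biconditional
    \mathrm{True}(\ulcorner \varphi \urcorner) \leftrightarrow \varphi

    % Core recursive clauses of satisfaction in a structure \mathcal{M}
    \mathcal{M} \models \lnot\varphi \iff \mathcal{M} \not\models \varphi
    \mathcal{M} \models \varphi \land \psi \iff \mathcal{M} \models \varphi \ \text{and}\ \mathcal{M} \models \psi
    \mathcal{M} \models \exists x\,\varphi \iff \mathcal{M} \models \varphi[a/x] \ \text{for some}\ a \ \text{in the domain of}\ \mathcal{M}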


The clock problem is, like most philosophical problems I seem to encounter, one of definitions. It's a game with words. It doesn't change the underlying facts. You had a good reason to believe proposition X, and X happened to be true, but your reason to believe X was not causally linked to X's fact-ness. Whether you want to call that circumstance "knowledge" or use some other symbol to describe it does not change the underlying fundamentals at all. And so all the millions of hours of thought that have been spent on this problem have been wasted.


This mistake of reifying non-fundamental concepts and then thinking about them without reference to their component parts is an epidemic in philosophy, and it always has been.


The problem with this dismissal is that "knowledge" is not just a word, it does in fact refer to something solid and concrete (insofar as most of us intuitively believe there is a difference between genuine knowledge and just belief).

In the real world it is impossible to find human experiences that are "causally linked" to facts in the world, and so the implication of the argument is that EVERY experience you have ever had is possibly like looking at the broken clock.

And yet, most of us would believe that there is a difference between looking at a broken clock that says noon, versus a working clock that says noon. It would be nice to have a rigorous way to distinguish between the two cases - solving that problem might teach us something about the world.


>The problem with this dismissal is that "knowledge" is not just a word

"Knowledge" is just a word. It references many different concepts.

> In the real world it is impossible to find human experiences that are "causally linked" to facts in the world

Either you've misunderstood what causal linkage is, or else you are some odd brand of solipsist, but either way, this is incorrect. Causal linkage just means that that event participated in causing this event. The sun being in the sky is causally linked to me believing it's day time, for example.

> And yet, most of us would believe that there is a difference between looking at a broken clock that says noon, versus a working clock that says noon.

Of course there is. In one case, your belief is causally linked to fact. In the other case, it isn't, but still happens to be correct.


> Causal linkage just means that that event participated in causing this event. The sun being in the sky is causally linked to me believing it's day time, for example.

Great example! In Ptolemy's time "the sun being in the sky" would have meant that the sun was in a particular phase of its orbit around the Earth. Now we understand that the sun is not actually "in the sky" but roughly 150 million kilometres away. Our understanding of what it means for the sun to be "in the sky" will likely undergo several revolutions in the next 100,000 years.

Second, the actual facts are not causally linked to your belief. Your perception that the sun is in the sky is causally linked to your belief that it is daytime. If I am a schizophrenic and I hallucinate that the sun is in the sky even though it is night time then do I "know" that it is daytime?

All of this probably sounds overly pedantic but teasing apart the details is exactly what we're trying to do here :)


(Same person, different account.)

> Our understanding of what it means for the sun to be "in the sky" will likely undergo several revolutions in the next 100,000 years.

I wouldn’t bet on it.

> Your perception that the sun is in the sky is causally linked to your belief that it is daytime.

My perception is caused by electrical impulses from my eyes, which are caused by photons hitting those receptors. The photons themselves are caused by the sun, and their incoming vector is caused by its position relative to me. There is a causal chain, thus, causal linkage.

> If I am a schizophrenic and I hallucinate that the sun is in the sky even though it is night time then do I "know" that it is daytime?

Depends on what you mean by know. Your perceptions are not causally linked to a true belief, if that is what you are asking.


Imagine you are designing a software system. One use case requires you to model a fact as best you can, so you write a fact-class for it. Your software system runs fine, but you notice the class doesn't accurately model the fact in some other use cases. You now know the class is insufficiently accurate. Do you still trust the result of your software?

Replace "software" with "language" and "class" with "the concept we use to refer to facts", and you have to claim that any further thought about the design of your class is wasted, because you hold that to be the case for concepts, language, and reality.


If all philosophers are talking about is how underspecified words are, that is fine, but there's little value in it. Language is not designed. So a word is underspecified. Nobody can fix it.

But I was under the impression that philosophers thought themselves more than mere dictionary debaters. Whenever I hear about this problem, my impression is that at least a non-zero number of them think there is a problem with the actual concept of knowledge, not merely the word.


we're just into another 'what's truth' cycle. next will be 'is truth communicable' then 'is truth subjective'


Gettier cases long ago convinced me that it is useless to talk about 'truth' apart from 'justification'.

Obviously, none of us can ascertain truth directly. The only way we can claim A or B is true is by appealing to some justification for our belief. Therefore, the idea that [knowledge = true and justified belief] collapses to [knowledge = justified belief].

From that point, we have only to discuss what constitutes better/worse justification(s).


I agree with you, but we have to be aware that by this definition two diametrically opposed "knowledges" can be equally valid if they have disjoint justification frameworks.

And that leads to relativism, which we don't want.


Tell that to physicists who have been grappling with the limitations of the classical model for the last 50 years ... we can model our own directly perceived reality well enough, but once we get out into the depths (e.g. quantum levels) it all breaks down. My layman's opinion is not that these systems are necessarily unpredictable or unstable, just that they're so far outside the scope of our normal experience that we simply can't comprehend them in any intuitive way. Save for a few gifted souls, of course!

The classic "Rationalist" (as in Kant) explanation of mathematics is that things like pi are a priori concepts, or phenomena: ideals, if you will, innate within us, that help us explain the reality without (the noumena). Mathematics relates to the interrelation of these concepts. Physics is a (very successful) attempt to relate these a priori concepts (phenomena) directly to the noumena.

Again, just a have-a-go layman on the philosophical front as well and I welcome any corrections or more sophisticated elaborations!


The word "truth" is loaded. Which is a pity, because the notion is pretty simple. Here's a simple proposition:

There is a world out there. This reality, universe, or whatever, is in some state or another, works in some way or another, and whatever we think about what it is or how it works has little bearing on what it actually is, or how it actually works. "Truth" then is merely an accurate depiction of our world (or parts thereof). The problem is ensuring accuracy, and that is impossible without infinite evidence. We can only approach certainty.

And there is indeed a point where we are certain enough, and further inquiry isn't worth the hassle.


> whatever we think about what it is or how it works has little bearing on what it actually is

We are a part of this world, so if you accept that, you must accept that how you think about the world is a part of the definition and state of the world itself.

Which quickly leads to the conclusion that it is impossible to reason about a system from inside the system with absolute certainty.


> We are a part of this world,

Hence "little", instead of "no".


My problem with the example is I don't understand how this philosophical problem can possibly translate to a language with more refined words for "know" and "knowledge." To me, it appears to be bickering over the definition of an English word. Happy to be shown why this isn't the case.

I believe I can make it a non-problem by defining some new words.

Belief - a thing one thinks is representative of the universe

Knowledge - a thing one thinks is representative of the universe, with justification

Billabooknowledge - a thing one thinks is representative of the universe, with justification, that turns out to be wrong, despite one not being aware it's wrong


I had the same thought: the problem seems to be the English word "know". In ordinary use the word does seem to imply "with justification", but it's unclear what should count as "justification", seeing as ordinary (non-philosophical) thought involves so much intuition and analogy. So perhaps it's a word philosophers should avoid rather than waste time worrying about. But who am I to advise philosophers how to spend their time?


Is knowledge not just the reliable predictability of a model’s outputs based on inputs?

I.e. "you know it is noon" means you can predict that when you look outside the window it will be light, the church bell will be ringing, the clock will point to 12, etc. One or more of these corroborating factors provides that knowledge to you (e.g. you can still know it's noon if the clock is broken, because you can hear the church bells and see the sun above your head, etc.)


This is exactly what intuitionistic logic is all about. Instead of doing operations on variables which represent the truth value of a proposition, we can do operations on variables which represent the justification for a proposition. All we have to do is forgo the law of excluded middle (that A or not-A always holds). Which makes sense, because we can't say we always have a justification for A or a justification to refute A. Sometimes there are questions for which you can never find an answer or construct a justification.
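
A minimal sketch of that asymmetry in Lean (any constructive proof assistant would do; the theorem name is mine): a justification of A can always be turned into a justification of ¬¬A, but the converse has no constructive proof term.

    -- Double-negation introduction: given a justification of A,
    -- we can refute any refutation of A.
    theorem dn_intro (A : Prop) : A → ¬¬A :=
      fun a na => na a

    -- Double-negation elimination (¬¬A → A) and excluded middle (A ∨ ¬A)
    -- have no such proof term; Lean only accepts them if you opt in to the
    -- classical axioms (Classical.em, Classical.byContradiction).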


> There are only models and predictions, and some models lead to better predictions than other models.

"Better prediction" requires evaluating a truth claim, so this viewpoint would seem to contradict the rejection of truth.


Indeed. I’d say more than models and predictions, what really defines “truth” is consensus in what are and are not acceptable beliefs.

As Mac pointed out in IASIP: Reynolds vs. Reynolds, you need a leap of faith to accept all the sciency writings about subatomic particles and whatnot.


I think you're highlighting the difficulty of capturing precise truths, and conflating theories with truth.

For example:

* As of writing this comment, I have five fingers on each of my hands.

* Humans must stay hydrated to survive.

* Within my office, the time is currently 4:45 +/- 1 minute.

Those are truths, though there may be pedantic holes in my communication.


> As of writing this comment, I have five fingers on each of my hands.

Well, if you are not a brain in a jar, maybe. As far as I am concerned, I don't have any undeniable evidence that you even exist, nor that I have hands.

That's a pretty useless kind of skepticism, but as long as you are going for objective "truth" you must face that kind of problem.


> Well, if you are not a brain in a jar, maybe. As far as I am concerned, I don't have any undeniable evidence that you even exist, nor that I have hands.

If I am a brain in a jar, do I have hands?

Well, then I would still have virtual/simulated/apparent hands. And, when I say "hands", by that word I mean those virtual/simulated/apparent hands. Hence, even in that scenario, I do indeed have hands.

The ordinary meaning of the English word "hands" encompasses both "ultimately physical hands" (of the kind I might have if I am not a brain in a jar and also not some AI living in a computer simulation) and also "simulated hands" (of the kind I might have if either of those hypotheses were true). So, either way, it is true that "I have hands". When I say that, I am not expressing any opinion on the truth or falsehood of those sceptical hypotheses. If I wanted to express an opinion on those sceptical hypotheses, I would need to explicitly exclude them, i.e. "I have ultimately physical hands".


I believe what he meant is that all your "truths" are the same as the belief in the example. You have reasons to believe that those things are true, and certain observations made by you corroborated your hypothesis, leading you to conclude they were true.

For instance, "I have 5 fingers on each of my hands": you see them, you use them, so they must exist and be real. I'm no philosopher, but I suspect their arguments would go towards whether you can trust your senses and perception to prove ideas about reality...


What is your definition of truth? It seems to me like this is an issue of abstraction layers. The truth can be simply described within one layer, but adding layers requires more precise language.

It's a bit like arguing "I" don't move my arm because neural signals trigger muscles to move the arm. Both are true, the language is just more involved if we don't share a common definition of "I".


you only "must" face that "problem" when dealing with people who are either overintellectualizing uselessly, as you note, or are assholes. the distinction is slim!


Even then it depends on your definition of "I". If this reality and self is a brain-in-a-jar simulation, I still have five fingers - the brain may not, but the brain isn't me.

Though, this is a good example of what I meant by precision of truth. "I observe five fingers" is more accurate, and there are still better phrasings, but individual inability to precisely communicate truth does not imply truth does not exist.


Not merely pedantic miscommunication. What do you mean by "hands"?

We understand what hands are, but imagine trying to write a program that infers what hands are, and works automatically for non-humans as well.

And then it has to work for people who don't have five fingers, too.

There are a lot of assumptions embedded in words. None of these are generally interesting, which is why they're best ignored. But they exist.


> I've slowly been converging upon the belief that there's no such thing as "truth". There are only models and predictions, and some models lead to better predictions than other models. (What then is a mathematical truth?)

This seems like saying, "The truth is that there is no truth." Or: the truth is that there is a set of models which are more or less the case, and this meta-model is the actual truth. In both cases, the argument seems to rest implicitly upon truth, or a belief about the nature of truth, without getting out from under the thing that the argument attempts to deny. How can we talk about there being no such thing as truth without it turning into a sentence like:

This statement is false?


Do you not believe that the international network of atomic clocks that measures the passing of time actually exists? This is all that we mean when we attach numbers to particular points in time, i.e., the "truth" that the current time is "noon" can be judged by reference to that atomic clock network.


I think the problem with this is that with no notion of truth, there is no way to test your predictions.


Sounds a bit like the pragmatic theory of Truth as proposed by William James.

https://en.m.wikipedia.org/wiki/Pragmatic_theory_of_truth


What determines whether your predictions are correct?


I have a slightly different take than you. Suppose you believe A to be true. Furthermore, suppose no evidence could be presented that would convince you that A is not true. For you, then, isn't it the case that A is indeed true? It may not be true to someone else, but for you that truth exists.

I guess I'd say truth is person- and time-dependent. Perhaps you agree and would say there is no such thing as objective truth?


I wouldn't dignify that with the word "truth", and I don't think you should, either ("you" meaning sykh, not "you" meaning the person in your thought experiment).

This person has a very strong belief, so strong that they put the label "truth" on it. But that's a different meaning than the word has had in this discussion thread - so different that different words should be used, to keep us straight. The version you mention here might be called "strongly-held belief" - held so strongly, in fact, that the person holding it can't imagine that it could be questioned, let alone false.

But for the rest of the comments, I think that people are using "truth" to mean "correspondence to reality". I think that that is quite different from "strongly-held belief", because I don't think that strongly-held belief changes reality.


I, for instance, believe that 1+1=2 under the first-order Peano axioms of arithmetic. This belief of mine is so strong that it would be impossible to convince me otherwise. This is a truth for me (and for every other sufficiently educated person). But there are differing levels of assuredness in our beliefs, some not as strong as others. Therein lies the rub when one talks about "reality". Isn't reality ultimately person and perception dependent? Are there objective, universal truths?
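
(For what it's worth, the derivation behind that particular belief takes only a few steps, writing 1 := S(0), 2 := S(S(0)), and using the usual recursive definition of addition:)

    \begin{align*}
    1 + 1 &= 1 + S(0)    && \text{(definition of } 1\text{)}\\
          &= S(1 + 0)    && \text{(axiom } a + S(b) = S(a + b)\text{)}\\
          &= S(1)        && \text{(axiom } a + 0 = a\text{)}\\
          &= S(S(0)) = 2 && \text{(definition of } 2\text{)}
    \end{align*}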

For me a strongly held belief is more or less equivalent to truth for the person who holds such a belief.


There are varying levels of assuredness, but they also line up pretty nicely with the varying levels of experience that people have. If you have more experience verifying something to be true, then that indirectly serves as justification for the beliefs that everyone else has, even if they are not personally so sure.

The truth is a telescope of verifications that trace back through history. It's not a personal thing. It doesn't stop at your skull.

Incidentally, the only time I ever hear people saying truth is relative or doesn't exist is when those people are simply trying to promote their own social standing or take advantage of people.


The personal thing is what a person perceives as truth. Everyone has a set of beliefs that they consider true. The set isn’t the same for everyone. Isn’t it the case then that truth, at least some portion of it, is person dependent?


No, it certainly is not. That only means that one's knowledge is limited, not that the things you have knowledge of are inherently subjective.

The fact that two people disagree about something does not automatically imply that they are both wrong, only that at least one of them is.


I think what you are describing is dogma, not truth.


How is that distinguishable from truth from the perspective of the one who holds said belief?


That is one of the hardest things to wrap your head around. I caught myself the other day saying "well, a lot of people..." when actually it was me, just me, sample size one.

"surely most people think the same way I do" or this group of people believe X.


Well, you're not exactly a "sample size one".

When you say "a lot of people" you also presumably witnessed, interacted with, talked to, and watched, other people that do the same.

I can e.g. say "a lot of people have iPhones in NY" just from walking around in Manhattan, even though I never read any particular study.

Seeing people, and seeing that many of those you see every day do X, is still sampling.


Yeah, but I would think that humans are generally bad at "intuitive statistics", e.g. when drawing conclusions based on their observations. Of course, most of this may come from our inability to observe accurately/objectively when not dedicating significant brain-power to the task.



This sort of empirical determination of different groups of people's "intuitions" may be interesting social science, but should it really matter to philosophy? Shouldn't terms just be defined carefully enough that intuition doesn't come into it?

Concepts like "justice" are just social constructs and not aspects of the Universe that exist outside of humanity. They will always be arbitrary.


differentiating fact from speculation is a difficult and important venture that requires vigilance throughout one's entire lifetime. It's very hard, and we all slip up because it's not really how our brains like to work.


> First, as the positive programme in x-phi shades into psychology and vice versa, some have asked: is experimental philosophy really philosophy?

Science is essentially a branch of philosophy ('Natural Philosophy'), so in the broadest possible sense, yes. But I don't think what they are doing sounds like philosophy in the sense that most people would use it. It sounds like psychology. Which is fine! Psychology, after all, grew out of philosophy to begin with. I don't know why there should be a wall built up between the two fields.

Think about the dawn of computer science as another example. A lot of the foundational work was done by people who would have considered themselves philosophers, and most of the work firmly remained in philosophical and mathematical journals until people started physically realizing their work in the new machines, and then it became 'engineering' and 'science'. There's not a fine line where one guy was doing philosophy and another guy started doing 'computer science'. It was a gradual shift.


I think this particular shift is more reflective of academia at large - it's trendy, and has been for a long while, to "scientize" everything with more experimental data, empiricism and math equations thrown into papers. And analytic philosophy in particular is more adaptable to such a trend since it's often described as being "like science" in its attempts to accumulate tiny stepping stones of knowledge that build on each other.

Many established fields, on the other hand, retain a "philosophy of" subset that aims mostly to escape the technical aspects and continue probing basic assumptions. And I think that's a meatier discussion than the analytic approach, which can get very abstract without feeling profound - we have plenty of technical and social developments where ideological assumptions rule by default, because nobody's tried to champion even a slightly deeper approach.


Science departed from philosophy precisely when methodology about evidence and falsifiability of claims became involved. Without experiments to determine if a claim is false, a branch of study is really just one big session of just-so stories, where some authority explains what they think and why they think it, supposedly proving it if it's explained convincingly and eloquently enough.

Which is part of why I absolutely can't stand listening to philosophy. It's just not useful to hear people's opinions on things they can't possibly know for sure because of the very nature of the subject. What are you supposed to do with it? It feels like superstition with a veneer of academic legitimacy.

By the way, being able to spot one of these when you hear it is a useful skill in life: https://en.wikipedia.org/wiki/Just-so_story


I don't think you can criticise Philosophy for not "determining if a claim is false" because it doesn't make claims in the same way science does.

I see Philosophy more as an exploration of the shape of our thoughts. It identifies particular vague ideas we have, defines them in more detail, and gives them names so we can discuss them. This gives people who study it exposure to a wider variety of ways of thinking, and often makes it easier to spot assumptions in other contexts.

Indeed, at the point where you're doing an experiment to determine if a claim is false, you've already made a bunch of philosophical assumptions, such as the existence of a consistent material universe, and the validity of empirical inference. Which is not to say those assumptions aren't reasonable, but it's important to recognise that we're making them, rather than asserting that the results of science are "something we know for sure" and dismissing anything else as "just so stories".


Falsifiability, hm? Popper identified that in 1934. Before Popper, no scientist ever said "your claim is not falsifiable". So was science before that mere philosophy?


I guess you haven't heard of logic? There are two ways to gain knowledge.


Well, it's fine, except it wouldn't really fit the standards of what we call philosophy today. Doing psychology is fine, but why quit doing philosophy as well?


> For example, imagine that at noon you look at a stopped clock that happens to have stopped at noon. Your belief that it’s noon is true, and arguably it’s also justified.

I'm going to butcher stuff, but could someone please do an ELI5 of why you can't just infer that the clock isn't working? When I look at clocks I look at the hour hand and minute hand. Then I verify my belief by looking at the second hand. That process takes longer than a second, so I immediately notice the clock isn't working and that I cannot be sure whether it's noon.

I actually have a bit of a tic when watching clocks, in that I repeat this process multiple times at the weirdest moments when there is a clock available. I immediately notice a stopped or non-working clock.

I always felt that -- in first-hand-experience type of stuff -- we know something is true because we've seen it a trillion times before, like gravity and throwing a tennis ball.


The clock doesn't really matter in this example. The point is some simple device that you rely on. If the clock example bothers you, you can think of a broken thermometer that happens to show the correct temperature.


Consider your two cases: (1) deciding what time it is by using the mechanism of a man-made clock; (2) inferring there's this invisible thing called gravity at work when a tennis ball is thrown that dictates the flight of the tennis ball and that it is predictable enough that you could catch a thrown tennis ball by intuiting the arc from the velocity of the throw.

In the first case you deduce the time. And, until the law of gravity is established, in the second case you induce the arc.

In the first case it is entirely reasonable to perform your little verification step, because man-made objects sometimes fail. In the second case there is no need to second-guess the universe, because your intuition that the laws which govern things like the flight of a ball are not susceptible to revision is correct.

I don't think you're butchering anything, I just think you have to be able to recognise when you are using the inductive method and when you are using the deductive method.

Next question would be: do these two methods exhaust the range of methods we employ to acquire knowledge? If not, what are the other methods?


Many large public clocks have only two hands (church towers, for example).



I must confess the title made me think about Monty Python skits: Philosopher Competition

https://www.youtube.com/watch?v=-2gJamguN04


"If you believe you are a chair, then in theory I can sit on you. Let's test; hold still..."


Doesn't what becomes testable in the real world move out of philosophy and into a science?


So they're natural philosophers? In other words... scientists?

Empiricism is one way of providing evidence for a true statement, but -- from a philosophical perspective -- it is not obvious that it is the best or only way. There are some things that are true that are simply beyond the ability of anyone to see or prove. That's supposed to be what philosophy is about (as opposed to psychology).


>Empiricism is one way of providing evidence for a true statement, but -- from a philosophical perspective -- it is not obvious it is the best or only way.

But it does stop being science when you abandon testable predictions.


> But it does stop being science when you abandon testable predictions.

No it doesn't. This is a common myth, but if you look at astronomy or biology you will obviously see that is not the case. Observational science does not require experiments.

Yes, experiments provide more rigorous evidence, but to deny that observation also provides evidence is quite frankly to throw the baby out with the bathwater. Our entire science of classification would be moot. And these aren't even soft sciences; they are hard sciences.


A testable prediction doesn't require an experiment. Observational science is full of falsifiable predictions, just predictions that can't be tested without more time elapsed, more money spent, etc.--"that star in the Little Dipper will always appear stationary, with all the other stars rotating around it", "the carnivores we find in future will have shorter intestines than the herbivores", "no one will discover a language without subordinate clauses", etc.


What about systems of classification, like that animal is a zebra because it has x,y,z features? There's little to be tested there, but it is enormously useful and very foundational to science.


I'd see classification more as a means of organizing scientific knowledge, and less as scientific knowledge itself. (Like, the question of whether a result belongs to algebra or number theory isn't math. Those are just labels that we assign in the way we think makes the body of knowledge most understandable.) A classification is often somewhat arbitrary--a biologist and a furrier might classify rodents differently, and both have the most useful system for their respective work.

Though if you specify the basis for your classification, then I think the classification itself can become a testable prediction. You might have a category for "animals with features X and Y", because you've only ever found animals with both. This is implicitly the testable prediction that those two features will occur together. If you found a black-and-white-striped horse-shaped carnivore, then the biologists would surely consider something to have been falsified...


Astronomy and biology absolutely feature testable predictions. Science really can't happen without that feature. It's certainly not a myth.

If you make a prediction that by its nature cannot be tested, you really can't do anything useful with it.


What about interpretations of quantum mechanics? I'd argue these are scientific, but nobody has any idea how to test them just yet.

They might do one day.

But if we weren't willing to talk about them before they were testable then nobody would be inspired to develop the necessary tests.

The fact that they're an essential part of the scientific process makes them science as far as I'm concerned. If it helps make up your mind, plenty of people called scientists spend time thinking about them.


There are lots of tests for quantum mechanics; it's probably the most rigorously tested theory in physics.

Some parts of quantum theories are inferred, but they’re inferred from data collected in reproducible tests. Also, any part of quantum theory that hasn’t been confirmed by experiment has scientists working to do just that. If something was untestable and thought to never be testable in the future, then it wouldn’t be considered science.


Observation is empirical.

I genuinely think you’re confused about what science is. It is entirely based on empirical evidence. Broadly speaking it is one source of truth and knowledge, and while there are other sources of truth and knowledge that are also valid, being valid alone doesn’t make them science.


If it's entirely based on empirical evidence, then why is subjective experience excluded from science?


Because rigorous scientific inquiry holds evidence to certain standards, like being measurable and repeatable for instance. Which is why fields that struggle to collect high-quality evidence, like psychology for instance (and other fields to a greater extent), are typically referred to as "soft sciences". While they follow some of the same methodology as other sciences, the evidence they collect and the measurements they make tend to be quite subjective and imprecise.

Psychology is probably a decent example here, as it has come up with a few tests that are reproducible (like IQ tests), and even manages to make some predictions given large enough sets of subjects, which is the only reason it's considered any form of science at all. Philosophy at the other end of the spectrum tends to almost never make any measurable predictions at all.


No argument there...



