Introduction to the Philosophy of Science
PHIL 240/Ed Slowik
Contents:
Class Notes
for Readings in the Philosophy of Science/Schick
MAKE SURE THAT YOU READ BOTH THE CLASS
NOTES FOR THE SCHICK ANTHOLOGY, AS WELL AS THE CONCISE INTRODUCTION (THAT IS,
THERE ARE TWO PARTS TO THESE NOTES).
PHILOSOPHY 240: PHILOSOPHY OF SCIENCE
Edward Slowik
Class
Notes for Readings in the Philosophy of Science, T. Schick, Jr., ed.
Part 1. This section of the book concerns the
problem of demarcating "real" scientific theories (like biology,
geology, chemistry, etc.) from the so-called pseudo-scientific theories (like
astrology, ESP, UFOlogy, etc.). This problem has been, and still remains, one
of the most important issues in philosophy of science.
Chapter 1:
Ayer. A statement is
meaningful if and only if it is verifiable by observable evidence (i.e., some
observation can demonstrate that the statement is either true or false). By
this means, Ayer hopes to separate science from pseudo-science since he assumes
that real sciences make claims that are verifiable, such as, "copper
expands when heated"; whereas pseudo-sciences do not make claims that are verifiable,
for example, "The mind is comprised of the Id, Ego, and Super-ego."
(How do you verify the existence of the Id?) Ayer appeals to "in
principle" verifiability to explain why such claims as "The surface
of Pluto is rocky" are not pseudo-science: We can verify claims about the
surface of Pluto, but we just donÕt have the technology, time, money, etc., to
actually go there and verify such claims, but we can never "in
principle" verify pseudo-scientific claims (since we donÕt know how to verify,
for example, the existence of the Id, Ego, or Super-ego).
Chapter 2:
Popper. In this
article, Popper rejects Ayer's theory since he thinks that mere verifiability
is too weak a concept to solve the demarcation problem. If a theory is vague
enough, it can make claims that are verifiable, or at least seem to be
verifiable, such as, "if you're a Taurus, then you are a stubborn
person". Thus, since astrology is a pseudo-science, but a simple
verificationist theory (like Ayer's) can't separate astrology from real science,
we need to find a new method of solving the demarcation problem. Rather, Popper
thinks that the true sign of a scientific theory is that it is falsifiable;
that is, the theory makes some observable prediction that can show that the
theory is false. Pseudo-scientific theories, according to Popper, make claims
that are consistent with all known evidence, but they do not make claims that
can ever be shown to be false. Astrology, for example, never makes predictions
that are so specific or detailed that they can be shown to be false, but real
science does this all the time (e.g., "if the Copernican theory is true,
then we should see phases of Venus").
Chapter 3:
Kuhn. For Kuhn,
Popper's falsification is not sufficient to solve the demarcation problem
because many pseudo-scientific theories make claims that are falsifiable, and
have been falsified on numerous occasions: e.g., "if I have ESP powers, I
should score above chance in a Zener card test". So pseudo-sciences such as
astrology make falsifiable claims just like the falsifiable claims of the
real sciences, such as astronomy or physics. Popper's criterion should thus
judge that both astrology and astronomy have equal scientific worth—but, of
course, we reject this conclusion (since astrology is a pseudo-science).
According to Kuhn, a real science is one that shares a set of criteria for
determining what are the concepts and problems of the science, how one solves
these problems, and the standards of successful solutions to the problems
(among other things). These features are what constitute a
"paradigm", as he calls it. Kuhn claims that pseudo-sciences don't
have the ability to meet any of these criteria, since there is no agreement
among the practitioners of, say, astrology, on the basic concepts and problems
of this field, how to solve the problems, and what constitutes a successful
solution to a problem, etc.
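To make the ESP example concrete, here is a minimal sketch (in Python) of what "scoring above chance" amounts to, assuming a standard 25-trial Zener test where a pure guesser has a 1-in-5 chance per card; the specific numbers are illustrative assumptions, not anything in Kuhn's text:

    from math import comb

    def p_at_least(hits, trials=25, p=0.2):
        """Probability of getting `hits` or more correct by pure guessing,
        on a binomial model (five Zener symbols, so p = 1/5 per trial)."""
        return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
                   for k in range(hits, trials + 1))

    print(p_at_least(5))   # ~0.58: five hits is unremarkable guessing
    print(p_at_least(12))  # ~0.0015: would falsify "subject scores at chance"

The point is that "I have ESP" yields a definite, checkable prediction (scores well above 5 out of 25), which is exactly what makes it falsifiable on Popper's criterion.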
Chapter 4:
Lakatos. Lakatos
agrees for the most part with Kuhn, but disagrees with the implied
"irrationality" of theory choice (as will be described much later in
class). Lakatos' contribution to the demarcation problem (although it is an old
idea) is that a theory is falsified if another theory comes along that can explain
everything that the old theory explained, but that also makes predictions of
new facts that it would have been impossible (or at least extremely difficult)
for the old theory to make. E.g., Einstein's theory explained everything that
Newton's theory explained (more or less), but it also predicted new facts which
the older theory could not, or probably would not, have explained, such as the
orbit of Mercury, the bending of light rays, etc.
Chapter 5:
Laudan. The main
argument that Laudan puts forth is that, although Creation science (i.e., the
view that accepts the Genesis creation story from the Bible as literally true)
is indeed pseudo-scientific, it is not pseudo-scientific because it fails some
test that supposedly separates real science from phony science; rather,
Creation science is pseudo-scientific because it has made claims that are
testable, and although those claims have been shown to be false time and
time again, many people still believe in it. Thus, contra Ayer or Popper,
Creation science does make claims that are verifiable and falsifiable.
Accordingly, the people who believe in Creation science must have
non-scientific reasons for accepting it; and this places Creation science in
the realm of the pseudo-scientific. Therefore, Laudan (apparently) rejects any attempt
to come up with a list of criteria that separate science from pseudo-science:
all such lists are inaccurate and ultimately fail to work in all cases.
As regards Judge
Overton's ruling, Laudan goes on to criticize the 5 criteria which Overton uses
to resolve the demarcation problem (and thereby declare Creationism to be
pseudo-scientific):
#1, Science is
guided by natural laws, and #2, it is explained by reference to natural laws.
Laudan claims that there are many cases in the history of science where
"good" scientific theories were accepted but did not have natural
laws to explain why the theory worked (and many good theories may still not
have a natural law): e.g., Darwin did not have genetic theory to explain the
mechanism underlying natural selection, nor does plate tectonics yet have laws
of "crustal motion"—but does that mean these theories
were/are pseudo-scientific? Thus, criteria #1-2 are not sufficient to label a
theory "pseudo-scientific" (because many real theories of science
would be labeled pseudo-scientific, as well).
#3, scientific
theories are testable, #4, their conclusions are tentative and not the final
word, and #5, they are falsifiable. Of course, as stated above, Laudan claims
that Creation science is testable and falsifiable—and, in fact, has been
tested and falsified over and over again! Also, Creation science has continued
to change over the years and adapt its hypotheses to the new observational data
from geology, biology, etc. (in violation of #4). Consequently, Creation science
meets criteria #3-5.
Chapter 6:
Ruse. In response to
Laudan's article above, Ruse claims that there are very important differences
between Creationism and real science which Laudan does not acknowledge. With
respect to Laudan's criticisms of Overton's 5 criteria, Ruse argues:
While it is true
that older theories in the history of science often failed to meet conditions
#1-2, those theories eventually were shown to fit with, or formulate new,
natural laws (as in the example of the development of genetics and Darwinian
Evolution). As regards Plate Tectonics (in Geoscience), Ruse claims that
although no laws of nature have yet been devised for the cause of these
phenomena, this area of science does not appeal to "miracles", and
was only accepted when evidence for the theory showed that it wasn't a
miracle—but the same is not true of Creationism, which does appeal to
miracles. Concerning #3-5, Ruse rejects Laudan's claim that Creationism has
satisfied these criteria. Since Creationism takes as its core belief the claim
that the world was created by an act of God, this aspect of the theory cannot
in principle be verified, revised, or falsified. (Why? Because Creation is a
supernatural event that the Creationists hold to be outside the realm of
science and thus not explainable by natural law—and this sort of claim is
not revisable and not testable, and thus not falsifiable.)
Chapter
39: Feyerabend.
Science, according to Feyerabend, is driven by politics and ideology. Science
has no method to distinguish fact from fiction that is any better than that of
any other type of social institution (and even if it did, there is no logical
relationship between "facts" and theories, since "facts"
are also subjective and based on ideology and politics). Accordingly, it is
false to believe that science is better than religion because science deals
with facts (whereas religion supposedly deals only with faith).
"Truth" plays no role in determining which scientific theories get
accepted, states Feyerabend, since theory choice is decided by vote (i.e., the
scientists merely vote for the theory that they like best, and the theory with
the most votes is adopted). Since truth plays no part in science, the
state/government has no right to force people to study it (as opposed to some
other ideology). In a real democracy, people should be free to choose their own
science; thus it is possible that people may choose religion over science (for
neither is more "correct" or more "truthful" than the
other).
Chapter
40: Dawkins. Science
and religion are very different, according to Dawkins. Science is based on
observations, which can be independently verified by others, while religion is
based on scripture, which must be accepted on faith (either through faith in
the allegedly divine scriptures, or through faith in the allegedly
divinely-inspired religious leaders). Ultimately, since faith cannot be
reasoned with, it leads to conflicts and war. For Dawkins, faith is a disease
("virus of the mind") which can only be cured by the use of science and
reason. Also, if students were exposed to good science and reason training from
an early age, then science would provide an "awe-inspiring" emotional
satisfaction for the average person which is every bit as good as that
allegedly supplied by religious faith.
Chapter
41: Plantinga. For
Plantinga, scripture is a divine revelation, but reason is a God-given
attribute, as well. Therefore, when science/reason and religion conflict, one
must examine the issues carefully to determine whether it is a faulty religious
interpretation of scripture or a false scientific theory that is the source of
the conflict—Plantinga thus assumes that scripture and science can always
be reconciled in the end. After examining the evidence for Evolution theory,
Plantinga rejects this theory and proposes some form of Creationism instead
(i.e., that the earth was created by God recently with all living species in
their present form). Rather than limit science to mere natural explanations of
the physical world, Plantinga assumes that scientific explanations should make
use of supernatural causes, and not just natural causes, in trying to explain
scientific/natural phenomena. He reasons that, if the best explanation involves
supernatural causes, then science should use these causes in its explanations
of the world.
Chapter
42: McMullin. There
can be no Theistic science (a la Plantinga), concludes McMullin. This is not
simply because a theistic science incorporates supernatural causes, but because
such a science lacks the proper method of science. This proper method
involves: systematic observation, generalization, testing of hypotheses, etc.
Plantinga wants a theistic science based on Christianity, but why limit it to
just that one religion? Yet, once you open up a theistic science to many religions,
then how do you test rival hypotheses? For instance, how do you set up a test
to determine whether "the Christian God created the world" or
"the Norse god created the world"? It seems that no conceivable
observable test could ever settle this debate—thus, according to
McMullin, this unresolvable dilemma shows that neither theory is
scientific. McMullin also rejects Plantinga's brand of Creationism, since the
evidence overwhelmingly favors evolution. (Besides, maybe evolution is God's
chosen method of creation?)
Chapter
43: Atkins. There is
no reason to believe in the supernatural, according to Atkins. He uses
"Ockham's Razor" to argue that natural explanations are simpler
than supernatural explanations, and thus they are better explanations. According
to Ockham, if you have two theories which both explain some particular
phenomenon equally well, then always choose the theory which is the simplest
(i.e., assumes the fewest theoretical entities or mysterious elements
in accounting for the observable data). Since supernatural explanations involve
the supernatural (obviously), which is beyond science and nature, these
explanations will always be more complex, and thus should be rejected for
natural explanations of the same phenomena. Will science always find an answer
to every question that can be raised, or can it resolve all scientific
problems? Atkins believes that since science has not encountered any barriers
in explaining the natural world so far, we have good reason to believe that science
will keep answering our scientific problems in the future.
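As a toy formalization of the Razor (the "count the posited entities" measure of simplicity is my own simplifying assumption, not Atkins's formula), the decision rule might be sketched as:

    def prefer_by_ockham(theories):
        """Toy Ockham's Razor: among theories that explain the same
        phenomena equally well, return the one positing the fewest entities.
        Each theory is (name, entities_posited, phenomena_explained)."""
        best_coverage = max(len(t[2]) for t in theories)
        candidates = [t for t in theories if len(t[2]) == best_coverage]
        return min(candidates, key=lambda t: len(t[1]))

    natural = ("natural", {"molecules", "energy"}, {"combustion", "decay"})
    supernatural = ("supernatural", {"molecules", "energy", "spirits"},
                    {"combustion", "decay"})
    print(prefer_by_ockham([natural, supernatural])[0])  # -> "natural"

The supernatural theory explains nothing extra but posits an extra kind of entity, so the Razor discards it; this is the shape of Atkins's argument.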
Chapter
44: Gardner. While
science is the best means of explaining the world, Gardner holds that it cannot
explain everything. Science's job is to explain the fundamental laws of
nature—but it is not the job of science to explain
"everything". For example, even if science were to explain all the
laws of nature, one could still ask the question, "Why those laws of
nature, and not some other set of laws?"—and this demonstrates that
humans can keep asking questions over and above any explanation that science
(or any other methodology) can ever provide. Gardner also wonders if the human
mind may be limited in what it can understand (just like lower animals have
limited understanding in comparison to human beings). So, it may be impossible
to solve all scientific problems given our limited mental capacity (unless we
evolve to a higher, smarter level of being).
Chapter 7:
Hume. Hume held that
there are only two ways to justify your beliefs, by reason or experience (if
something cannot be justified in either of these two ways, then you have no
rational or deductive justification for believing it), and he used this
conclusion as a basis for undermining certain "a priori" (i.e.,
through reason alone) concepts of causation. Belief in causation, he states,
ultimately resides in a belief in the Principle of Induction—and to argue
inductively is really to argue that the future will resemble the past. Thus the
sun will continue to come up each morning, food will continue to nourish us,
walking out of high windows will continue to be dangerous, etc., because it has
always done so in the past! Since such claims involve "matters of
fact" (i.e., they are learned through experience, and not purely through
reason), we can only appeal to the truth of such claims if we assume that the
past occurrences of these "constantly conjoined" events—i.e.,
"eating the bread", and "being nourished by the
bread"—will continue to be conjoined in the future. This is how Hume
understands causation, for it is merely the constant conjunction of two events
(which are "matters of fact") in space and time: e.g., the
"striking of the match" has always been followed in space and time by
the "lighting of the match".
Of
course, Hume says that your belief that the future will resemble the past is
without justification. This, he says, is because the belief cannot be justified
either as a relation of ideas (i.e., by reason alone) or as a matter of fact
(i.e., by experience). Let us consider these points separately: (1) "The
future will resemble the past" is not true by reason alone, since we can
easily imagine that, in the future, things will go completely haywire, behaving
in new and unexpected ways; (2) "The future will resemble the past"
is also not a matter of fact—that is, it cannot be justified on the basis
of your past experience. To try to justify it in this way, you might say:
"My experience has been that, in the past, each new day has resembled the
ones that have gone before, i.e. the future has always resembled the
past. So I expect that, in the future, the future will continue to
resemble the past." But, in applying your past experience to the future in
this way, you are assuming that your past experience is an appropriate guide to
what the future will hold, i.e. you are assuming that the future will resemble
the past—and this is the very belief that you are trying to justify, so
you can't assume it in your attempt to justify it.
Therefore, the
belief that the future will resemble the past, i.e. the Principle of Induction,
is not justified, either as a relation of ideas or as a matter of fact.
However, Hume seems to think that we do have an idea of causation, and that we
base this notion on the fact that we have become conditioned by experience, or
"habit", to always anticipate or expect the effect ("the
lighting of the match") given the cause ("the striking of the
match"). Hume explains at length that this form of conditioning is crucial
to our survival, and that only a fool, therefore, would not "believe"
in causation—but, of course, we have no completely certain, philosophical
understanding of causation as some previous philosophers had thought.
Chapter 8.
Hempel. The
"Narrow Inductivist" (NI) account of science is the main target of
criticism in HempelÕs article. The NI theory supposedly consists in: (1)
Observing and recording the relevant facts of an experiment or scientific
investigation; (2) analyzing and classifying those facts; (3) using induction
to derive generalizations from those observed facts; and (4) testing the
generalizations further. Induction is the method by which you conclude that all
objects in a certain class have a property based on your experience of several
members of that class: i.e., "Every A that has been observed is an F;
therefore, all As are F". (For example, "Every swan that has been
observed is white. Therefore, all swans are white.") Hempel criticizes
this view for various reasons: (a) Facts can't be analyzed and observed in the
absence of hypotheses, since there would be no guidance or constraint on what to
analyze or observe. (E.g., What is relevant to the burning of a candle, as a
test of combustion?: The color of the candle? The time of day? etc.) (b)
Induction can only generate simple hypotheses based on direct observation, such
as "copper expands when heated", and not the more complex (and
useful) hypothesis that make use of unobservable entities, such as "matter
is composed of atoms" (i.e., atoms are unobservable, so they canÕt
constitute the basis of an inductive case).
HempelÕs
solution is to invoke the "Hypothetico-Deductive" (H-D) method as an
alternative to NI. The H-D method has the following form (where "H"
stands for the hypothesis being tested, and "P" stands for a
prediction derived from H):
Premise 1) If H, then P.
Premise 2) P.
Conclusion) H
Example:
Premise 1)
"If the Copernican theory is true (H), then Venus should show a full set
of phases (P)".
Premise 2)
"Venus does show a full set of phases (P)."
Conclusion)
"The Copernican theory is true (H)."
As can be seen,
the truth of P gives strong support for the truth of H, but it is not
conclusive proof of H's truth. This is due to the fact that the form of the
logical argument used by the H-D method is called "Affirming the
Consequent", and this form of argument allows for situations where
all the premises are true but the conclusion is false. For example, given that
oxygen is necessary for fire, one might argue:
Premise 1)
"If there is fire in this room (H), then there is oxygen in this room
(P)."
Premise 2)
"there is oxygen in this room (P)."
Conclusion)
"So, there is fire in this room (H)."
Of course, the
common fact that there is oxygen, but no fire, in most rooms shows the weakness
of Affirming the Consequent as an argument form. Even with this fault, however,
the H-D method provides inductive support for an hypothesis, since every time
the hypothesis passes an observable test (i.e., a prediction derived from the
hypothesis turns out to be correct), the probability of the hypothesis being
true is increased. Accordingly, the H-D method is a means by which an
hypothesis can gain inductive support in degrees as it makes more and more
successful predictions.
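The claim that each passed test raises an hypothesis's probability can be pictured with a simple Bayesian update. This is an illustrative reconstruction, not Hempel's own formalism; the prior of 0.1 and the 0.5 chance of the evidence occurring even if H is false are invented numbers:

    def update(prior, p_e_given_not_h):
        """Bayesian update after a successful prediction. Since the H-D
        method derives P from H, we assume P(evidence | H) = 1."""
        return prior / (prior + p_e_given_not_h * (1 - prior))

    prob = 0.1  # invented initial credence in H
    for test in range(1, 6):
        prob = update(prob, p_e_given_not_h=0.5)
        print(f"after test {test}: P(H) = {prob:.3f}")
    # P(H) climbs with each passed test but never reaches 1, matching
    # Hempel's point that H-D support is inductive, not conclusive proof.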
Chapter 9.
Popper. The main
criticism that Popper launches at the H-D method (as above) is that it is not
necessarily the case that successful predictions make an hypothesis more likely
to be true. Many pseudo-scientific theories make successful predictions on a
regular basis, but the success of these predictions does not make the theory
any more "true" than if it did not make them (e.g., astrologers often
make successful predictions). Rather, Popper thinks that a different form of
argument is needed to test hypotheses, a method that tries to reject hypotheses
by trying to prove them false: the form of the argument is called "Denying
the Consequent":
Premise 1) If H, then P.
Premise 2) Not P.
Conclusion) Not H
This form of
argument is called "valid", which means that, if all the premises are
true, then the conclusion is true (so it is impossible to have all true
premises and a false conclusion). Consider the following example:
Example:
Premise 1)
"If the Ptolemaic theory is correct (H), then Venus should not show phases
(P)."
Premise 2)
"Venus does show phases (which negates the prediction, or simply, not
P)."
Conclusion)
"Therefore, the Ptolemaic theory is not correct (which negates the
hypothesis, or not H)."
One can illustrate
the validity of this argument form using our previous example of oxygen
and fire:
Premise 1)
"If there is fire in this room (H), then there is oxygen in this room
(P)."
Premise 2)
"There is no oxygen in the room (not P)."
Conclusion)
"There is no fire in the room (not H)."
If those two
premises are true, then the conclusion must also be true (and it is true that,
if there is no oxygen, then there can be no fire). Popper believes that the
power of "falsification" (also known as disconfirmation) makes for a
better method of scientific testing, since it can conclusively demonstrate
whether or not a theory has been refuted in a test (and thus marks progress in
the sciences). And, of course, Popper does not believe that pseudo-scientific
theories can make predictions that can be falsified using the above (Denying
the Consequent) method (see the notes on Popper above).
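The contrast between these two argument forms can be checked mechanically. The brute-force truth-table test below is a standard logic exercise rather than anything in Popper's text; it confirms that Denying the Consequent is valid while Affirming the Consequent is not:

    from itertools import product

    def implies(a, b):
        return (not a) or b

    def is_valid(premises, conclusion):
        """Valid iff no assignment of truth values to H and P makes
        every premise true while the conclusion is false."""
        return not any(
            all(prem(h, p) for prem in premises) and not conclusion(h, p)
            for h, p in product([True, False], repeat=2))

    # Affirming the Consequent: If H then P; P; therefore H.
    print(is_valid([lambda h, p: implies(h, p), lambda h, p: p],
                   lambda h, p: h))      # False: invalid (H false, P true)
    # Denying the Consequent: If H then P; not P; therefore not H.
    print(is_valid([lambda h, p: implies(h, p), lambda h, p: not p],
                   lambda h, p: not h))  # True: valid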
Chapter
22: Carnap. In this
late article, Carnap merely asserts his belief that some form of
Observation/Theory distinction (OT distinction) can eventually be developed.
Chapter
23: Hesse. For the
Positivists, the meanings of the observation terms (O-terms) are independent of
our theoretical beliefs (T-terms), so that the meaning of the latter is fixed
completely by the former (see notes to Slowik: chapter 1 & 2). However,
Hesse argues that this is not possible, since O-terms, as well as the T-terms,
depend upon our theories of the world—and these theories often change,
thus changing the meaning of the O-terms and T-terms. For example: such T-terms
as, "mammal", "atom", etc., referred to (or "picked
out") a different set of O-terms in the past. But, with changes in theory,
these terms now refer to a different set of observable things—and this
demonstrates that the meaning of the O-terms cannot be determined separately, or
apart, from our larger theories of the world. (E.g., "mammals" once
referred to only land animals, but now that term also pertains to whales,
because our understanding of "mammals" has changed.)
Chapter
24: Hanson. The
"theory-laden" character of observation is Hanson's main claim to
fame in the philosophy of science, for he was one of the first philosophers to
argue (at length) that our seemingly un-theoretically-influenced observations
are, you guessed it, theoretically influenced! In other words, the
observational terms and statements that the Positivists believe to be
uncontaminated by theoretical beliefs are, in reality, contaminated by theory,
so these O-terms cannot play the foundational role (of defining the T-terms)
assigned to them. (So, the OT distinction fails.) The example of Tycho and
Kepler sitting on the bench watching the sunset is the famous example Hanson
employed: since Tycho believes the Ptolemaic theory (that the earth is
stationary), he "sees" the sun move; but, on the other hand, since
Kepler believes the Copernican theory (that the earth moves), he
"sees" the earth's horizon move past a stationary sun. How could this
happen? Isn't the sunset a clear example of an observation that involves only
O-terms, and is thereby free of all theory (and T-terms)? No, declares Hanson,
since the theories that each of these astronomers accepts "determine"
the character of their observations—or, put differently, their
observations and perceptions of the physical world are "laden" with
theory (i.e., general beliefs, concepts, etc., which are all theoretical), so
there can be no theory-independent observations or facts. Hanson also replied
to the objection that "they both see the same thing but interpret it
differently": He reasons that "seeing" is not a two-part
process; rather, it is a one-part process. We don't see the sunset as a
mere change of distance between the two objects (sun and horizon), and then interpret
it as the sun moving (or earth moving)—instead, we just see the
sun moving (or earth moving)! He seems to agree that the same objective event
in the physical world (i.e., a relative change of distance between the sun
and horizon) is taking place before our two gazing astronomers (and they have
the same optical/retinal experiences), but that these two men are simply "seeing"
that event in different ways right from the start. Conclusion: there can be no
OT distinction since there are no O-terms not contaminated by theory.
Chapter
10: Duhem. Duhem's
presentation of the "underdetermination" problem is a classic (but see
the class notes for Slowik: chapter 3).
Chapter
11: Lipton. Lipton
argues that many of the more simplistic accounts of scientific
confirmation/disconfirmation (such as PopperÕs and HempelÕs above) do not
accurately capture the real use of such methods by scientists. The problem with
these earlier accounts is that they only look at individual hypotheses as
regards successful predictions, and they do not examine the comparative success
of different hypotheses (i.e., the success of rival hypotheses when compared to
each other). "Real" scientific work compares and contrasts different
hypotheses to find the one that best accounts for the evidence (a process which
Lipton dubs, "contrastive explanations"). In the Semmelweis case,
Hempel's version of the Hypothetico-Deductive method is not sufficient to pick
out the "cadaveric" hypothesis as the best explanation of the
increased infection in the First Division, since the H-D method would provide
equal support for the other hypotheses (overcrowding, diet, epidemic outbreak,
etc.), and not favor the cadaveric hypothesis—this is due to the fact
that the H-D method (on Hempel's account) can only determine if an hypothesis
and its observable prediction are compatible with the observational evidence
(and the evidence of the infected patients is compatible with all the
hypotheses, when these hypotheses are examined in isolation). Thus, it is only
when you compare and contrast the relative successes and failures among these
rival hypotheses that you find one that stands out as the clear winner. Also,
Popper's "falsification" method (which is a variant of the inductive
H-D method, it should be recalled) is not sufficient to account for the
Semmelweis case: since the demand for a prediction that could deductively
refute the cadaveric hypothesis would likely lead to testing the claim,
"if there is no infection (from the students to the patients), then there
is no childbed fever", Popper's method would falsify the cadaveric
hypothesis! That is, since having the students disinfect their hands before
their evaluations only led to the mortality rate of the First Division falling
back to that of the Second Division, the claim put forth above is actually
falsified in many individual instances (where patients came down with childbed
fever even though the students who handled them disinfected their hands). So
Popper's method would lead to the rejection of the cadaveric
hypothesis—and this hypothesis is clearly the best explanation of the
increased mortality rates. Lipton's conclusion is that hypotheses cannot be
tested in isolation, but their relative successes and failures must be compared
and contrasted to find the best explanation.
Chapter
17: Putnam & Oppenheim.
This paper summarizes the classical case for the reduction of higher level theories
to lower level theories. For Putnam and Oppenheim (P&O), the behavior of
entities at one level is determined by the behavior of entities at a lower
level, thus one can reduce the higher level theory to the lower (e.g., the
chemical elements can be reduced to the particles of physics: see notes to
Slowik, chapter 4). As they put it, "the objects in the universe of
discourse [T] are wholes which possess a decomposition into proper parts all of
which belong to the universe of discourse of [T*]." They also accept the
"Kemeny and Oppenheim" thesis of reduction: (1) the vocabulary of T
contains terms not in T*; (2) any observable data explainable by T are
explainable by T*; (3) T* is at least as well systematized as T (i.e., that the
simplicity and explanatory power of T* must be at least as great as T).
P&O's claim for a reductive science is dubbed the "Unity of
Science" (US) thesis, since it entails that there can be no
"special" sciences which are not reducible to a more basic level
(with physics being the most basic level, of course). P&O claim that the US
thesis has not been proven, but that it is a "working hypothesis"
(and is a credible hypothesis). More importantly, P&O claim that evidence
from the sciences has thus far provided confirming support for the US thesis
(e.g., the reduction of chemistry to physics, etc.), and that the US thesis is
also much simpler (OckhamÕs Razor) than the competing notion that there are
many sciences which are not reducible to one another (since a world where the
US thesis did not hold would seem to imply that the phenomena/events of the
world that comprise different sciences are in some way "disconnected"
and without any relationship to each other; for example, the behavior of
chemical elements would not depend on the behavior of physical elements, such
as protons, etc.—but how can this be, since chemical elements are
collections of physical elements!).
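As a rough illustration of the three conditions (the set-based encoding below is a deliberate simplification, not P&O's formal apparatus), a reduction claim could be checked like this:

    def reduces(t_vocab, t_data, t_systematization,
                tstar_vocab, tstar_data, tstar_systematization):
        """Toy check that T reduces to T*: vocabularies and explainable
        data are sets of strings; systematization is a single number
        standing in for simplicity-plus-explanatory-power."""
        cond1 = bool(t_vocab - tstar_vocab)                 # (1) T has terms T* lacks
        cond2 = t_data <= tstar_data                        # (2) T* explains all T's data
        cond3 = tstar_systematization >= t_systematization  # (3) T* as well systematized
        return cond1 and cond2 and cond3

    # e.g., chemistry (T) reducing to particle physics (T*):
    print(reduces({"valence", "element"}, {"chemical bonding"}, 1.0,
                  {"proton", "electron"}, {"chemical bonding", "spectra"}, 1.0))  # True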
Chapter
18: Fodor. Fodor
rejects Putnam and Oppenheim's US thesis since he argues that the behavior of
higher level entities, or at least the "content" of these higher
level theories, is not always determined by the entities at a lower level. For
example, certain laws of economics (such as Gresham's law) will remain the same
no matter what kind of currency is chosen: that is, whether the money is made
out of paper, wood, metal, etc., will not affect the laws of
economics—so, the content of the laws of the higher level theory
(economics) is not completely determined by the specific type of bodies at the
lower level (i.e., whether or not it is paper or coin currency). Another way to
put this problem is that if you were to try to define the "natural
kind" term "monetary exchange" (where "natural kind"
is a basic type of entity or grouping of things within a theory), then you
would have to list all the possible objects that could count as a
"monetary exchange"; but, since this list could be endless, the
definition of "monetary exchange" would then not be well-defined (in
fact, the definition would not be complete). Also, many other natural kinds
might have the very same infinitely extendable definition, such as "money
lending", thus blurring the distinction between these two theoretical
terms of economics: i.e., both "money lending" and "monetary
exchange" would have the same potentially infinite definition, and thus
the two terms would have the same meaning (which is false, of course).
Consequently, the natural kinds at one level of theory may not be important at
the level of another theory: e.g., different types of rocks, which are
important natural kinds for geology, are not important for physics, since all
bodies are comprised of the same natural kinds of physics (electrons, protons,
neutrons, etc.). At the end of the article, Fodor suggests that what is
important to reduction is not that the entities of one theory can be reduced to
the entities of another theory; rather, reduction should try to "explicate
the physical mechanisms whereby events conform to the laws of the special
sciences"—i.e., try to reveal those processes, laws, mechanisms at
the lower level theory that bring about the laws and processes at the higher
level of theory. On this view, the ontology of the world is still the science
of physics, but the "content" (or information) of higher level theories
(such as economics) cannot be simply reduced to the "content" of
physics. Or, in other words, although everything is made up of the particles of
physics, and no sciences violate these laws of physics, there is still a
hierarchy of levels of information/content in the different sciences which is
not reducible to physics. (This view is sometimes dubbed,
"supervenience", see Slowik, chapter 4.)
Chapter
19: Darden and Maull.
Darden and Maull (D&M) try to provide a theory that allows for a unified
science (i.e., no special sciences) but without having to reduce one theory to
another in the manner of Putnam and Oppenheim. In essence (as suggested above),
it may be possible to provide theories that unify the information/content of
different sciences by showing how the content of, say, two separate theories
(such as Kepler's planetary laws and Galileo's law of free-fall) can be
inter-related and better explained by a more inclusive theory that embraces
them both (Newton's law of gravity). A "field", for D&M, is an
area of science consisting of a central core of problems, domain of facts, and
set of techniques for explaining those facts. Therefore, "interfield"
theories are theories that can unify the content/information of other fields by
explicating the relations between these fields; and this unification provides
for a deeper understanding of both fields without reducing one to the other.
(In the example above, the theories of Kepler and Galileo can be considered
different "fields" since they both focused their attention on a quite
different set of problems, facts, and techniques.)
Chapter
20: Dupre. Although
he doesn't deny that the "interfield" method of Darden and Maull
(D&M) might actually assist in unifying different sciences, Dupre believes
that the D&M method would seem to allow pseudo-sciences into the domain of
science, and thereby work against the unification goal. For instance, if all it
takes for a theory to be "unified" into the sciences is for the
theory to have its content/information related and integrated into a larger
theory, then it seems that pseudo-scientific theories, such as astrology, could
be likewise incorporated into a larger theory. To be more specific, someone
might come up with a theory that interrelates both astronomy and astrology,
thus including astrology in the domain of "legitimate" scientific
theories. Of course, anytime a philosophical theory of science grants
legitimate status to a pseudo-scientific theory, there is something seriously
wrong with that philosophical theory. In contrast to this approach, Dupre
suggests that the demarcation problem (see notes above) cannot be solved by
examining the content/information contained in theories—rather, it can
only be resolved by looking at what scientists do, i.e., at scientific practice.
Dupre belongs to a growing set of philosophers of science (partly motivated by
the later Wittgenstein) who think that science is best understood not as a body
of knowledge, but as a "practice" (which can be very roughly
described as a specific socio-cultural human enterprise with a set of
established methodological techniques, behaviors, virtues, etc.). Therefore, it
is not the particular beliefs or theories of, say, Creation science that makes
this theory pseudo-scientific: rather, it is the fact that Creation scientists
fail to "do" the same things that real scientists "do"
(such as Evolutionists) that demarcates the former from the latter. Creation
scientists supposedly do not possess the appropriate intellectual
skills/practices that real scientists possess (e.g., they are not sensitive to
empirical facts, do not work from plausible background assumptions, are not
responsive to criticism, etc.), thus Creation scientists are not "doing" science.
Chapter
21: Reisch. The
"practice" theory of science is rejected by Reisch since it is too
weak to carry out the demarcation of science from pseudo-science. Reisch points
out that the methodology of pseudo-scientific theories is often identical to
the methodology of real sciences. In fact, any close examination of the
methodology of Creation science shows that they behave very much like the
scientists in "legitimate" scientific theories: e.g., Creation
scientists have their own research institutes, journals, and often engage in
scientific disputes with one another over rival hypotheses concerning the
Biblical flood—and they use evidence to back up their claims, too!
Therefore, it would be very difficult to try to separate real science from
pseudo-science based only on how scientists practice their craft. (In addition,
much great scientific work has historically been carried out under
circumstances where the methodology was quite sloppy given our current
standards—e.g., Newton's appeal to God as the stabilizing source of the
solar system—thus, should we reject Newton's Theory of Gravity as pseudo-science
since he appealed to supernatural sources in his attempt to understand the
theory?) Consequently, the content/information of a given theory is important
in determining whether it is, or is not, science (e.g., the use of
"supernatural" agents in Creation science is a part of the content of
the theory that is unacceptable to modern science). Furthermore, although
science does not have a theory describing how science and pseudo-science are
demarcated, it is still the case that science can easily detect that, say,
Creationism is a pseudo-science due to the fact that Creationism cannot be
unified/integrated into the larger domain of scientific theories (and this
failure to unify is a result of the content of the pseudo-science, not its
methods/practices). Reisch concludes that unification should probably be seen
as a scientific problem, and not a philosophical problem, since science does
unify (and demarcate pseudo-scientific theories), whereas philosophers have
failed to come up with a workable demarcation criterion.
Chapters
12-16: Hempel, Salmon, van Fraassen, Kitcher. Hempel is treated in the Slowik
notes: chapter 5 (and skip M. Salmon, chapter 16). More objections to the D-N
model are raised in SalmonÕs article (chapter 13), but one of the more interesting
is the claim that the D-N model cannot account for statistical laws of nature
where the probability of the law obtaining in any particular case is less than
50%. As noted in the Slowik notes (chapter 5), the D-N model is supposed to
provide predictions of the explanandum (the event to be explained), so the D-N
model apparently cannot make use of such lawlike claims as, "smoking
causes lung cancer", because less than 50% of people who smoke get lung
cancer (and thus the prediction would be wrong more than half of the time).
Yet, as Salmon argues, even if the claim "smoking causes lung cancer"
does not hold in most cases, it still seems relevant to the explanation of why
a particular smoker did come down with lung cancer. Therefore, since this claim
is relevant to explanation, but the D-N model cannot make use of it, there is
good reason to reject the D-N model for a better account of explanation.
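Salmon's point can be made concrete with toy numbers (the figures below are invented purely for illustration): what matters to explanation is that smoking raises the probability of cancer, not that the probability exceeds 50%:

    # Invented toy counts, purely to illustrate statistical relevance.
    smokers, smokers_with_cancer = 1000, 150
    nonsmokers, nonsmokers_with_cancer = 1000, 20

    p_given_smoking = smokers_with_cancer / smokers            # 0.15
    p_given_not_smoking = nonsmokers_with_cancer / nonsmokers  # 0.02

    # Both probabilities are far below 50%, so a D-N-style prediction of
    # cancer would usually fail; yet smoking is statistically relevant
    # (it raises the probability 7.5-fold), which is why it can still
    # figure in the explanation of a particular smoker's cancer.
    print(p_given_smoking / p_given_not_smoking)  # 7.5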
In a similar
manner, van Fraassen alleges that the D-N model cannot capture the full variety
of explanations manifest in the sciences because there is no single
relationship between fact and theory. The context of the inquiry determines
which form of explanation is the most relevant, and the context will vary from
case to case (obviously). That is, in some cases the appeal to causation will
work best, while in others it may be a statistical correlation (as in Salmon
above) that provides the best explanation, etc.—therefore, no single form
of explanation will work in all cases. For example, if we were seeking an
explanation for the cause of an individual smoker's cancer, as compared with
someone who had smoked all of their life but didn't get cancer, then appealing
to the alleged law "10% of smokers get cancer" (assuming it is a law)
would not explain much: we would still want an explanation for why that
particular person came down with the disease, and why the other smoker didn't
develop cancer. However, when looking over the cancer rates of an entire
population, the alleged law, "10% of smokers get cancer", does seem a
good explanation for the prevalence of cancer among the entire group of smokers
(as opposed to the lower rates among the non-smoking group). Consequently, van
Fraassen does not believe that explanation is as important an aspect of science
as other empirical factors. Rather, he believes that the only aspect of
scientific investigation that remains the same (in all of the sciences) is the
search for empirically successful theories that account for the data.
Kitcher rejects
van Fraassen's claim, insisting that his "Unification" Theory of
explanation can be applied to all cases of scientific explanation. Briefly,
theories that unify our understanding of scientific phenomena give us a better
grasp of nature as a whole; where to "understand" an event is to see
it as a part of a larger pattern—and the more theories/phenomena included
in the pattern, the more understanding we have of nature. In particular, van
Fraassen is not correct in claiming that the empirical success of a scientific
theory is always the most important feature when deciding which theory among
competing rivals should be chosen. Many times in the history of science, the
empirical evidence may not favor either of two competing theories, but the
scientists will nonetheless adopt one of the theories because it does a better
job of "explaining" the current evidence than its rival: e.g.,
the evidence from astronomy did not favor either the Copernican or Ptolemaic
theory for the first half-century or more after the Copernican theory was
presented—yet, most astronomers adopted the Copernican theory because of
its more consistent and successful explanations of astronomical data (such as
the close solar positions of Mercury and Venus). Moreover, contra the D-N
model, providing an explanation is not necessarily the same thing as providing
a natural law. In many historical cases, theories were accepted long before
their natural laws were clearly formulated or identified: e.g., the pattern of
explanation of Darwin's theory (which unified a great diversity of phenomena in
nature) led to its adoption by most biologists many years prior to the
discovery of the more basic laws of genetics, DNA, etc., which account for the
evolutionary process. Moreover, the "Unification" model can resolve
the problem of accidental generalization (see notes to Slowik: chapter 5),
since such alleged laws of nature as "if x is a person on the park bench,
then x is less than seven feet tall" would not be good candidates for
unification: that is, the overly localized and vague character of such claims
entails that they would not fit well with our other real laws of nature (since
there is no known natural process relating sitting devices to people's height),
thus their inclusion into the domain of scientific laws would lead to a very
non-unified and chaotic overall scientific
state-of-affairs.
Chapter 25
& 26: Kuhn, Laudan.
Kuhn's article advances some of the major aspects of his theory of scientific
revolutions (but, see notes to Slowik: chapter 6). Laudan responds to Kuhn by
declaring that although there may exist no absolute standards of truth or
factual objectivity (i.e., no fixed Scientific Method), that does not mean that
we cannot compare paradigms and reach an objective conclusion on their relative
merits. Laudan offers two general criteria of scientific theory evaluation
(i.e., evaluating one theory relative to another to determine which one is more
effective in meeting the criteria of the Scientific Method): (1)
problem-solving effectiveness, which concerns how well the paradigm resolves
the outstanding difficulties and unexplained phenomena of the scientific field
in question (and ultimately demonstrates if the theory accurately corresponds
to the world); and (2) conceptual effectiveness, which concerns how well the
theory corresponds with other accepted beliefs and theories. For Laudan, the
theory that is able to satisfy criteria (1) and (2) better than its rivals is
the preferred theory. (See notes to Slowik: chapter 6)
Chapter
27: Latour and Woolgar:
The rejection of the "context of discovery" and the "context of
justification" is taken up by Latour and Woolgar (L&W). As discussed
in the notes to Slowik (chapter 7), L&W argue that scientific facts are
socially constructed. Reason and the Scientific Method play no role in the
outcomes, and acceptance, of scientific theories. In fact, L&W claim that
social factors, such as peer pressure, determine the outcome of scientific
disputes, as well as the facts allegedly discovered in scientific labs.
Specifically, L&W claim that scientific facts are "created" in
the laboratory through a social process that negotiates the outcomes of
experiments carried out using elaborate scientific apparatus and devices. There
are no scientific facts prior to the social negotiations among the scientists,
thus L&W conclude that the "facts" are dependent upon both the
social processes and the experimental devices used in the lab. L&W often
invoke the claim that "nature does not cause the outcome of laboratory
experiments, rather, 'nature' is the outcome of laboratory
experiments." There are many problems with this view, of course. For
instance, it often seems as if L&W are making the following (fallacious)
argument: "If an elaborate or artificial procedure/device (e.g., an
electron-microscope) is necessary to claim that an entity exists (such as a
certain molecule), then the existence of that entity is dependent upon those
artificial procedures/devices". But, this form of reasoning quickly leads
to silly conclusions: since I cannot claim that the moon exists without my
eyeglasses (because I can't see it), therefore the existence of the moon
depends upon my eyeglasses (?!). Likewise, L&W seem to offer the (equally
fallacious) argument: since we cannot talk about an objective material world
without using concepts previously formed in a socially governed process of
scientific investigation (i.e., the language of scientific investigation is a
social product), there is no objective material world; rather, there exists only
a socially constructed language and set of practices (?!). Yet, this argument
assumes the following principle: "if there exists an independent,
objective world, then it would have to be directly knowable through a
non-linguistic and non-conceptual process". But, why would anyone accept
such a crazy principle! Our language and our concepts are the means by which we
gain information on the world (along with empirical evidence, of course), so it
is absurd to claim that if we cannot gain knowledge without language and
concepts, there is no knowledge of the world at all. In fact, if L&W have
offered the above argument, then they have unwittingly resurrected the idealism
of George Berkeley, the eighteenth-century philosopher who rejected the
existence of the material world (and believed that only minds and thoughts
exist). Berkeley argued as follows: since I cannot have the thought "a
tree existing outside of all minds and thought" without falling into a
contradiction (why? well, because it is a thought, and thus it can't exist
outside of all thought!), there are no trees that exist outside of our minds
(or, put differently, trees are just thoughts in our minds). Needless to say,
Berkeley's argument is more a source of amusement than a serious philosophical
position.
Chapter
28: Cole. In this
article, Cole challenges the social constructivists by declaring that they must
prove that if society had been different, then the outcomes of experiments (or
science in general) would have been different. More specifically, if scientific
facts are socially constructed, then a different society should make new
scientific facts (e.g., a society of scientists that did not believe in gravity
should result in a world where material objects no longer fall according to
NewtonÕs Laws). Of course, since the social constructivists have not shown that
a different society would make the outcomes of experiments any different, Cole
concludes that social constructivism has no evidence to support it. Moreover,
given the fact that many different societies have reached similar conclusions
on scientific matters in the past, the history of science would appear to
falsify social constructivism. Cole reasons that social factors do influence
the work of scientists, especially in their choice of work, but they do not
determine the outcome of that work—at least not completely, since social
factors more often affect how long it takes to reach certain conclusions,
rather than determine the ultimate content of those conclusions.
Chapter
29: Harding. (For more
details of Harding's views, see the notes to Slowik: chapter 8.) One of
Harding's main arguments in this article is that Western science has used
sexist descriptions and metaphors throughout its history to make the
experimental method (largely introduced in the seventeenth century) more
attractive to scientists, who were/are all men. The main culprit is Francis
Bacon, whose main work appeared in the early seventeenth century, for he is
charged with having employed "rape" metaphors, and other descriptions
violent towards women, in advancing his case for the new experimental,
mechanical approach to natural philosophy (which we today recognize as a major
component of the history of the modern conception of science). Harding claims
that many of the biases towards women and minorities in modern science may be
due to the persistent use of these aggressive "rape" metaphors
throughout science's history. One of Harding's contentions is that since the
scientists of BaconÕs day literally accepted the "machine" metaphor
of nature (i.e., that nature was like a machine, and not like an organism), why
shouldn't those same scientists have taken an equally literal interpretation of
the "rape" metaphors? Problem: it is not clear that all the
scientists of the seventeenth century did accept the "machine"
metaphors, and reject the older "organism" metaphors (e.g., Leibniz,
and even Newton, who favored a God who directly intervened in the universe). In
fact, any close examination of the historical texts makes this alleged machine/organism
distinction of metaphors very hard to sustain.
Chapter
30. Soble. Soble
argues that Harding has misunderstood Bacon's use of language and metaphors.
First, many of Bacon's metaphors are very ambiguous: for example, the
"penetrating holes" metaphor, often cited by feminist philosophers,
can be interpreted in any number of ways, from a proctologist to a billiards
player (and not just a rapist)! Secondly, Bacon was dissatisfied with the
overly philosophical, and non-experimental, approach to science that prevailed
in his time (and was especially favored by the arm-chair Scholastic tradition).
Instead, he advocated a more active, rigorous version of scientific inquiry,
one marked by the carrying out of detailed and thorough experiments and field
work (which helps explain such metaphors as "hounding nature to get her
to reveal her secrets"—overall, Bacon knew that most scientific
discoveries come about only through testing and experimenting, and not by just
sitting back and philosophizing). Third, Bacon hoped that the "new
science" of his day would help to improve the condition of humanity, and
aid in people's everyday lives, since this was generally not the case in his
time. One of the main obstacles to this goal was the Medieval/Renaissance
tradition of the quasi-mystical practitioner of nature who had uncovered
"secrets of nature", but who refused to share this knowledge with the
general populace. These practitioners, who were often engaged in
(pseudo-scientific) studies of alchemy, metallurgy, and medical potions, were
loath to give up their status and prestige by sharing their knowledge with
outsiders (unless they were chosen apprentices to the field). Consequently, in
order to undermine the alleged monopoly on scientific knowledge that only favored
these practitioners, Bacon (who was Chancellor of England at the time that he
wrote much of his philosophical work, it should be remembered) favored an open,
public science dedicated to experimentation.
Chapter 31
& 32: Sayers and Richards.
Sayers puts forward the case (presented in more detail in the notes to Slowik:
chapter 8), that many of the anti-female biases in science have occurred under
the guise of "objectivity"; so, there must be something wrong with
the notion of objectivity. Like Harding, she believes that social aspects, such
as politically progressive ideas, need to be incorporated into science to
prevent future abuses. Richards counters these allegations by insisting that
just because "objectivity" has been misapplied in the past, that does
not mean that there is something wrong with this notion. Many past male-biased
views were considered objective, although they weren't, but how does that fact
undermine the notion of objectivity? As a means of demonstrating this point,
Richards asks if the fact that some people have mistaken fool's gold for real
gold likewise means that there is something wrong with the concept of gold.
(Answer: of course not!) Richards argues that the feminist philosophers' theory
of knowledge is derivative of the regular theory of knowledge (which accepts
such notions as "objectivity"); that is, it is really not a different
theory of knowledge at all. In fact, the feminist philosophers claim their
theory of knowledge is preferable to the regular theory of knowledge, but how
can they prove this assertion without first assuming that their theory of
knowledge is already better? (In short, we once again have the problem of
"incommensurability", as discussed with respect to Kuhn, since there
exists no method of comparing these alternative theories of knowledge to
determine which one is "better". And thus, the feminists can't claim
that their theory of knowledge is preferable unless they assume it from the
beginning.)
Chapter
33: Maxwell. This
article is a classic attack on the viability of an observation/theoretical
term distinction. Maxwell claims that the distinction between these terms in a
theory is arbitrary, and thus the distinction cannot do the work that the
Positivists had hoped (see the notes to Slowik: chapter 2).
Chapter
34: Van Fraassen.
One of the few contemporary philosophers willing to defend an
observation/theoretical (OT) distinction is B. van Fraassen. In response to the
criticisms of the distinction, as raised by Maxwell, for instance, van Fraassen
argues that although no non-arbitrary distinction can be drawn between
observation and theoretical terms, that doesn't mean that there is no
distinction at all between these terms: even though there are many cases
"in the middle" where it is hard to draw the OT distinction (such as
"tectonic plate", etc.), that does not undermine the fact that there
are clear cases of O and T terms situated on the "ends" of the
spectrum of these terms (such as, "electron", for T terms; and
"blue" for O terms). In short, we can agree on the fact that many
terms are clearly O or T terms, even if we can't agree on them all. (Problem:
can a Positivist accept a "fuzzy" boundary between O and T terms, or
do they need a fixed and set distinction?) Yet, van Fraassen still believes a
principled distinction can be drawn between O and T terms: while one can in
principle go to Jupiter, such that "Jupiter's moons" is thus an
O-term, one cannot in principle see an electron, thus
"electron" is a T-term. (This strategy is also discussed in the notes
to Slowik: chapter 2.)
Van Fraassen's
theory of science is based on his notion of "empirical adequacy", for
he rejects realism and the convergence argument (see notes to Slowik: chapter 9).
Theories are accepted, not because they are true, but because they account for
the empirical evidence ("they save the phenomena") better than any
competing alternative theory (and where the "better" theory is the
one that scores higher on the criteria of the Scientific Method: simplicity,
coherence with other accepted theories, explanatory power, etc.). Van Fraassen
argues that the convergence argument can be avoided if one simply regards the
success of science as a "brute" or basic fact—i.e., a fact
that does not need any further explanation, but must be simply accepted.
According to van Fraassen, scientific theories aim at empirical adequacy and
not at "truth". Often, van Fraassen seeks to account for the progress
of science using metaphors from Evolution theory: successful theories have
merely "won", or survived, in the competition amongst less successful
theories, just as certain species win out in the evolutionary competition among
other species. And, of course, the evolution of species doesn't aim at some
"truth" about living things—species just evolve.
Correspondingly, scientific theories do not aim at some final "truth"
about the world, they just evolve according to the doctrine of empirical
adequacy. In short, demanding that science provide the "truth" about
the world is asking too much of science. It should be noted, however, that van
Fraassen's concept of realism may be somewhat simplistic. At times, he seems to
claim that "realism aims at literally true theories of the world".
But what does he mean by "literally true"? Does this mean that every
aspect of a scientific theory is really out there in the real world; for
example, if electromagnetic theory holds that "electrons have negative
charge", then do electrons (in the real world) have little minus
(negative) signs, "-", labeled on them!? Do the differential
equations that describe the motion of, say, a billiard ball exist in the world
in the same manner as the billiard ball? A realist would probably reject this
notion, since a realist does not believe that what our best theories tell us is
"literally true"; rather, the realist claims that our best theories
give us a "metaphorical" description of reality (such that the
theoretical entities of our mature theories describe, through the use of a
mathematical and conceptual structure, some underlying structure that is
analogous, or isomorphic, to the mathematical/conceptual structure of the
theory—see the notes to Slowik: chapter 9, for more on this).
Before
continuing on, there are a few (major) objections that can be raised against
van Fraassen's "evolutionary" theory of scientific progress and
change. Quite simply, does it work? First, it would fail to demarcate real
science from pseudo-science (a problem often noted in this course): one can claim,
quite legitimately, that certain theories of Creationism (that the Earth is
only 6,000 years old) have "won in the competition" among less
successful theories of Creationism, and so they have achieved empirical
adequacy, too! (For example, most current Creationist theories accept the
evidence of geology, rather than reject it outright, as they once did; yet they strive
to render this evidence compatible with their Creationist views.) Therefore,
empirical adequacy does not guarantee that the theory under discussion is a
"real", as opposed to pseudo-scientific, theory of the world. What
does help to resolve the science/pseudo-science demarcation problem, of course, is that
real scientific theories pertain to real entities, whereas
pseudo-scientific theories do not refer to real entities. Yet, van
Fraassen cannot appeal to any form of realism, since his concept of empirical
adequacy rejects realism.
Chapter
35: Churchland. The
OT term distinction, as favored by van Fraassen, is attacked by Churchland in
this article. He argues that there is no difference in principle between
the observability of Jupiter's moons and an electron. If we were to shrink
ourselves down to an appropriate size, then an electron would become
observable; and this change in "size" is not logically different
from the change in "position" that would make Jupiter's moons
observable—differences in size or position are merely biological or
physical accidents, and thus are inadequate, or too arbitrary, to serve as the
basis of the logical/methodological distinction which van Fraassen wants to
draw between O and T terms.
Churchland
argues that the "criteria of adequacy" (i.e., the Scientific Method)
not only provides for theories that are empirically adequate, but it also
"tracks the truth", such that truthful theories are also selected in
the contest among competing theories. Churchland offers a thought-experiment to
make his case: if a person had all of his sense organs removed, but had
computers feeding him various beliefs about his local environment, the person
could apply the "criteria of adequacy" to judge the relative
simplicity, coherence, etc., of these various beliefs. Eventually, claims
Churchland, the person would form intricate beliefs about the world as
sophisticated and successful as our own; but, since the person has no sense
organs, nothing is "observable" (since O terms are defined as
deriving from our sense organs), and thus van Fraassen would have to claim that
the person's beliefs are not empirically adequate. Yet, for van Fraassen to
claim that the man's beliefs are not empirically adequate, while our exactly
similar beliefs are empirically adequate, demonstrates the inadequacy of
van Fraassen's theory. Overall, the "criteria of adequacy"
(Scientific Method) must do more than simply provide for empirically adequate
theories; it must help to secure "true" theories as well.
Chapter
36: Hacking. Like
van Fraassen and Laudan, Hacking does not accept the convergence argument,
since he believes that false theories can be, and have been, successful (see
the notes to Slowik: chapter 9). Oddly enough, while Hacking is skeptical of
any concept of "truth" deriving from scientific theories, he seems to
believe that our theories really do pertain to some entities that exist in the
outside world! This view is often called "entity realism", since it
accepts that our mature scientific theories do pertain to really existing
entities, but it rejects the "reality" of the properties ascribed
to those entities by the scientific theory. Put differently, entity realism
is skeptical of scientific theories, and what scientific theories have to say
about the world, but is not skeptical of the entities that appear in our
scientific theories (huh!?). Scientific theories do not provide truthful
pictures of the world, he claims, although it is certainly the case that our
theories do pertain to real "thingies" in the outside world (my term,
of course). To argue for this form of realism, Hacking helped to pioneer a view
(sometimes) known as "experimental realism": when you can interfere
with or manipulate a theoretical entity, then it is real ("if x can be
manipulated, then x exists"). In other words, we are justified in
believing in theoretical entities when we can manipulate them. For example,
since physicists manipulate electrons by means of a cyclotron—by shooting
them at one another—electrons must exist! Accordingly, the observability or
unobservability of the entities in question is irrelevant to this manipulation,
so the OT term distinction serves no purpose. Problems: first,
many experimental manipulations of entities require other complex devices and
theories that presuppose their own theoretical entities. So, the actual
manipulation of entities becomes dependent on many other theories and entities,
and many of these additional theories/entities may be false. Thus, the
manipulation becomes ever more remote and liable to error. Second, and closely
related to the last point, many pseudo-sciences claim to "manipulate"
their entities, such as ESP, homeopathy, etc.: e.g., many people claim to use
their ESP powers to read other minds, thus allegedly manipulating those
powers. Therefore, merely claiming to have manipulated entities is no guarantee
that you actually have! However, Hacking could try to respond to this criticism
by insisting that there must be good evidence for the manipulation of the
entities (i.e., verified by many experiments, in different ways, and by
different scientists), so that the insufficient evidence normally supplied by a
pseudo-science would fail to count as a real manipulation of entities (see the
notes to Slowik: chapter 9 for more on this). Third, there are many examples of
mature and successful scientific theories that do not manipulate their
theoretical entities although we believe that these entities have a good claim
to exist: e.g., astronomy, geology, anthropology, to name only a few.
Astronomers do not "manipulate" planets and stars, needless to say,
but the evidence for their "real" existence, as opposed to their
merely theoretical existence, is overwhelming—but, unfortunately,
Hacking's demand for manipulability is not satisfied by these sciences.
Chapter
37: Fine. The
convergence argument is also rejected by Fine, since (as should be an obvious
claim by now) the success of science does not depend on truthful theories. The
histories of Quantum Mechanics and Relativity Theory demonstrate to Fine that
the entities in a theory do not need to be "believed in" by a
scientist in order for that theory to be successful. To counter both realism
and anti-realism about theoretical entities, Fine offers his "Natural
Ontological Attitude", or NOA. On the whole, NOA does not try to interpret
"truth", it just accepts and uses it (?!), whereas both realism and
anti-realism, of course, do try to provide an analysis of what constitutes
"truth" (i.e., "correspondence with reality" for the
realists, and "empirical adequacy", or other pragmatic
considerations, for the anti-realist). NOA, however, just accepts as true what
science tells us is true, without trying to define "true". Moreover,
since what science holds to be true is often an unobservable theoretical
entity, the OT term distinction is rejected by NOA.
Chapter
38: Brown. This
article brings together a number of criticisms against the previous articles,
especially van Fraassen and Fine. As regards van Fraassen, Brown insists that
any theory of science must explain the success of science. Specifically, a
theory of science must account for the following three observations concerning
science: (1) our current theories organize, unify, and generally account for a
wide variety of phenomena; (2) our theories are getting better and better at
doing this: in other words, they are progressing; (3) our current theories make
a significant number of novel observational predictions which turn out to be
true. Now, Brown argues that van Fraassen's "evolutionary"
explanation for the success of science (see notes to Schick: chapter 34, above)
can successfully account for (1) and (2), but not (3); thus van Fraassen's theory
fails. Why does van Fraassen's theory fail to satisfy (3)? According to Brown,
if we follow the "evolution" analogy offered by van Fraassen, then a
successful novel prediction (which confirms the theory) would be analogous to a
species that survived a "radical change of environment", since the
future uncertainty of a novel prediction (whether or not the prediction fails,
and thus falsifies the theory) is matched by the equal uncertainty of a given
species surviving the transition to a radically new environment (whether or not
it will survive the evolutionary struggle, and continue to exist). However,
since most species do not survive a radical change of environment, whereas our
best theories do successfully make novel predictions on a regular basis, the
"evolution" thesis of the success of science offered by van Fraassen
is not acceptable. In short, Brown argues that the empirical adequacy of a
theory (such that it organizes and unifies a large number of phenomena, etc.)
is no reason to believe that it will make novel predictions (i.e., past success
is no reason to believe that the theory will continue to unify and organize a
large number of new phenomena). But, since our best theories make novel
predictions all the time, van Fraassen's theory of empirical adequacy fails to
account for (3), and thus is not acceptable. Another criticism of the
"evolution" thesis has been put forth by P. Kitcher, who thinks that
van Fraassen is simply wrong in believing that evolutionary scientists merely
accept the evolution of species as a "brute fact" not requiring any
further explanation. Darwin, in fact, worried greatly about the unobservable
processes that were responsible for the variation in species; and this also
explains why the development of modern genetics, and our understanding of DNA,
were seen as powerful confirmation of Darwin's theory. So, contrary to van
Fraassen, evolutionary scientists and scientific realists both search
for the unobservable factors and entities responsible for, respectively,
evolution and the success of science.
Brown, however,
does not think that a realism concerning "truth" can resolve this
problem either, since (as before) many false theories have made novel
predictions. Therefore, a view of science that deems the success of science as
due to the increasing accuracy of our theories as they
approximate, or grow ever closer to, some ultimate set of truths about the
world (sometimes known as "verisimilitude realism") is equally
unwarranted. (This is BrownÕs argument—but, see the notes to Slowik:
chapter 9, as regards Laudan, for a response to this form of criticism of
scientific realism). NOA fails, too, since there is more at stake than simply
accepting at face value the language of scientists, and thereby refusing to
enter into the philosophical debate. (In fact, one of the main criticisms of
FineÕs NOA is that it constitutes merely a refusal to take a philosophical
stance on the realism/anti-realism debate—but, a refusal to take a
position is not a position at all!) Brown claims that the decision to accept a
language does not resolve all the philosophical disputes, for many people may
use the same language but interpret it differently. For example, both Bible
fundamentalists (who accept every word in the Bible as literally true) and Bible
liberals (who think most of the Bible is not literally true) accept the
language of the Bible, but they interpret it differently, of course. Therefore,
just accepting the chosen language of a specified field (whether it be science
and the realism/anti-realism issue, or religion and the literal/non-literal
issue, etc.) does not resolve all the disputes intrinsic to that field.
Brown favors
realism, but believes that a different argument needs to be given to make the
case for realism. "How-possible" explanations are the key component
of his thesis. In contrast to "why-necessary" explanations, which
attempt to show why something necessarily had to follow,
"how-possible" explanations simply strive to demonstrate how it was possible
for something to occur. (Example: Darwin used "how-possible"
explanations since he only tried to show how it was possible for a
species to evolve in a certain way given the environment, etc., but it was not necessary
that it had to evolve in only that way.) Realism should adopt "how-possible"
explanations, states Brown, because they make more effective use of the success
of science without falling into the problems of positing a fixed
"truth" (which, of course, is the basis of the rejection of realism
by so many philosophers, as above). Briefly, a theory's being true explains how
it is possible for it to make novel predictions, so realism can avoid the problem
that Brown raised above (of false theories making novel predictions) while at
the same time satisfying criterion (3). Whether or not this is a plausible
version of realism, and I have grave doubts, I leave to the reader to
decide.
Edward Slowik
Chapter 1:
Logical Positivism
The Logical
Positivists (or Logical Empiricists) were a very influential group of
philosophers who were largely motivated by the developments in science and
logic in the early part of the twentieth century (e.g., Quantum Theory,
Relativity Theory, and the developments in formal logic by philosophers and
mathematicians, such as Russell, Frege, Hilbert, etc.). The Positivists wanted
to "clean up" philosophy, and the philosophy of the natural world, in
particular, by attempting to rid philosophy of all "metaphysics"
(where "metaphysics" refers to those questions about the world which
are seemingly beyond any form of verification or testing through observational
evidence—e.g., "Reality is the Absolute", "Time is the
eternal now", etc.—How could you ever verify such claims?).
Consequently, the Positivists endorsed the "Verificationist theory of
meaning" (see notes from Part 1 of Schick). Overall, the positivists
wanted to provide an analysis of the physical world which did not entertain
notions that went beyond what we directly experience, but which, instead,
remained confined to our actual observations of natural phenomena.
Laws of nature
were the main concern of this program, since laws of nature are what do much of
the work in science: that is, (i) laws of nature provide the causal relations
of the science (e.g., "gravity causes the object to fall"); and (ii)
laws of nature provide the explanations of natural phenomena (e.g., "the
object fell to the ground because the Law of Gravity is true"). Thus, the
Positivists wanted to provide a linguistic representation (i.e., based in
language) of causal regularities, but they did not want their formulation of science
to actually state that "causation" was something in the world that
we directly observe. Rather, since the Positivists were greatly influenced
by Hume's analysis of causation (see notes from Schick, chapter 7), they wanted
to follow Hume and regard an alleged "causal" relationship between
two events (say, "heating the copper", and "the copper
expands") as just a correlation between two events in space and time
(i.e., the two events are "constantly conjoined", or, more simply, "one
always follows the other") since all we directly experience are
just these constant conjunctions (and we never experience any kind of deeper
relationship between the two events). To avoid any kind of mysterious or
metaphysical treatment of the laws of nature, the Positivists thus strived to
formulate natural laws as (material) conditional statements: "if...,
then..." statements, to be specific. It was assumed that a proper
formulation, using the logic of such conditional statements, could capture, or
mirror, the "lawful" relationships supposedly observed in the natural
world—but without having to state that the one event really did cause
the other event.
For example: the
common (or vulgar) statement, "gravity caused the object to fall",
would be (roughly) translated as the statement, "if the object is released
at a certain position x in a gravitational field, then it will fall with
y acceleration (for the values of x and y that correspond to that
particular objectÕs position and that particular gravitational field,
etc.)."
Another example:
the common statement, "heat caused the copper to expand" would be
(roughly) translated by the Positivists to read, "if heat of x
degrees is applied to a piece of copper of y magnitude, then it will
expand at z rate (where, once again, the details of the expansion rate
are specified, etc.)."
Notice that in
both cases, no causal relationship is alleged to hold, or occur, between the
two events: the Positivist analysis of these alleged causal relationships is
purely in terms of an "if..., then..." correlation between two
events.
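To make the pattern explicit (a schematic formulation of my own; the notes
themselves state it only in words), each translation has the form of a
universally quantified material conditional:

    \forall x\,(Fx \rightarrow Gx)

i.e., "for all objects x, if x is F (say, a heated piece of copper), then x is
G (expanding)". Nothing over and above the regularity itself is asserted.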
Another major
component of the Positivist program was to separate the language of science
into two main categories: those statements that pertain to observations, and
those that pertain to theories. Observational terms (O-terms) are those
terms that refer to, as you could probably guess, observable objects,
properties, etc.: for example, "red", "square",
"hot", "chair", etc. Theoretical terms (T-terms) refer to
the unobservable "entities" or things postulated by a theory:
e.g., "electron", "tectonic plate", "chromosome",
etc. The Positivists believed that "real" scientific theories, as
opposed to "pseudo-science" (see notes for Schick, Part 1), would
always provide an observation term, or set of such terms, for every theoretical
term that appears in a theory. On the other hand, in pseudo-scientific
theories, the T-terms often (if not always) fail to be linked to O-terms: e.g.,
such T-terms as "Super-Ego" in Freudian analysis, "psychic
power" in ESP, etc., fail to have a clear, unambiguous, and distinct set
of O-terms that are accepted by all their practitioners (How do you
determine the observational consequences of the "Super-Ego" or
"psychic power"?). Accordingly, the Positivists tried to provide
"correspondence rules" that connect every T-term with a unique
O-term. An "explicit" definition (or "biconditional") was
the first attempt at setting up this connection of T-terms and O-terms: i.e.,
if the T-term obtains, then the O-term obtains, and, if the O-term obtains,
then the T-term obtains (or, put simply, if you have the T-term, then you have
the O-term, and vice versa). If a T-term has no determinable observational
consequences (O-terms), therefore, it should be rejected as pseudo-scientific.
This distinction between the O and T terms, by the way, is often known as the
"observation/theory distinction" (OT distinction). The O-terms were
believed to be unproblematic in that they could always be determined without
difficulty, whereas the T-terms were more problematic (since they referred to
unobservable entities). However, the correspondence rules (c-rules) allowed the
T-terms to be defined by the O-terms, and thus gain meaningfulness, since they
now acquired meaning from the O-terms. Without the c-rules, T-terms are
meaningless (since they are not linked to the O-terms, and thus have no
meaning).
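Stated schematically (my notation; the notes above give the rule only in
words), an explicit definition is a universally quantified biconditional
linking a T-term to its associated O-term(s):

    \forall x\,(Tx \leftrightarrow Ox)

e.g., for all x, x has a high mean molecular kinetic energy (T-term) if and
only if x gives a high thermometer reading (O-term), a c-rule that reappears in
the notes to chapter 2 below.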
One benefit of
the Logical Positivist movement, by the way, was that it allowed the
inferences or predictions that we draw from scientific hypotheses to be stated
in logical form, such as the deductive "denying the consequent" form
(i.e., falsification), or the "affirming the consequent" form of the
Hypothetico-Deductive method (see notes, Schick, part 1).
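Schematically (again, my notation rather than anything in the original notes),
the two argument forms just mentioned are:

    Denying the consequent (valid):      (H \rightarrow P),\ \neg P \vdash \neg H
    Affirming the consequent (invalid):  (H \rightarrow P),\ P \not\vdash H

The first form is deductively valid, which is what makes falsification
attractive; the second is deductively invalid, which is why the
Hypothetico-Deductive method can only confirm (i.e., inductively support) a
hypothesis, never conclusively prove it.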
Chapter 2: Problems with Logical
Positivism
Trying to get the OT distinction to work
was a difficult task for the Positivist project, since they proposed many
versions of the OT distinction without finding a clear, unproblematic basis for
the distinction. Below are many of the versions along with their problems:
1) "Ease of Application"
criterion: A term is an O-term if it can be easily applied by a practitioner of
the science (and a T-term if not easily applied). Problem: too many clear
instances of T-terms become O-terms under this rule, so the criterion is
"too broad" (i.e., includes too many things). For example, many
people can easily apply the term "motion" to the sun, and thus
declare that "the sun moves" is an O-term (which is an embarrassment,
since it would seem to be a T-term). Also, many doctors claim to correctly
"see" cancers, viruses, etc., when examining patients (and they are
often correct), but since they obviously cannot see the microscopic
entities (or so it is claimed), it would seem to be wrong to claim that they
can see "viruses", etc.
2) "Instrument" criterion: This
was (and is), by far, the most accepted manner of drawing the OT distinction. A
term is an O-term if it does not require the use of an artificial instrument to
apply the term correctly (otherwise it is a T-term). Part of the motivation for
this criterion was the belief that instruments, such as telescopes and
microscopes, in particular, "contaminated" the observations obtained
from those instruments with T-terms (since these instruments use optical
theories, and thus the O-terms become involved with theory, in violation of the
OT distinction). Problems: What counts as an instrument? Do eyeglasses count as
an instrument (since they also follow optical theories), so that everything a
person sees through their glasses is theoretical?!! (Or, light coming through a
window pane?, etc.) Thus, the instrument criterion seems too broad, as well.
3) "Operationalism": This form
of Logical Positivist theory became almost its own separate philosophy (under the leadership of
P. Bridgman). What the operationalists tried to do was provide an operational
test or experiment to play the role of the O-terms. For example: in the c-rule,
"heat = mean molecular kinetic energy", the operationalist would
replace the O-term "heat" with an experimental test, such as "a
thermometer reading above 100 degrees" (or something similar). Thus,
"a high mean molecular kinetic energy = a high reading on a
thermometer", etc. In this way, all O-terms could be replaced with
observable test procedures. A large part of the motivation for this theory stemmed
from the difficulty in trying to determine a class of unique and unambiguous
O-terms, since many such terms, like "heat", "solidity",
"solubility", seem to be already somewhat influenced by our accepted
physical theories. Yet, operational tests (supposedly) solve this problem:
e.g., the complex O-term "solidity" would be replaced by the more
observable test procedure, "the object resists penetration by a punch test
machine, etc., to degree x (where x is a large value)." Of course,
operationalism does solve the problem with the "instrument"
criterion, as raised above, but it could be seen as trying to resolve the
problem with the "ease of application" criterion. Also, the
operationalist criterion can be seen as a useful means of separating science
from pseudo-science, because "real" sciences can provide test
procedures that are verifiable, but pseudo-sciences supposedly cannot: e.g.,
"ESP power = a high score on a Zener card test". Since an alleged
psychic often fails to score above chance when given a Zener card test, and it
is further claimed that this failure does not entail that the person lacks the
ESP ability (maybe they had a bad day?), it follows that the proposed test
criterion fails to capture the meaning of the T-term "ESP power". But,
this is not the case with the c-rule, "a high mean molecular kinetic
energy = a high reading on a thermometer", since the high thermometer
reading always occurs when the T-term (molecular kinetic energy)
occurs.
Problems: (A)
Different test procedures would appear to be measuring different features of
the world: For example, "heat-as-measured-by-a-thermometer", would
seem to be a different aspect of the natural world from,
"heat-as-measured-by-a-thermostat", since they are two different test
procedures (and T-terms are now defined by these test procedures,
recall)—but, this seems to be an embarrassing development, since both
test procedures are measuring the same aspect of reality, of course. We believe
that measurements, using either a thermometer or a thermostat, are measuring
the same physical property, and not two different properties.
(B) Another problem with operationalism
is that it seemed to rule out all instances of test procedures that could not
be performed due to lack of resources for the test, or other accidental
contingencies that would not allow a test: For example, it seems that questions
about the temperature on Pluto are meaningful questions, but we just do not
have the money, time, etc., to go to Pluto to actually carry out the test—so,
is the T-term statement "temperature (or molecular kinetic energy) on
Pluto" a meaningless claim, since it has no test procedure? (Here, we are
using "temperature" as a T-term to replace the more cumbersome
"molecular kinetic energy".) No, claims the operationalist, because "if
you were to go to Pluto, and place a thermometer on its surface, then the
thermometer would have a reading, etc.", and thus the T-term,
"temp. on Pluto", is meaningful given that particular test procedure.
This is not a bad attempt to solve the problem, but, unfortunately, this move
to the "subjunctive" (i.e., "would", "were",
which seems to involve "dispositional" properties) raises a whole new
set of problems. Namely, it opens up the door for all of the pseudo-sciences to
use the same sort of "would" talk in trying to claim legitimate
scientific status for their theories (i.e., they can claim to be dealing with
dispositional properties, as well). For instance, the ESP theorist can now
claim that the T-term "psychic power" can be given the following
operational test procedure: "if you were to conduct a Zener card
test, and all the conditions were ideal, then the subject would score
above chance on the test." The ESP theorist can now claim that their
operational tests are thus no different in kind than the operational tests of
the astrophysicist, so both are equally legitimate T-terms (i.e., "temp.
on Pluto" and "psychic power")—yet, needless to say, this
is not a conclusion that the Positivists would like very much!
One of the main objections to the OT
distinction, however, is the apparent impossibility of ever obtaining an
acceptable separation among O and T terms. (See notes to Schick: Hesse,
Hanson.) O-terms are "theory-laden" according to Hanson, so it is
impossible to obtain a set of unproblematic O-terms that are not contaminated
by theory since the very O-terms that you select are influenced and dependent
upon your accepted theories of the world.
Chapter 3: Underdetermination.
This chapter presents what is (was) probably
the main concern among 20th century philosophers of science: the
"underdetermination" of scientific theories by evidence. As you will
recall, falsification took the following form:
Premise 1) If H (hypothesis/theory), then
P (observational prediction derived from H)
Premise 2) not P (the prediction fails)
Conclusion) not H
This
argument form is valid (see notes to Schick: Ayer, Popper), and was believed to
unproblematically refute a scientific theory (and "real" scientific
theories, as opposed to "pseudo-science", must make predictions that
can be, in principle, falsified in this manner).
However,
as many philosophers/scientists eventually pointed out (Duhem and Quine, in
particular), this picture of science is much too simplistic. What really
happens in science is the following (where "A" stands for an
Auxiliary Hypothesis that helps to connect H to P):
Premise
1) If (H, and A1, and A2, and A3,.....), then P
Premise
2) not P
Conclusion)
not (H, and A1, and A2, and A3,.....)
where
the conclusion is also equivalent to: not H, or not A1, or not A2, or not
A3,.....
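Before unpacking this schema, a small brute-force check may help (my own
illustration, in Python, not anything from the text), with two auxiliary
hypotheses standing in for the longer list:

    from itertools import product

    # Material conditional: false only when the antecedent is true and the
    # consequent is false.
    def implies(p, q):
        return (not p) or q

    # Every truth assignment to (H, A1, A2, P) consistent with both premises:
    # (1) (H and A1 and A2) -> P, and (2) not P.
    consistent = [(h, a1, a2) for h, a1, a2, p in product([True, False], repeat=4)
                  if implies(h and a1 and a2, p) and not p]

    # In every surviving assignment at least one conjunct is false...
    assert all(not (h and a1 and a2) for h, a1, a2 in consistent)
    # ...but H itself remains true in some of them: the failed prediction
    # does not single out H as the culprit.
    assert any(h for h, a1, a2 in consistent)
    print(len(consistent), sum(h for h, _, _ in consistent))  # prints: 7 3

Seven assignments survive the failed prediction, and H remains true in three of
them; the logic alone cannot tell us which conjunct to blame.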
The
"auxiliary hypotheses" are additional assumptions which are
necessarily involved in deriving the observational prediction from the theory
(or, more simply, the auxiliary hypotheses are the many concepts, beliefs,
etc., which are closely tied to the theory). What the conclusion of this more
sophisticated treatment of falsification thus demonstrates is that you can
never test the hypothesis H in total isolation from other hypotheses and
beliefs. The failure of the test prediction P does not entail that H is
necessarily false (as in the original Popperian version); rather, the failure
of P entails that "not H, or not A1, or not A2, or not A3,....."—and,
therefore, it could have been either the auxiliary hypothesis "A1",
or "A2", etc., which caused the failure of P, and not H. For example,
it may be the case that H is actually true, but that the failed test prediction
P occurred due to a false auxiliary hypothesis, say A1. Falsification therefore
cannot decisively show that a failure of P means a false H, since the failed P
could have been due to one of the many auxiliary hypotheses. So, it would seem
that a theory is "underdetermined" by the evidence; that is, observational
evidence can never conclusively disprove a theory, as our analysis above
demonstrates. The "Duhem-Quine" thesis, as it is known, basically
raises this last point to the level of a philosophical/scientific maxim: Any
seemingly disconfirming observational evidence can always be accommodated to
any theory. In other words, any failed P can be "blamed" on some
false auxiliary hypothesis, thus allowing one to maintain that the theory H is
still true. There are many famous examples of this form of reasoning in the
history of science:
Example
1: The Ptolemaic theory, H, predicted that "the planet Venus should not
show a full set of phases", P; but, when the prediction failed (i.e.,
Venus did display a full set of phases when observed through Galileo's telescope),
the die-hard Ptolemaic astronomers did not blame this failed prediction
(not P) on their theory H (and thus reach the conclusion, not H); rather, they
blamed the failed prediction on the unreliability of the telescope as an
instrument for observing the celestial bodies! So, they claimed that the
following auxiliary hypothesis assumed in the test, "the telescope
provides accurate observations of the heavenly bodies", labeled A, was
false, while H was true.
The
above example may convince one that any use of Duhem-Quine underdeterminism is
unjustified and amounts to a mere ploy to sustain a failed theory; but, this is
not so, since all theories, whether successful or not, fall back on this line
of reasoning when a test prediction fails. Example 2: For instance, the
Copernican theory, H, seemed to entail that, P, "the fixed stars should
exhibit a stellar parallax effect (since we observe a pair of such
stars from different positions on the earth's orbit around the sun)". When
the prediction was not obtained in the sixteenth and seventeenth centuries,
however, the Copernicans did not blame the failure on H, but, instead, blamed
not-P on the following auxiliary hypothesis, A, assumed in the test: "the
stars are close enough to see stellar parallax with present-day
telescopes". As it turned out, of course, the Copernicans were correct,
for A was indeed false, and H was true.
Accordingly,
not all instances of the use of Duhem-Quine underdetermination are examples of
"bad" theories trying to "worm their way out of" a
disconfirming test result—many "good" theories of science use
"Duhem-Quine"-type arguments on numerous occasions to overcome failed
predictions. In fact, as many philosophers of science have claimed: "all theories
are born falsified". So, if we were to automatically throw away all
theories that fail an observational prediction, there would soon be no
scientific theories left, at all! It is normal for a theory to fail a test;
but, if it is a "good" theory, then its problems and "bugs"
can eventually be overcome and rectified (hopefully), and the failed test can
be accommodated and explained (possibly by discovering a previously unknown
false auxiliary hypothesis, etc.).
It
should be noted, at this point, that "underdeterminism" is used in
two ways: first, it refers to the fact that any theory can be reconciled to
disconfirming evidence, and second, that more than one theory, often a great
many, can account for the same observational data. This second sense of
"underdeterminism" follows directly from the first, of course, since
if any theory can be reconciled to the evidence, then there will be many
competitors trying to explain the same observational evidence. Overall, one of
the philosophical lessons to be drawn from underdeterminism is that one cannot
isolate and test a theory H apart from our larger collection of other beliefs
and theories. The failure of the OT distinction goes hand in hand with this
development, since just as you cannot isolate an individual T-term and define
it with an individual O-term (by a c-rule—see notes to chapters 1 and 2), you
cannot isolate and test an individual scientific hypothesis, H, against a given prediction
P. One of the responses to the failed attempts to establish an "explicit
definition" of individual T-terms with O-terms was to envision the process
(whereby T-terms get their meaning through observation) as a definition of all
the T-terms en masse (i.e., all together at once). This view, a form of
meaning holism, is often described using the famous "web" metaphor of
beliefs: since all of the beliefs of a given theory H are, to some degree,
interconnected and dependent upon each other, the meanings of each individual
T-term cannot be given a precise definition in isolation from all of the other
beliefs that comprise the web. (For example, in Newtonian Mechanics, such
T-terms as "inertia", "gravity", "mass", etc.,
are all interdependent, each depending upon the others, to various
degrees, for their individual definitions.) Consequently, just as all the
T-terms of a theory H cannot be isolated in an unambiguous and non-arbitrary
way, the many separate hypotheses and beliefs that make up a bigger theory,
such as Ptolemaic or Copernican theory, likewise cannot be isolated and tested
individually. (Furthermore, it should be borne in mind that with the
acknowledgement of the "theory-laden" nature of observation, and the
overall collapse of the OT distinction, many meaning holists, like Quine,
believed that our interconnected "web of beliefs" contained both T-terms
and O-terms, and not just the former.)
Why
is underdeterminism a problem? To answer this question, it is important to
recall PopperÕs original motivation for presenting the
"falsification" criterion. As a means of separating "real"
science from pseudo-science, the function of falsification is apparently
seriously crippled (if not destroyed) in the presence of underdetermination,
since any disconfirming evidence can always be accommodated by, say, astrology
or ESP theory—and, their means of "explaining away" the disconfirming
evidence is exactly the same method as employed by legitimate,
"successful" scientific theories in accounting for their failed
predictions (e.g., the case of Copernican theory and the failed stellar
parallax prediction). To be more specific, just as the Copernicans blamed the
failure to observe stellar parallax on the assumption that the stars were close
enough to observe that phenomenon, an ESP theorist can similarly claim that the
failure of their test of ESP powers was due to, say, "the presence of
skeptically minded people whose negative ESP energy blocked the positive ESP
energy of the subject". (So, in more detail, the ESP theory, H, was
correct all along, but the auxiliary hypothesis, A, "the conditions are
ideal for manifesting ESP powers", was false.) Once more, the specter of
the "demarcation" problem haunts our Positivist program!
What
is at stake in the debate over underdeterminism is the status of
"scientific rationality": Is there a rational basis for comparing the
relative merits of competing scientific theories (such that one can be
unambiguously identified as the "better", or more successful,
theory)? Some philosophers, such as L. Laudan, claim that the issue of
underdeterminism has been greatly exaggerated. While he agrees that it is often
the case that many theories can explain the same observational evidence, or can
accommodate any failed test prediction, it is not the case that all of these
theories explain the observational evidence, or failed test prediction, equally
well. In fact, as Laudan points out, any theory can be reconciled to
a disconfirming prediction if one simply removes that portion of the theory
that generated the failed prediction. (So, the failure of the Ptolemaic theory
to predict the phases of Venus can be remedied by, say, merely removing the
concept of "phases of Venus" from the theory!) However, most theories
that attempt to accommodate a failed test prediction in this manner will be
greatly weakened, if not rendered outright inconsistent, by this very
maneuver—and thus these theories will suffer in comparison to
"better" scientific theories that accommodate the prediction without
such draconian measures.
Second,
it is often the case that blaming the failure of a test prediction on an
auxiliary hypothesis, A1, is simply not very plausible because we have
independent verification of the reliability of that auxiliary hypothesis. In
other words, A1 has been used in many other theories, and it has worked well,
so why should we doubt its truth now?! In the case of Galileo's telescope, for
example, the auxiliary hypothesis, "the telescope is a reliable instrument
for viewing distant objects", was confirmed in many other instances
(especially when it was turned to the observation of terrestrial phenomena, such
as a distant ship sailing towards a city port). So, this auxiliary hypothesis
achieved a high degree of verification prior to its use in the observation of
Venus' phases. Consequently, simply dismissing this auxiliary hypothesis is not
a very plausible tactic for the Ptolemaic astronomer.
Third,
Laudan thinks that one must separate two important interpretations of
underdeterminism in this debate: egalitarian (which holds that there are an
infinite number of theories, all incompatible with one another, that can
account for the empirical evidence) and non-unique (which holds that there will
be at least two different, and incompatible, theories that explain the
evidence). As can be seen, non-unique underdeterminism is not a major problem
for the "Scientific Method" (which is the name usually given for the
"rules of thumb" used to determine the relative merits of competing
scientific theories; e.g., simplicity, explanatory scope, consistency with
previous theories, ability to make novel predictions, etc.). If there are an
infinite number of theories, all equally successful, then the Scientific
Method is clearly in trouble. (What could serve as the basis for picking a
"preferred" theory if they are all equally good?). Yet, as Laudan
points out, this does not occur in science, since there are normally compelling
reasons, based on the Scientific Method, for choosing one, or a select few, of
the competing theories. (Returning to our example, a Ptolemaic theory that won't
discuss the phases of Venus, or dismisses the evidence of the telescope, is not
a theory that scores high on Scientific Method criteria—for, it has
sacrificed explanatory scope or power, as well as consistency and simplicity,
in accommodating its failed test predictions. The Copernican theory, of course,
does not have these problems, so it will be the preferred theory according to
the Scientific Method.) So, given non-unique underdeterminism, it is still the
case that the Scientific Method plays a crucial role in science: more
specifically, if the Scientific Method narrows down the group of candidates to
a select group, then the role of scientific rationality has not been
undermined; rather, the Scientific Method would seem to have been vindicated as
an important tool in
science.
Chapter 4:
Reduction.
One of the most
basic and important aspects of the scientific/rational interpretation of the
natural world is the role of reduction in providing comprehensive and simple
explanations of complex natural phenomena. In "reducing" one theory
to another, it is claimed that the entities of a given theory, T, can be
"reduced" to the entities of another theory, T*. Usually the theory
that is being reduced is a "less basic" theory of the physical world relative
to the theory that is doing the reducing. In essence, to say that one theory T
has been reduced to another T* is to claim that the events, properties, and
objects that make up T are really just combinations of the events, properties,
and objects that constitute T*. For example, the reduction of chemistry to
physics in the 20th century was accomplished when it was proven that the
entities, properties, etc., that make up chemistry (such as hydrogen, oxygen,
nitrogen, etc.) were really just combinations of the entities, properties, etc.
that make up the more basic level of physics (namely, electrons, protons,
neutrons). Thus, one could "reduce" any discussion of the phenomena
of chemistry to a discussion of the phenomena of physics (since chemistry, at
its deepest levels, is just physics). Reduction is important in science because
it simplifies the ontology of science (where "ontology" is the study
of the "things", beings, objects, that exist): if chemical elements
are just elementary particles of physics, then reducing chemistry to physics
has made the description of the ontology of the natural world much simpler
(since we can confine our concerns to just those particles of physics, as
opposed to the more complex scenario including both physical and chemical
elements). Of course, science prefers "simpler" theories, since most
scientists adhere to the "Ockham's razor" argument: i.e., always choose
the theory with the simplest ontology that explains the natural phenomena (as
long as this explanation is as good as the explanations offered by rival
theories)—in short, "donÕt multiply entities beyond necessity".
The details of
reduction, as developed by E. Nagel, can be divided into two parts:
A theory T
reduces to a theory T* if and only if: (1) Connectability, or
"Bridge-Laws": for every theoretical term M that occurs in T but not
in T*, there is a theoretical term N that is constructable in T* but not in T
such that, for all objects x, x has M if x has N. In short, for every
theoretical term in T, there is a theoretical term in T* that roughly plays the
same role, or has the same function, as the theoretical entity in T (e.g., for
the theoretical term "hydrogen" in chemistry, there is/are
theoretical terms that play the same role in physics, namely, "an atom containing
one electron and one proton"); and, (2) Derivability, if L* (the
conjunction of all the laws of T*) and bridge-laws (as defined in #1, above),
then L (the conjunction of all the laws of T). In other words, the conjunction
of all the laws of T* and the bridge-laws allow one to derive logically the
laws of T.
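Stated compactly (my notation, following the wording above, with B standing
for the conjunction of all the bridge-laws):

    Connectability:  \forall x\,(Nx \rightarrow Mx), for some N constructable in T*
    Derivability:    (L^{*} \wedge B) \vdash L

That is, the laws of the reduced theory T must follow logically from the laws
of the reducing theory T* together with the bridge-laws.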
Problems were
raised (as usual) with respect to this formulation of scientific reduction.
Briefly, it was noted that if a disconfirmed theory was reduced to a confirmed
theory (as is often the case in reduction), then the theoretical elements in
the (confirmed) reducing theory that play "the same role" as the
(disconfirmed) theoretical elements in the reduced theory would likewise seem
to disconfirm the reducing theory, since it incorporates those disconfirmed
theoretical elements, as well. For example, since Newtonian mechanics regards a
body's "mass" as invariant (not variable) with respect to its speed,
but Einsteinian mechanics does hold it to be variable, reducing Newton's
theory to Einstein's theory implies that one must find a theoretical element in
Einstein's mechanics that functions like Newtonian "mass"—but,
Einsteinian mechanics would then contain the concept of invariant mass, and
this theoretical concept has been falsified/disconfirmed in many experiments
and observations (namely, at high speeds and in large gravitational fields).
However, if one tries to avoid this difficulty by insisting that the concept in
Einstein's theory that plays "the same role" as "mass" in
Newton's theory is really nothing like the concept of Newtonian
"mass" (i.e., Einsteinian mechanics does not contain a concept
similar to Newtonian "mass", so Einsteinian mechanics is not
falsified), then it does not appear that Newtonian "mass" has really
been reduced to EinsteinÕs theory. (In other words, Connectability and
Derivability, as above, would not really hold for the alleged reduction of
Newton's theory to Einstein's, thus demonstrating that Nagel's account of
reduction is intrinsically flawed.) These sorts of problems, as well as the
awareness of the "holistic" interdependence of the theoretical terms
that make up a theory (see notes to Slowik: chapter 3), have led philosophers
away from the rigid term-by-term reduction advanced by Nagel. That is, Nagel
believed that the individual T-terms of the reduced theory could be connected
to various individual T-terms of the reducing theory in a sort of one-to-one
matching. But, the holistic interdependence of these T-terms rules out this
form of strict isolation and individuation of their definitions. Consequently,
the emphasis of some theories of reduction, such as Kemeny and Oppenheim's, was
placed on overall explanatory power, rather than the reduction of individual
T-terms. On their approach, what counted in reduction was that the reducing
theory explain any observational data that the reduced theory explained.
(See the course
notes to Schick: chapters 17-21 for more on reduction.)
Chapter 5:
Explanation.
One of the
primary tasks of a scientific theory is to provide explanations of natural
events. Consequently, the Positivists attempted to explain
"explanation", to put it bluntly. The most well-known attempt comes
from Hempel, who formulated the "Deductive-Nomological" (D-N) account
of explanation. In short, the D-N model strived to explain particular natural
events by showing how they could be derived from a natural law. The form of the
explanation would be a deductive proof, with the event to be explained
functioning as the conclusion of the argument (and the natural law as one of
the premises). The D-N model also accounts for the ability of successful
theories to make accurate predictions, since to derive the conclusion from the
premises in a D-N model is to make a prediction of what will occur if the
premises are true. Accordingly, every instance of a D-N explanation utilized:
(i) A law of
nature
(ii) The
initial (or antecedent) condition (where both (i) and (ii) comprise the
"explanans")
(iii) The event
to be explained (known as the "explanandum").
For example:
1) If copper is
heated, then it expands
2) the piece
of copper x is heated
3) the piece of
copper x expands
In this D-N
explanation, the conclusion (3) is "explained" by premises (1) and
(2), where premise (1) is the natural law. In the D-N model, as can be seen,
natural laws are provided with a conditional statement formulation
("if..., then...") so that the allegedly metaphysical concept of
"causation" has not been utilized (see notes to Slowik, chapters 1 and
2, on the problem of causation and the use of conditional statements). The form
of this argument is known as "affirming the antecedent" ("if A,
then B" is the first premise, "A" the second premise, and
"B" the conclusion), and it is valid (i.e., if all the premises are
true, then the conclusion must be true).
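In schematic form (my notation, not Hempel's own symbolism), every D-N
explanation thus looks like:

    L_1, \ldots, L_m\ (\text{laws}),\ C_1, \ldots, C_n\ (\text{initial conditions})\ \vdash\ E\ (\text{explanandum})

where the copper argument above is the simplest case: one law (premise 1), one
initial condition (premise 2), and a deductively valid inference to the event
to be explained (3).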
However, as you might have guessed,
problems were soon raised for this account of explanation. First, if one is
confined to mere conditional statements in the treatment of laws of nature,
then how do you separate the "real" laws of nature (such as premise
(1) above) from accidentally true coincidences, usually dubbed "accidental
generalizations"? For example, it may be the case that all my past
experiences of the world have verified the following conditional statements:
"if a person is sitting on that park bench, then they are less than seven
feet tall", or "if Ed drinks a first beer, then he immediately drinks
a second beer", or "if x is a gold object, then x weighs less than
20,000 lbs." These statements have held true of all my past
experiences—so, if causation is just the fact that two events have always
been conjoined in my past experience (see the notes to the Hume chapter in
Schick), then these other conditional statements would seem to qualify as laws
of nature, too! But, we believe that these claims (such as, "if x is a
gold object, then x weighs less than 20,000 lbs.") are simply accidentally
true coincidences, and that they could easily be proven false (i.e., you could
prove that these claims were not natural laws if you were to, returning to our
examples: find a seven foot tall person and put them on the bench; get Ed out
of the bar after he has had his first one; and collect enough pieces of gold
together and melt them into one big chunk weighing over 20,000 lbs.).
Trying to separate the real laws of
nature from the accidental generalizations proves to be more tricky than it
might at first seem, unfortunately. One might try to state that laws of nature
support "counterfactual" claims, whereas accidental generalizations
do not (where a counterfactual claim is one that, while not true, could have
easily been true: e.g., it could easily have been the case that a seven foot
tall person sat on the bench). So, if we rewrite our law of nature so that it
makes use of a counterfactual conditional statement (using the subjunctive,
"would", "were", etc.; see notes to Slowik: chapter 2),
then it would read, "if the piece of copper were heated, then it would
expand". While this statement seems plausible, the corresponding counterfactual
of our accidental generalization does not seem to be equally supported:
"if x were a person on the park bench, then x would be less than seven
feet tall" (i.e., we can easily imagine that a basketball team is in the
park, and thus the statement would be falsified). Accordingly, one might claim
that accidental generalizations do not support counterfactuals, whereas real
laws of nature do. Unfortunately, as noted in chapter 3 above, this move opens
up the door for a pseudo-science to be counted as a legitimate science, since
the ESP theorist, to use our previous example, could claim that their laws of
nature support subjunctive conditionals (which are counterfactuals), as well: "if
you were to have conducted a Zener card test, and all the conditions were
ideal, then the subject would have scored above chance on the
test." The ESP theorist can now claim that their natural laws, in
counterfactual form, are just as plausible as the so-called legitimate
counterfactual claims of physics (that are also never observed to hold true);
e.g., "if this thermometer were to have been placed in the
center of the earth, then it would have read temperature z". Once
again, the pseudo-scientist could thus hold that there is no difference in kind
between their counterfactual conditionals and the counterfactual conditionals
of the physicist—i.e., both have never been observed to hold
true—so both are equally legitimate natural laws. As noted above, this is
not a conclusion that the Positivists would like very much! In fact, the
Positivists had problems with counterfactual conditionals in general, since
they seemed to be beyond verification (through observation), as in the
following example: "if x were not acted upon by any external force, then x
would move inertially"—but this never occurs, since you would have
to remove all the other objects in the universe to obtain observational
verification of this prediction.
A second problem, closely related to the
first, also raises concerns over the inability of the D-N model to separate
accidental generalizations from real laws of nature. The following example of a
natural law would seem to be an acceptable use of the D-N method:
1) If a flag pole is of height x, and the
sun is at angle y to the horizon, then a shadow will have a length z.
2) the flag pole is of
height x, and the sun is at angle y to the horizon.
3) the shadow has length
z.
Overall, the
premises of this D-N explanation do appear to explain the length of the
shadow. But, since the D-N model does not appeal to causation, it seems that
one could easily rewrite the natural law in a different form, and that this new
formulation would be equally well verified by past experiences of flag poles on
sunny days as the original formulation (premise (1) above). For instance, we could have put
forth the following D-N explanation:
1) If a shadow has a length z, and the
sun is at angle y to the horizon, then a flag pole will have a height x.
2) the shadow has length z, and the sun
is at angle y to the horizon.
3) the flag pole has height x.
Therefore,
premises (1) and (2) explain the conclusion (3)!? Of course, this
seems very silly, since the height of a flag pole is not explained, in the
normal sense of "explained", by the length of a shadow and the angle
of the sun. But, why is the explanation in the first D-N case (1-3) successful,
whereas in the second D-N case (1-3) the explanation is not successful? Both
explanations are identical according to the tenets of the D-N model: so, what
went wrong? The most likely answer to this puzzling asymmetry is the
presence/absence of causation: the first D-N case (1-3) is successful
because we know that flag poles and the sun cause shadows; moreover, the
second D-N case (1-3) fails because we know that shadows and the sun do not
cause the height of flag poles (although you can infer what the height of the
pole must be given the length of the shadow and the angle of the sun—but
drawing an inference is not causation). Consequently, the D-N model seems to be
lacking a major component of all, or at least many, plausible explanations of
natural events, namely the appeal to causation. Likewise, it was the failure to
employ the concept of causation that left the D-N model incapable of separating
real natural laws from mere accidental generalizations. For these reasons (and
many others) the D-N model did not prove to be a long-lived theory of
scientific explanation (at least in its original incarnation).
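The asymmetry can be made vivid with a little trigonometry. In the sketch below (Python, offered purely as an illustration; the function names are mine), one and the same law licenses both inferences, which is exactly why the D-N model, lacking any appeal to causation, cannot distinguish the good explanation from the silly one:

    import math

    def shadow_from_pole(height, sun_angle_deg):
        """Predict shadow length z from pole height x and sun angle y."""
        return height / math.tan(math.radians(sun_angle_deg))

    def pole_from_shadow(shadow, sun_angle_deg):
        """The same law, algebraically inverted: a valid inference from
        shadow to pole height, but the shadow is not a cause of the height."""
        return shadow * math.tan(math.radians(sun_angle_deg))

    z = shadow_from_pole(10.0, 45.0)  # a 10-unit pole, sun at 45 degrees: z is about 10.0
    x = pole_from_shadow(z, 45.0)     # recovers about 10.0; the deduction runs both ways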
Chapter 6:
Kuhn and Historicism.
The concept of a
"paradigm" is the key component in KuhnÕs philosophy of science.
Briefly, a paradigm is a guiding framework of theories, hypotheses, and
concepts which shape and determine a person's understanding of the
world—in other words, the "theory-ladenness" of observation and
meaning "holism", as in the "web" metaphor of beliefs, are
in play, here (since the interconnected beliefs that make up a theory condition
and determine your interpretation of the world; see notes to Slowik: chapter 2
& 3). Newtonian mechanics, Darwinian Evolution, Freudian psychology, are a
few instances of a paradigm according to Kuhn. For example, an evolutionist
will likely interpret the data accumulated from molecular biology as
confirmation of their view that species have evolved over millions of years
according to a mechanistic process of genetic variation and natural selection;
while a creationist will regard the same data as vindicating their theory that
God simultaneously formed distinct species a mere 6,000 years ago. In short,
the paradigm you hold largely determines the nature of your scientific
"facts", since the ambiguity and vagueness inherent in the majority
of scientific experiments and observations routinely admits numerous
conflicting theoretical interpretations (i.e., underdeterminism)—and it
is this conjunction of observations and theoretical interpretations which
constitutes our scientific "facts". The separate functions of a
paradigm are as follows: the paradigm describes the entities of the particular
science (planets, atoms, etc.), how these entities behave, the questions that
can be legitimately asked concerning them, the techniques used to answer these
questions, and the criteria of success and failure for the various answers.
Given the wide popularity of this theory, any comprehensive belief system,
regardless of whether or not it is scientific, currently runs the risk of being
dubbed a "paradigm".
One of the more
controversial aspects of Kuhn's paradigm theory is his concept of meaning
dependence or "incommensurability". According to Kuhn, the meaning of
a term is determined by the paradigm as a whole (the "web" metaphor);
so that the term "mass", for example, has a quite different meaning
in Newtonian mechanics than it does in Relativistic mechanics (due to the
different structures of the respective theories). So, you have to know the
relevant paradigm to know what these words stand for in each relevant context.
More often than not, the style and usage of the "things" that the
term refers to (i.e., "mass") has changed drastically when viewed in
a different paradigm. Therefore, Kuhn draws the lesson that a strict or exact
comparison of similar terms from different paradigms is, in principle, an
impossible task (which he dubs "incommensurability"). The
incommensurability issue turned out to be one of the most contentiously debated
aspects of Kuhn's theory, since it seemed to imply that theory change (i.e.,
when a scientist converts to a new theory) is a non-rational process. In short,
if each paradigm has its own exclusive meanings for its terms, then each
paradigm becomes a separate language (such that the paradigm is not
understandable from the perspective of a different paradigm), so there exists
no basis for comparing one paradigm to another to determine their relative
success or merits: if there is no common language, then there is no way to
compare information at all. Consequently, science is "irrational" due
to the fact that there exists no method of evaluating competing paradigms to
determine which is the "best", and thus the notion that science
"progresses" from weaker theories to better theories is likewise
undermined.
In addition,
Kuhn argued that a scientific "revolution" occurs when an old
paradigm is replaced by the adoption of a new one; but, given
incommensurability, the switch is really only a matter of choice (or so at
least some passages in Kuhn seem to read). Kuhn believes that no scientific
paradigm can lay claim to the "truth" in an absolute sense, since all
paradigms have their own unique evolution and meaning dependence of terms (as above).
Despite the existence of some general "rules of thumb" for comparing
the relative merits of paradigms, such as, simplicity, consistency,
quantitative precision, etc., these cross-paradigm criteria are not fixed and
inviolable. Rather, any method for comparing paradigms can be revised or
rejected in the course of practice. Scientific revolutions occur when a
paradigm accumulates too many "anomalies", which are data or
experiments that conflict with the prevailing paradigm. Prior to the
revolutionary phase, a science is in a "normal" phase when it is, on
the whole, successfully solving problems and effectively dealing with
anomalies. However, if enough anomalies accrue to a paradigm which it is unable
to resolve, a revolution usually ensues. Yet, since Kuhn was quite aware of
the role of underdeterminism in saving theories from falsifying experiments, it
is always possible for a given practitioner to retain their old paradigm
regardless of the number of anomalies that accrue to the theory—this
demonstrates, once again, that no point is ever reached when
"reason", and the Scientific Method, deem a person
"irrational" for holding on to the older paradigm.
Many criticisms
have been raised against Kuhn's philosophy, but we will only examine a few.
First, one may question whether scientific theory change is best represented by
the kind of radical breaks of "world view" sanctioned by the Kuhnian
approach. Overall, Kuhn seemed to suggest that revolutions come in complete
steps, with abrupt transitions between world views; as in the transition from
Newtonian to Einsteinian mechanics. Many critics have alleged that this view of
scientific revolutions is not supported by the historical record, or, at the
least, is not typical of most theoretical transitions in the sciences. Scientific
revolutions do not occur all at once, in complete steps; rather, they evolve
over long periods of time in short, careful steps. In many ways, it is
impossible to establish a strict dividing line between successive scientific
paradigms. For example, Copernicus is often credited with overthrowing
Ptolemaic astronomy, but his use of the perfect circular orbits of that earlier
tradition has often led to his being branded "the last of the Ptolemaic
astronomers". So, is Copernicus a member of the "Ptolemaic"
paradigm, or a member of the "Copernican" paradigm? Copernicus, on
the whole, has much more in common with Ptolemy (i.e., both accepted circular
orbits, epicycle-deferent, etc.), than with Kepler (who accepted the movement
of the earth, it should be noted, but who rejected epicycle-deferents, and
postulated elliptical orbits, etc.)—consequently, the very demarcation
between competing paradigms starts to become problematic, and seemingly
arbitrary, once the actual historical details of the scientific "revolution"
are examined (which is very bad news for Kuhn, who needs a sharp division
between paradigms for his theory to make sense). One might try to salvage
Kuhn's thesis by declaring that all scientists who "stretch" the
rules of their paradigm are, in fact, creating a new paradigm. Yet, if this
doctrine were accepted, then nearly every scientist in history would possess,
by default, their own unique paradigm (when they apply the paradigm to some new
or unexamined phenomena, or just think about the paradigm differently); and,
consequently, the concept of a revolution would no longer appear to be even
applicable. To summarize, Kuhn believed that a paradigm has a well-defined set
of beliefs that define the paradigm, and hence separate it from all other paradigms.
Unfortunately many examples from the history of science seem to suggest that
the hypotheses and concepts that comprise any "world view" (paradigm)
are often quite variable and inconstant, and are not the same for all members
of a given paradigm. For another example, John Earman has argued that there is
no single concept in General Relativity that has been accepted by all (i.e.,
each and every) practitioner of this alleged paradigm. Yet, General Relativity
is an amazingly successful theory, even though the uncertainty among its
practitioners (concerning the status of its claims) would seem to suggest that
the theory should be in a "revolutionary" state and thus ripe for an
"overthrow", or, at very least, the lack of agreement on fundamental
issues should render the theory unsuccessful and unfruitful. (In short,
uncertainty over the concepts of a theory is one of the principal
characteristics of an imminent revolutionary phase, eventually leading to the
overthrow of the paradigm.) Interpreted in this light, Earman's criticism also
raises problems for Kuhn's notion of a "normal
science"/"revolutionary science" distinction.
Likewise, Kuhn's
notion of the incommensurability of scientific terms is highly dubious. Returning
to our example, physicists can easily understand and relate the different
meanings of "mass" in Newtonian as well as Relativistic
theory—and they often use these respective theories without any undue
difficulty while working with separate phenomena, such as testing the
elasticity properties of bodies (Newtonian) or the properties of subatomic
particles at high speeds (Relativity). Therefore, a radical interpretation of
Kuhn's doctrine of incommensurability (where the meaning of a term is completely
different in a rival paradigm, often dubbed "total
incommensurability") is not supported by the historical evidence. (Oddly
enough, Kuhn would seem to agree with this last observation; see The
Structure of Scientific Revolutions, 2nd ed., 1970, p. 202). It should be
noted, finally, that Kuhn seemed to make the less radical claim (in his later
years) that the terms in a scientific theory were only "partially
incommensurable" (to separate it from his earlier "total
incommensurability", if he did indeed originally hold the more radical
thesis). In "partial incommensurability", only part of the
meaning of a scientific term depends upon the paradigm, such that "the
terms of one scientific paradigm cannot be translated into the terms of another
scientific paradigm without remainder (that is, without something being left
out, or lost, in the translation)." Obviously, once total
incommensurability is rejected, and partial incommensurability is adopted as
the preferred view, the problem of the overall incommensurability of theories
is undermined—it is undermined because now there is (presumably) some
"paradigm-free", or "paradigm-independent", basis for the
comparison of the meanings of terms between competing paradigms. And once a
paradigm-free (or paradigm-neutral) basis for theory comparison is operative,
practitioners from different paradigms can now compare and contrast the
meanings of their terms and the content of their respective theories. By this
process, the "relativistic" worries about the irrationality of
scientific change, and the failure to account for scientific progress, are
drastically reduced, if not eliminated. In short, given "partial
incommensurability", it is true that the terms of one paradigm may not be
perfectly translated into the terms of another paradigm, but that doesn't
prevent one from gaining a fairly accurate understanding of the meaning of that
other paradigm's terms (since, just as in the translation of a foreign language
into English, we may not gain a perfect translation of every term in that
language into English, but we can obtain a translation that is generally fairly
accurate and thus allows communication and the transfer of knowledge, facts,
etc.). Finally, Kuhn seemed to think that the "rules of thumb" for
evaluating competing scientific paradigms did allow for some notion of the
"progress" of scientific theories (such that science can be said to progress
from less successful/accurate theories to more successful/accurate theories).
The implication of Kuhn's early work was that he denied scientific progress
(as explained above); however, in his later work, he merely claimed that these
"rules of thumb" (the Scientific Method) were not a fixed and
unchanging set of criteria operative in all cases of scientific change—the
application of these rules changes from case to case, and from practitioner to
practitioner, hence they are more like a set of "values" that guide
science, rather than a set of logical laws which function in a necessary,
law-like manner in all cases.
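To see why the "mass" example tells against total incommensurability, notice that the two concepts can be placed side by side in a single standard formula: the velocity-dependent relativistic mass reduces to the constant Newtonian mass as the velocity goes to zero. A minimal sketch (Python, with purely illustrative numbers):

    import math

    C = 299_792_458.0  # speed of light in m/s

    def relativistic_mass(rest_mass, v):
        """m = m0 / sqrt(1 - v^2/c^2); as v approaches 0 this reduces to
        the constant Newtonian mass m0, so the two concepts can be
        directly compared, contrary to total incommensurability."""
        return rest_mass / math.sqrt(1 - (v / C) ** 2)

    print(relativistic_mass(1.0, 30.0))     # everyday speeds: ~1.0, the Newtonian limit
    print(relativistic_mass(1.0, 0.9 * C))  # 90% of light speed: ~2.29, a large deviation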
Chapter 7:
Social Constructivism.
The Social
Constructivist school, established in the 1970s, has its origins in the work of
Kuhn, Feyerabend, and many of the other critics of Logical Positivism. It is
one of the most influential and popular branches of contemporary philosophy of
science, especially among social scientists and post-modern thinkers.
One of the
targets of the social constructivists is the alleged distinction, popular among
the Positivists, between the "context of discovery" (which concerns
the historical and social context involved in the development and acceptance of
scientific theories) and the "context of justification" (which
concerns the use of logic and methodological principles to justify the
acceptance of scientific theories). Of course, the Positivists emphasized the
"context of justification" in their work, since they were mainly
interested in demonstrating the rationality and privileged status of science,
while largely ignoring the "context of discovery". The Positivists only
appealed to the context of discovery (i.e., history and society) to explain
away failures of scientific reasoning, but ignored such social and historical
factors when examining the acceptance of successful theories: For instance, the
Positivists would claim that the retention of Ptolemaic astronomy by many
scientists in the seventeenth century was due to historical and social factors
(namely, the Church) that "blinded them" to the superior merits of
the Copernican theory; yet, those few thinkers who accepted the Copernican theory
are exempt from an historical/social analysis since they correctly used
"scientific rationality", or the "Scientific Method" (which
thus explains why they chose the "correct" theory). In contrast, the
social constructivists believe that even those instances of correct scientific
thinking (as by the Copernicans) demand an historical/social analysis. The
modern-day social constructivists, as a result, do not believe that any
separation between the context of discovery and the context of justification can
be accomplished: in short, the historical, social, political, personal, etc.,
factors that are involved in the formulation of theories also determine the
method of their justification. Consequently, the Scientific Method, as well as
the laws of logic and mathematics, that are used to evaluate and justify
scientific theories (and determine their status as, say, real science or
pseudo-science) are relative to the particular social group and historical
setting (just as the scientific theory is relative to a particular group and
setting). As a result, the notion of scientific "objectivity"—
understood as a body of facts, logical and methodological rules, etc., that are
not influenced by social factors—is an illusion, since all criteria and
methods of evaluating theories are relative to the social context; and,
apparently, all social contexts are equal (i.e., there is no privileged method
of evaluating claims to knowledge about the natural world). To summarize the
main view of social constructivists: scientific knowledge is not discovered, it
is socially constructed. The particular social, historical, political, etc.,
context in which a theory is developed determines the "truth" of that
theory, and the "facts" of that theory.
Needless to say,
social constructivism seems to lead to a virulent form of "(social)
relativism"; i.e., that all truth is relative to society. But, the social
constructivists try to respond to the charge of relativism by insisting that
the structure of society determines the outcomes of scientific debates and
experiments. It is not "objective nature" that determines the outcome
of scientific work, as most scientists believe, rather, it is the nature of
society that fixes the outcome of such scientific work (and thus eliminates the
specter of relativism). Yet, is this really a plausible view of scientific
facts and theories? There are a number of problems: First, the social
constructivists seem to believe that there exists some factor in society that
determines the outcome of all, or most, aspects of the scientific enterprise.
Yet, on closer inspection, this appears rather ridiculous—every outcome
of an experiment, say, that "the electron moved left instead of
right", would have to be explained by some factor in society that determines
that "left" was an acceptable resolution of the experiment, while
"right" was not acceptable. But, it seems crazy to believe that there
exists such a strict set of social factors that would completely determine such
outcomes of scientific work. Second, the social constructivists have not really
eliminated relativism since they would have to admit that a different social
arrangement and setting could bring about an entirely different science. In
fact, the social constructivists might be committed to the view that a
different science could reach entirely opposite conclusions from our own
science given the right social structure (e.g., bodies fall up, rather than
down; species de-evolve, rather than evolve; etc.)—and, of course, this
conclusion is a great embarrassment for the social constructivist school. In
response to this last problem, the social constructivists would probably claim
that while it is not true that "bodies would fall up" in a different
society/science, it is clearly the case that a different society would
construct a different theory and explanation of the phenomena of bodily
free-fall (i.e., a different society would provide a different explanation for
the motion of bodies towards the center of the Earth when dropped). This claim
is obviously correct, since the Aristotelians/Scholastics of the Middle Ages
did in fact accept a different theory of free-fall; yet, the construction of
different theories to describe a certain natural event (such as free-fall) does
not show that these events are socially constructed, rather, it only
demonstrates that the explanation of natural phenomena is
underdetermined by evidence (see notes to Slowik: chapter 2 and 3): that is,
many different theories can be constructed to explain the same event, but that
does not undermine the "objectivity" of that event (free-fall), it
only undermines the "objectivity" or, more correctly,
"uniqueness" of the scientific explanation of that event (e.g., there
is more than one explanation of the same event: it could be Newtonian gravity,
or it could be the natural Aristotelian motion of the terrestrial body towards
the center of the universe, etc.). The fact that there exist many scientific
explanations of the same natural event does not show that there is no objective
reality to that event, it only demonstrates that there are numerous
interpretations of that same underlying reality. In fact, the fallacy in the
social constructivist argument can be easily discerned if one applies it to
other cases: e.g., "Since the same penny (object/event) appears as a
circle, ellipse, line, etc., depending on your perspective relative to the
penny (different theory of that object/event), and since these ascriptions of
shape cannot be all true of the penny (since the shapes are contrary to one
another), the penny must not have an objective property of shape." Of
course, this conclusion is quite false, and the social constructivist would
probably agree. But, if the social constructivist agrees with the objection
that we have just raised, and accepts that it is the explanation of events that
is socially constructed, and not the event itself, then it appears that social
constructivism is just another rendition of Duhem-Quine underdeterminism (and
we already know about that)! Finally, social constructivism suffers from some
of the typical consistency problems inherent in all theories of relative truth:
if truth is relative to society, then the social constructivist theory is only
true if the society accepts it; and if society rejects the social
constructivist theory, then it is really false. So, since the majority of
scientists do in fact reject social constructivism, social constructivism
is false!
(See notes to
Schick: chapters 27 & 28, for more on social constructivism)
Chapter 8:
Feminist Philosophy of Science.
One of the
central tenets of feminist philosophy of science is the charge of a male bias
and sexism in the very fabric of science, at least as science is practiced
today. Now, there are many different "philosophies" grouped under the
general title, "feminist philosophy of science", so one must be
careful to distinguish the various views and arguments of these thinkers when
critically analyzing their work. However, in what follows, we will mainly
investigate the views of Sandra Harding, for her arguments represent the core
beliefs of an important part of the feminist philosophy of science community,
views that were very influential at the outset of this movement in philosophy
of science.
For Harding (The
Science Question in Feminism, 1986), modern science is flawed because it
has been dominated by a male-biased outlook that is hostile to women and
minorities. This masculine bias has
led science into a faulty and skewed view of the natural world that has
ultimately contributed to the suppression of females and minorities as well as
caused environmental damage. Specifically, the fact that most scientists are
white Western males has resulted in a science that favors the biases and power
structure of white Western males, a power structure that has allowed the
inequalities among people and societies and the environmental degradation of
the present day. One of Harding's claims about the problems of modern science
concerns its devotion to "abstract" thinking—i.e., the
mathematical, quantitative, analytical approach to science, as particularly
manifest in physics. A science that is less concerned with abstraction and
quantitative thinking, and is more open to "subjective" and
"qualitative" characteristics, will make for a more "objective"
science. Therefore, although Harding seems to value "objectivity",
her definition of this term is quite different from its commonly understood
meaning. "Objectivity" is often synonymous with value neutrality and
the exclusion of all subjectivity; or, in other words, most people interpret
"objectivity" as the avoidance of all the personal opinions and
biases that can distort one's conclusions on a particular issue. Harding, on
the contrary, believes that the "objectivity" of science would be
best increased if it were to incorporate "politically progressive
ideas" into its very structure and content. Presumably, these politically
progressive ideas would aim to redress the inequalities among the sexes and
races. In short, the attempt by scientists to remain neutral on
moral/political/social issues in conducting scientific work has actually
contributed to the oppression of women and minorities since "value
neutrality" is itself a masculine, pro-male bias that ultimately benefits
men over women. Harding believes that as more women and minorities enter
science the biases currently in place will be eventually removed.
Turning to the
analysis of these views, Harding's interpretation of objectivity and
subjectivity in science is quite problematic. For example, what would a more
"subjective" science look like? How do scientists make their
experiments more subjective and less quantitative? Some of the suggestions
center on the use of qualitative interviews and a more involved role for the
person conducting the experiment; but, it's hard to see how this counts as more
subjective than objective (i.e., doesn't the attempt to gather more information
increase the objectivity of the experiment?). Also, a more active participation
and system of interviews, etc., doesn't seem relevant to most mathematical
natural sciences, but it does make sense for social sciences: So, does the
whole case for a subjective science only apply to the social sciences? (that
would certainly limit the applicability of feminist philosophy of science, and
thus constitute a problem). Overall, there is a sneaking suspicion that a
"subjective" science, one which aimes to incorporates politically
progressive ideas, is a science which is biased towards certain views,
and will be unwilling to accept evidence and conclusions that are contrary to
those prevailing social/political views. Yet, no matter how laudable those
political goals may be, the very notion that science should incorporate a set
of political/social values, and favor those results and conclusions favorable
to those values, has the potential to do harm both to science and the world.
History is replete with examples of politically-biased scientific movements
that led to erroneous views and results that, even more disturbingly, had a
role to play in the oppression of many people: e.g., "Nazi" science
(which was biased towards the Nazi concept of an "Aryan race"), and
Lysenkoism (which was a pseudo-scientific theory of biology officially
sanctioned by Stalin's Soviet regime). Of course, Harding would probably
respond by claiming that the goal of value-neutrality is impossible, since all
people's beliefs and opinions are biased to some degree, even though they may
have tried to remove those biases—so, if we're stuck with biases, why not
try to embrace those biases which promote politically progressive ideas? (The
influence of Kuhnian and social constructivist philosophies of science is
implicit in this form of argument, by the way, since it holds that social
biases are intrinsic to all scientific theories.) However, just because it is, in
principle, impossible to remove all of the biases from our world view, that
does not mean that we should abandon the attempt to remove all these
biases! (Analogously, does the fact that an artist can't make a
"perfect" work of art mean that all attempts to make the work as perfect as
possible should be abandoned?).
Additionally, if
a male bias is responsible for the women-oppressing aspects of science today,
then there should exist evidence that the science conducted by women and
minorities is actually different from the science produced by men. That is, the
conclusions and results of science carried out by women should be both
non-oppressive to women, and more subjective, non-abstract, non-quantitative,
etc.—yet, there is no evidence to support this view. The scientific work
of women and minorities seems to be no different from the scientific work of
white, Western males: both are equally abstract, quantitative, mathematical,
and all the other characteristics of "objective" (as understood in
the sense criticized by Harding). In short, evidence does not support HardingÕs
view of a difference in the output of science based on the gender of the person
conducting the work. Finally, one of the disturbing aspects of this more radical
part of the feminist philosophy of science movement (which doesn't represent
all feminist philosophy of science, of course) is its apparent unfalsifiability
(in PopperÕs sense of the term): since their view regards objective and
critical analysis as favoring the present male biased approach to knowledge and
nature, any criticism of the feminist philosophy of science movement (as
offered above, for example) could be judged to be just another instance of this
aggressive male-bias, and thus the criticism has no legitimacy (and can be
conveniently ignored)! Consequently, these feminist philosophy of science
concepts and ideas could be seen as unverifiable, or better yet, unfalsifiable.
To conclude, it
should be noted (quite strongly) that a large number of feminist philosophers
of science, if not the majority, reject the radical relativist interpretation
of science described above. In fact, most of the criticisms raised above were
first formulated by other feminist philosophers of science. Furthermore, much of
the work of feminist philosophy of science is devoted to analyzing and
exploring the conceptual/methodological biases towards women that have played
such a prominent role in the history of science. That is, viewed historically,
women have not been treated well by science: many theorists and theories have
portrayed women as inferior to men, and science has also largely neglected to
examine specifically female issues in the history of science (most notably, the
specific medical concerns of women). However, the sad history of the biases of
gender, race, religion, etc., that have plagued science in the past should
not be used to judge the overall value and content of science, especially as
science is practiced today. Throughout history, most social institutions, if
not all, have been just as sexist and racist as science (and some much
more so)—thus, all human institutions would be found wanting if judged by
past errors and biases (such as politics, religion, crafts/trades, industries,
etc.). In fact, a major contribution of the feminist philosophy of science
movement has been its examination of this history, that is, many feminist
philosophers of science are interested in studying the role of gender biases in
the history of science (i.e., how and why such biases have developed in
science)—and this investigation is, indeed, a very important and valuable
contribution to our understanding of science.
Chapter 9:
Realism/Anti-Realism.
The central
problem in the philosophy of science (at least for this prof.) is the age-old
debate on the "ontological" status of theoretical entities (where
"ontology" is the study of the things/objects/beings that exist). The
"realists" believe that the entities that appear in successful (or
mature) scientific theories really do exist, and that the properties of those
entities are, more or less, accurately represented by mature theories. The
realist also believes that there are objective facts about the world that are
independent of the conceptual frameworks (i.e., theories) used by scientists to
access and interpret those facts. Anti-realists, on the other hand, believe
that all facts are dependent on the conceptual frameworks adopted by
scientists, and thus the anti-realists are skeptical, to varying degrees, about
the existence of theoretical entities. At this point, you might be wondering
why anyone would accept such a crazy view! Well, the problem is quite simple:
many, if not most, of our successful past theories have turned out to be false!
(E.g., Ptolemaic astronomy, many aspects of Newton's theory of gravity and
force, all the old theories of a young Earth and of medical diseases, etc.; the list
is endless.) Therefore, since most of our past successful scientific theories
made use of "entities" which we now believe are false (e.g., a
stationary Earth, an unchanging Newtonian mass, an imbalance of bile fluids,
etc.), the success of scientific theories must not depend on whether or not we
accept the existence of the entities postulated by those theories—that is,
the question as to the existence of a theory's theoretical entities seems
irrelevant to the success of that theory, and, since we know we have been wrong
in the past about which entities really do exist (and will probably be wrong
again in the future), we should abstain from regarding scientific entities as
having an actual existence outside of the context of the particular scientific
theories that we currently accept. Many anti-realists, consequently, were
"instrumentalists" when it came to scientific entities; that is, they
believed that such entities were just "useful fictions" for
organizing and summarizing experimental, observational data. Theoretical
entities did not exist in the actual world; rather, they only existed in our
theories as a useful means of making predictions about observable phenomena. Of
course, this form of reasoning is crucially dependent on some sort of
observation/theoretical distinction, as will be discussed below.
One of the most
famous, if not the most famous, arguments for realism is the
"convergence" argument (although it has many different names, such as
the "no miracles" argument). Put simply, the convergence argument claims that
if the entities postulated by our best scientific theories did not exist (even
approximately), then the success of science would be a miracle! How could all
of those scientists, working in different labs around the world and often using
different experimental procedures, reach the same results in their
separate experiments if the entities in their scientific theories did not exist
(at least roughly as the theories claim they do)? In short, the work of
scientists "converge" in the sense that the results of their
experiments and conclusions come out (more or less) identical. For example,
many different physicists, working in many different labs and using many
different testing procedures, have reached the same conclusions as regards,
say, Einstein's Special (or General) Theory of Relativity: the time distortion
effects, paths of light rays, variability of mass effects, etc., have been
verified by many different scientists using a wide variety of experimental
techniques—and how could this have occurred unless the theoretical
entities employed in Einstein's theories (namely, his concepts of space-time
and mass) existed more or less as Einstein described them? If the entities in
theories did not exist, claim the realists, it would be much more likely that
the results and conclusions of a scientific theory would come out different
each time a scientist used the theory; that is, the results would
"diverge", rather than converge. But, since the results of our best
theories do converge, the only explanation for the success of our best
scientific theories, as well as the success of science as a whole, must be
scientific realism (as it is also called). Second, the convergence argument also
applies to the "progress of science", which is the undisputed fact that our
best theories in any given science are always replaced by better theories, such
that the new replacement theory provides more accurate predictions and more
detailed descriptions and analysis of nature than the theory that was replaced.
In short, the progress of science seems to suggest that science is continuously
getting closer to ("converging" on) a final or "true" scientific theory that
would accurately describe reality, and this can only be possible if scientific
realism is correct.
There have been
many objections raised against the convergence argument (of which some are
described in the notes to Schick: chapters 33-39). Here, it is worthwhile to explore
the criticisms raised by L. Laudan. Briefly, Laudan claims that the convergence
argument fails because many past successful theories were not true (as just
described above)—and if the success of theories does not guarantee their
truth, then the success of these theories does not guarantee that their
entities really do exist (i.e., because the theory is false, its entities must
not exist)! Put differently, the success of theories has nothing to do with
whether or not their entities exist, so how can their success be used to argue
for the existence of their entities? Various responses have been formulated by
the realists in attempting to meet Laudan's challenge: First, just because a
whole theory is false, that does not mean that every part of the theory is
false. To argue that "what is true of the whole is also true of all of the
parts that comprise the whole" is to commit the fallacy of division. (It
is a fallacious form of reasoning because we are familiar with many cases where
this form of argument leads to a false conclusion: e.g., "the whole
airplane can fly, thus all of the parts that make up the airplane can fly,
too."—this is a false conclusion, since seat belts and screws canÕt
fly!) In other words, the success of past theories, even after we know that
they have been falsified, may be due to the simple fact that many parts of
these theories really did accurately reflect the world. For instance, even
though the "perfect circles" of Ptolemaic astronomy are rejected in
modern astronomy, it is nevertheless true that the complex epicycle-deferent
system used in this ancient theory often gave a very close approximation to the
elliptical orbits of the planets, which is the accepted modern theory.
Likewise, even though the concept of an unchanging, invariant "Newtonian
mass" does not hold true at high speeds and in large gravitational fields,
it is still the case that the Newtonian concept gives extremely accurate
results at low speeds and in less intense gravitational fields. (In addition,
Einstein's theory retains a notion of invariant mass, the "rest mass"
of a body, which functions much like NewtonÕs concept.) So, the realist can
accept that "false" theories can be successful, but that admission
does not entail that all of the concepts, entities, or parts that make up these
theories are equally false—in fact, the realist can claim that the
success of these false theories was largely due to those "true" parts
of the larger theory (i.e., some of its concepts and entities) that actually,
if imperfectly, represented the natural world. (Returning to our examples, the
correct "parts" of the Ptolemaic and Newtonian theories are,
respectively, the close approximation of the epicycle-deferent system to
elliptical orbits, and the close approximation of Newtonian mass to Einsteinian
mass at slow speeds and weak gravitational fields.) Finally, some of the
examples of "false yet successful theories" offered by Laudan are
somewhat dubious: for instance, Laudan counts the "spontaneous
generation" theory of biology (that life can spontaneously generate out
of, say, rotting meat), and the "humoral" theory of medicine (that
illnesses are due to an imbalance of certain biles or humors in the body's
fluids) as successful theories of science?! Overall, it is quite a stretch to
incorporate such beliefs into the domain of "successful scientific
theories". These views did offer an explanation for various observed
phenomena, and a systematic practice for investigating them—but they
failed to meet the most important aspects of the Scientific Method: in
particular, they did not offer much information about the unobservable entities
that play such a crucial role in their respective theories, i.e., how these
entities and processes (spontaneous generation and humoral fluids) could be manipulated
to make new predictions that could verify/falsify the relevant theory. Without
the manipulation and systematic testing (via new observable predictions) of a
theory's theoretical entities, there would be nothing to separate real
successful scientific theories from successful pseudo-scientific theories! In
short, a "successful" scientific theory is not just one that provides
an explanation and a set of concepts—if that is all that is involved in
qualifying as a successful scientific theory, then most pseudo-sciences would
have to be included in that category.
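The epicycle point can even be given an exact form. It is a classical result that a deferent carrying a single epicycle, with the two circles rotating at equal rates in opposite senses, traces a perfect ellipse; the sketch below (Python, purely illustrative, and not a claim about Ptolemy's own constructions) checks this numerically, which helps explain why epicyclic models could approximate planetary orbits so well:

    import math

    def deferent_plus_epicycle(a, b, theta):
        """Position traced by a deferent of radius a carrying an epicycle
        of radius b rotating in the opposite sense:
        x + iy = a*e^(i*theta) + b*e^(-i*theta)."""
        return (a + b) * math.cos(theta), (a - b) * math.sin(theta)

    # The path is an exact ellipse with semi-axes a+b and a-b; verify
    # that x^2/(a+b)^2 + y^2/(a-b)^2 = 1 holds at sample points.
    a, b = 1.0, 0.2
    for k in range(8):
        x, y = deferent_plus_epicycle(a, b, k * math.pi / 4)
        assert abs(x**2 / (a + b)**2 + y**2 / (a - b)**2 - 1) < 1e-12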
Given the
foregoing debates and controversies surrounding the realism/anti-realism issue,
one might ask if there are any prospects or strategies for developing a viable
realist theory of science that does not fall afoul of the many objections
raised above. While all philosophical theories are subject to criticism (which
is the first conclusion all philosophy students reach, I might add), a recent
trend towards a more sophisticated form of realism, sometimes called
"structural" or "perspectival" realism, appears promising
(at least to me). In Kosso's book, Appearance and Reality, an analogous
view of realism is dubbed "realistic realism"; so I will briefly
discuss Kosso's theory. First, as has been noted often, it is certainly the
case that falsified theories can make successful predictions, and thus they can
generally continue to function as useful theories of science. Why? And, more
importantly, how? Well, one way to approach this problem is to recall the
discussion (with respect to Laudan above) concerning the "truthful
parts" of a falsified theory. That is, false theories can continue to
provide useful information because various aspects of the theory accurately
resemble some aspect of reality. This insight can be used, accordingly, to
argue for a limited form of realism about scientific theories. To quote Kosso,
"We cannot know everything [through our scientific theories], but we can
know a lot." (p.182) More specifically, our best theories give us a
partial picture of some ultimate reality out there in the physical world, and
this partial picture provides real information about that world; but, of
course, it is not the complete picture, and thus there are many facets of
reality missed by our best theories.
In addition,
there exist many different ways of gaining information on the world, and the
content of the information we acquire is partially, but not entirely,
dependent on the choice of framework (i.e., concepts and experimental
apparatus). For instance (using Kosso's example), we can decide to measure the
temperature of a body using either a thermometer scaled in Fahrenheit or
Celsius, so that a reading of, say, 32 degrees F for one thermometer is matched
by a reading of 0 degrees C on the other thermometer. Does this prove that the
temperature of the body is purely relative to the choice of measuring
equipment (which is, by the way, merely a conventional choice)? Put
differently, is there no objective aspect of reality that corresponds to
our measurement (and which caused the readings on our respective thermometers)?
Of course not! While the choice of measurement regarding the temperature scale
(either Fahrenheit or Celsius) was a mere convention, once we choose a
measuring convention, objective reality determines the outcome of the
experiment! If we choose Fahrenheit, then water will freeze at 32
degrees (and not at, say, 50 or 70 degrees F). If we choose Celsius, then water
will freeze at 0 degrees (and not at 10 or 32 degrees C). In short, the choice
of measuring convention, or concepts, or framework, is up to us; but once we
choose a particular framework, etc., objective reality determines the outcome.
To quote Kosso, "Once the initial choice of language [i.e., theory,
concepts, framework, measuring convention, etc.] is made, then nature dictates
what is true and what is false. We choose how to say it, but we cannot
choose what to say" (p. 108).
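A toy version of Kosso's thermometer point can be put in a few lines of code (a minimal sketch, using only the standard Fahrenheit/Celsius conversion): the scale is our convention, but one and the same physical state fixes both readings.

    def celsius_to_fahrenheit(celsius):
        """The two scales are inter-translatable conventions for
        reporting one and the same physical state."""
        return celsius * 9 / 5 + 32

    # One objective state of the water (freezing) fixes both readings:
    # we choose the language; nature chooses the number.
    freezing_point_c = 0.0
    print(freezing_point_c)                         # 0 degrees C, not 10 or 32
    print(celsius_to_fahrenheit(freezing_point_c))  # 32.0 degrees F, not 50 or 70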
Oddly enough,
this sort of realization is partly what prompted the entire Logical Positivist
program in the first place (and which nicely brings us back to the beginning of
this course)! The analysis of Einstein's General Theory of Relativity seemed to
allow for different interpretations of the geometry of physical space: either
Euclidean or non-Euclidean (where, roughly, a Euclidean space is flat while a
non-Euclidean space is curved). As Poincare had earlier pointed out, a person
on the surface of a curved space could accept one of two conclusions concerning
the geometry of that space: (1) the space is actually curved, since certain
measurement of length, with "meter sticks", reveal that various
properties of the space are not Euclidean (e.g., the circumference of a
very large circle in the space would not equal "pi times the diameter of
the circle", as it does in Euclidean space); (2) the space is flat, since
the disconfirming measurements with the meter stick (which seem to prove that
the space is not Euclidean, and thus not flat) can be "explained
away" by merely claiming that some "strange force" exists in the
space which distorts the meter sticks used in the measurement
(underdetermination again). That is, if the strange force did not distort the
meter stick, then the measurements of the space would reveal that it was
actually flat (so that, in the absence of the force, the circumference of a
large circle would equal "pi times the diameter of the circle", as
mandated by Euclidean space). Poincare claimed that both options (1) and (2)
are equally compatible with the person's experience of that space, so the
observable evidence cannot determine which theory is preferable. (In fact,
maybe the person really does live on a flat surface with strange forces, so
that the "curved space" interpretation is actually false.) What does
this example prove? Is the geometry of space merely a convention, so that we
could accept either a flat or curved theory of space? Of course, there may be
good reasons to accept that the curved space is the "real" geometry
of the space. (For instance, the curved space theory is simpler, for it does
not need to postulate "strange" forces, whose presence in the flat
space theory may also conflict with our other beliefs about forces—that
is, the Scientific Method may favor one interpretation over the other).
Regardless of this debate, one thing remains true: not all aspects of the
choice between (1) and (2) are conventional. If one decides to retain a flat
space, then one must postulate the existence of very specific (strange)
forces that distort the meter sticks. On the other hand, if one decides to
directly accept the measurements conducted with the meter stick, then one must
conclude that the space is curved (and not flat). Consequently, as in the case
with our alternative thermometer measurements, once you conventionally choose
a framework or a set of beliefs about the world (respectively, that the space
is flat, or that the meter sticks provide accurate measurements), nature
determines what you can further say, or what conclusions you must
reach, about the world. In short, the constraints that nature places on our
conclusions about the world, once a framework or language is chosen, are what
"objectivity" means, and this constitutes a form of
"realism"!