The art of knowing.
Knowledge. Information is out
there in the world in massive amounts. This is not knowledge, but it is
potential knowledge. This potential knowledge is in the objective world
of reality, which all knowledge seeks to describe. There is also real
and vast knowledge in this objective reality, which has been invented
and tested by our ancestors, and collected for us to draw upon over
thousands of years. Our problem as learners is a matter of somehow
transferring the knowledge and potential knowledge in this objective
world into the subjective world within our minds. In this way the
information and knowledge becomes our knowledge, as it is understood by
us. But how do we make this transference? We can invent our own
theories and test them ourselves, but this is a very laborious way
of acquiring knowledge. We can also take the theories presented to us
by others and test them, but this too would be unimaginably laborious.
If we had to rely on testing every theory ourselves we would actually
have time to learn very little. We would most likely still be cave men
and unable to progress further.
This, of course, is not the way we normally learn.
Most of what we learn is a matter of assimilating and accommodating
the theories of others into our personal models of reality, without
testing them. The philosopher Karl Popper calls this the
transferring of World 3 knowledge to World 2; World 3 being our
cultural heritage, the body of information (knowledge?) held in common
in books, computers and other forms of media; while World 2 is the private and
subjective world of knowledge within each individual.
Every
moment of our lives we are presented with information by (so-called)
experts or authorities. The question is how can we know if this
information is correct so we can decide whether to accept it into our
model of reality or not? The problem is how can we trust the
theories of others sufficiently to be willing to include them in our
own personal models of reality? How can we trust this
information coming from outside ourselves and untested by us? How can
we discern what is true in World 3 so we can incorporate it into our own
personal maps of reality?
Now
you might think the answer to this is both easy and obvious. But this
is not the case. Do you think some information is likely to be true
because it is presented in a newspaper? Well, you might be a bit
skeptical of what is printed in newspapers. What about a prestigious
scientific journal? Can you simply accept information because it
appears in a prestigious scientific journal? If you think you can you
would be dead wrong. But if we can't trust in scientific journals what
can we trust in?
The
short answer is that we can't trust any authority, anyone or anything.
How then do we go about choosing what to accept into our personal
models of reality? Well there is both good and bad news there.
In
his book "Nonsense
on Stilts" Massimo Pigiucci Quotes from the work of Alvin I
Goldman to provide a novice with some some way of assessing the worth
of a 'so called expert'. He tells us there are five criteria a novice
can use to assess the worth of a possible expert as follows:
The
five kinds of evidence that a novice can use to determine whether
someone is a trustworthy expert are:
- an examination of the arguments presented by the expert and his rival(s);
- evidence of agreement by other experts;
- some independent evidence that the expert is, indeed, an expert;
- an investigation into what biases the expert may have concerning the question at hand;
- the track record of the expert.
Consonance, the best way to know. One
way a novice can choose is on the basis of how well the new information
fits with the rest of what he/she knows. A novice can judge the
validity of the new information by how well it fits together with all
the information in his/her personal model of reality. Of course this
kind of validity is only reliable if the novice already knows a lot
about the subject matter in question. The more a novice knows about the
subject matter, the more he/she has knowledge of the field, the better
he/she can judge if the new information is likely to be correct or not.
In other words in order to judge if new information in a particular
subject field is likely to be correct or not, a novice basically has to
be an expert.
A
novice can say to himself, 'Is this new information consonant with what
I already know, is it even expected from what I already know?' This is
most easily accomplished by a novice when the so-called expert is not
an expert at all and the novice may actually have some knowledge that
can easily catch the impostor out. Of course the information a novice
has already accepted into his/her personal model of reality is only a
reliable way to judge the truth of new information if it is
comprehensive, internally consistent and (well) correct itself. If what
a novice knows/believes is already wrong, then of course information
that is also wrong may appear consonant with it. In this way people
can build models of reality that are in fact wrong.
In
line with Goldman's first criterion, if a novice is adept at using
logic, he/she can examine an expert's position in other ways. A novice
can examine the opposition to our expert and assess how congruently
that opposition also fits with the novice's model of reality. If the
opposition to this expert fails badly in this regard this lends some
credence to the expert. Likewise if the opposition succeeds in being
consonant with the novice's model of reality, then this should decrease
his/her confidence in the said expert.
There
is also another area where the position of an expert can be examined by use of
logic, and that is by assessing whether the expert's position is itself
internally consistent. Of course if the position of the expert is
internally consistent this does not make it right, but if it is
inconsistent then we can certainly infer that it is wrong. This
too may be applied when examining the
opposition.
The judgment of other experts. If a
novice does not have a lot of information already accumulated on a
subject in his/her personal model of reality, it becomes much harder to
judge whether new information is expert or not. The novice finds that
he/she has to fall back on the judgment of other experts. But experts
vary in their expertise and in their reliability. The problem then
becomes, "How can we know which experts to trust?
While
it is possible that the majority of experts in a particular field can
be very wrong, and in fact are every time there is a huge change
happening in their field, this in no way devalues the statistical
relevance of an expert being supported by the majority of fellow
experts in his field of endeavor. Support from a large number of fellow
experts in the same field of expertise has to be a strong form of
inference of expertise, if that field of endeavor is recognized by the
scientific community as a whole. In other words, while it is always
possible to find someone with a PhD who will support some pseudo
science, it is very unlikely that the majority of PhDs, in a
scientifically accredited field, will support such things. The
agreement of large numbers of accredited others in a field of endeavor
must be significant when a novice is assessing the worth of a possible
expert.
Independent evidence that the expert is, indeed,
an expert. The individual scientist may have credentials but
these credentials have to be the right kind of credentials. Not all
PhDs have the same value when an expert is being assessed by a novice.
For a start, credentials in a different field from the one in which the
expert is supposed to be expert do not count. Also, while many
universities only hand out PhDs to people who really know their stuff,
other universities hand out degrees rather easily to people who have
learned very little, and some so called universities are themselves
unaccredited and bogus. Finally, when examining a degree to assess
trustworthiness of an expert, a novice should also assess whether the
field itself is accepted by the scientific community in general. A
PhD in a field such as bible studies, which is not accepted as a
science, has no value for the novice assessing scientific
trustworthiness.
In
assessing an expert in terms of his degrees a novice should therefore
check if the degree is from a respectable university that has the
reputation of not handing out degrees easily. A novice should also
check if the degree is in a subject that is held to a strict scientific
methodology of a science acceptable to the general scientific
community. Finally a novice should check if the degree the so-called
expert has obtained is in the subject for which he/she is supposed to
be an
expert.
The biases of the expert. Bias is a
strange phenomenon. While obvious biasing elements in the life of an
expert, such as where his/her funding is coming from, should indeed
give a novice pause in assessing the trustworthiness of the so-called
expert, such elements are in no way a conclusive argument that the
expert is even biased. On the other hand, certain institutions (such as
some think tanks) are notorious for only employing people who are in
fact biased toward the think tank's agenda. When assessing an expert in
terms of his biases a novice should both consider the number of biasing
elements in the expert's life and whether those biasing elements
themselves do or do not have a reputation for having biased (so called)
experts in their pockets.
The track record of the expert.
The track record of a proposed expert is of course the number of times
he has been right in the past. Sometimes this is easy to judge, where
his expertise has been called upon several times in the past and the
advice given or work done on those occasions proved effective. But it
can be quite difficult for a novice to assess whether an expert
has been successful in the past or not, in any particular area of
knowledge. Peers can assess one another's successes by means of how
often their work is cited in other research. For a novice this is not
practical. Thus novices have to be content with peer reviews of past
work, which they can easily confuse with popularity in news reviews. A
Nobel prize in the field would of course be a good indicator of
track record, but how many experts have those?
The
trustworthiness of the expert is not everything. It's not
enough to apply the above criteria when assessing whether the
information given by someone claiming to be an expert is trustworthy
information. Knowing whether an expert is an expert or not, even
if such an assessment could be accurate, does not in itself validate
the trustworthiness of any information, although it is certainly a good
start.
Approaching
truth. Some people think that science tells us what is true
or that it should. But this is not possible because science is
continually in a state of improving its theories. Science is
continually amending, shoring up, rearranging, tearing apart and even
completely axing old theories and replacing them with newer ones that
are superior in what they can predict and explain. The most we can say
is that the new theories are nearer to the truth than the older ones
and that while science approaches closer and closer to the truth, it
can either never reach a final truth, because that is beyond the limits
of human ability, or, if it does reach a final truth, we would have no
way of knowing that it had. In the book
"This Will Make You Smarter"
Carlo Rovelli explains it:
"There is a widely held notion
that does plenty of damage; the notion of 'scientifically proved.'
Nearly an oxymoron. The very foundation of science is to keep the door
open to doubt. Precisely because we keep questioning everything,
especially our own premises, we are always ready to
improve our knowledge. Therefore a good scientist is never
'certain.' Lack of certainty is precisely what makes conclusions more
reliable than the conclusions of those who are certain, because the
good scientist will be ready to shift to a different point of view if
better evidence or novel arguments emerge."
Governments,
big business and other biased groups stand to lose or gain as a
consequence of what is believed to be true. Thus, they may attempt to
pervert and obscure this approach to the truth for their own ends. Be
that as it may, science is the only real tool we have for trying to
uncover the truth and scientists are the seekers of that truth. The
shadow of truth thus revealed is only possible through the many
successive, tentative
reinterpretations
that we know as scientific
progress.
Pseudo
science and faith-based explanations. Although science cannot
tell us what is true, it can demonstrate its superiority to pseudo
science and the supernatural in terms of explanation and prediction.
Science has three criteria that separate it from pseudo science.
Naturalism.
Firstly, science takes as given that there is a natural cause for any
effect or outcome. As far as science is concerned supernatural causes
are not causes at all but rather admissions that we do not have a clue.
Instead of saying we have no idea how or why something happened, we
excuse our ignorance by saying god got personally involved. Pseudo
science and the supernatural do not require a natural explanation,
although they may sometimes give one.
Theory.
Secondly, science requires that any explanation must be in the form of
a theory that explains observable events and is internally consistent.
In his book "Nonsense
on Stilts" Massimo Pigliucci explains it as follows:
"The
presence of coherent conceptual constructs in the form of theories and
hypotheses is also a necessary component of science. Science is not
just a collection of facts, as Francis Bacon thought. Theories are
creative productions of the human mind and reflect our best attempts at
making sense of the world as it is."
Empiricism.
Thirdly, science requires that a theoretical hypothesis must be
empirically testable. This is the main thing that science does that
makes it superior to any other kind of explanation. All scientific
theories are not only tested in the research that produces them as
results, but all testing must be open for others to duplicate, and is
indeed duplicated before any kind of acceptance can begin. On the other
hand pseudo science relies on personal testimony, coincidental
occurrences and just plain faith. Unfortunately the personal testimony
of a friend tends, in many minds, to be more persuasive than scientific
research. However, the beauty of scientific research is that with the
right equipment and expertise anyone can duplicate the research and
check for themselves. This process is the backbone of science and is
called empirical testing. When certain fields of science fail to do
this, they can drift into being pseudo science, as were the cases of
cold fusion and eugenics.
Good
science and bad science. Unfortunately scientists are human
beings, with all the fallibilities of humans, and in evaluating the
trustworthiness of scientific research a novice needs more and better
ways of evaluating expertise than just examining the alleged experts.
In this regard we need to look at ways a novice can distinguish good
science from bad science and how and why good science can show its
superiority over bad science in terms of explanation and prediction.
Trusting the information taught in schools and
universities. The major way important information reaches us
is through the institutions of education.
While the process of review for textbooks, the process of construction
of curriculums, and the ability of teachers to teach from these, is far
from perfect, it is still the most trustworthy expertise we will
encounter in our lives. Of course not all textbooks and curriculums are
equally good or correct. Almost all textbooks probably have some typos
or technical errors and some subjects that are taught in colleges and
universities are completely unscientific and no textbook written on
that subject could be trusted as being scientifically accurate.
However, despite all that, textbooks are far more likely to contain
accurate information than any other sort of book.
There is a vast apparatus of experts that decide
which textbooks and curriculums will be used in schools and
universities. There is also a massive checking apparatus of
experts, teachers and students that is continually vetting and vetoing the
choices of the original experts. In fact the information that comes to
us through schools is so well checked and rechecked, and held back till
experts are absolutely sure, that it is actually out of date by the time
it reaches the students. Later, in university, when students tend to get
more up-to-date information, there is far less likelihood of the
information being correct. It is more up to date but also more
speculative and cutting edge, and thus there is a good possibility the
information might be wrong. Indeed at that level students can be
exposed to several studies that may contradict one another. Students are
then learning, not facts, theories or even how to find such, but rather
learning what tentative theories might be worth checking in the
student's own research studies. Perhaps the best thing about schools
and the institutions of education is how well they perform this one
function.
The cosmic joke. It is
through the understanding of knowledge that we try to predict what will
happen in the universe and thus have some control over it. Humans have
come a long way in being able to do this. We have invented a principle
that helps us do this, called cause and effect. If A
then B. If A occurs then so will B, or if you do A, B will occur.
Unfortunately, if we stop to think about this at all, we know that this
cannot be all there is to cause and effect. Just because B follows A
does not indicate there is cause and effect taking place. While a cause
must precede an effect, the condition of two events following one
another is in itself insufficient to infer cause and effect. This is
called the post-hoc fallacy. In his book "Everything
is Obvious Once You Know the Answer" Duncan Watts puts it
like this:
"If
a billiard ball starts to move before it is struck by
another billiard ball, something else must have caused it to move.
Conversely, if we feel the wind blow and only then see the branches of
a nearby tree begin to sway, we feel safe concluding that it was the
wind that caused the movement. All this is fine but just because B
follows A doesn't mean that A has caused B. If you hear a bird sing or
see a cat walk along a wall, and then see the branches start to move,
you probably don't conclude that either the bird or the cat is causing
the branches to move."
The trick is of course to somehow distinguish
when there is a causal relation and separate those cases from the others.
Sometimes the principle of cause and effect works beautifully and we
can predict what will happen to a very fine degree. Other times its
usefulness is imprecise or nonexistent.
In
a sense, this knowledge of the limits of knowledge, this knowledge of
where our knowledge ends, is the most important knowledge we have. When
we know what we do not know, all manner of things become open to us.
We start to see what questions to ask to find new knowledge and we
begin to understand that it is better to do nothing than act on a
delusion where we think we know something but actually do not.
It is important knowledge, but it is also a cosmic
joke. Unfortunately the universe simply refuses to be completely open
about what is happening at any one time, thus we can incorrectly
perceive causation where there is none or little. The universe seems to
play tricks on us. We try to find causation by discovering patterns
where phenomena always occur together. We call this correlation.
Unfortunately many phenomena that occur together do so randomly and
there is no causal connection between them. Even stranger, there are
occurrences of correlation called coincidence, where there seems to be,
or ought to be, a causal connection, but there is none. On top
of that, when causation does work, it usually only works statistically.
In other words there is only a probability that if A occurs, B will
follow. B usually follows A but there are a few occasions when it
doesn't. These may be causal exceptions or they may simply be what
science calls outliers. Outliers are not exceptions from the causal
rule, but are simply that part of the probability that did not conform.
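A worked example, with an assumed probability added only for illustration, shows what this looks like in numbers:

    Suppose A causes B only statistically, with P(B follows A) = 0.95.
    Then in 1,000 occurrences of A:
        about 950 are followed by B
        about  50 are not
    Those 50 or so cases are the outliers: not exceptions that refute the rule,
    just the part of the probability that did not conform.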
All
this considered, we should be very careful not to accept authorities and
experts just because that's what they are. Charlatans have been using
this feature of the universe to convince us of things that were very
wrong for as long as man has walked the earth. David H. Freedman in
his book "Wrong"
calls this kind of bamboozling the "Hitchcock effect" after a Hitchcock
story that seemed to typify how it worked as follows:
"That
story revolved around a man who receives a series of mailed predictions
that all prove correct, at which point he is ready to trust the
infallible predictor with his money - but as it turns out, the
predictor had started off mailing various predictions to a large number
of people, then focused each subsequent mailing on the increasingly
smaller subset of people who had received only the predictions that
happened to prove correct, until he had one victim who had by pure
chance received all the winning predictions. It sounds like a
far-fetched scheme, but in fact we often pick our leading experts this
way - that is, we look back to see which expert among a field of many
happened to call it right and then hold up that expert as having
special insight. But we need to remember that if there are many experts
predicting many things, some of those predictions will have to prove
right, though it may be entirely a matter of luck or even bad judgment."
"There
are also non-crowd-related variations on this theme. Any one expert may
be able to sort through her long history of various predictions and
find the few that proved correct, holding these up to our attention
while glossing over the others. Or an expert can make somewhat vague
predictions, fortune-teller-style and then sharpen them after the fact
to make the predictions seem highly accurate."
We
do not even need this to be a scam. If we believe, we tend not to
notice when a predictor is wrong, but we are primed to notice when the
predictor is correct. When an expert asks us to believe, they are
stacking the cards so that we will only notice when they are right.
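The arithmetic of the scheme can be made concrete with a small simulation. This is only an illustrative sketch, not anything from Freedman's book; the pool size, the number of rounds, and the "up"/"down" predictions are assumptions chosen to keep the numbers simple.

    import random

    def hitchcock_scam(pool_size=1024, rounds=6):
        # Each round, tell half the remaining pool "up" and half "down";
        # keep only the half that happened to receive the correct prediction.
        pool = list(range(pool_size))
        for _ in range(rounds):
            random.shuffle(pool)
            half = len(pool) // 2
            told_up, told_down = pool[:half], pool[half:]
            outcome = random.choice(["up", "down"])  # the market does whatever it likes
            pool = told_up if outcome == "up" else told_down
        return pool  # the recipients who, by construction, saw only correct predictions

    print(len(hitchcock_scam()))  # 1024 / 2**6 = 16 people now trust the "infallible" predictor

The same arithmetic applies when we crown whichever expert "called it right": with enough predictors making enough predictions, some unbroken run of correct calls is bound to exist, and by itself it tells us nothing about skill.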
Karl Popper and science. Karl
Popper shows that when we are learning, we first conjecture a possible
solution to a problem. Then we either actively test it, as a scientist
would, or accept it till the events of life seem to corroborate it or
refute it. Popper not only explains that this is the way we must learn,
but also that other ways of trying to learn, if we try to adhere to them,
such as induction, are inefficient and can lead us far away from the
truth. For this reason Popper feels that following his understanding of
learning as a principle is the most efficient way of conducting
science. Popper explains this as follows in his book "Unended
Quest":
"I suggested that all scientific
discussion start with a problem (P1) to which we
offer some sort of tentative solution - a tentative theory
(TT); this theory is then criticized, in an attempt at error
elimination (EE); and as in the case of dialectic, this
process renews itself: the theory and its critical revision give rise
to new problems (P2). Later, I condensed this into
the following schema:
P1 -> TT -> EE -> P2,
a schema which I often used in lectures."
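Read as a procedure, the schema is a loop: a problem prompts a tentative theory, criticism hunts for an error, and any error found becomes the next problem. The sketch below is only one possible rendering of that loop; the function names propose_theory and find_error are hypothetical stand-ins for the creative and critical work Popper describes, not part of any actual method or library.

    def conjecture_and_refutation(problem, propose_theory, find_error, max_cycles=10):
        # P1 -> TT -> EE -> P2, rendered as an iterative process.
        theory = None
        for _ in range(max_cycles):
            theory = propose_theory(problem)   # TT: a tentative solution to the current problem
            error = find_error(theory)         # EE: criticism, an attempted refutation
            if error is None:
                return theory                  # not proven true, merely not yet refuted
            problem = error                    # P2: the refutation itself poses a new problem
        return theory

The feature to notice, and the one Popper insists on below, is that the loop never terminates in certainty: a theory that comes back from this process has merely survived criticism so far.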
Popper shows that in science each individual
scientist creates and revises conjecture from analysis of their own
sensory input. However, he also shows that the same sensory input can
only be perceived through the lens of existing theory. Thus it follows
that, without theories about how the world works, incoming sensory data
is meaningless.
Popper further shows that no amount of
corroboration can ever validate a theory, but that a single instance of
deviation can invalidate it. Not only that, but the amount of
corroboration does not even improve the statistical probability that
scientists are correct. Methodologically, theories cannot be refuted
unless scientists formulate them in such a way as to show just how they
could be refuted. Scientists should therefore, as an article of method,
not try to evade refutation. Instead scientists should formulate and
present their theories as unambiguously and clearly as possible, to
invite refutation. Thus a truly scientific theory is one that openly
exposes itself to testing.
Of course Popper's schema
for learning is by no means universal in its use as the base method for
doing scientific research. Indeed scientific research is often
conducted in ways that seem very far from this ideal. Just how far
science deviates from this ideal has been exhaustively catalogued by
David H. Freedman in his book "Wrong".
As Popper suggests, good science should be about inventing theories and
testing them by trying to prove that they are wrong. One would expect
that most of such theories would turn out to be wrong, just as a
probability statistic, but that would be good science. However, the
wrongness being considered here is, "How good is actual scientific
testing, and can it and its experts be trusted?" Freedman does not
suggest or intend to imply that science is bad, but rather he wishes to
alert us to stop and think before rushing to accept new information
from experts. He intends only to advise us that there is good reason to
be skeptical of individual scientific papers. Here are some facts from
his book that should give us pause before we accept any new research as
gospel.
- Fallibility. All scientists are fallible and can make genuinely innocent methodological errors.
- Sloppiness. Some scientists do sloppy work that has little value.
- Bias. Some scientists (perhaps many) are led astray by their own personal beliefs.
- Intimidation. Some scientists are pressured into misrepresenting data by those who have power over them.
- Corruption. Some scientists are corrupt and fake their work to increase or maintain their positions, increase their prestige, or support their expensive tastes.
- Limits. A great deal of testing is itself flawed.
Fallibility. All scientists are
fallible and can make genuinely innocent methodological errors. In his
book David H. Freedman tells a story about research undertaken by
Albert Einstein and Wander Johannes de Haas who measured something
called the g-factor (how much an iron bar would twist in a magnetic
field). They conjectured that the g-factor should be precisely 1 for
each atom. After about a year of fine tuning they were able to measure
a result of 1.02 using highly sensitive instruments. Unfortunately
their measurements were way off and subsequent experiments consistently
produced a g-factor of about twice that value. The point is that
Einstein and de Haas had simply measured incorrectly. If someone like
Einstein can make an error in measurement then surely any scientist
could make such an
error.
Sloppiness. Some scientists do sloppy
work that has little value. There are many and various ways for
scientists to be sloppy with data.
- Mismeasuring. Like Einstein, all
scientists can and do make mistakes in measurement, most of them much
sloppier than Einstein's. According to Freedman, scientists have
been known to misread blood pressure, height and heart rhythms and have
given wrong dosages and even wrong drugs. They have misrecorded the
location of subjects' homes. Even when they measure correctly, they may
do so on people who do not properly represent the population such as
the young, the old, alcoholics, drug abusers, illegal immigrants and
the homeless.
- Surrogate measurements. The most
common way for scientists to be sloppy is by making surrogate or proxy
measurements. Surrogate measurements are those where you attempt to
discover changes in one thing by measuring something else. Unless a
long-standing cause-and-effect relation has been shown between the two things,
one cannot assume there is one. This however happens very often in
research for reasons of convenience. We study animals instead of humans
because applying research to humans might damage or even kill them. We
study the flow of blood in the brain with functional magnetic resonance
imaging to try and discover what is happening in the brain. This fMRI
does not tell us directly what is happening in the brain which would
require cutting the brain open, but rather tells us how much blood is
flowing where in the brain. It might be telling us what is going on in
the brain or it might not. It is convenient not to have to cut brains
open to experiment, however.
- Tossing out or ignoring data. One way is to draw a line in the wrong
place between data that is bad or contaminated and data that is
inconvenient. One can throw out relevant data that is inconvenient.
This can be done intentionally, excusably by accident, or simply
carelessly, as in sloppy work. There are many ways of tossing out data.
If data is truly contaminated, that section of the data should be collected
again, not simply ignored or left out. Another way of tossing out data
is to simply fail to submit the whole of the research for publication.
If the research disproves what the researcher set out to prove, the
negative finding may well be more important.
- Moving the goalposts. Another way to
be sloppy is an exercise in self deception where the scientist seeks to
discover some positive finding in the research after the research has
refuted the theory it had set out to prove or disprove. The research is
then presented as if it had set out to prove or disprove that positive
finding. This is akin to footballers moving the goalposts after the
ball has been kicked in order to ensure the ball goes through them. In
research misconduct terms this is referred to as using a
retrospectoscope.
- Correlation. Just because one factor,
behavior or condition correlates with another does not necessarily
imply there is a causal relation between them. Correlations have the
same validity, whether they are presented as the findings of science,
or as random occurrences, or as coincidental patterns that we call
superstitions. If two factors always or nearly always occur together it
is possible one may be the cause of the other. There are however other
possibilities:
- One other possibility is that there is a causal relation between the two
but that it is only a part cause. The fact is that there can be
multiple causes. The syndrome known as schizophrenia was thought at one
time to have been solely caused by a dynamic that occurs in families
called a double bind. These days psychologists are more inclined to a
theory that schizophrenia is caused by genetic programming. However, it
is more likely that both of these phenomena may be part causes and that
there may be other causes as well.
- Another possibility is that the relation between the two is not causal
but simply a predisposition for something to occur. They may simply be
risk factors. To add to the confusion, it is possible that a number of
different predispositions, while each on its own is not causal, may if
acting together, become a cause.
- Yet another possibility is that the correlated factors may both be caused
by a third factor. In his book David H. Freedman provides an example: "It may be true that a lack
of sleep is linked in some way with obesity, but it's a big jump from
there to conclude that if someone starts getting more sleep, they'll
lose weight. It may be, for example that people who sleep less also
loosely tend to be people who exercise less, or eat less healthfully,
or have a hormone disorder, or are depressed - in which case it could
be any of these factors, rather than sleep levels, that needs to be
addressed in order to affect obesity. That would mean the link to sleep
is pretty much incidental, mostly useless, and misleading." (A minimal simulation of this third-factor pattern is sketched just after this list.)
- It is also possible that we may mistake the direction of flow of
causation. What we think is a cause may be an effect and what we think
is an effect may be the cause. There is a clear correlation between
what people eat and how fat they are. We believe that eating a lot of
food or fatty food causes people to get fat. But if we look at what fat
people eat we will probably find they eat lean food and diet food.
Causation is flowing the other way. The fact that people are fat causes
them to eat lean and diet food.
- There is even the possibility that causation might flow in both directions.
We call this a chain reaction. A cause creates an effect but that
effect becomes the cause of other effects which in turn cause even more
effects and so on.
- Finally, there is the possibility that the correlated phenomena simply occur
together by chance without there being any causal link.
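A toy simulation can make the third-factor case above concrete. In the sketch below, which is purely illustrative and uses invented variable names and numbers, a hidden factor drives both x and y; the two never influence each other, yet they come out strongly correlated.

    import random

    def correlation_from_hidden_factor(n=10_000):
        # x and y never influence each other; both are driven by an unmeasured third factor.
        xs, ys = [], []
        for _ in range(n):
            hidden = random.gauss(0, 1)               # the confounder
            xs.append(hidden + random.gauss(0, 0.5))  # stands in for something like "short sleep"
            ys.append(hidden + random.gauss(0, 0.5))  # stands in for something like "weight gain"
        # Pearson correlation, computed by hand to keep the sketch dependency-free.
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
        sx = (sum((a - mx) ** 2 for a in xs) / n) ** 0.5
        sy = (sum((b - my) ** 2 for b in ys) / n) ** 0.5
        return cov / (sx * sy)

    print(round(correlation_from_hidden_factor(), 2))  # typically about 0.8, with zero direct causation

Intervening on x in this setup would do nothing to y; only the hidden factor matters, which is exactly the trap in the sleep and obesity example.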
David
H. Freedman has this to say about it in his book "Wrong":
"We hear about these 'people who do this are most
likely to do that' studies all the time... But they're among the most
frequently misleading of all research studies, and for a simple reason:
so many interconnected things are going on in people's lives that it's
often nearly impossible to reliably determine that one factor is the
main cause of some behavior, condition or achievement."
Bias. Some scientists (perhaps most)
are led astray by their own personal beliefs. We each perceive the
world through the lens of our beliefs. It colors what we see and all
the data from our other senses. It determines how we analyze that data
and the meaning we give to that data. Both Popper and Kelly focus
on how this is both essential to, and causes problems of accuracy for,
learning. David H. Freedman in his book "Wrong"
comments on the work of Thomas Kuhn, who arrived at the same sort of
understanding:
"Thomas
Kuhn, the MIT science historian who famously gave the world the phrase
'paradigm shift', argued in the early 1960s that what scientists choose
to measure, how they measure it, which measurements they keep, and what
they conclude from them are all shaped by their own and their
colleagues' ideas and beliefs."
How
is it that, while most scientists fall into the many traps of bias,
some scientists seem to excel at being fairly unbiased and manage to
come up with useful findings again and again? Such people could be said
to have good biases. Jack Cuzick of Cancer Research UK says, "Some
people have a good nose for sniffing out right answers." This
does not help the non-expert in deciding who to trust because to the
non-expert, good biases and bad biases look exactly the same. It is
likely that biases can be mitigated somewhat if scientists formulate
and present their theories as unambiguously and clearly as possible, to
invite refutation, and then conscientiously attempt to disprove them.
Of course if biases are in fact good biases they will inevitably
produce good results more than once and probably often.
Intimidation. Some scientists are
pressured into misrepresenting data by those who have power over them.
Some of this is just common sense. If a scientist works for a company
and makes a finding that is not in the company's interest the company
is obviously going to put pressure on them not to publish. If a set of
data tends to put the company in a poor light that company is going to
pressure the researcher to toss out or ignore that set of data. Imagine
scientists at tobacco companies reporting findings that tobacco could be
a factor in causing cancer.
Whistle-blowers are not appreciated or tolerated in most walks of life and this
is very true of science. David H. Freedman commented about this kind of
pressure in his book "Wrong":
"Gerald
Koocher, the Simmons College dean who studies research misconduct, has
gathered online more than two thousand anonymous accounts of research
misconduct that wasn't otherwise reported. 'I wasn't surprised when I
got a lot of people saying, 'I was afraid my boss would fire me if I
blew the whistle on what he was doing,'' he says. 'I was more surprised
to get people saying, I caught my research assistant fabricating data,
so we fired them or moved them out of the lab, but we didn't report it
because we were afraid our grant money wouldn't be renewed.'
...Nicholas
Steneck, the ORI researcher, confirms the plentiful evidence showing
the reluctance to report misconduct. 'Almost every time I speak to a
group, at least one or two students or young researchers will come up
to me afterward and say, 'This is what's going on. What should I do?''
he told me. 'Or they'll say, 'I'm not going to do anything about it
until after I leave the lab' - but why would they report it after
they've left? It's almost signing your own career death warrant to blow the
whistle."
Corruption. Some scientists are
corrupt and fake their work to increase or maintain their positions,
increase their prestige, or support their expensive tastes. Here is
what David H. Freedman says about this in his book "Wrong":
"Most of us don't like to think of scientists and
other academic researchers as cheaters. I certainly don't. What
could motivate such surprisingly non trivial apparent levels of
dishonesty? The answer turns out to be pretty simple: researchers need
to publish impressive findings to keep their careers alive, and some
seem unable to come up with those findings via honest work. Bear in
mind that researchers who don't publish well regarded work typically
don't get tenure and are forced out of their institutions." Also
given the number of scientists in the world statistically there are
likely to be quite a few who are just using science to feather their
nests. There have been quite a few famous hoaxes uncovered, where data
and evidence have been knowingly and intentionally tampered with, by
famous scientists. One famous incident was the story of the missing
link.
The
Piltdown Man (often referred to as the missing link) is a famous
anthropological hoax concerning the supposed finding of the remains of
a previously unknown early human by Charles Dawson. The hoax find
consisted of fragments of a skull and jawbone reportedly collected in
1912 from a gravel pit at Piltdown, a village near Uckfield, East
Sussex, England. Charles Dawson claimed to have been given a fragment
of the skull four years earlier by a workman at the Piltdown gravel
pit. According to Dawson, workmen at the site had discovered the skull
shortly before his visit and had broken it up. Revisiting the site on
several occasions, Dawson found further fragments of the skull.
The
significance of the specimen remained the subject of controversy until
it was exposed in 1953 as a forgery. Franz Weidenreich examined the
remains and correctly reported that they consisted of a modern human
cranium and an orangutan jaw with filed-down teeth. Weidenreich, being
an anatomist, had easily exposed the hoax for what it was. However, it
took thirty years for the scientific community to concede that
Weidenreich was correct.
The
Piltdown hoax is perhaps the most famous paleontological hoax in
history. It has been prominent for two reasons: the attention paid to
the issue of human evolution, and the length of time (more than 40
years) that elapsed from its discovery to its full exposure as a
forgery.
Another
famous case of a scientist faking his results was the work of Woo-Suk
Hwang. Woo-Suk Hwang is a South Korean veterinarian and researcher. He
was a professor of theriogenology and biotechnology at Seoul National
University. He became infamous for fabricating a series of experiments,
which appeared in high-profile journals, in the field of stem cell
research. Until November 2005, he was considered one of the pioneering
experts in the field, best known for two articles published in the
journal Science in 2004 and 2005 where he claimed to have succeeded in
creating human embryonic stem cells by cloning. Both papers were later
editorially retracted after they were found to contain a large amount
of fabricated data.
Hwang
has admitted to various charges of fraud and on May 12, 2006,
he was indicted on embezzlement and bioethics law violations linked to
faked stem cell research. The Korea Times reported on June 10, 2007
that the university had expelled him (he was dismissed on March
20, 2006) and the government rescinded its financial and
legal support. The government has subsequently barred Hwang from
conducting human cloning research.
Limits. A great deal of testing is
itself flawed. It is flawed by the limits of its very nature. David H.
Freedman concludes that there are four different basic study designs
that have varying degrees of trustworthiness but none of which is
completely trustworthy. They are:
- Observational studies: These are the least trustworthy.
- Epidemiological studies: These can be more trustworthy if large and well executed.
- Meta-analysis or review studies: These can be even more trustworthy if carefully executed.
- Randomized controlled trial studies: These are the most trustworthy if large and carefully conducted.
- Observational studies: These are the least
trustworthy. They consist of researchers observing how a small group of
subjects respond under varying conditions. This could be physicians
observing patients, technicians observing volunteer subjects, bribed
subjects like criminals offered a reduced sentence, or it could be an
animal study. Because the test subject samples are small and unlikely
to be representative, individual studies of this sort must be suspect
and are hardly ever conclusive. They also suffer from confounding variables
and researcher bias.
- Epidemiological
studies: These can be more trustworthy if very large and well
executed. These studies involve following a large group of people (as
many as tens of thousands) over months, years or even decades. Such
studies suffer from two important drawbacks. Unlike the small studies
where different variables can be controlled for and the subjects can be
observed every minute of the test, these studies involve subjects that
cannot be observed all the time and where variables cannot be properly
controlled for. Such studies often have to rely on subject self-reporting
for their data. Also the type of research elements that such
studies tend to look at often involve such small changes that the
slightest imprecise measurement can play havoc with their results. Such
studies also have to be suspect and are very unlikely to be
conclusive.
- Meta-analysis
or review studies: These can be even more trustworthy if
carefully executed. These studies consist of data taken from many
previous studies which is combined and reanalyzed. These tend to be
untrustworthy because the studies they are composed of tend not to be
simply duplicate studies. If 50 people did exactly the same study the
meta-analysis of those studies would be straightforward and
trustworthy, but the fact of the matter is that they are not exactly the
same. Meta-analysis studies are usually made up of studies that are
not only trying to prove different things, but have controlled for
different factors. If a study is left out because something wasn't
controlled for it will distort the meta-analysis if that factor is
unimportant to the findings. On the other hand if a study is included
where a factor that was important to the findings was not controlled
for that too will distort the meta-analysis. Also obviously it is easy
to be biased or corrupt in this sort of analysis where what is included
and what is left out can be of crucial importance. Also these
studies can be distorted by researchers' failure to publish many
studies. What is more, it has been shown mathematically that
meta-analyses based on data from studies that were unreliable
in themselves, while more likely to be reliable than the original
studies, are still more likely to be wrong than right. So these types
of studies must also be suspect and inconclusive.
- Randomized
controlled trial studies: These are the most trustworthy if
large and carefully conducted. Nevertheless randomized controlled
trials, or RCTs, cannot be automatically trusted simply as a matter of
course. Controlled means that there are at least two groups in the
study, typically in medical trials, one of which gets the treatment
under study, while the other gets a placebo. In non medical trials the
second group simply experiences no intervention by the researchers
while the first group does. Randomized means subjects are randomly
assigned to one group or the other, to avoid confounding variables, and
usually neither the subjects nor the researchers know who is in which
group until all the data are gathered, making it a so-called
double-blind study, to avoid bias. David H. Freedman in his book "Wrong"
tells us that RCTs can in fact go wrong in any number of ways:
"For
one thing, randomization of large pools of people does little to
protect against most of the other problems with studies we've looked
at, including shaky surrogate measurements, mismeasurement, unreliable
self reporting, moving the goalposts, tossing out data, and bad
statistical analysis. As with epidemiological studies, large RCTs often
traffic in exceedingly small effects. What's more, RCT findings are
usually averages for results that often vary wildly among different
individuals, so that the findings don't really get at what's likely to
happen to you."
The failure of
journals and journalists to provide us with the truth. If
scientists cannot be trusted to give us expert advice, then the
journals and journalists through which their expert advice is
transmitted to us are doubly suspect. David H. Freedman in his book "Wrong"
explains:
"But more often the media simply draw the most
resonant, provocative and colorful - and therefore most likely to be
wrong - findings from a pool of journal-published research that already
has a high wrongness rate."
Generally even the most
highly respected science journals and their editors want to grab our
attention. They want studies that are groundbreaking, shocking, and
interesting. That means that at the very least they are looking for
studies that have positive findings. Why would anybody want to read
about a theory that has been disproved? David H. Freedman in his book "Wrong"
provides some information:
"Research by Dickersin and others suggests that on
average positive studies are at least ten times more likely than
negative studies to be submitted and accepted for publication. That
might well mean that if one mistakenly positive study is published, on
average only two of the nineteen studies that correctly ended up with
negative results will be published. The seventeen others will probably
go into a file drawer, so to speak, or if they're submitted for
publication they'll probably be rejected for having ended with negative
results that simply confirmed what everyone suspected was true anyway."
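The arithmetic behind those figures can be made explicit. The 5% significance threshold below is an assumption added here to show where numbers like "one positive, two of nineteen negatives" could come from:

    Assume a false hypothesis is tested in 20 independent studies at the usual 5% significance level:
        expected false positives  =  20 x 0.05       =  about 1 study
        correct negative results  =  the remaining     ~ 19 studies
    If negative studies are only one tenth as likely to be published:
        published negatives       =  about 19 x 1/10  =  roughly 2 studies
    The published record then shows 1 positive against roughly 2 negatives,
    even though the evidence actually ran about 19 to 1 the other way.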
Important scientific
journals tend not to print negative findings. This can have a very
distorting impact, where theories are refuted and studies are
invalidated, and nobody knows. This creates incredible waste, where
these theories and studies are refuted over and over again without
anyone being aware that it had all been done before.
What can we believe? So if all testing is
methodologically inconclusive and possibly suspect, how can we ever
trust any scientific findings? There are a number of indicators that
help in this, but again none of which is conclusive or permanent. Indeed,
as Popper has shown, it is impossible for science to be completely
conclusive. We tend to, and indeed should, tentatively accept
particular scientific ideas as being true, because there is consensus
in the scientific community over a reasonably long period of time.
However, we should also bear in mind the words of Bertrand Russell, "Even
when the experts all agree, they may well be mistaken." David
H. Freedman in his book "Wrong"
quotes polymath Charles Ferguson who perhaps should have the last word
on listening to experts. He explains:
"The point isn't that you should always do what
experts say, but rather that making giant sweeping decisions without
listening to them at all is really dumb."
Perhaps laypeople simply can't be
expected to know. It is kind of sad but we are badly equipped
to try and determine which experts are right if we are not expert in
the field ourselves. Harvard Law School professor Scott Brewer
concludes, "...laypeople simply can't be expected to figure
out which experts to believe, no matter what technique they employ."
Evolution and our genetic disposition almost guarantees that we will
tend to ignore harsh probable truths and believe comforting lies.
David H. Freedman in his
book "Wrong"
provides some tips for detecting the likelihood of the grosser forms of
scientist and test fallibility, and provides some indicators of
scientific accuracy, thoroughness and credibility, as follows:
Typical Characteristics of Less
Trustworthy Expert Advice.
- It's simplistic, universal and definitive. We all want certainty,
universality and simplicity. In his book "Wrong" David Freedman points out that if we are given a
choice of following the advice of one of two doctors, we will always
prefer to follow the advice of the one who seems most sure in what he
is telling us. A doctor who tells us that it's hard to tell exactly
what's wrong and that the treatment he is recommending usually doesn't
work, but does work slightly more often with people like yourself, is
difficult to accept as being good advice. Despite the fact that very
little in science turns out to be universal, we also tend to be drawn in
by claims that something is universally applicable. Likewise if
something is simple to understand we are also attracted to it. If we
know these features are biasing us to trust some expert advice, we can
allow for this, and allocate it to a less trustworthy
position.
- It's supported by only one study, or many small or less careful ones, or
animal studies. As a rule, the more studies the better, the
more careful the studies the better, and if we are trying to learn about
humans it is obviously better to conduct the tests directly on them
rather than on animals. If we know these features indicate this is less
trustworthy expert advice, we can allow for this, and allocate it to a
less trustworthy position.
- It's groundbreaking. As David Freedman points out, "...most
expert insights that seem novel and surprising are based on a small
number of less rigorous studies and often just one small or animal
study. That's because big rigorous studies are almost never undertaken
until several smaller ones pave the way, and if there had already been
several studies backing this exciting finding, you probably would have
heard about it then and it wouldn't seem so novel now." If we
know that novel and surprising indicate less trustworthy expert advice, we
can again allow for this, and allocate the advice to being understood
to be less trustworthy.
- It's pushed by people or organizations that stand to benefit from its
acceptance. As David Freedman also points out, "All
experts stand to benefit from their research winning a big audience, of
course, and that's well worth remembering. That's especially true when
the research is coming out of or being directly funded by individual
companies or industry groups whose profits may be impacted. Corporate
sponsorship doesn't mean a study is wrong, but there's simply no
question it sharply raises the risk of serious bias..." If
we know that promoters are of necessity biased and may even be tempted
to lie, what they promote should be understood to be untrustworthy
expert advice, and we can make it so in our own minds.
- It's geared toward preventing a future occurrence of a prominent recent
failure or crisis. As David Freedman also points out, "This
is the 'locking the barn door' effect: we're so irked or even
traumatized by whatever has just gone wrong that we're eager to do
now whatever we might have done before to have avoided the problem." Just
because we are itching to follow this advice now does not make it any
better expert advice than it was before; it is in fact less
trustworthy, because we want to follow it. We should allow for this,
and allocate the advice to being considered by us less trustworthy.
Characteristics of Expert Advice We Should Ignore.
- It's mildly resonant. You've heard the old saying that a little
knowledge is a dangerous thing. When we have a little knowledge it
often feels like we know a lot. Expert advice can just sound right to
us, because we misjudge what we know. We think we know a lot when we
know only a little. It fits with our view of the world, but we don't
actually have enough information about that field to make the informed
choice needed. As David Freedman points out: "...it appeals
to our common sense, it's amusing, it makes life easier for us, it
offers a solution to a pressing problem. Too bad none of that improves
the chances of an expert conclusion being true."
- It's provocative. Again as David Freedman points out: "We
love to hear an expert turn a conventional view on its ear: Fat is
good for you! Being messy can be a good thing! We're tricked by the
surprise, and at the same time it may ring true because we're so used
to finding out that what we've all been led to believe is right is
actually wrong. The conventional view is indeed often wrong, or at
least limited, but look for good evidence before abandoning that view."
- It gets a lot of positive attention. Yet again as David
Freedman points out: "The press, the online crowd, your
friends - what do they know? The coverage drawn by an expert claim
usually has more to do with how skillfully it has been spun or
promoted, combined with its resonance and provocativeness, rather than
how trustworthy it might be."
- Other experts embrace it. David Freedman points out that while
other experts should not be completely ignored they need to be put in
perspective: "...communities of experts can succumb to
politics, bandwagon effects, funding related biases and other
corrupting phenomena. It's also often hard for laypeople to tell if
most of the experts in a field do in fact support a particular claim -
press reports may be presenting a biased sampling of experts.
...In that light, the immediate wide support of a community of experts
for a new claim might be seen as a warning sign rather than a
recommendation. More trustworthy, by this reasoning, would be the
support that gradually builds among experts over a longer period of
time."
- It appears in a prestigious journal. Prestigious journals do
not publish research because it is likely to be thorough or correct.
David Freedman quotes D. G. Altman who studies bad research practices: "There
are many factors that influence publication, but the number one factor
is interest in the study topic." Freedman continues:
"What makes a study's results important or otherwise interesting? There
are no hard-and-fast rules, but editors and researchers tend to speak
of results that break new ground, or that might have impact on what
other researchers study, or that have important real-world applications
such as drugs for a major illness. It's also widely understood in the
research community that, all things being equal, journals much prefer
to publish 'positive' findings - that is, studies whose results back
the study's hypothesis." After all, who wants to read about
theories that have been refuted? The research that is published is that
which is novel, groundbreaking, provocative and positive.
- It's supported by a big rigorous study. David Freedman points out
that while big rigorous studies are generally more trustworthy this
should not be allowed to distract you from the possibility that it can
very easily still be very wrong: "No study, or even group of
studies, comes close to giving us take-it-to-the-bank proof. When
several big rigorous studies have come to the same conclusion, you'd be
wise to give it serious consideration - though there may still be plenty
of reason for doubt, perhaps on the grounds of publication bias (the
dissenting studies may have been dropped somewhere along the line),
sponsorship corruption (as when a drug company is backing all the
studies to bolster a product), measurement problems (as where
questionable markers are involved), flawed analysis (as when cause and
effect are at risk of being confused), and more."
- The experts backing it boast impressive credentials. David
Freedman points out that: "We've seen that some of the
baldest cases of fraud oozed out of Ivy League campuses, world class
hospitals and legendary industrial labs - where competence and
standards may be sky-high, but so are the pressures to perform, along
with freedom from close oversight. If experts can cheat, they
certainly can succumb to bias, gamesmanship, sloppiness, and error. And
they do all the time." Of course it should be understood
that people who have impressive credentials are generally more
trustworthy than those who do not, but that alone should not incline
us to uncritically accept and trust the advice of highly credentialed
researchers. Nor should it incline us to dismiss the work of those with
lesser qualifications. Even laypeople can sometimes be right when the
experts are wrong. Real-world observations, while less meticulous
than scientific research, can sometimes be more relevant.
Some Characteristics of More Trustworthy
Expert Advice.
-
It
doesn't trip the other alarms. David Freedman explains that:
"Knowing now the characteristics of less
trustworthy advice, we can obviously assume that expert advice not
exhibiting such traits is likely to be more trustworthy. In other words
we ought to give more weight to expert advice that isn't
simplistic...", universal, or definitive. We should find more
trustworthy the research that has been satisfactorily replicated
many times, that has the support of large careful studies, that avoids
conflicts of interest, that isn't groundbreaking, and that isn't a
reaction to a recent crisis.
-
It's
a negative finding. David Freedman continues: "As
we have seen, there is a significant bias every step of the way against
findings that fail to confirm an interesting or useful hypothesis - no
one is going to stop the presses over the claim that coffee doesn't
stave off Alzheimer's disease. There isn't much reason to game a
disappointing conclusion, and anyone who publishes one or reports on it
probably isn't overly concerned with compromising truth in order to
dazzle readers."
-
It's
heavy on qualifying statements. David Freedman continues: "The
process by which experts come up with findings, and by which those
findings make their way to the rest of us, is biased toward sweeping
under the rug flaws, weaknesses and limitations. What can experts, or
the journals that publish their work, or the newspapers and television
shows that trumpet it, expect to gain by hitting us over the head with
all the ways in which the study may have screwed up? And yet sometimes
journal articles and media reports do contain comments and information
intended to get us to question the reliability of the study
methodology, or the data analysis, or how broadly the findings apply.
Given
that we should pretty much always question the reliability and
applicability of expert findings, it can speak to the credibility of
the experts, editors, or reporters who explicitly raise these
questions, encouraging us to do the same."
-
It's
candid about refutational evidence. David Freedman
continues: "Claims by experts rarely stand unopposed or enjoy
the support of all available data. (A saying in academia: for every PhD,
there's an equal and opposite PhD.) Any expert, journal editor, or
reporter who takes the trouble to dig up this sort of conflicting
information and highlight it when passing on to us a claim ought to get
a bit more of our attention. But don't be impressed by token
skeptical quotes tossed into the media reports in the name of
'balance'; nor by brief, toothless, pro forma 'study limitations'
sections of journal articles that listlessly toss out a few possible
sources of mild error; nor by the contradictory evidence that seems
to have been introduced just to provide an opportunity for shooting it down.
The frustration of on-the-one-hand-but-on-the-other-hand treatments of
an expert claim is that they may leave us without a clear answer, but
sometimes that's exactly the right place to end up. And, once in a
while, watching the negative evidence take its best shot leaves us
recognizing that the positive evidence actually seems to survive it and
is worth following."
-
It
provides some context for the research. David Freedman
continues: "Expert findings rarely emerge clear out of the
blue - there is usually a history of claims and counterclaims, previous
studies, arguments pro and con, alternative theories, new studies under
way, and so forth. A finding that seems highly credible when presented
by itself as a sort of snapshot can sometimes more clearly be seen as a
distant long shot when presented against this richer background, and
it's a good sign when a report provides it. Andrew Fano, a computer
scientist at the giant high-tech consultancy Accenture, put it to me this
way: 'The trick is not to look at expertise as it's reflected in a
single, brief distillation but rather as the behavior of a group of
experts over a long period of time.'"
-
It
provides perspective. David Freedman continues: "Expert
claims are frequently presented in a way that makes it hard to come up
with a good answer to the simple question 'What, if anything, does this
mean for me?' We often need help not simply in knowing the facts but
also in how to think and feel about them." [What we need is
some perspective about whether we should be inclined to act on it or
not.] "...Such meta-findings might take the form of 'Though
the effect is interesting, it's a very small one, and even if it isn't
just a fluke of chance, it probably doesn't merit changing behavior';
'Chances are slim this will apply to you.' ...Experts, journal editors,
and journalists might reasonably argue that their audiences shouldn't
need these sorts of reminders, that the facts should speak for
themselves, that it's not their place to interject such semi-subjective
commentary. Well, fair enough, but I'd assert that those who go ahead
and do it anyway ought to be rewarded with a higher level of trust, in
that it demonstrates they're willing to sacrifice the potential impact
of an expert claim to help the rest of us know what to make
of it. The need for such explanation is particularly acute when
research involves statements about probabilities and risk, which
the public has a terrible time interpreting (as do many
experts)."
Why do we want to
believe the experts? There seems to be some
evolutionary value in having special people who we understand to be
more knowledgeable than ourselves in particular areas of knowledge.
This is obviously an advantage to humans as a species. The problem is
not the experts nor our willingness to believe they may know more than
us about something. It is rather the uncritical and unskeptical way in
which we tend to accept what they have to say. It is the way we
deify experts and grant them the certainty of almost omniscient powers
of foresight. David H. Freedman in his book "Wrong"
calls this the "Wizard of Oz effect":
"We're brought up under the spell of what
might be called the 'Wizard of Oz effect' - starting with our parents,
and then on to teachers, and then to the authoritative voices our
teachers introduce us to in textbooks, and then to mass experts whose
words we see our parents hanging on in the newspapers and on TV, we're
progressively steeped throughout our upbringing in the notion that
there are people in the world who know much, much more than we do, and
that we ought to take their word for whatever it is they say is so."
What we really need is not to simply accept
expert advice because it is expert, but rather to use it as a
stepping-off point for investigating further. There is some evidence
that children who have grown up with the Internet may be more open to
this skeptical approach.
Expert opinion and this site.
On this site a great deal of expert opinion has been presented. Just as
David H. Freedman admits of his book "Wrong", this site must admit that
a balanced view covering all points of view has not been presented or
even attempted here. It is also likely that this site is riddled with
factual and conceptual errors of which we have no awareness. Perhaps
some errors have snuck in because something has been misread or
misunderstood. This site is guilty of all the things that Freedman
complains about. However, whatever errors and biases have crept into
this site, it seems likely that they are of insufficient magnitude to
compromise its overall arguments. The arguments and messages of this
site can be taken to have a good possibility of being valid on the
grounds that they rest on a massive amount of expert opinion, that this
opinion is in basic agreement with the site's basic premise, and that
the agreement has built up over a long period of time.
Knowing.
Popper's schema for learning has other consequences for knowing. Not
only is it difficult to find ways of trusting that knowledge is
correct, but the tentativeness of knowledge implied by Popper's ideas
makes it difficult to state that you know something. The things that we
truly know cannot include theories that can be replaced by other
theories at any moment. The mere fact that things have always happened
in a certain way does not make them certain. We believe the sun will
rise in the East and set in the West every day, but there are unlikely
circumstances in which this would not be so. At the North and South
Poles the sun does not rise or set each day, and some astronomical
calamity could knock the Earth out of its orbit. What we can truly know
is limited to what holds by universal agreement, that is, to how
concepts are defined. We know that 1 + 1 = 2 because 2 is defined as
1 + 1. We know the rules of logic hold true and that we can use
deduction to arrive at truth. However, even these truths are open to
question because a deduction always has to start from some given truth.
For instance, it is a given that 'All men are mortal', though that is
true only until we discover or create an immortal man. So in the end
what we 'know' is very little.
Despite this we live in a world that is very predictable. There is a
high probability that most of what we believe to be true is true or
will occur as we believe.
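The two formal examples just mentioned - that 1 + 1 = 2 by definition,
and that a deduction is only as good as its given premises - can be made
concrete in a proof assistant. The following is a minimal sketch in Lean,
added here purely as an illustration; it comes from none of the books
quoted on this site.

-- 1 + 1 = 2 holds purely by how the numerals are defined; no evidence is needed.
example : 1 + 1 = 2 := rfl

-- The classic syllogism: IF every man is mortal (a given) and Socrates is a man
-- (a given), THEN Socrates is mortal. The deduction is airtight, but only
-- relative to its givens.
example (Man : Type) (Mortal : Man → Prop)
    (allMortal : ∀ m : Man, Mortal m) (socrates : Man) :
    Mortal socrates :=
  allMortal socrates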
In his book "On
Being Certain"
Robert Burton suggests that knowing has little to do with the logic of
error elimination or with careful observation and is simply an
emotional feeling that occurs because evolution has found it
advantageous to reward some simple associations. Burton says:
"The message at the heart of
this book is that the feelings
of knowing, correctness, conviction, and certainty aren't deliberate conclusions
and conscious choices. They are mental sensations that happen to us."
Certainty. In the book
"This
Will Make You Smarter" Lawrence Krauss points out:
"The notion of uncertainty is
perhaps the least well understood concept in science. In the public
parlance, uncertainty is a bad thing, implying a lack of rigor and
predictability. The fact that global warming estimates are uncertain,
for example, has been used by many to argue against any action at the
present time.
In fact, however, uncertainty is
a central component of what makes science successful. Being able to
quantify uncertainty and incorporate it into models is what makes
science quantitative rather than qualitative. Indeed, no number, no
measurement, no observable in science is exact. Quoting numbers without
attaching an uncertainty to them implies that they have, in essence, no
meaning."
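To make Krauss's point concrete, here is a minimal sketch in Python (an
illustration by this site only; the readings are invented) that reports a
repeated measurement as a mean together with the standard error of that
mean, rather than as a bare number.

# A minimal sketch: report a measurement as mean +/- standard error.
# The readings below are invented for illustration.
import math

readings = [9.79, 9.82, 9.81, 9.78, 9.83]
n = len(readings)
mean = sum(readings) / n
sample_variance = sum((x - mean) ** 2 for x in readings) / (n - 1)
std_error = math.sqrt(sample_variance / n)  # the uncertainty of the mean

# A bare "9.81" says little; "9.81 +/- 0.01" also says how far to trust it.
print(f"{mean:.2f} +/- {std_error:.2f}")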
Kathryn Schulz in the same book brings to our
attention the startling idea that not only is scientific knowledge
uncertain but that all knowledge is uncertain. The title of her
contribution is "The Pessimistic Meta-induction from the History of
Science." What does that mean? She explains:
"Here's
the gist: Because so many scientific theories from bygone eras have
turned out to be wrong, we must assume that most of today's theories
will
eventually prove incorrect as well. And what goes for science goes in
general. Politics, economics, technology, law, religion, medicine,
child
rearing, education: No matter the domain of life, one generation's
verities so often become the next generation's falsehoods that we might
as well have a pessimistic meta-induction from the history of
everything.
Good
scientists understand this. They recognize that they are part of a long
process of approximation. They know they are constructing models rather
than revealing reality. They are comfortable working under conditions
of uncertainty - not just the local uncertainty of 'Will this data bear
out my hypothesis?' but the sweeping uncertainty of simultaneously
pursuing and being cut off from absolute truth."
So knowledge in the end is not what is 'correct' or 'true' but rather
what is highly likely to be correct as far as we know so far. Knowing as
'certainty', for the most part, is
an illusion. Very little is certain and the best theories in science
are just that, theories. Our knowledge is about what has been found to
give the best approximation of reality and it can be superseded at any
time. It is often what some call working knowledge. It is what gets the
job done, what is good enough to do what we are trying to do. Good
knowledge then is always about probabilities and not about certainties.
In the book
"This Will Make You Smarter" Carlo
Rovelli explains further:
"Every
knowledge, even the most solid, carries a margin of uncertainty. (I am
very sure what my name is ... but what if I just hit my head and got
momentarily confused?) Knowledge itself is probabilistic in nature, a
notion emphasized by some currents of philosophical pragmatism. A
better understanding of the meaning of 'probability' - and especially
realizing that we don't need (and never possess) 'scientifically
proved' facts but only a sufficiently high degree of probability in
order to make decisions - would improve everybody's conceptual toolkit."
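Rovelli's point - that decisions need only a sufficiently high
probability, never proof - can be sketched in Python with one application
of Bayes' rule. Every number below is an assumption invented for
illustration; nothing here comes from Rovelli.

# A minimal sketch: decide on probability, not proof.
# Every number is an assumption chosen for illustration.
prior = 0.10           # initial probability that the claim is true
sensitivity = 0.90     # P(evidence | claim true)
false_positive = 0.20  # P(evidence | claim false)

# Bayes' rule: update the prior in the light of the evidence.
posterior = (sensitivity * prior) / (
    sensitivity * prior + false_positive * (1 - prior)
)

ACT_THRESHOLD = 0.80   # how sure we choose to be before acting - a choice, not a fact
print(f"posterior probability: {posterior:.2f}")
print("act" if posterior >= ACT_THRESHOLD else "wait for more evidence")

With these particular numbers the posterior is about 0.33, well short of
the threshold, so the sensible decision is to gather more evidence rather
than to act.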
However, human beings are drawn to
certainty and away from
ambiguities, inconsistencies and probabilities. We want the doctor
diagnosing what ails us to seem certain and not tell us about
probabilities. This tendency may be genetically
or socially transmitted. In his book "On
Being Certain" Robert Burton suggests that this desire
could well be socially transmitted. He says:
"I cannot help wondering if an education
system that promotes black-and-white or yes-or-no answers might be
affecting how reward systems develop in our youth. If the fundamental
thrust of education is "being correct" rather than acquiring a
thoughtful awareness of ambiguities, inconsistencies and underlying
paradoxes, it is easy to see how the brain reward systems might be
molded to prefer certainty over open-mindedness. To the extent
that doubt is less emphasized, there will be far more risk in asking
tough questions. Conversely, we, like rats rewarded for pressing the
bar, will stick with the tried-and-true responses."
In his book "On
Being Certain"
Robert Burton further suggests that our proclivity to want to be
certain is the cause of many of society's ills and can lead, and has
led, us down many dangerous and dark paths. He believes we should try to
avoid this and suggests the following:
"Certainty is not biologically possible. We must
learn (and teach our children) to tolerate the unpleasantness of
uncertainty. Science has given us the language and the tools of
probabilities. We have methods for analyzing and ranking opinions
according to their likelihood of correctness. That is enough. We do not
need and cannot afford the catastrophes born out of a belief in
certainty. As David Gross, Ph.D., the 2004 recipient of the Nobel
Prize in Physics, said, 'The most important product of knowledge is
ignorance.'"
What
is knowledge?
Knowledge is a survival strategy put in place by evolution to give
us humans the advantage we have needed in order to survive as a
species. It started out as memory that could be retrieved from our
large brains when we needed it. Like many mechanisms that have evolved,
its purpose has changed, but unlike most mechanisms it has kept
changing
and may continue to change as time goes by. The invention of writing
meant that knowledge could be recorded and transmitted to others
without
a human's memory being involved. Knowing was still important but less
so than before. With the invention of printing knowledge changed again
as knowledge that was previously understood by just a few, became
available to many. Knowing still had importance, but because knowledge
was spread fairly evenly through the species, knowing became even less
important than before. Now, in the era of the world wide web, knowledge
and what we understand it to be is changing again.
Do we still need to know?
As has been shown above, not only is it not possible to be certain and
thus
know about anything, but the assumed knowing we experience is just a
feeling, an emotion, and thus an illusion. The question then arises,
'Do we really need to know anything?' On the one hand,
if we think that knowing means certainty then we really no longer need
to know. On
the other hand, if by know we mean that we believe there is a very
high probability of something being true, then, 'Yes, we still need to
know.' But even these probability assessments of actions and outcomes
are only needed just before the action and, to an extent, no longer
need to be held in memory. The problem is that real knowing is not
just information that can be absorbed at the time of action. What
happens is that incoming data combines with knowledge held in memory
at the time of action. This knowledge held in memory is a map or model
of reality and is what enables understanding. Incoming information adds
to it and sometimes alters it structurally, enabling the appropriate
action.
Who knows?
Knowledge then, for the most part, no longer needs to be known by
individual people. The survival strategy of knowing has been replaced
by a more important survival strategy. The new survival strategy is
knowing where to find information when you need it. This strategy has
become increasingly important as knowledge has changed through the
centuries. It first became important with the invention of writing and
more important when printing appeared. Now, with computers, the world
wide web, and mobile phones, knowledge can be assembled (found) quickly
and easily, any time and anywhere. We have already built many tools for
retrieving information
and consolidating it into knowledge. Search engines and social networks
have begun this massive undertaking but more and better tools will be
forged as time goes by.
Has knowledge become too big to
know?
Before the advent of computers and their processing power, not only was
there no possibility of processing large amounts of data, but for the
most part the means of gathering large amounts of data did not exist.
Now the opposite is the case. Masses of data about every conceivable
variable that might be significant in any experiment exist in ever
increasing quantities. The processing of this mass of information by
human brains is no
longer possible. The most we can now do is construct computer models
and let those models run to process the information for us. In his book
"Too
Big to Know" David Weinberger explains:
"The problem - or
at least the change - is that we humans cannot understand systems even
as complex as that of a simple cell. It's not that we're awaiting some
elegant theory that will snap all the details into place. The theory is
well established already: Cellular systems consist of detailed
interactions that can be thought of as signals and responses. But those
interactions surpass in quantity and complexity the human brain's
ability to comprehend them. The science of such systems requires
computers to store all the details and to see how they interact.
Systems biologists build computer models that replicate in software
what happens when millions of pieces interact. It's a bit like
predicting the weather, but with far more dependency on particular
events and fewer general principles.
Models this
complex - whether of cellular biology, the weather, the economy, even
highway traffic - often fail us, because the world is more complex than
our models can capture. But sometimes they can predict accurately how
the system will behave. At their most complex these are sciences of
emergence and complexity, studying properties of systems that cannot be
seen by looking only at the parts, and cannot be well predicted except
by looking at what happens."
"With the new
database-based science, there is often no moment when the complex
becomes simple enough for us to understand it. The model does not
reduce to an equation that lets us then throw away the model. You have
to run the simulation to see what emerges."
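Weinberger's remark that you have to run the simulation to see what
emerges can be felt even in a toy system. The sketch below, in Python, is
Conway's Game of Life - an illustration chosen by this site, not one of
Weinberger's own examples. Each cell follows one trivially simple local
rule, yet the gliders, blinkers, and still lifes that emerge can only be
discovered by running the model and looking at its output.

# A minimal sketch: a simple local rule whose consequences emerge only when run.
import random

SIZE, STEPS = 20, 50
random.seed(0)  # fixed seed so the run is repeatable
grid = [[random.random() < 0.3 for _ in range(SIZE)] for _ in range(SIZE)]

def step(g):
    new = [[False] * SIZE for _ in range(SIZE)]
    for r in range(SIZE):
        for c in range(SIZE):
            # count live neighbours on a wrap-around grid
            n = sum(g[(r + dr) % SIZE][(c + dc) % SIZE]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            # the entire 'theory': survive with 2-3 neighbours, be born with exactly 3
            new[r][c] = n == 3 or (g[r][c] and n == 2)
    return new

for _ in range(STEPS):
    grid = step(grid)

# What has emerged is only visible by inspecting the run itself.
print("\n".join("".join("#" if cell else "." for cell in row) for row in grid))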
Some knowledge has to be stored
in our brains.
Despite all the above, of course, some knowledge needs to remain in our
memories, in our meat-sack brains. Nothing would make any sense at all
without our maps or models of reality, and these are made up of the
facts and theories we have acquired over a lifetime. Also, a large
amount of data is necessary in each individual brain in order for that
individual to be able to take part in creative activity.
The future of knowing.
Of course science will always require a fair amount of knowing, in the
sense of knowing that some things have a high probability of happening
in particular circumstances, and the people who are creative in the
various scientific domains will need this knowledge most of all.
Creative people need to know because if they do not have most of the
knowledge in their domain, the ideas they originate are less likely to
be new and unique. However, as time goes by, science will become more
and more about systems that even the scientists who study them do not
fully understand or know. Similarly, the ordinary person will need to
know less and less about everything else and more and more about where
to find information when they need it.
"I
can live with doubt and uncertainty and not knowing. I have approximate
answers and possible beliefs and different degrees of certainty about
different things...it doesn't frighten me." Nobel laureate Richard
Feynman
Our
changing understanding of what knowing is. With the
growing importance of systems research, 'knowledge', or our
understanding of what knowledge is, may be in the process of changing,
and with it the very idea of knowing. When people speak of knowing in
the future they may mean something far different from what people have
meant by it in the past.