Foundations of Philosophy
4
Inverse Insights
While direct insight grasps the point, or sees the solution,
or comes to know the reason, inverse insight apprehends that in some fashion the
point is that there is no point, or that the solution is to deny a solution, or
that the reason is that the rationality of the real admits distinctions and
qualifications.1
Preliminary Exercises.
(1) Give the next number in these sequences:
(a) 5, 7, 4, 6, 5, 8, 6, 6, 5, ...
(b) .75, .82, .77, .80, .79, .74, .79, ...
(c) 98.1, 99, 98.7, 98.8, 98.3, 98.7, ...
(d) 5, 1/2, -198, 17, 1.1, ...
(2) Name the following geometrical figures.
[124]
(3) What is the numerical relationship between the side of a
square and its diagonal? Or work out the square root of 2.
(4) What keeps an arrow moving in the air even though there
is nothing pushing it?
(5) Drop an elephant and a feather off a high building at the
same time. Which hits the ground first? Why?
(6) Can you distinguish patterns in the distribution of the
stars in the night sky?
(7) Do you believe the weather forecast for the following
day? Does it give probabilities or certainties? Can you have accurate
predictions of the path a tornado will take? If you have complete information
can you make that prediction?
1. The Experience of Inverse Insight
Have you ever had the experience of reading a rather
difficult book and, despite your best efforts and concentration, not
getting the point? You presume that the author is very bright, and that it is
because of your lack of intelligence that you do not understand. You may give up
there and then, and the author will always remain in your mind revered for his
great intelligence and superior knowledge. You may also have the experience of
persevering with great determination to get what he is saying and slowly and
painfully discovering that the fellow is utterly confused, presents his material
badly, and is trying to impress with big words and obscure ideas: he does not
know what he is talking about. You have finally understood the author and the
book; however, it is not an understanding of what is there but, on the
contrary, of what should be there and is not. You are not grasping the message
the author is trying to communicate but rather that he has no message, is quite
confused, wrong, uttering nonsense. This is an insight; but it is a new, strange
kind of insight.
Or perhaps you have had the experience of being asked to take
the minutes at an important meeting. Unfortunately, the chairperson is not very
competent and exercises no control over the proceedings. [125] So you listen to
a stream of suggestions, pronouncements, interjections, retorts, declarations,
digressions, appeals, defamation, arguments, winding their crooked way through
the morning, until it is time for lunch and the meeting is mercifully called to
a halt. What do you write in the minutes? There was no order or pattern,
beginning or end, form or meaning to the whole procedure. Can you describe pure
chaos? You can either give an exhaustive account of everything everybody said;
or else report truthfully that it was a most chaotic meeting and that you cannot
give an accurate account of chaos; or you can impose your own kind of order or
pattern on the chaos to rescue what you think might be useful. An inexperienced
secretary may blame him/herself for having difficulty following the discussion
but the real difficulty lies not in being unable to follow what is going on, but
in grasping that what is going on is pure confusion, chaos, a random stream of
words with no meaning or pattern. Again, it is an insight; but of a different
species from those we have met heretofore.
Or perhaps you are a bit of a gambler as well as a budding
mathematician; so you try to construct a system of betting that will beat the
roulette wheel, without cheating of course. How do you predict a random stream
of numbers? How do you create a system that will beat the odds? You may notice
that a certain number has not come up at all for a long time. Because you are a
mathematician you know that occurrences tend to average out. If the number is
falling behind its average, you might think that it has to catch up and so is
now more likely to occur. But, do previous occurrences in a sequence like this
change the probabilities of the next spin? When and if you grasp that they do
not (hopefully before you become penniless), you are on the way to an inverse
insight. The intelligibility of such a sequence of numbers is different from the
direct intelligibilities which we have been considering up to now. It is with
some shock and annoyance that we learn to accept certain limitations to direct
understanding and how data are to be brought under law. There is no system that
can be applied to a random stream of numbers which will enable you to beat the
odds; whether you are trying to win or lose makes no difference; in the long run
you lose. [126]
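The gambler's inverse insight can be checked empirically. The sketch below is a minimal simulation, assuming a European wheel of 37 numbers (an illustrative assumption); it estimates the chance that a number comes up immediately after a long absence, and finds it no different from the unconditional chance.

```python
import random

random.seed(0)
WHEEL = 37   # European roulette: numbers 0-36 (an illustrative assumption)
TARGET = 17
GAP = 20     # the number is "overdue": absent for the last 20 spins

spins = [random.randrange(WHEEL) for _ in range(500_000)]

# How often does TARGET appear immediately after a 20-spin absence?
hits = trials = 0
for i in range(GAP, len(spins)):
    if TARGET not in spins[i - GAP:i]:
        trials += 1
        if spins[i] == TARGET:
            hits += 1

print(f"P(TARGET | absent {GAP} spins) ~ {hits / trials:.4f}")
print(f"P(TARGET), unconditional      = {1 / WHEEL:.4f}")
# The two figures agree: previous spins do not change the probabilities.
```

However long the absence, the conditional frequency stays at 1/37; there is no pattern in the sequence for a betting system to exploit.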
Sometimes it is assumed that the only real kind of knowledge
is of certain, necessary, permanent truths. In these examples we begin to
discover that most of our understanding of the material world yields not
certainties but varying degrees of probability. The expectation of complete
certainty is unrealistic in most areas of science; there is a convergence
towards certainty but neither in classical method nor in statistical method do
the conclusions of empirical science reach complete certainty. However,
knowledge of probabilities is genuine human knowing - in most areas it is all
that we can expect. That is the kind of universe we are living in; that is the
kind of mind that we have.
In this chapter we explore these degrees or kinds of
intelligibility in our universe. Not every area of data is susceptible to the
kind of direct understanding that we have been considering up to now. We will
identify classical method as the appropriate way of understanding data
which are systematic and orderly. We will identify statistical method as
appropriate for data which are a combination of systematic and nonsystematic.
Finally, we will consider the empirical residue which lacks any immanent
intelligibility but can be grasped indirectly by way of an inverse insight.
Our focus continues to be self-appropriation. We present many
examples in the preliminary exercises and in the text; the purpose is to help
you to recognize this experience as it happens in your own mind. If you find
that the examples are not appropriate, substitute others familiar to you in your
own discipline or field of competence, at your own level.
2. Inverse Insight Defined
An inverse insight is an insight; therefore, it will
display the five characteristics that we have already identified in the direct
insight. There is a problem posed to intelligence by the experience of certain
data; a question arises creating tension towards a solution. This is followed by
a sudden enlightenment, which depends on inner conditions rather than on outer
circumstances. There is a pivoting between the abstract and the concrete, and
the solution passes into the habitual texture of the mind. Discovering that
an author or a lecturer has got it all wrong is a very liberating experience.
But it starts with an expectation of a different kind of
intelligibility. The tension is towards that kind of positive intelligibility
which you have a right to expect of an author or professor. But you reach a
block; you try everything; look at the data from all sorts of different points
of view; you try to make sense of the confusion; come back to it again and
again. Then, slowly it dawns on you. The fellow does not know what he is talking
about; there is a stream of impressive words but no sense; all sound and fury
signifying nothing. You will never be the same again. Once you have realized the
emptiness of one author or philosopher or professor, you are constantly aware
that there may be more of them out there.
An inverse insight is different from a direct insight;
instead of grasping what is there, we grasp what is not there. We can define an
inverse insight as an insight into the absence of an expected intelligibility.
There are three characteristics of an inverse insight.2
One, there are positive data. Two, there is a spontaneous expectation
of a direct intelligibility. Three, the insight is into the absence of
the expected intelligibility.
Firstly, there are positive data: there is something
to be understood. All the above examples presume some data presented by the
senses, memory or imagination, which poses a problem for understanding. An
inverse insight is not an insight into a simple absence of data. It is not
simply a correction of a previous mistake. There are positive data, but they do
not seem to respond to the usual procedures. The book, which is full of
nonsense, presents a multitude of data on every page. The unfortunate secretary
has pages of words to deal with. The gambler can collect sequences of actual
numbers for as long as he wishes. The data are there, but are they intelligible?
Secondly, the inverse insight flies in the face of the spontaneous
expectations of intelligence. We can only have insights if we ask questions;
if there are no questions there are no insights. We can only ask questions if we
expect and anticipate an answer. We usually expect to find a law, a regularity,
a system, an explanation as the answer. We [128] expect the answer to be
directly intelligible; otherwise it would not be an answer to the question. But
in the inverse insight we get, not a direct intelligibility, but precisely a
lack of the expected intelligibility.
Thirdly, the insight is into a lack, an absence, a
deficiency in the expected intelligibility. We expect to find the rational, the
systematic, the significant, the regular; but discover that we also must cope
with the irrational, the nonsystematic, the insignificant, the irregular. The
Greeks were puzzled by the relationship between the side of a square and the
diagonal. Surely, such a basic relationship in the most regular of all
geometrical figures could be expressed in rational whole numbers. But they found
that it could not. They called it incommensurable; modern mathematicians call it
irrational. If the side of a square has the numerical value of one, you expect
the diagonal to be some rational number. But if you apply the theorem of
Pythagoras you find that it is the square root of two. If you follow the rules
for finding the square root of two, you find that it goes on and on; you keep
expecting the further decimal places to reveal a pattern; but even the most
powerful of modern computers has not found a pattern in this sequence of
numbers. How strange! And so we begin to grasp that there are varying types,
degrees and levels of intelligibility.
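The Greeks' inverse insight can be restated: no ratio of whole numbers squares to exactly two. The sketch below, using exact integer arithmetic, checks this by brute force over small numerators and denominators, and computes the opening digits of the expansion that never settles into a repeating block.

```python
from math import isqrt

# No fraction p/q satisfies p^2 = 2 * q^2: a brute-force check
# over small numerators and denominators.
assert all(p * p != 2 * q * q
           for q in range(1, 500) for p in range(1, 1000))

# The first N decimal digits of sqrt(2), computed exactly:
# isqrt(2 * 10**(2*N)) is the integer part of sqrt(2) * 10**N.
N = 40
digits = str(isqrt(2 * 10 ** (2 * N)))
print(digits[0] + "." + digits[1:])  # 1.4142135623..., with no repeating block
```

The brute-force check only inspects small ratios, of course; the classical proof observes that p² = 2q² would force both p and q to be even, contradicting the assumption that the fraction is in lowest terms.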
Often, the inverse insight is the realization that we have
been asking the wrong question. Misguided questions point us in the wrong
direction; we are barking up the wrong tree, seeking something that is not
to be found. The insight is into the mistaken anticipations of the question; we
have to back up and start again. Aristotle, for example, was asking the wrong
question about local motion. He saw that everything in his experience of
everyday life comes to rest; when you stop pushing, it stops moving.
Consequently, he assumed that rest was the natural state. Thus, he concluded
that it is motion which needs to be explained: if something is moving there must
be something or someone pushing. But what is pushing the heavenly bodies? What
is pushing the arrow which is already in flight? This way of formulating the
question inclined thinkers to invent theories of impetus, heavenly Movers, and
other occult forces. This was an enormous block to progress in astronomy. On
this assumption Copernicus could obviously not be right: the earth could [129]
not be moving, because of the enormous force that would be needed to push it.
It was only in
the time of Galileo that the question was turned on its head. What needs to be
explained is not motion, or rest, but changes in motion or rest. Galileo
understood this and it was later formulated in the first of Newton's laws of
motion, the law of inertia.
Expectations are conditioned by our education, culture,
specialization, and degree of differentiation of consciousness. Expectations can
be refined by the process of education but one normally approaches an area with
an expectation of a positive, direct intelligibility. Inverse insights confer a
limited grasp of the irregular, and the nonsystematic. The frustration of
accepting an inverse insight as the limit of what can be reached often leads to
a denial that this is a genuine insight. Until this century it was assumed that
knowledge of probabilities is not real knowledge. Classical science had no place
for such a deficient insight and presumed that everything could be understood by
a totality of direct insights.
Intelligibility is the content of a direct insight. Therefore
a direct insight will always grasp what is intelligible, significant, regular,
systematic, relevant, meaningful. The inverse insight confronts us with the data
that are not fully intelligible, are lacking significance, regularity, or
relevance. But you cannot by definition grasp what is unintelligible or
nonsystematic by way of a direct insight. The inverse insight confronts us with
this reality of lack of intelligibility. We can only deal with the nonsystematic
and unintelligible by the roundabout route of the inverse insight. Inverse
insights are important because they reveal the degrees and kinds of
intelligibility attainable in our universe. They reveal that there are degrees
of intelligibility and that our universe is only to be understood correctly by a
combination of direct and inverse insights. We will explore this in detail in
our consideration of classical method, of statistical method, and of the
empirical residue.
3. Classical Method
We consider classical method here because we wish to contrast
it with statistical method, which follows immediately. The kinds of [130] direct
insights we considered in chapters two and three belonged for the most part
to classical method. We use the word ‘classical’ to associate this
type of insight with the scientists from Copernicus to Einstein: the historical
period called the Scientific Revolution. Generally speaking, these scientists
were all looking for the kind of direct insight which we have previously
defined, and they in no way recognized the existence of inverse insights or
statistical methods. We are appropriating their grasp of classical scientific
method, but we disassociate ourselves from the philosophical assumptions which
often accompanied their science.
3.1 Classical method described
Let us first of all describe a typical case of classical
method at work and pick out the characteristic stages in the unfolding of this
kind of direct insight. Let us dwell on the example of Galileo, as he set out to
discover the nature of a free fall.
1. Heuristic anticipation. There was something to be
understood; there is a wealth of data on falling bodies; it is a common
experience of mankind; it is part of our everyday living; there is something
there which needs to be explained, and so Galileo asked himself, 'What is the
nature of a free fall?' He gave the unknown a name; he presumed there was an
immanent intelligibility to be found.
2. Ordinary description. It is easy to give a description
of the matter: heavy bodies fall, very light bodies seem to rise. Heavy bodies
seem to fall faster and faster. The problem is set by commonsense observations
and questions. You do not have to be a scientist to observe and describe
instances of falling bodies and to ask why they fall in such a way.
3. Similars are similarly understood. You start with
instances which are similar from the point of view of common sense or of
description. But you are moving towards a similarity which will be based on
explanation. Galileo expected that similar instances of falling bodies would be
explained in the same way. He did not expect to have one theory for Italy and
another theory for Spain; one theory for gold and another theory for iron. There
is something [131] behind all the instances of falling that is to be understood
as common to them all.
4. Functional Relations. The Scientific Revolution had
discovered the importance of mathematics and how the regularities of nature
could often be expressed in mathematical language. What were the variables which
could be systematically related in order to formulate the regularity of a
falling body? Galileo might have considered weight as significant; for Aristotle
this was a determining factor. Distance is a factor because the farther an
object falls, the faster it moves. Time is a factor because the longer the time,
the faster it falls. Changes of velocity are a factor because you are talking of
faster and faster. Acceleration is a key because that is the precise
mathematical term for faster and faster. But what is the precise mathematical
formula? How can the individual variables be related in a single function?
5. Scientific Description. This is the time for
measurement, observation and experimentation; for rolling steel balls down an
inclined plane and measuring time and distance as accurately as possible.
Description becomes more and more precise depending on the materials, the
instruments used, the care taken. Galileo's first discovery - actually from
using a pendulum - was that weight made no difference. No matter what weight the
balls or what material they were made of, the time and the distance did not
change. This in itself was a major discovery and proved that Aristotle was
wrong: the commonsense assumption that weight determines how fast bodies will
fall was simply wrong. But, leaving aside weight, what was the relation between
time and distance? He fixed the distance and measured the time taken; he
repeated this a number of times for accuracy. He increased the distance and got
another set of results. He decreased the distance and recorded further sets of
figures.
6. Range of Possibilities. Now he put down his results on
paper in the form of a table, with distances at the top and corresponding times
at the bottom. How could these be related? Sequences of numbers can be generated
by a variety of mathematical formulae. So Galileo looked at these series of
tables struggling to find the formula that would unlock the secret. His
familiarity with mathematics made a wide range of possibilities open to him. We
might only think of [132] addition and subtraction; but there are also
multiplication and division, roots and cubes, direct and inverse relations, etc.
7. Practical Techniques. Perhaps, he tried to represent
the tables on a graph. It was easy to see that the more the time increased the
more the speed increased. Just as in algebra you manipulate equations to find
the value of the unknown, so Galileo explored possible relationships to find one
that fitted.
8. Upper and Lower Blade. There was a process from below
upwards, by which the data are gathered, selected, measured and begin to suggest
possibilities; there was a movement from above downwards, by which he
constructed various hypotheses, different formulae or functions, and found them
wanting. There is a scissors-like movement from data to hypothesis, from
hypothesis back to the data.
9. Insight. Suddenly and unexpectedly the insight came.
It was a leap of constructive intelligence. He found a formula for the
relationship between distance and time. 'Spaces traversed by freely falling
bodies are proportional to the square of the times.' He had found a universal,
abstract, functional relationship, which underlies all instances of all falling
bodies. Any discrepancies could be explained by the deficiencies of his material
or the inaccuracy of the measurements. Of course, he had to make the proviso of
other things being equal; the figures did not match exactly. The law is true in
a vacuum, but may not hold if friction is allowed to interfere; so he had to
postulate a vacuum. Now that he understood acceleration, he could go on to work
on measuring friction, projectiles, etc.
10. Verification. Undoubtedly he went back to check,
perhaps constructing further experiments and tables, perhaps working further on
the mathematics to verify again the correctness of the formula. Once he had this
he could predict how falling bodies should behave in varying circumstances. Now,
he could start studying projectiles because he had pinned down mathematically
one of the major forces determining a trajectory.
Galileo succeeded in specifying the nature of a free fall in
terms of simple mathematics, a simple rule, an abstract mathematical correlation
of time, distance and velocity: distance is proportionate [133] to time squared.
The abstract formula could be applied to any instance of a falling body and,
provided that there was no outside interference, could be verified. It was a
most satisfying kind of direct insight because it was so simple and explained so
much. It applied to all instances of all falling bodies in all places. No wonder
the scientists considered that mathematics was the key to understanding nature.
They had uncovered a very basic regularity in the workings of nature.
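Galileo's abstract correlation can be written out directly. In the sketch below the numbers are illustrative, and g = 9.8 m/s² is the assumed modern constant, not Galileo's own figure; the point is that the ratio of distance to time squared is the same for every measurement, and that invariant ratio is the universal, abstract relationship behind every instance.

```python
# Free fall in a vacuum: d = (1/2) * g * t**2, so d / t**2 is constant.
g = 9.8  # m/s^2, assumed modern value for illustration
times = [1.0, 2.0, 3.0, 4.0, 5.0]          # seconds
dists = [0.5 * g * t ** 2 for t in times]  # metres, ideal (no friction)

ratios = [d / t ** 2 for d, t in zip(dists, times)]
print(ratios)  # every entry is g/2 = 4.9: the one law behind all the instances

# The same law predicts: treble the time, nine times the distance.
print(dists[2] / dists[0])
```

Weight appears nowhere in the formula, which is exactly Galileo's first discovery against Aristotle.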
3.2 Classical method defined
We could define classical method as, "the intelligent
anticipation of the systematic-and-abstract on which the concrete
converges."3 When we were discussing the
characteristics of direct insight we noted the constant pivoting between the
abstract and the concrete, the intelligible and the sensible. The abstract is
usually universal, functional, explanatory; expressed in concepts. The concrete
is the data, the sensible presentations, the images, what is given in sensation.
But these are not two separate things; it is precisely the intelligible in the
sensible that we are trying to understand. Classical method seeks the systematic
laws and functions immanent and verified in the data.
The key word of the definition is convergence: the concrete
converges on, comes closer to, the abstract. There can never be a perfect
coincidence between the concrete and the abstract. There are always inaccuracies
because of impurity of samples, limited power of instruments, limitations on
measurement, and the impossibility of excluding all extraneous influences. But
the point that Galileo probably noted was that the more accurate his
measurements the closer his figures converged on the abstract law. The more he
excluded friction, the finer the materials he used, the closer his figures
converged on the abstract mathematical formula.
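This convergence can itself be illustrated numerically. In the hypothetical sketch below (the helper `fit_exponent` and the noise levels are invented for illustration), simulated measurements of distance against time carry a controllable relative error; fitting the exponent n in d = k * t**n by least squares on the logarithms, the estimate converges on the abstract value n = 2 as the measurement error shrinks.

```python
import random
from math import log

random.seed(1)

def fit_exponent(rel_error):
    """Least-squares slope of log d against log t: the estimated exponent n."""
    ts = [t / 10 for t in range(1, 51)]                      # 0.1 s .. 5.0 s
    ds = [4.9 * t ** 2 * (1 + random.gauss(0, rel_error))    # noisy "measurements"
          for t in ts]
    xs, ys = [log(t) for t in ts], [log(d) for d in ds]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

for err in (0.2, 0.02, 0.002):
    print(f"relative error {err}: fitted exponent = {fit_exponent(err):.3f}")
# The cruder the measurements, the further the estimate strays from 2;
# the finer the measurements, the closer the figures converge on the law.
```

The concrete figures never coincide with the abstract exponent; they only converge on it as extraneous influences are excluded.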
Classical laws have to postulate 'other things being equal.'
They have to presume that extraneous influences have been excluded. Chemists go
to great lengths to procure pure samples, to have clean implements, to control
all factors of pressure, temperature, etc. But this can never be complete or
perfect; it is just sufficient for practical purposes. You have to make the
assumption that the imperfections [134] of the sample or instruments are
insignificant. It is impossible to exclude all extraneous influences in
principle. There is always room for experimental error.
As a matter of principle there can never be a complete
coincidence between the abstract law and the concrete data. The abstract will
always be abstract; the concrete, concrete. The intelligible will be
intelligible, the sensible sensible. The definition of a circle will always be
abstract; any concrete circle will always be imperfect. If it were perfect,
you would not be able to see it and it would not be concrete; it is imperfect
because it is a concrete realization of the definition of a circle, an image
that converges on the concept. But the image can never coincide with the idea
because they are different kinds of things; the image is a picture; the concept
is an intelligible relation.
3.3 Misunderstanding
The kind of classical insight exemplified by Galileo is
extremely satisfying: a single neat formula reveals the regularity underlying a
vast multitude of data. When related laws are also expressed with similar
simplicity, a system of laws can be set up, predicting and controlling a vast
range of data, as in the planetary system. The protagonists of the Scientific
Revolution built up these systems of laws along the classical lines. Newton was,
perhaps, the most successful of all these thinkers, putting together a synthesis
of physical laws concerning motion in a highly systematic way.
Further, the application of these laws to the concrete proved
to be very successful. In the technology of war, ballistics, projectiles,
battering rams, gunpowder, firearms, began to change the balance of power.
Applications in navigation, manufacture, machines, the steam engine,
electricity, etc. began to transform the way we lived. The ideal of complete
control over the workings of nature seemed to be within their grasp. No wonder
the nineteenth century was the age of optimism.
However, a basic misunderstanding of the scope of classical
laws lay at the root of all this as would become very evident in the twentieth
century. It was the assumption that classical laws are the [135] only kind of
scientific laws. Coupled with that was the assumption that once all these laws
were known, that knowledge would confer complete control over nature, and the
possibility of predicting and controlling anything that could happen.
Laplace is said to have maintained this position in its
purest and most explicit form.4 He claimed that when
all the laws were understood, and we knew one situation in world process, we
could work forward to predict accurately any subsequent situation. This was a
position of complete determinism; it did not leave room for freedom or chance or
probability. The universe was a vast mechanical machine obeying the fixed
classical laws of physics. This position assumed that all the data of world
process were systematic and so could be understood by laws of the classical
type. Just as it is relatively easy to predict an eclipse because our solar
system is systematic, so Laplace held that all data are systematic and that it
was only a matter of time before finding the other laws that would make total
prediction and control possible. He contended that the totality of classical
laws would completely explain all the data of experience and enable exact
predictions and control to be exercised on everything. There are many things one
could say in answer to this, but let us concentrate on the implied
misunderstanding of the scope of classical laws.
Classical laws express what would happen if certain
conditions were fulfilled. But how do we determine whether the conditions are
fulfilled? Galileo's law of falling bodies will only work perfectly in a vacuum.
But there is no perfect vacuum, so does that mean that it cannot be verified?
No! The nearer the situation comes to a perfect vacuum, the more accurate the
results become. It is statistical laws which will determine when, where, and how
the conditions will be fulfilled. Classical laws can only be verified given
certain conditions. But classical laws do not determine those conditions.
Classical laws assume 'everything else being equal.' But how
do we determine that? Is everything equal when you throw a feather and an
elephant off a high building? Classical laws work if no extraneous influences
interfere. But classical laws cannot guarantee the exclusion of such
interference. A chemistry professor always faces the disquieting possibility
that his experiment may not work. [136]
The idea of having full information on any situation is an
illusion. Laplace argued that if you had full information on one situation, you
could predict any situation. But full information would be all the relevant
facts. What are the relevant facts by which you could predict the trajectory of
a falling leaf? Immediate conditions of height, wind, warmth, pressure,
humidity, etc. are obviously relevant. But each of these variables is
conditioned by a whole series of diverging variables. There is no end to the
quest for full information. To have all the relevant information you would have
to know everything about everything and that is beyond the human mind.
Classical laws are abstract and when applied to the concrete
they explain certain aspects of the data but they do not explain all the data.
Some data are amenable to explanations of the classical type and so are
systematic or regular; other data cannot be brought under such laws and so are
relatively nonsystematic or irregular. As well as the regularity of the solar
system, there is also the relative irregularity of the weather. As well as what
can be subsumed under laws, there is always the residue of data pertaining to
particular places and times which can never be fully explained - as we shall
see. Between the abstract law and the concrete instance, there is always needed
an insight into which laws apply in what order of precedence; there is always
this 'gap' to be filled by a further insight.
History provided its own answer to Laplace. These optimistic
assumptions of automatic progress in science, technology, economics, medicine,
etc. were brought down to earth by the First World War, the Great Depression, and
the discovery by scientists themselves of the need for a statistical method. It
seems to have been in the field of nuclear physics that the principle of
indeterminacy was most clearly identified and it was realized that statistical
techniques were needed to deal with questions about the occurrence of particular
states and events. Use of statistical methods spread to all other sciences
through the century and was found to be successful. Einstein was apparently the
last of the pure classical scientists, insisting to the end that ‘God does not
play dice with the world.’ [137]
4. Statistical Method
4.1 Statistical Method Described
Statistical method is analogous to classical method, that is,
it shares some common characteristics. However, it anticipates a different kind
of intelligibility. To begin we will describe a typical case and illustrate the
unfolding of statistical method in ten steps, which parallel those of classical
method. Let us assume that we are investigating whether there is a connection
between smoking and lung cancer. Is there a significant correlation between
cases of smokers and those with lung cancer?
1. Heuristic Anticipation. As classical method
anticipates an understanding of 'the nature of' something, so statistical method
anticipates understanding 'the state of.' The state that is to be defined is the
distribution of the incidence of lung cancer compared to the incidence of smoking. We
are wondering whether the correlation will be significant or random.
2. Ordinary Description. At first it is only a suspicion;
someone makes a connection and begins to wonder; the public are alerted. An
interest group is formed. They point to this and that example, but it is rather
hit and miss. Figures that are limited, unreliable, approximate and ambiguous
are produced. They may be slanted if they are produced by an interest group.
3. Similars are similarly understood. We start from
sensible similarities, from description. A neutral group is set up to start a
pilot project, describing the kinds of smoking and the possible connection with
lung cancer. But description is too vague and ambiguous; the similarities of
description have to shift to the similarities of explanation.
4. Functional relations. Terms need to be defined.
Compare sets of classes of events with sets of probabilities as an ideal. What
would be the normal distribution of the incidence of lung cancer if it were random?
There has to be some basis of comparison to judge whether the results are
significant. [138]
5. Scientific Description. The data have to be collected.
Everything has to be done accurately, precisely, objectively. Normally the whole
population will not be investigated; a random sample will be chosen, but it must
be sufficiently large to be representative of the whole population. Perhaps,
certain controls can be included for comparison. This will involve training
personnel, making questionnaires, collecting names and addresses, doing
interviews, coding information, etc.
6. Range of Possibilities. Meanwhile someone has to work
out what deviation from the norm would be significant. If a sample has been
used, then the smaller the sample the less convincing the results will be. What
random deviation from the norm can be expected? What is the norm?
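The question of what random deviation to expect can be made concrete with a little arithmetic. The following is only a sketch, assuming a simple yes/no variable (such as 'has lung cancer or not') with an ideal proportion p; the standard error of a sample proportion then shows why smaller samples are less convincing:

```python
import math

def expected_deviation(p, n):
    """Standard error of a sample proportion: sqrt(p*(1-p)/n).

    The smaller the sample n, the larger the random deviation to expect."""
    return math.sqrt(p * (1 - p) / n)

for n in (100, 1_000, 10_000):
    # Roughly 95% of random samples fall within about 2 standard errors of the ideal p.
    se = expected_deviation(0.5, n)
    print(f"n = {n:>6}: expect random deviation of about +/- {2 * se:.3f}")
```

A deviation well beyond this band is a candidate for significance; a deviation within it is what randomness alone would produce.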
7. Practical Techniques. There are various ways of coding
information so that correlations can be easily made for age, sex, smoking, race,
religion, anything that might be significant. There are statistical techniques
for establishing probabilities, an average, and significant deviation.
8. Upper and Lower Blade. There is both a movement from
the data to hypothesis, and from hypothesis to the data, as in the scissors
movement of a heuristic. The figures represented on a diagram will begin to
suggest patterns. But the mathematics of statistics, size of sample in relation
to population, etc. will suggest which correlations might be significant.
9. Insight. Just as in classical method, so also in
statistical method there is a moment of insight, when all the work is summarized
on tables and diagrams and the relationship is seen to be significant or simply
random. The conclusion can be stated in a proposition that is universal and
abstract, a statement of a probability function or a statistical law.
10. Verification. There should be a process of
verification, a checking of the mathematics, a review of the procedures used in
collecting data, a crosscheck on extraneous factors which might have interfered,
a comparison with control groups or factors which were included for this
purpose. [139]
This is an outline of the use of statistical procedures in
one particular area. In fact, these procedures are becoming more and more common
and accepted in most areas of scientific work. Statistical method does parallel
classical method but is anticipating only a statement of a statistical
probability rather than a universal law applicable to all cases. It is this
aspect of probability which made it difficult for scientists to accept it as
scientific knowing. But it has been quite successful in its own way, and
barriers to its acceptance have broken down. More and more we are accepting the
idea of probability, of averages, means, frequencies, rates, and using these in
our understanding of our universe. Let us examine more closely the notion of
probability and the definition of statistical method.
4.2 Statistical Method Defined
Let us be clear about a few preliminary definitions:
An event is the occurrence of a defined variable such
as an incidence of leukemia, a death, a birth, an accident, etc. An event is the
answer to the question, did this occur? A frequency is the number of
events so defined in terms of how much, for so many: how many deaths, for how
many of the population, over a certain period of time? A frequency can be either
ideal or actual. It is ideal if it is a theoretical statement of
an abstract statistical correlation. It is easy to see that the probability of a
toss of a coin producing a certain result is fifty-fifty; that is the ideal. But
if you toss a coin for a certain number of times, keeping an account of the
results, you get an actual or real frequency.
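The divergence of actual from ideal frequency is easy to exhibit in a short simulation. The sketch below (with a fixed seed so the run is repeatable) tosses a fair coin and compares the observed frequency of heads with the ideal of fifty-fifty:

```python
import random

random.seed(2)  # fixed seed so the run is repeatable

def actual_frequency(tosses):
    """Toss a fair coin `tosses` times; return the observed frequency of heads."""
    heads = sum(random.random() < 0.5 for _ in range(tosses))
    return heads / tosses

ideal = 0.5  # the ideal frequency for a fair coin
for n in (10, 100, 10_000):
    freq = actual_frequency(n)
    print(f"{n:>6} tosses: actual {freq:.3f}, divergence from ideal {abs(freq - ideal):.3f}")
```

Each run gives different actual frequencies; none of them is the ideal, and the pattern of divergence cannot be predicted.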
We can define statistical method as, "intelligent
anticipation of the systematic-and-abstract setting a boundary or norm from
which the concrete cannot systematically diverge." 5
Statistical method yields laws that are systematic and abstract. The laws of
probability of occurrence of events in states are abstract and systematic. Death
rates, average life expectancy, mean temperature, incidence of diseases,
frequency of accidents, can all be stated in laws that apply to a given
population at a certain time and place.
In classical law we saw that the key word was convergence;
the concrete converges on the abstract. In statistical method the key [140] term
is ‘nonsystematic divergence.’ You do not expect actual frequencies to
coincide with ideal frequencies; you do not expect the tossing of a coin to
conform always to the average of fifty-fifty. You expect a divergence. Just
because the life expectancy is fifty years, it does not mean that everyone dies
at fifty years of age. There will be fluctuations, ups and downs, divergences.
But the crucial point is that these divergences cannot be
systematic. If they were systematic, they could be explained by the use of
classical method, which specializes in dealing with the systematic. If there
were a systematic divergence it would indicate that the averages were wrong:
some other factor is operating. If you find that a certain number is recurring
on a roulette wheel, then you begin to suspect cheating. If the actual figures
for deaths are always above expectations, then suspect that some new disease is
interfering. The actual frequencies fluctuate around the ideal but they do so in
a way that is nonsystematic, cannot be explained, is not subject to law.
Chance can be defined as the random divergence of the
actual from the ideal. In the tossing of a coin the ideal frequency will be
fifty-fifty. But if you actually toss a coin you find strings of heads and
tails. There is a divergence of the actual from the ideal. It is that divergence
which constitutes chance. The ideal remains the same; but there is a divergence
which is nonsystematic; the divergence cannot be controlled or predicted. The
actual fluctuates around the ideal nonsystematically. That is where luck comes
in; that is the aspect of coincidence.
Statistics is true knowledge. It does give a limited
intelligibility; it is not a mere cloak for ignorance. The scientists of the
classical mold had an ungrounded expectation of a much more complete
intelligibility than that provided by statistics. They hoped that the time would
come when everything could be explained by classical laws. Then statistics
would not be needed. But the importance and success of statistical method in so
many areas of science today belies this claim.
The insight of statistical method can be called a devaluated
inverse insight. It is an inverse insight because it is into a lack of expected
intelligibility. But it is not a pure inverse insight because there is an
intelligibility that is grasped in the probabilities and [141] averages and
frequencies that are expressed in statistical laws. It is a sort of in-between
case. It reflects the fact that there are degrees of intelligibility and at
least two complementary ways of understanding data.
It is much easier and more satisfying to understand
systematic processes, because a cluster of insights grasps an interrelated set
of intelligibilities. One cluster of insights orders all the data about the
movements of the planets for decades to come; accurate predictions can be made
for centuries ahead. But to understand the weather pattern, here, tomorrow, you
need comprehensive data and numerous insights, yet in the end can do no more
than state a probability. The same process of data collection and understanding
has to be repeated each day to give a probable forecast for the following day.
In statistical method long-term prediction is extremely difficult; situations
cannot be deduced from one another; each situation is open to extraneous forces;
statistical laws change over time and place.
There are four characteristics of statistical method that set
it off from classical method and they are worth considering:
(1) Statistical method clings to concrete situations in a
way that classical method does not. Averages, probabilities, means always
refer to certain populations, particular areas, specific times and places. If
you establish the average rainfall in one place, it does not follow that it
will be the same in adjacent areas. If you determine average life expectancy
in one country, it does not follow that it will be the same in another
country. Statistical laws apply to specific states, at specific times, in
specific places. The conclusions of statistical method go out of date very
quickly. One expects classical laws to be invariant over time; but with
statistical laws things may be changing so fast that in a few weeks the
probabilities are completely different.
(2) Statistical method attends not to theoretical process
but to palpable results; it involves counting and its conclusions are verified
in counting. There are theoretical elements involved in the mathematics, but
the crucial moment is gathering the data. In certain situations, as in throwing
dice, or the roulette wheel, or certain games of cards, the probabilities can
be calculated as an ideal from the very nature of the number of possibilities
available. But in normal cases of fixing average rainfall, incidence of child
mortality, rate of divorce, etc., there has to be research, counting, and
measuring.
(3) Statistical method attends not to individual events but
to frequencies, rates, averages, sequences, etc. The intelligibility of
statistical laws resides in the sequences or averages and not in the
individual event itself; there will be a reference back to individual events
but one event is not usually significant. Let us consider the figures for
average rainfall in a specific place. Each individual occurrence of rainfall
can be explained correctly in terms of moisture, temperature, pressure, winds,
etc. Each individual event can be understood in terms of classical laws. But
statistics concerns the intelligibility of the sequence of occurrences of
rain. That intelligibility is expressed in a figure which is an average. That
does not mean that each year the quota must be fulfilled; it allows for large
fluctuations from the average. An individual event such as a catastrophic
downpour has little significance from the point of view of statistics; one
event can never disprove a statistic. If Galileo found that one of the balls
was falling twice as fast as the others he would have had to revise his
thinking; if you get twice the average rainfall in a day or in a month, it is
no reason to revise the average.
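A line of simple arithmetic makes the point that one extreme event scarcely disturbs a long-run average. The figures below are invented for illustration:

```python
# Thirty years of hypothetical annual rainfall in mm, steady at 1000,
# with one catastrophic year of double the usual rain.
rainfall = [1000] * 29 + [2000]

average = sum(rainfall) / len(rainfall)
print(f"{average:.2f}")  # 1033.33: one extreme year shifts the 30-year average by about 3%
```

A single Galilean counterexample refutes a classical law; a single downpour merely nudges an average.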
(4) There is a fundamental difference in mentality between
statistical and classical method. There is a difference in the expectation of
intelligibility. Galileo was looking for the mathematical formula which would
express the intelligibility of a free fall and would be valid for all time and
in all instances of falling. But looking for the connection between the
distribution of cancer cases and cigarette smoking can only establish whether
that connection is significant in the long run. It does not mean that everyone
who smokes gets lung cancer; nor that everyone who gets lung cancer is a
smoker. [143]
5. Complementarity of Classical and Statistical
Investigations
We have considered classical and statistical methods
separately to show that they are distinct empirical methods which are an
accepted part of contemporary science. In conclusion, we wish to show how these
methods are not mutually exclusive, but complementary to one another in many
ways. It is not that physics is a classical science to the exclusion of
statistical method; or that the human sciences are statistical to the exclusion
of classical. There is an overlapping and a complementarity that has to be
explored.
1. Complementarity in Heuristic Anticipations. Classical
method anticipates an understanding of the systematic. Statistical method
anticipates an understanding of the nonsystematic. Data will be either
systematic or nonsystematic or any combination of the two. The same data are
considered from the point of view of the systematic and are understood in
classical laws; under a different aspect they are nonsystematic and are studied
by statistical method. All data will be either systematic or nonsystematic;
hence all data will be covered by a combination of classical and statistical
methods. (In chapter nine we will consider dialectical method, and in chapter
eleven we will briefly mention genetic method.) To understand a single traffic
accident you must invoke both classical laws and statistical laws. Classical
laws state that if a car is driven at a certain speed around a corner of a given
camber, it will roll; that without oil the engine will seize; that braking can
reduce speed at a given rate for each vehicle without skidding. But why is a
particular driver going around a corner too fast? Why was there no oil in the
engine? Why did the driver fail to stop in time? Was the driver drunk, tired,
inattentive, or incompetent? The coincidence of these factors is governed by
statistical method.
2. Complementarity in procedures. We have already shown
the parallel procedures of classical and statistical methods in the examples of
Galileo and the statistical relation between smoking and lung cancer. These
procedures are complementary in that the isolation of the systematic prepares
the way for the determination of the nonsystematic. Similarly, the isolation of
the nonsystematic prepares the way for a determination of [144] the systematic.
Galileo used mathematics as the heuristic tool in his search
for the law of falling. To do so he had to try to exclude extraneous influences.
He made his equipment as perfect as possible, and his measurements of time and
distance as accurate as possible. Given the limits of his instruments he could
not produce a vacuum or perfect conditions so he had to try to eliminate
possible experimental error by repeating his measurements and experiments to
average out the discrepancies. When he had formulated his classical explanation
for the law of falling bodies, he was in a position to study friction. He had
determined how bodies should behave in a vacuum; by measuring the discrepancy in
the figures he would be able to study the different interferences, especially
friction. He would be able to measure the degrees of friction on an elephant and
a feather dropped from a height and thus to explain why the elephant would fall
faster.
We used the example of the distribution of the incidence of lung
cancer in relation to cigarette smoking to illustrate statistical method. But if
this study were to indicate that there is a significant positive correlation
between the two variables, then that would be a hint to the classical
investigator that there is a causal relation between smoking and lung cancer.
What element of smoking could be the causative factor? What is the
precise element of smoke that causes cancer? Is it a causative element or merely
a catalyst?
Mendel studied genetics using a statistical method.6
He noted the recurrence of traits in a series of experiments with peas and was
able to formulate statistical laws as to the probability of certain traits
repeating themselves after a certain number of generations. That was the hint
the biologists needed to go in search for the reason for this recurrence, and
led to the discovery of genes, chromosomes and D.N.A.
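Mendel's statistical regularity can be sketched in a few lines. Assuming a monohybrid cross of two heterozygous (Tt) parents, the simulation below recovers the familiar ratio of roughly three offspring showing the dominant trait to one showing the recessive:

```python
import random

random.seed(7)  # fixed seed so the run is repeatable

def offspring_shows_dominant():
    """Cross two Tt parents: each contributes one allele at random.

    The offspring shows the recessive trait only if it inherits t from both."""
    alleles = (random.choice("Tt"), random.choice("Tt"))
    return "T" in alleles

n = 20_000
dominant = sum(offspring_shows_dominant() for _ in range(n))
print(f"dominant : recessive is roughly {dominant / (n - dominant):.2f} : 1")  # close to 3 : 1
```

The ratio is a statistical law about the sequence of crosses; no individual pea is bound by it.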
3. Complementarity in Formulation. Classical formulations
regard correlations, which are verified only in events. Statistical formulations
regard events, which are defined only by correlations. [145]
Classical laws presume that no extraneous influences
interfere. They make the proviso of 'other things being equal'. The law of the
lever states what would happen, if you apply a lever with a given force and a
fixed fulcrum to a certain body. But that does not tell you how it might
actually be used to move the earth. What material could it be made of? What
happens in theory in a diagram may not be possible in reality because of the
material that would be needed. Classical laws need statistical laws in order to
be applied in the concrete. A physics professor might set up an experiment to
demonstrate some classical law, but much to his embarrassment the experiment
does not work; he cannot completely exclude all extraneous factors.
On the other hand, statistical correlations only possess
scientific significance if the events are defined by the correlations of
classical laws. An event is 'what' it is that occurs; it is the definition of
this 'what' which makes statistical method possible. In doing the survey on
smoking and lung cancer, it was vital to define the terms clearly; what kind of
cancer; what kind of cigarette smoke; etc. If doctors were confusing lung cancer
with tuberculosis or bronchitis, the results would be useless. If the smokers
included those who did not inhale, then, similarly, the results would not be
accurate. The value of the survey depends on accurate definitions of the factors
being correlated. But these definitions are provided by classical method.
Therefore, it is classical method that suggests which correlations might be
significant. Many possible statistical relations could be investigated. Throw in
traits like ‘marital status’ or ‘knowledge of foreign languages’ as a
control factor in the survey of incidence of lung cancer, and the distribution
should be random: it is hard to conceive of a possible positive correlation
between these factors and lung cancer. It is classical method investigating
cancer which suggests that certain chemicals in cigarette smoke could be
causative factors and are worth investigating.
4. Complementarity in modes of abstraction. Both
classical and statistical methods lead to abstract laws. Classical method leads
to abstract correlations between variables; statistical method leads to ideal
probabilities. Both methods abstract from the concrete in the enriching leap of
insight. Both need further insights to be applied in [146] any particular
concrete situation. Classical laws are applied to the systematic; statistical
laws determine ideal frequencies from which actual frequencies diverge
nonsystematically. Both are legitimate scientific procedures; both yield
abstract ideal laws or frequencies. But they are applied to the concrete in
different ways. Classical laws are applied on the proviso of other things being
equal. Statistical laws are applied on the assumption that actual frequencies
will diverge nonsystematically from the ideal. The complete view demands the use
of both methods in complementarity.
5. Complementarity in verification. Both classical and
statistical laws can be verified and so both are valid scientific methods.
Classical laws determine what would happen if conditions were fulfilled.
Statistical laws determine how often one may expect the conditions to be
fulfilled.
Classical laws cannot explain everything; they are verified
only with the proviso of 'other things being equal'. But it is impossible to
exclude all extraneous influences. They have to be reduced to a minimum; and
even then allowance has to be made. Classical laws are verified in that the
concrete converges on the abstract; but there can be no total coincidence
between the concrete and the abstract.
Statistical laws determine how often the conditions will be
fulfilled; they indicate which data are due to randomness and which seem to
have a significance of their own. What in one age is dismissed as due to
inaccuracies of measurement can later be the ground for important discoveries.
If a divergence of data from the expected is found to be systematic, then it is
an indication that classical law is at work; it is statistics which determines
which divergences are random and which are of significance.
6. Complementarity in data explained. There are not two
distinct and separate sets of data, one for classical investigation and one for
statistical investigation. There is one set of data and certain aspects of the
data receive the classical type of explanation while other aspects of the same
data are explained along statistical lines.
Data which are systematic at the moment can become
nonsystematic, and vice versa. The solar system is systematic for the present,
but the planets are slowly coming nearer to the sun and will [147] eventually
collapse back and the solar system with all its regularity and system will
become a chaos.
What is coincidental can suddenly become systematic. The
stray elements of pressure, wind, moisture, and temperature, which are
independent and nonrelated, can suddenly become systematically related in the
system of a typhoon. Then the different elements do form part of a system of
interrelated factors.
In a particular study either classical or statistical method
may predominate for a time. But when the aspects of the data dealt with in
classical method have been fixed there will be a need for statistical method and
vice versa. There are no data which can be entirely explained by one method to
the exclusion of the other.
6. The Empirical Residue
Finally, we deal with a notion which some deem to be
difficult. The term ‘empirical residue’ refers to what is left over when
classical and statistical methods have run their course. Is there something left
over? What is it? Can it be explained? There is something left over and we are
calling it the empirical residue and by definition it cannot be explained either
in terms of classical or statistical method. The empirical residue, then, has
three characteristics: (1) it is positive empirical data, (2) it does not
possess any immanent intelligibility, but (3) it is connected with a
compensating higher intelligibility.7
The empirical residue is positive data. It is not simply a
vacuum or an absence of data. The positive data are given in experience, by way
of the external and internal senses. We do experience a vast multiplicity of
data, but experience alone is not understanding; human understanding is insight
into data. Experience is preintellectual and preconceptual. An animal can
experience but cannot understand. Similarly we experience a vast panorama of
data; some of it we understand by classical laws and some of it we understand by
statistical laws, but then there remain aspects of the data which can only be
experienced. [148]
The data that belong to the empirical residue possess no
immanent intelligibility. They cannot be brought under law. They can be named,
pointed to, but they are not the content of a direct insight. The most obvious
case of the empirical residue is particular places and particular times.
Particular places and particular times are precisely what are abstracted from in
the procedures of classical and statistical investigation; they are left out,
left behind as not relevant.
Scientific generalization consists in abstracting from
individuality. When Galileo discovered his law of falling bodies, he did not
have to formulate different laws for different materials, different laws for
different places and times. It was an abstract intelligibility which, given
certain conditions, could be applied to any material at any time and any place.
Particular time, particular place and particular material used were irrelevant,
to be excluded, of no significance. This is the power of scientific
generalization. The particular as particular, the concrete as concrete, merely
numerical differences cannot be brought under law. To bring under law is to
abstract the universal from the particular. If you are trying to understand the
particular as particular, then, there is no way you can abstract from the
particularity.
The Scholastics of the Middle Ages had great difficulty
formulating a principle of individuality. They felt that there had to be a
reason for the numerical differences of individuals within a species. Aquinas
appealed to materia signata quantitate (quantified matter); Scotus
appealed to haecceitas (thisness). But there is no principle that
explains merely numerical differences; they are different simply as a matter of
fact. The Scholastics were not familiar with the experience of inverse insight
into the lack of an expected intelligibility. They were looking for something
which was not to be found.
The process of abstraction abstracts the relevant from the
irrelevant, the important from the unimportant, the rational from the
irrational, the meaningful from the nonsense, the significant from the
insignificant. What do we do with the unimportant, the irrational, the nonsense,
and the insignificant? Can we bring them under law? Can we explain them? Can we
formulate a theory about [149] nonsense? But to formulate a theory means to
abstract the intelligible from the unintelligible; there comes a point when the
unintelligible is simply unintelligible; when nonsense can be pointed to but
cannot be explained. These are precisely the elements that are left behind in
the process of human understanding.
The empirical residue is connected with a compensating higher
intelligibility. Although the empirical residue possesses no immanent
intelligibility of its own, it provides the materials for the procedures of
abstraction and generalization that are of enormous significance.
The empirical residue is not a direct correlative of inverse
insight. We defined inverse insight as the absence of an expected
intelligibility. The difficulty with the empirical residue is that nobody
expects it to be intelligible. Few people expect scientific theories to be
different depending on where they were invented or by whom or when. Few people
look for an ultimate explanation of why x was born at a particular place at a
particular time. But one characteristic of the empirical residue is that it is
connected with a compensating higher intelligibility. The empirical residue is
'significant' because it allows the process of scientific generalization and
abstraction to take place and is itself simply left aside.
The empirical residue is roughly equivalent to what Aristotle
referred to as matter. However, he used that term in many different senses such
as prime matter and secondary matter, general matter and specific matter; he
used it in a technical sense and in a loose sense. The common element was that
matter was not knowable; it was the matter in which the form was realized; we
know the form in the matter but the matter in itself is strictly, in principle,
unknowable. So for us the empirical residue is a technical term with a very
specific meaning as what is left over when all intelligibility has been
abstracted; data which can be experienced but cannot be explained. The empirical
residue can only be known in that strange indirect way of an inverse insight; it
can only be known when the processes of generalization and abstraction leave
behind the individuality of particular times and places. [150]
In conclusion, lest we get distracted from our main purpose,
let us remember that we are slowly becoming aware of how our minds work, how we
understand. This chapter compels us to face the fact that there are types of
intelligibility, degrees to which data can be brought under law. The scientists
of the nineteenth century looked forward to a time when science would understand
and therefore control and predict everything. In the twentieth century science
is more realistic. We have accepted the need for statistical method as a
necessary complement to classical method. There are degrees of intelligibility.
Some questions we can answer and some not, not just from lack of information but
in principle. Inverse insights bring us up short. They remind us of the limits
of our human understanding and the degrees of intelligibility of our universe.
We are forced to use at least two different methods in the understanding of any
set of data. The exercises give you some opportunity to identify these
procedures in your own consciousness.
The principle of sufficient reason seems to imply that there
is a reason for everything. The foregoing has revealed that there are degrees of
intelligibility ranging from the satisfying intelligibilities of classical laws
to the shock of the empirical residue which lacks immanent intelligibility. We
have a reluctance to accept the unintelligibility of some data; we spontaneously
ask for the reason for something and expect a direct insight. But this is not
always possible. For there is chance; there are random occurrences; there is no
explanation for the particular as particular, for particular times and places;
there are accidents, coincidences, the merely empirical residue. Why did the
locusts land on my farm and not on my neighbor's? Why did I get malaria and my
friend in the same room did not? Why did the tree fall just when I was passing?
Why did it rain here and there is drought over there? This is the kind of world
we live in; one which is a complicated combination of the systematic and
nonsystematic; one which can only be correctly understood by a combination of
direct, devaluated, or pure inverse insights.
We got into some rather technical 'stuff' in this chapter.
The more down-to-earth examples of the bewildered reader, the unfortunate
secretary and the inveterate gambler at the beginning [151] remind us that the
problem exists not only for scientists and experts but for everyone of common
sense. The type and degree of understanding attainable in any discipline will
vary enormously. Aristotle warns the readers of his Ethics not to expect
the same degree of precision in moral inquiry as is possible in mathematics. We
noted that measurement, and hence precision and accuracy, is primary in the
physical sciences; but in the human sciences it is explanatory definition which
is primary. In all areas we must differentiate what we can know with certainty,
with high probability, or simply as probable. In all cases we distinguish what
we can understand clearly and distinctly and what we can only expect to
understand in a confused and ambiguous manner. Our universe is rich in the
diversity of the phenomena it presents to us. Our minds are rich in strategies
for coping with this diversity. The third stage of meaning is not some
monochrome reduction to one type of meaning; rather, it is a nuanced and sophisticated
grasp of the complications, variety and levels of intelligibility of human
persons operating in the real world.
Comments on Exercises
(1) The peculiarity of probabilities is that the
intelligibility applies to the sequence not to individual numbers. These
sequences might represent results of exams, range of temperature, degrees of
humidity, etc. There is no single number that is exclusively correct. For (a)
it would normally be anywhere between 5 and 8. For (b) one would expect
something between .74 and .82. For (c) one expects between 98.3 and 99. But
even these limits are not sacred in a statistical sequence. The numbers in (d)
are lacking in intelligibility and are close to purely random. It is an
inverse insight to grasp this lack of expected intelligibility.
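Since the intelligibility lies in the sequence rather than in any single number, sequence (a) is best summarized by its average and spread. A minimal sketch:

```python
# Sequence (a) from the exercises; the intelligibility is in the average and
# spread, not in any single number.
seq_a = [5, 7, 4, 6, 5, 8, 6, 6, 5]

mean = sum(seq_a) / len(seq_a)
print(f"mean {mean:.2f}, range {min(seq_a)} to {max(seq_a)}")  # mean 5.78, range 4 to 8
```

Any "next number" within that band is consistent with the sequence; none is demanded by it.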
(2) We can easily name and describe the intelligible. But
can we recognize and name the random, the unintelligible, the nonsense, as in
(d)?
(3) This puzzled the Greek mathematicians, who expected to
find an exact ratio of whole numbers. But they found it was irrational,
incommensurable. In our terms, it turns out to be a decimal which never
terminates and never repeats. Try the square root of two on a calculator:
the digits go on without end. [152]
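The non-terminating expansion can be exhibited without a calculator. The sketch below uses Python's integer square root to print the leading digits of the square root of two to any desired length:

```python
from math import isqrt

def sqrt2_digits(n):
    """Digits of sqrt(2): the leading 1 followed by n decimal digits,
    computed as the integer square root of 2 * 10**(2n)."""
    return str(isqrt(2 * 10 ** (2 * n)))

print(sqrt2_digits(30))  # prints 1414213562373095048801688724209
```

However large n is made, the digits never settle into a repeating pattern; that is the inverse insight the Greeks stumbled upon.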
(4) Aristotle presumed that the natural state of a body is
to be at rest in its natural place, and if nothing is pushing it, then a material
body will come naturally to rest in its natural place which is down. The
heavenly bodies had Movers to explain how they kept moving. The earth was not
moving because of the enormous force that would be needed to keep it moving.
The arrow was a problem and he thought in terms of air coming from the front
and pushing it from behind. Late medieval physicists developed complicated
theories of impetus along the same lines. The whole line of thinking is based on
a false presupposition, a wrong question. Newton's first law of motion states
the correct assumption. It is changes of rest or motion that need to be
explained.
(5) Obviously the elephant, but why? Aristotle would have
said because the elephant is heavier, and the heavier a body is, the faster it
will fall. Galileo disagreed and showed that weight has nothing to do with
acceleration under gravity: in a vacuum all bodies fall at the same rate. We
accept Galileo's principle. But when you actually drop an elephant and a
feather from a tower, the friction of the air has a disproportionate effect on
the feather as opposed to the elephant; it is friction, or air resistance, that
causes the elephant to fall faster than the feather.
(6) The only significant pattern of stars visible to the
naked eye is the Milky Way, a band of many millions of stars that constitutes a
sideways view of our galaxy. Otherwise the distribution of stars is random.
Constellations such as Pegasus, Orion, the Great Bear, and so on are not
physical groupings but names given to scatterings of stars which happen to look
like things on earth.
(7) Weather forecasts give a range of probabilities rather than certainties.
Rain may be possible, probable, highly probable, or anything in between.
The probable path of a tornado can be predicted, but with little precision or
certainty. It is an illusion to think you can have all the relevant
information: for that you would need to know everything about everything.
End Notes.
1. Insight, 44.
2. Insight, 43-50.
3. Insight, 126.
4. Pierre-Simon Laplace (1749-1827), French
mathematician, astronomer and physicist. He applied Newton's laws of motion to
the planets with a great degree of accuracy.
5. Insight, 126. The word
'statistics' sometimes refers to information in statistical form, sometimes to
the mathematics involved in dealing with probabilities. We use it here in the
sense of the prior understanding or mentality behind dealing with probabilities.
6. Gregor Mendel (1822-1884), an Austrian
botanist, studied the occurrence of traits of tallness and color in generations
of peas. He noticed the statistical significance of the patterns of recurrence
and formulated laws of genetics, leading to the discovery of genes.
7. Insight, 50-56.