http://www.lewismicropublishing.com/
Chapter Twenty-Nine
Alternative Metasystems
Meta-systems are basically two things at the same time:
1.
Meta-systems are systems of systems, mostly naturally occurring systems, or
alternatively real and human-made systems. They can also include abstract and
imaginary systems. Most systems themselves are super-complex--therefore
meta-systems science deals with super-complexity in the interrelationships
between different systems or their component subsystems. In a sense, we can
describe a single hypothetical meta-system that comprehends and encompasses all
other systems. All of nature, indeed all of reality, which seems to be a greater
and more general notion than nature, can be said to be constituted by systems
that occur upon multiple levels of articulation. We understand the functioning, organization, operations
and patterns that occur and recur in reality in countless cycles in terms of
systems, in terms of the ordered relations that recur between like elements, in
terms of rules that are consistently reiterated in the processes of change and
occurrence. A systems approach is ultimately how we approach knowledge of
reality scientifically once we move beyond simplistic deterministic models
based upon strict correspondences between events and their names, and upon the
classical sense of causality used to explain such event structures. It
was Niels Bohr who pointed out the relevance of a view of complementarity in
all fields of the sciences, at all levels of the articulation of reality, and
this amazing insight, as true for cultures and the designs of frogs as it
remains for realistically understanding the structure of subatomic particles in
their atomic orbitals, remains at the heart of a meta-systems approach.
2.
Meta-systems are knowledge theories and heuristic methodologies relating to
knowledge. In this sense, meta-systems are comprehensive and they represent both
a form of philosophy and philology and a kind of science about knowledge.
Because all knowledge that is known is fundamentally human knowledge, or at
least human mediated knowledge, this sets certain basic constraints and
conditions on the normal or typical structure that knowledge takes. Therefore,
we may say that meta-systems provide a heuristic system for the organization,
articulation, and application of received knowledge and for the generation of new
knowledge.
Meta-systems as a perspective and approach grew out
of my professional involvement in the Anthropology of Knowledge, and represents
an extension and application of this approach to a wide range of issues and
areas that are both trivial and important in our world. The anthropology of
knowledge has had an eclectic history of development, and is related but not the
same as the sociology of knowledge though it comprehends many components of this
other area. Anthropology has long been interested in the problem of the psychic
unity of humankind and the general problem of "primitive thought." It
has had its own tradition and contributions to psychology and the study of human
behavior and symbolism in cross-cultural contexts. It has been intimately
interested in problems of socialization, enculturation, identification and the
linguistic ties of the native speaker to a coherent worldview. The Anthropology
of Knowledge has come to focus upon what has been known as the worldview
problem, or how we articulate a coherent view of the world and function in
relation to such a world.
Metasystems science was where theoretical and
methodological development in the Anthropology of Knowledge had been leading me
consistently over the last decade, one step at a time. It took my fieldwork
experience in the heart of central China to precipitate this framework
out--perhaps it was the totalitarianism of daily life there that demanded of me
a sense of totality of worldview that was not violent or destructive but at
least appeared benign and constructive. But even more importantly, I believe, it
was my students and their continuous questioning of me about the larger world,
making me think about the consequences of a shattered, ill-defined, or
incomplete worldview and of its ideological manipulations, that brought the
true power of genuine independent thought and intellectual freedom to the
foreground of my anthropological concerns.
Of course, an entire decade of graduate training and
prior fieldwork led up to this stage in my own development. There was a growing
dissatisfaction with conventional solutions and pat answers that even an
esoteric field like the Anthropology of Knowledge could offer.
Since that time four years ago, I have been devoted
in one way or another, and usually in multiple ways at the same time, to the
development and fulfillment of a meta-systems approach, not only on paper, but
in terms of lived reality as well. I believe the world is more than ripe for
such a frame-shift or maze-way reformulation, but it is not yet prepared
psychologically or ideologically to receive or participate in such alternation,
especially in any collective sense that would be necessary to bring such a
vision to fruition. It was, I believe, Buckminster Fuller who saw the most
optimistic and positivistic vision of a world governed not by politicians and
their private interests, but by the good intentions and wisdom of scientists and
the public benefit that is derived from this. In this sense, he was completely a
visionary, a man ahead of his own times. But he had the open and naively
idealistic framework of the 60's, set against the evils of Vietnam, to propel
him forward in his vision. Since then, socially and ideologically, human
knowledge has seen much regression in spite of the quickening tempo of new
scientific revolutions, discoveries and inventions around every corner. We have
revived for administrative attention and public obfuscation issues that were
supposed to have been settled with the Scopes Monkey Trial.
An unfortunate legacy of the sister area of the sociology of knowledge is that
it has been construed as somewhat anti-scientific and political in its
interpretation and application. In terms of
its central tenets and methodologies, nothing could be further from the
truth--it has only striven for a more realistic vision of the articulation of
scientific knowledge in the world and how this articulation is susceptible to
social and ideological influences. Like the general anthropological doctrine of
relativism, with which it is closely associated, this doctrine of the social
construction of knowledge and knowledge systems has been reinterpreted and
revisioned to suit the interests of whoever is doing the re-visioning and
reinterpretation, regardless sometimes of the accuracy of the point of view
being promulgated. In such a manner, we see that even a field like the
sociology of knowledge is susceptible to the same ideological constraints and
influences that it was created to critique and "deconstruct" in the
first place, and this makes sense because even knowledge about knowledge becomes
susceptible to the same kinds of structural patterns and limitations and
distortions that all knowledge is prone to.
By coming from the anthropology of knowledge, we can at least partially
side-step these political and ideological issues. The
problem with the anthropology of knowledge has been that it has been
conventionally received as such an esoteric professional interest that even most
other anthropologists are unfamiliar with its terrain, much
less the average non-academic.
There are five basic sets of questions that most
deeply concern meta-systems, each of these questions informing and guiding
research at different levels of meta-systems stratification:
1. What is physical reality? Or, what is real?
2. What is life?
3. What is intelligence?
4. What is possible?
5. What is true?
The answer to these kinds of questions is never
straightforward, and attempting to answer them results in a lifetime of
research and query. Some might claim that these kinds of questions are
unanswerable, though I do not think so, at least from a relative point of view.
Unanswerable questions are those that it is not appropriate for science to
ask, and, when we boil it down, there may be only one such unanswerable
question:
How and why did it all begin in the very first instance?
A logical extension of this is to ask the opposite but complementary question:
How and why will it all end in the very last instance?
The question that I believe to be ultimately
unanswerable is the question of ultimate origins of our reality. This is a
question that cannot be answered even if we adopt a purely mechanistic and
material point of view. It is therefore a problem not for science but for
religion and symbolic ideology to deal with. There are also non-absolute or
relative questions that I believe it to be ultimately beyond the purview of
science to resolve. These are normative or human evaluative questions like:
What is good?
And what is beautiful?
There are no absolute or absolutely certain answers
to these kinds of questions that science can grab hold of in a fully objective
manner. That does not mean that explication and especially elucidation of these
kinds of questions should not be attempted in the name and spirit of science, to
yield what greater objectivity we might from them. Religion and symbolic
ideology can also answer these kinds of questions as well in some ultimate
sense.
Otherwise, I see the range and possibility for
scientific query to be fairly unrestrained and wide open. Science can and
ultimately will, if provided enough time, solve all problems relating to the
questions of reality and truth listed above, at least in a way that is mostly
satisfactory if only approximate. If we consider the fullest logical and natural
implications and consequences of these kinds of questions, we realize that they
extend beyond the boundaries of the current state of knowledge in critical ways.
They open us up to asking questions we might not otherwise think to ask, and to
seek answers to problems we previously did not even imagine existed. And this
augmentation of reality has been a normal and common function of our sciences.
The development of systems theory and methodology in
a complete sense allows us this degree of openness and flexibility, and permits
us to approach and formulate new kinds of problems that were previously
unapproachable without this consistent framework.
Advanced MetaSystems Science
In general philosophical perspective, it is significant that, as regards analysis
and synthesis in other fields of knowledge, we are confronted with situations
reminding us of the situation in quantum mechanics. Thus, the integrity of
living organisms and the characteristics of conscious individuals and human
cultures present features of wholeness, the account of which implies a typical
complementary mode of description. (Niels Bohr, Causality and Complementarity,
1958)
If we seek a unity between C. P. Snow's two cultures
of the Sciences and the Humanities, we must find this common ground in the
so-called social and human sciences. There is good reason
for this intersection, as it is the anthropological relativity of
humankind, as the central knowers and doers in reality, that leads to the
possibility of the integration of these separate approaches to reality. Niels
Bohr gave us the means for achieving this kind of integration when he compared
complementary explanation to deterministic causality, and he referred to the
complementarity of relativistic explanation with intrinsic naturalistic
explanation. If we pursue the humanities far enough, we are liable to run into
either narrow ideologies or expansive relativist doctrines. If we stick too
strictly to science, we end up with an empty model structure of reality that is
devoid of the very pattern it seeks to understand.
All human knowledge exists in a fundamental
relationship to the unknown, which relationship
can be defined in terms of relative or residual uncertainty that is
attached to any particular bit or statement of knowledge. A statement like
"Christopher Columbus discovered America in 1492" is one that is
fairly unambiguous and most would attach almost no amount of uncertainty to it.
This is as true as the common sense that a school-child's song would imply. The
trouble with the statement, understood critically and semantically, apart from
the actual historical record, is that "America" did not receive its
name until well after 1492, and that the dating system is not universally
applicable to all people. This may seem like equivocation, but it does emphasize
the kind of critical attitude to information that is necessary to understanding
its form and function in our lives. In this example we can clearly distinguish
between internal and external validity and consistency of the statement.
Research that is rooted in the expansion of knowledge
is in a sense rooted in the desire to systematically reduce or eliminate the
source of ambiguity in knowledge that arises from uncertainty. We do so by
trying to chase out the unknown, or at least chase after it. This process is
clearly evident for example, in the more mathematical of sciences like chemistry
or physics, where problem sets are usually posed trying to solve for a specific
unknown factor in terms of factors that are known. Such systems of knowledge
rely on the great applicability of mathematical definitions and procedures for
the identification and relation of physical substances, properties and
processes. We know that a mole is a specific number of atoms or molecular units,
and that a mole of iron will weigh, on earth at least, a certain number of
grams. Even here though, determination of unknown values often rests upon
empirical measurement which contains some residual degree of uncertainty.
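The mole example above can be made concrete in a small sketch of this solving-for-the-unknown pattern. The constants used are standard reference values (Avogadro's number, which is exact by SI definition, and the molar mass of iron, about 55.845 g/mol); the function name is merely illustrative:

```python
# Sketch of solving for an unknown mass in terms of known factors.
AVOGADRO = 6.02214076e23   # atoms per mole (exact by SI definition)
FE_MOLAR_MASS_G = 55.845   # grams per mole of iron (standard reference value)

def mass_of_iron_atoms(n_atoms):
    """Unknown mass in grams, expressed entirely in terms of known constants."""
    return n_atoms * FE_MOLAR_MASS_G / AVOGADRO

# One mole of iron atoms weighs, on earth at least, about 55.845 grams.
print(round(mass_of_iron_atoms(AVOGADRO), 3))
```

In practice the residual uncertainty the text mentions lives in the measured molar mass, not in the arithmetic itself.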
Another approach to the problem of the unknown in
other knowledge systems is the use of inductive inference and contextuality to
help make determinations regarding unknown variables. Of course such methods are
inherently less accurate and less precise than are the preferred mathematical
approaches, but if done well they can bring a degree of synthetic understanding
to complex problems that in a sense transcend the analytical aspects and factors
associated with the problem.
It does us little good to define an unknown in terms
of another unknown, unless the second, substituted unknown may be determined to
be somewhat better known, or a partial unknown, that can help to reduce the
level of uncertainty associated with the first unknown. The second unknown
variable may provide for us a
context or a point of reference by which we can tackle the larger problem it
comes to represent.
It is natural and only human to attempt to define
unknown variables in terms that are known. In the simplest cases, we merely
substitute what is most familiar to us, and rely upon complex rationalizations
to absolve ourselves of any contradictions that may follow. The trouble with
using our known variables in reference to unknown realities, is that if
something is unknown, it tends to be without reference point or hook. We do not
even know where to start in unraveling the problem it presents. It opens, in the
language of A.I., a tremendous degree of search solution space and informational
complexity which presents us with what can be called an informational
bottleneck--what we know may be simply inadequate
to defining the problem in the first place, much less in determining its
solution.
The unknown therefore always surrounds and
overshadows what we know with a cloud of uncertainty. We can be certain that we
are uncertain, but we cannot be uncertain of our uncertainties. The inherent
uncertainty of all knowledge presents us with inherent dilemmas about our
understanding of reality that are difficult, in the largest sense probably
impossible to overcome. On the other hand, the horizon of our knowledge offers
us the promise that the quest for new knowledge will never come to an end until
at least we ourselves come to an end.
The known and the unknown exist in a complementary
relationship, and uncertainty is a measure of this relationship. It is tied to
the idea that disorder connects to all sense of order, and that everything tends
in the long run toward increasing disorder. In other words, knowledge has a
fundamental anti-entropic information function that is tied to our own
physical survival and adaptation in the world. This function is inherently based
upon the self-organization of natural information, and the energy relationship
it shares with real systems in the world.
The antinomies presented in our understanding of
physical reality are able to be transcended when we realize that our
understanding and descriptive observation interacts with physical reality in
basic ways. The cognitive models we hold of a system or an object and its
behavior influence directly how we see
and understand the thing and its actions. We may see and approach a thing both
analytically or synthetically--paying attention to the properties that are the
result of its holistic integration as a system, or paying attention to the
elements or components that are the efficient cause of the system's functioning.
Either way, we are neither right nor wrong. Thus a pocket watch is both
understandable as a machine of many fine interlinked components, and as a single
integrated mechanism that keeps time in an accurate manner.
There are fundamental, intrinsic limits to our
ability to observe and know the very large and the very small. The speed of
light defines an observational parallax for the very large that prevents us from
seeing the universe in an instantaneous manner as it may exist at this moment
and at the next. We can only infer its existence by reason and by extension of
reference from more proximate examples that we can prove--we cannot see the
exact disposition of Mars at the moment we are observing it through a light
telescope, as its light took a few minutes to reach our objective lens. And yet
because we can still observe Mars several minutes later, then we can conclude
that we are seeing at a later point what Mars actually looked like several
minutes earlier. And there is no sound reason to think that this is not the same
general situation for very distant points in the universe that can only be
observed at very great depths of space and time. Just because we cannot directly
observe the instantaneous state of the universe, does not mean that we therefore
conclude that it doesn't exist. Scientific evidence at this point does not rest
on either falsifiability or upon proof--we can neither prove nor falsify what we
cannot even indirectly see. We only conclude that it is so by logical deduction
and inference from phenomena we can demonstrate. We suppose that the universe in
the largest sense exhibits a minimal amount of consistency in its basic physical
properties and components, and we do not expect
it to be radically different.
Similarly, it seems, that there is a scale of the
very small that defines a fundamental limit of observability. This scale seems
to be related to the size of a photon or quantum of light energy. In logical
shorthand, anything smaller than a photon would be invisible by means of
photons. While both these limits express limits of observability
based upon our dependency upon the
basic properties of light, it is not necessarily the case that they describe
inherent physical limits to the possible size and shape of the very small or the
very large.
There exists ample indirect experimental evidence of
the very large and the very small that allows us to conclude that at both
extremes there occur other kinds of interesting constraints that influence both
our observability of these physical phenomena and their own intrinsic behavior.
Upon the scale of the very small, it becomes impossible to localize in a
determinative manner any particular entity that is independent of the field in
which it occurs and may be evenly distributed within a given range of
Bose-Einsteinian probabilities. In other words, there occurs an inherent
indeterminacy or uncertainty in our ability to precisely define both the
location and the energy state of such fundamental particles at the same time.
Upon the scale of the very large, evidence tends to suggest that what we assume
to be Euclidean in its dimensions may actually be non-Euclidean, and may in fact be
larger or smaller than would be expected in a Euclidean sphere.
The concept of complementarity implies an inherent
duality of structure and explanation, of event pattern and observation, that
informs the structure of physical reality upon every level of its organization.
This gives to scientific explanation an inherent dialectical tension between an
inferrable holism of pattern on one hand, and an analytical reductionism on the
other. This complementarity of physical reality, that can be observed at all
levels of stratification and integration, is as much a result of the
anthropological relativity of our knowledge, to squeeze our understanding
through the symbolic screen of our consciousness, as it is anything intrinsic to
the structure of reality itself. It is a product of both anthropological and
physical relativity of structure and pattern. Dialectically speaking, physical
relativity of knowledge about physical reality constrains the anthropological
relativity of our knowledge and
worldview in basic ways. At the same time, the anthropological relativity of our
knowledge and worldview also constrain our understanding and relation to the
physical relativities of our sense of reality.
What is a System and a Metasystem
A system can be described as a complex set of
interrelationships that occur in a semi-determined manner, within the framework
of a larger set of surrounding relationships that may or may not have a
determinative influence upon the system. The relationships in general are so
complex in even a simple system that they tend to defy description or prediction
of outcomes. A system in a technical sense is a kind of mechanism, and therefore
such systems tend to follow mechanical principles as these are understood at
some level. The types of mechanics that describes systems varies considerably
with the level of the system and its integration. We employ quantum mechanics to
describe the behavior of the atomic system of electrons, light energy and other
fundamental particles of physical reality. We employ classical and relativistic
mechanics to describe on the other hand the motions of bodies in space, and I
would make the case for gravitational mechanics to explain what occurs within
gravitational fields and gravitational bodies. Upon a biological level, we may
employ the term bio-mechanics at several levels as well, referring first to the
cellular and biomolecular mechanics of energy storage, molecular production and
reproduction. We can refer to the mechanics of cell tissues and the
physiological mechanics of complex organs and biological systems of an
organism--distinguishing for instance the nervous system and the skeletal
system. In this, we can see that organisms constitute what can be called
metasystems, whose simplest definition can be said to involve the integration of
various component systems and subsystems within a single organismic framework
that is characterized by specialization and differentiation of function, and by
the emergence of organismic properties that shape the behavior of the system as
a whole. Upon a human level we can find systems in terms of the organization of
worldview, attitude, response pattern, affectation and social adaptation of the
individual, and of the small group dynamics and network that individuals
maintain in larger social contexts, to the development of full fledged
institutional and corporate systems that serve one purpose or another in the
world.
All natural systems are metasystems, or else parts of metasystems.
A metasystem is a complex organization of processes
of change and transformation of states that can be said to have some kind of
structure or order (i.e. nonrandom occurrence).
All natural systems are stochastic systems. In other
words, they are derived as the result of the chance concatenation of component
subsystems into a regular working order.
All natural systems as such perform some form or
function of work, which can be described as the anti-entropic transference or
increase of energy in a systematic manner.
In order to do work, natural systems must be
organized in a manner that implicitly conveys information.
Scientific knowledge systems are built upon and lead
to this kind of natural information implicit to the patterning of phenomenal
reality.
All systems may be characterized by certain
structural patterns, and these patterns recur throughout the universe and appear
to be something profoundly basic to all physical reality. First and foremost, we
might state the following:
1. All natural systems tend towards a state of
dynamic equilibrium.
2. This equilibrium tends to be complex.
3. Systems perform work, which is the informational
organization of energy to maintain equilibrium.
4. If the equilibrium of a system is perturbed, it
will tend to restore itself.
5. Systems at all levels tend to be continuously
perturbed.
6. All systems follow basic principles of universal
energy dynamics.
7. All systems follow therefore state-path
trajectories that can be characterized in the long run and in the large by
non-linear control equations.
8. The universe as a whole is a system within
equilibrium, what we can call universal equilibrium.
a. All finite systems occurring within reality are
subsets of the universal system.
b. The universe is the universal set that contains
all systems as members.
9. Subsystems are stratified and nested upon multiple
levels based upon fundamental dimensions of differentiation.
10. Systems tend to be historically unique and
relative to the level upon which they occur.
The concept of equilibrium is really what is definitional about a system, what
is distinctive to the concept of a
"system." In other words, a system that does not exhibit some form of
equilibrium (upon some level of integration) is one that cannot be said to be a
system. Equilibrium is generally represented in the following kind of formula:
K = (X)/(Y)
where upper case K stands for an equilibrium value associated with a system,
variable (X) stands for the composite of all those variables and associated
values that are in the end-state of the system being measured, and (Y) stands
for the composite of all those variables and associated values that are a part
of the start-state of the system being measured. Alternatively, (X) can be the
composite of all internal values associated with a system, while (Y) can be the
composite variable of all external values associated with a system.
This formula implies an inherent reciprocity of value
between X and Y and it is this reciprocity that is the key to the equilibrium
that exists between them. We can rewrite each unknown in terms of the
equilibrium value and the other unknown. For instance:
(X) = K(Y) and (Y) = (X)/K
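Solving K = X/Y for each unknown gives X = K·Y and Y = X/K, and this reciprocity can be checked numerically. The end-state and start-state values below are hypothetical inputs chosen only for illustration:

```python
# Minimal check of the reciprocity implied by K = X/Y.
def equilibrium_value(x_end, y_start):
    """Equilibrium value K of a system with end-state X and start-state Y."""
    return x_end / y_start

X, Y = 12.0, 4.0                  # hypothetical state values
K = equilibrium_value(X, Y)       # K = 3.0
assert abs(K * Y - X) < 1e-12     # X recovered from K and Y
assert abs(X / K - Y) < 1e-12     # Y recovered from X and K
```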
Integration is the measure of order of a system that
is achieved by means of equilibrium. Equilibrium is defined as a state of
relative balance to which all things tend in their interrelationships. Once a
system has achieved equilibrium, such a system will tend to maintain that
equilibrium indefinitely. That equilibrium is important in reality can be
demonstrated in many basic principles, for instance, it is known that in the
electromagnetic spectrum, wavelength times frequency is equal to the speed of
light. We therefore have an
equivalent form of basic equilibrium if we rewrite the formula:
λν = c, λ = c/ν, ν = c/λ
where λ is the wavelength, ν is the frequency, and c is the speed of light.
Associated with these properties is the idea that
both wavelength and frequency of light
always exist in equilibrium with one another in relation to the constant of the
speed of light.
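The wavelength-frequency equilibrium admits the same numerical treatment; in the sketch below the example frequency (green light near 540 THz) is an arbitrary illustrative input:

```python
C = 2.99792458e8  # speed of light in vacuum, m/s (defined value)

def wavelength(frequency_hz):
    """Wavelength in meters from frequency, via the constant c."""
    return C / frequency_hz

def frequency(wavelength_m):
    """Frequency in Hz from wavelength, via the constant c."""
    return C / wavelength_m

# The two quantities always recover one another through c.
lam = wavelength(5.4e14)          # roughly 5.55e-7 m (green light)
assert abs(frequency(lam) - 5.4e14) < 1.0
```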
Another related and well known formula is the
statement of the equivalence of energy to mass:
E = mc², m = E/c², and c² = E/m
where E is the total energy of a system, m is the mass of a system, and c² is
the speed of light squared.
In this equation, we can say that the measurements of mass and energy are in
equilibrium in relation to the square of the speed of light, which is itself
therefore a constant. In both cases, because
the denominator value of the equilibrium equation is a known constant that never
varies, we can understand that the equilibrium of the system is variable in only
two sets of dimensions. We refer to
these forms of equilibrium as equivalence structures, and equivalence thus
defined represents a form of equilibrium that is variable in only one set of
dimensions.
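The mass-energy equivalence structure can be sketched the same way; the one-gram figure below is only an illustrative input:

```python
C = 2.99792458e8   # speed of light, m/s (defined value)
C2 = C * C         # the constant denominator of the equivalence structure

def energy_joules(mass_kg):
    """Total energy equivalent of a given mass."""
    return mass_kg * C2

def mass_from_energy(energy_j):
    """Mass equivalent of a given energy."""
    return energy_j / C2

# One gram of mass is equivalent to roughly 9e13 joules; the
# equivalence is reciprocal through the constant c squared.
e = energy_joules(0.001)
assert abs(mass_from_energy(e) - 0.001) < 1e-15
```

Because the denominator c² never varies, the whole equilibrium is carried by the single mass-energy pair, which is what marks it as an equivalence structure in the sense defined above.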
Equilibrium implies something more important as well,
and that is the operation of a "control" or controlling force that
influences the behavior and trajectory of the system. In this case, control is
to be considered several things--it is non-linear and it is self-determining.
A second basic statement about systems is that all
systems can be said to be self-organizing systems. That is, they arise
stochastically as a function of blind chance without necessary causal
predeterminations. Such systems tend to run toward increasing degrees of
complexity in their functional equilibrium. We tend to see such systems
therefore as being inherently underdetermined. The complexity of these systems
is a consequence of the degree of integrative equilibrium they achieve, and the
fact that this equilibrium arises as the result of complex relationships that
are at best only semi-deterministic. They are thus inherently unpredictable as
systems--a fully determined system is one that is theoretically uninteresting
because we can learn nothing new from it. It has no new information to be
gained. In science, we cannot attach attributions of original or inspired
causality or determination to natural events--natural events must be explained
in terms of a kind of mechanical causality, or at least a reciprocal
relationship between things that are part of the system.
A third fundamental statement about natural systems
is that all such systems are inherently dynamic--i.e., they tend to change in
the long run in certain important ways. The dynamics of systems is really what
makes them most interesting to study, as change processes, though not
predictable, can be expected to yield new information.
To summarize our key points, we may say the
following:
1. natural systems exhibit equilibrium
2. natural systems tend toward complexity
3. natural systems are self-organizing
4. natural systems are continuously dynamic
5. natural systems are informationally representable.
The last point is an important consideration to make,
to the extent that it ties the status of the inherent organization of such
systems to our ability to comprehend such systems in a manner that can be
considered objective and true to the nature of the system--i.e., this feature of
natural systems makes possible scientific knowledge about such systems.
In the consideration of metasystems in relation to
advanced systems analysis, we must realize that the development of a metasystem
entails the progressive subordination of systemic function of lower order
systems. A metasystem is a complexly integrated entity that is composed of a
number of different subsystems that interfunction to create a total system. In a
sense, any discrete thing that is inherently differentiated on the basis of
specialized functions can be considered to be a metasystem. The earth can be
considered a metasystem, as might be the oceans and continents. I believe that
metasystems have the features of wholeness and integrity. They are easily
identifiable as separate systems in a larger framework of other systems. The
solar system is an example of a metasystem that combines multiple planetary and lunar subsystems with the sun within a single unified structure.
Metasystemic functions are characterized, I believe, foremost
foremost by emergent and synergistic properties. As mentioned above, they are
characterized by a subordination of
function to the purposes of differentiated specialization in a context that
contains its own internal equilibrium, or even what might be called an internalized ecology of environment, one that serves to set it apart from the larger surrounding context of its occurrence.
By and large, the kinds of metasystems that I am
concerned with in advanced systems science are those possible or potential
metasystems that can be developed by humankind
in the context of the earth and beyond in the context of a larger universe. I am
also concerned foremost as well with the actual human metasystems that have been
produced, and that are developing along their own state-path trajectory
regardless of human interference or involvement.
Scientific
Relativity
Scientific relativity concerns the question of the
status of scientific knowledge within the context of the patterned structure of
reality that such knowledge seeks to represent. The status of such knowledge is
that it is constrained by certain intrinsic and extrinsic limitations at various
levels that serve to predetermine how much we can know, and how we can know it,
with the implication also that it tends to preclude our capacity for knowing
things beyond the compass of our scientific knowledge systems.
In a general sense, we experience scientific
relativity in terms of the kind of data and knowledge that science deals with,
and the kind of knowledge that science cannot deal with. We distinguish between ideology and scientific methodology as the hallmarks that constrain scientific knowledge.
We may also distinguish three major forms of
scientific relativity--physical relativity of knowledge at the level of physical
systems theory; biological relativity of knowledge at the level of biological
systems theory; and the anthropological relativity of knowledge at the level of
anthropological systems theory.
Relativity concerns the question of the relationship
of the known to the unknown, or of the degree of relative uncertainty of our
knowledge that is based upon the working constraints that serve to limit the
sense of certainty that can be attributed to our knowledge. Certainty is an
important problem in scientific knowledge, as it concerns centrally the issue of
the validity and reliability of information, of what we know and how we know it.
We cannot escape this central existential dilemma about knowledge, that we can
never be absolutely certain of what we know. There is only one area of knowledge
that has any sense of absoluteness about its value as knowledge, and this is in
the area of mathematics.
Mathematics in a sense comprises the only field of
non-relative knowledge that we have. There is a sense of certainty in the
statement that two plus two equals
four, that can be found in no other kind of statement we can make, neither
"this is a tree" nor "the tree's leaves are green," etc.
Mathematics, being the basic language of science, remains the preferred form of
expression and mode of operation in scientific research, if this is at all
possible. Mathematics refers to nothing in the external world as the basis for
its validity, unlike the sciences which, however rationalized, are always
fundamentally empirical in reference and orientation. Mathematics derives its truth value from the internal
coherence of its purely logical structures. In this sense, a computer is nothing more than an intricate mathematical machine, and therefore exists always in a closed world that has no reference to the external
or larger world of which it is a part. It may
reflect or represent this world, but this is only a form of mimicry. The
lack of external reference in mathematical validation renders it inherently
non-relative and absolute in form, and it can be said to be the only
non-ideological or symbolic system that achieves this status that we know of.
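This self-grounding character of mathematical truth can be illustrated in a proof assistant such as Lean, where a statement like "two plus two equals four" is certified by pure computation on the definitions of the formal system, with no appeal to anything outside it (a minimal sketch):

```lean
-- The claim is verified entirely by the internal rules of arithmetic:
-- `rfl` asks the kernel to check that both sides reduce to the same
-- value by definition, without reference to the external world.
example : 2 + 2 = 4 := rfl
```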
The relativity of knowledge at all levels can be seen
therefore as a function of the dependency of such knowledge to external
reference for its validation. With an empirical dependency, the problem of
inherent correctness that is characteristic of mathematics, is replaced by the
problem of external certainty. Such relative systems also face a fundamental problem of internal coherence: when the meaning and verification of a system always point to a larger world, its coherence confronts a central dilemma of inherent ambiguity of meaning, uncertainly placed in that larger world.
At each of the levels, there are various reasons and
sources for the scientific relativity of knowledge that will each be addressed
in turn below. At whatever level, we may identify both internal relativity of
such knowledge that is largely a function of the limits of the language of
science to accurately describe and represent the reality it refers to, and the
external relativity of knowledge that is based upon the status of ourselves and
our physical limitations in being able to know reality in some larger or finer
or alternate way.
Physical relativity is well known upon several
levels. There is the uncertainty principle that determines that we cannot know
the exact point position of an electron in its orbital without sacrificing other
kinds of information in the process. There is the general and special relativity
of Einstein, that determines that the universe exists within a four dimensional
space-time coordinate system. I would impose as well a framework of fundamental
relativity that states that there is a limit to our ability to know the very
small due to the constraints of our ability to see at a size smaller than that
of a photon. The other form of relativity is what I refer to as universal
relativity, and this has several aspects. The first is our inability ever to
see beyond the space-time limits of our own sphere of observation, to view
simultaneously the exact contemporaneous state of the universe, or even a
portion of it. This constraint limits our ability to see the very large in
certain interesting and basic ways. We cannot see for instance, exactly what may
be going on inside of a black hole, if no light ever escapes the confines of its
gravitational force. We cannot look outside of the historical dimension of the
speed of light, so that we cannot even know, for instance, the exact disposition
of the whole universe at any point in time, past or present. We may speculate
further that if light bends, or
space-time curves in some basic ways, and perhaps leads into alternative
universe systems, then it is possible that we
will never be able to directly realize or experience these systems. I
would add to the concept of universal relativity the inability ever to observe a
completely motionless system or a system that is completely outside of a
space-time system that is shaped by a complex gravitational field.
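The position-momentum trade-off invoked above has a standard quantitative form, the Heisenberg uncertainty relation:

\[
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
\]

where \(\Delta x\) and \(\Delta p\) are the standard deviations of position and momentum and \(\hbar\) is the reduced Planck constant; sharpening our knowledge of one quantity necessarily broadens our uncertainty about the other.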
Biological systems, as I have previously remarked, are some of the best mapped-out theoretical constructs that we now
have. We are confronted with basic forms of biological relativity in terms of
the limitations of our own species from a biological standpoint. For instance,
we can never know in a final or complete way how another large brained organism,
like a dog or a dolphin, may think, nor will we ever be able to develop a foolproof system for communicating with these creatures at the level that we can converse with one another. Dogs have a tremendous sense of smell that we cannot
approximate, and it appears as if dolphins may be able to see acoustically in
three dimensions. Biological relativity becomes
even more pertinent when we consider, for instance, the challenge of the
creation of alternative life forms, or the formation of alternative or alien
biological systems in the universe. There are certain biological constraints
that we, as human beings, must overcome if we are to achieve a larger vision and
broader experience of reality. One of these sets of constraints, for instance, is the limited range of light within our normal field of vision. We have through technology learned to
see at other wavelengths, by the translation of these wavelengths to forms that
can lead to visual representations. We use telescopes to extend our range of
vision to vast distances, and microscopes to clarify the world of the very small
that is beyond the resolving powers of the naked eye. To a great extent,
biological limitations of knowledge have been largely overcome, and it will not
be until our encounter with truly alien forms of life that we will once again
confront very fundamental issues involving our own sense of biological
relativity.
Anthropological relativity stems foremost from the
fact that we are the knowers of the universe. We sit at the center of all
knowledge and understanding, and there are many forms of anthropological
constraints that serve to influence, limit and shape our knowledge and
experience of the world. These kinds of influences can be cultural, rational,
linguistic, historical, social, symbolic, ideological or religious, etc.
Anthropological relativity exists to be dealt with at all levels of our
knowledge systems. It ties to biological relativity to the extent that our own
knowledge systems are based upon the organization and functioning of the human
brain within the kind of context that it develops within. Our brains constrain
us to see and understand and respond to the world in certain ways, and not in
others. The symbolic organization of human cognition and consciousness
constrains our behavior and our noetic experience of the world such that it
becomes impossible not to see the world in these terms.
The
Role of Anthropological Relativity in the Structuring of Human Knowledge
Anthropological relativity is a profoundly important
concept. It is especially important in the anthropology of knowledge, because it
serves to identify a problem inherent to human knowledge of all kinds, a set of
limitations that are characteristic of the basic structure of this knowledge.
Most recognizable forms of relativity, as in linguistic relativity or cultural
relativity or social or historical relativity, plus many other kinds, are merely
variants of the general problem of anthropological relativity that is applied to
some field of study or general range or set of problems in reality. Scientific
relativity is also a variant of anthropological relativity, and I would dare say
so are most forms of physical relativity as well, which, in whatever form, boil
down to the proposition that our ability to know something basic about physical
reality depends critically upon the point of view of the observer's frame of
reference.
Anthropological relativity can be said to exist in
the inherent limits of human knowledge systems and in the natural languages that
are used to describe reality and to convey understanding. Our knowledge is
universally structured in very basic ways, ways that I would simply call
symbolic, and this structuring imposes inherent limits of design and
articulation in our knowledge systems from the beginning. Scientific knowledge
has achieved remarkable results in its application and progress mainly to the
extent that it has been able to systematically control and overcome the
influence of anthropological relativity in our knowledge. This has been slow and
painstaking progress. We are in essence involved today in a scientific
revolution, one that has seen an exponential explosion of new knowledge and
insight across all the scientific domains of research, and that has driven the
electronic information revolution as well. We may well ask when this growth
curve will taper off upon its natural plateau, as it has not yet appeared to do
so.
It seems the main problem presented by
anthropological relativity is its invisibility in our knowledge systems, for though we are entrapped within it, we are most often oblivious of its influence
upon our thoughts. Its influence normally lies beyond the bounds of our
awareness, because it exists in the background of the knowledge system upon
which we build our knowledge. A way of understanding this is to see
that all that we know is always a subset of all that is unknown, and the unknown
(including the unknowable) is always a larger and more inclusive set than the
known. The trouble is that we cannot directly know the unknown, or unknow the
known, unless we are willing to suspend for the time being the ideational frames
of knowledge that we bring normally to the experience of reality. We can say
that our basic knowledge background prestructures how we see, think about and
relate to the world. This prestructuring is symbolic and it is unavoidable. Much
of our knowledge remains implicit to the background--that is it is usually taken
for granted and assumed to exist as such without being further queried. This is
not just a convenience, it is a necessity, as otherwise our knowledge systems
become quickly overwhelmed by the need to deal in an explicit sense with too
much information. We suffer information overload anyway, regardless of how well
defined our background knowledge may be, because when we deal with unknown
variables with large uncertainty factors, overloading becomes an eventual
consequence of failing to resolve the information bottleneck of our own symbolic
sense making mechanisms. We find that we cannot avoid the problem even if we
try.
Anthropological
relativity then becomes the basic problem of the limitations of our
knowledge, and our ways of knowing, to deal with every
question and problem set that we confront in reality. This is even more
problematic when we realize that how we pose questions and define problem sets
are themselves constrained in critical ways by the very knowledge systems from
which they spring in the first place.
There are several sets of dimensions that are useful
in understanding the role and implications of the anthropological relativity of our
knowledge:
1. Subjectivity versus Objectivity in knowledge
systems
2. Empirical versus Rational knowledge
3. The ideological limitations of language and
culture
4. Problems of inference and reference
5. Problems of implication and explication
6. Analysis versus synthesis
We can identify further important issues when we come
to a recognition of the limitations of scientific knowledge versus other
alternative ways of knowing:
1. Limitations of observation and observability
2. Limitations of perception and cognition
3. Limitations of measurement and abstract
application
4. Limitations of description and explanation
5. Limitations of definition of problem sets
To these issues we can add probably a host of other
critical kinds of limitations to our knowledge, especially those standing out as
being primarily social or methodological in orientation:
1. Limitations of social communication and openness
2. Limitations of social praxis and relations
3. Limitations of research funding and priority of
focus
4. Instrumental and methodological limitations of
research tools and instruments
5. Limitations of bureaucratic controls and
socio-cultural constraints.
To this final set of limitations, we may add one more
profoundly important set, and that is the ethical constraints and professional
obligations and standards of a discipline of knowledge, that may preclude the
possibility of some kinds of research, or restrict access to information or to
possible procedures that might otherwise lead to new information. The last
consideration is especially acute in the human sciences, where much that has
been learned for instance about the human brain and human behavior prior to new
technologies was achieved through "forbidden" experiments or by
natural but unusual occurrences, such as aphasias resulting from battlefield brain injuries.
Anthropological relativity is not just about
limitations of our knowledge--even where and when knowledge appears to be
relatively unlimited or unconstrained, when we have an open and free view of
something, the claim may nevertheless still be made that our knowledge systems
remain fundamentally constrained and restricted in basic, anthropocentric ways
from which we cannot by ourselves escape. It
must be understood that the means to greater power and vision in
knowledge is not through the abandonment or side-stepping of relativistic
considerations, but by the embracing of such issues with the intention of both
better understanding how such constraints influence our knowledge in what ways,
and how we may work to circumvent or side-step such knowledge systems in a
better way of knowing. We turn our weakness into our strength, and we take
advantage of what is relative about our knowledge, in order that we may better
overcome such limitations in the long run. And so far, we have largely been
successful in this regard, and relativism of understanding upon very basic
physical levels is no longer seriously questioned, but becomes the basis for the
development of a complementary approach to theoretical explanation of
fundamental phenomena. Relativity arguments fail to catch on in biological and
especially social-psychological circles of research, because in the epigenetic
complexity of information pattern, imposing such constraints can seem not only
unwieldy, but counterproductive to the task of creating simplifying solutions to
reality. But we abandon the warning signs of relativism on the intellectual road
we travel only at great risk, because it will invariably lead to the
foreshortening of our opportunities to expand the basic horizons of our
knowledge beyond our own ideological constructions.
Relativism in the social sciences especially has been
poorly received because the dragon of relativity has not been fully considered
for its implications, and it is therefore generally misrepresented as a kind of
blind solipsistic determinism that undermines or makes impossible the
objectivity of our constructions and of our knowledge systems.
There are many examples in different fields of forms
of relativism of knowledge, and it is characteristic whenever fields of study
deal with unknowns with great degrees of uncertainty. In such cases, uncertainty
begets little agreement or consensus on one hand, with a plethora of competing
solutions, or else no dissent or disagreement at all, which is even worse.
Those who want to frame their fields "scientifically" to squeeze
whatever kind of left-over legitimacy they can get from such terms, are apt to
remove the entire problem of the relativity of knowledge as a pejorative and a
counter-productive source of noise in their theoretical and methodological
formulations. The tendency and sometimes vocal call to do so resounds time and
again across academic classrooms and down corridors. Most end up attempting to
avoid the entire problem of relativity any way they can, not sure in the end how
to deal with it or what it may mean.
Anthropological relativity identifies in the most
basic sense the status and position of the human being at the center of the
knowledge universe--we can say that knowledge is inherently anthropocentric in
this regard, as we cannot remove ourselves, even if we wanted to, from this
central position as knowers. We can say something like, "I think, therefore
it is." We use science to create a degree of social parallax to our
knowledge, to make it not a unicentric approach, but a multi-centric field with
many interchangeable points of view. Ideology is also socially based, but this
is rooted in conformity and agreement to dogmatic ideas, rather than in the
capacity for human beings to contest and cross-test ideas for their validation.
But the social distribution and sharing of knowledge does not ultimately
overcome what is most basic about anthropological
relativity, and this is the fundamental anthropocentric character of the
human knower at the center of the human knowledge universe. Social parallax
cannot overcome this event horizon, unless perhaps we can increase somehow our
definitions of society to include things like talking chimpanzees and gorillas, intelligent dolphins and even symbolic if silent canines. We have our huge
radio-telescope ears trained carefully to the most distant corners of the galaxy
in the hope that we will hear something intelligent amidst all the static and
interference. So far though, there has been only silence and the sounds of our
own thoughts that fill the night sky.
Human language, cognition and culture become the
basic limiting factors to consider in anthropological relativity. They impinge
upon scientific knowledge in critical ways, circumscribing what and how we know,
and the kinds of conclusions we draw about the world. Even if and when
scientific method seems to be consistently applied across cultural
boundaries, it is still the case that the same methods might lead to different
results, or at least different interpretations of our results, due to unseen
anthropological factors. I would say that if we could clearly isolate the
problem of perception from the problem of cognition, conception and problem
definition, then it might be possible to impose
relatively objective etic standards of measurement and description upon the
phenomena we are testing. This is what science strives to achieve in the
adoption of arbitrary but international standards or units of measurement. It is
clear though that the problem of perception is a thorny one upon the horns of relativity,
and though it is not clear that in a mechanical sense we are all seeing the same
things in physical perception of reality, it is very clear that the images and
patterns we derive from these perceptions may vary considerably due to these
background factors. This kind of issue becomes even more acute when the problem
of description and natural linguistic codification take over, as for example in
establishing taxonomic classes and ordering these in relationships, as there
begins to be a noticeable lack of agreeable physical measures that can be
imposed upon such data. Indeed, in the typologies of hominid skulls down through time, no expense of effort has been spared on making numerous tedious and precise measurements of cranial breadth, etc., and the various polytypic
combinations of these elements have resulted in fairly accurate and clear
descriptions of ranges within which certain types appear to fall. But it is to
be expected in the natural history record, if not in the fossil record, that
continuous variation would be the rule and discontinuous trait boundaries the
exception, especially if it appears that we can assume an overall
anthropological history of allopatric speciation. The trouble with the precise
measurements, especially of type fossils and standards, is that many specimens
fall out of categories as anomalous in-betweens, such that the distances between these variations are less at the margins than they are at the center. The
fossil record, incomplete and often sketchy at best, falls silent in regard to
the probable ranges of variation represented by fossils during any one period or
in any one area, and this is even more the case when it can be considered that
fossil preservation and survival was a very rare exception and not the norm. In
essence we are using very small sample sets, of specimens that at least in one
aspect are to be considered quite exceptional, as representatives of entire
classes that presumably contained very large populations and normal
distributions. This is why each additional fossil found is so vital and
important in our reconstruction and interpretive efforts, as it adds a
proportionately greater amount of knowledge about the range of variation of the
traits we measure.
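The point about sample size and the range of variation can be made concrete with a small simulation. The trait, its units, and its distribution here are invented purely for illustration; the sketch only assumes that the trait varies continuously in the population:

```python
import random

random.seed(1)

# Hypothetical population of a continuously varying fossil trait
# (say, cranial breadth in mm). Small samples of surviving specimens
# systematically understate the true range of variation, and each
# additional specimen can only widen the observed range.
population = [random.gauss(140, 8) for _ in range(100_000)]
true_range = max(population) - min(population)

for n in (3, 10, 100, 1000):
    sample = random.sample(population, n)
    observed = max(sample) - min(sample)
    print(f"n={n:5d}  observed range = {observed:5.1f}  "
          f"(true range = {true_range:5.1f})")
```

The observed range creeps toward the true range as specimens accumulate but never quite reaches it, which is why each new fossil find contributes a disproportionate amount of information about variation.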
This may seem like a problem of the data, which it
is, and not of the language that we use to describe this data, but it can be
clearly demonstrated that when the gaps of knowledge are great in data-sets,
then the language problem becomes even more pronounced as well as the cognitive
and cultural problems involved. Large areas of unknown create a tendency for
greater range of interpretation, definition, conception and ethnocentric
appropriation, almost in inverse proportion to the sufficiency of our sample sizes. Increase the sample size, and the room for interpretive and semantic
parallax will decrease considerably. Simply put, this defines a fundamental
relationship between knowledge as this is linguistically articulated, and the
factual reality such language is designed for. This defines the following kind
of relativistic paradigm:
1. When the data sizes are small and the
uncertainties large, this insufficiency of the empirical record will be
reflected in much greater interpretive variation and parallax. Speculation
looms large under such circumstances, and the capacity to empirically test one's
assumptions remains very small.
2. When facts increase and the voids between begin to
shrink, there tends to emerge a common ground of mutual agreement about which
variation moves to the margins. Any area of knowledge will then exhibit a
greater degree of consensus.
3. Increasing size of samples, an empirical theory of
very large and representative numbers, will tend to factor variation out in the
long run, resulting in what can be called accurate
representation of true natural variation of pattern.
This will foster increasing agreement about the middle range of the
sample, and push uncertainty to the margins of the data.
4. However large the sample becomes, residual
uncertainty will always remain. No data set will be perfectly representative of
reality. At the same time, there can never be complete interpretive agreement
across large data sets about which marginal uncertainty remains.
5. New facts may and will tend to arise always upon
the margins, or in the interstitial spaces between our data sets, that do not
fit our models or interpretations. There will always occur exceptions to any
rule we may formulate, and there may always be one more swan we haven't yet seen
that is not white but black.
6. All data sets are by definition finite within a
larger encompassing natural context. Even if our agreement as to the
representation of a particular data set or kind of data sets is strong and
uncertainty small, it is also the case that when these sets of data, whether as
individual data points or as entire classes or sets, are framed within a larger
set of natural relations, the degree of uncertainty about the unknown will then
increase. In this sense, no science, however well worked out, like chemistry,
can be said to be a finished or complete science.
7. From the previous point, we can conclude that in
knowledge, especially in scientific fields of inquiry that are concerned with
the mysteries of reality, what is known is always encompassed by and forms a
finite subset of what is unknown, and what is unknown remains, as far as we can
conclude, probably open and infinite.
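Points 1 through 4 above can be sketched numerically: imagine many independent observers each estimating the same underlying quantity from their own sample. The spread of their estimates is a crude stand-in for interpretive parallax. The quantity and its distribution are invented for the sketch:

```python
import random
import statistics

random.seed(0)

# Each "observer" estimates the same underlying quantity from an
# independent sample. With small samples the estimates disagree widely
# (large parallax); with large samples they converge, though residual
# disagreement never vanishes entirely.
def observer_spread(sample_size, observers=200):
    estimates = [
        statistics.mean(random.gauss(50, 10) for _ in range(sample_size))
        for _ in range(observers)
    ]
    return statistics.stdev(estimates)

for n in (5, 50, 500):
    print(f"sample size {n:3d}: spread of estimates = {observer_spread(n):.2f}")
```

The spread shrinks roughly as the square root of the sample size but never reaches zero, matching the claim that residual uncertainty always remains however large the sample becomes.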
If we are to scratch the problem of anthropological
relativity of knowledge a little deeper in terms of its structural factors and
aspects, we might see that the language problem is inherent to the linguistic
construction and definition of reality itself. We bring language intimately to
the understanding and experience of reality, and we cannot separate our
linguistically encoded experiences from the organic perceptions upon which they
are based. We experience reality linguistically, and build meaning by the words
of our language, and for human beings it can be no other way. It is a fact of
our evolution that we cannot escape--it is both our biological and our cultural
imperative to see reality through a linguistic lens. The problem is not acute
when there is shared categorical agreement over natural sets and problems mostly
of concrete description, but it becomes acute in those areas where the problem
shifts from being that of concise description to that of abstract explanation.
Even in well received and supported theories, such as the theory of evolution,
the problem of explanation remains an acute versus a moderate problem, for there
are general aspects of evolutionary process and pattern that we do not yet well
understand, and may never come to clearly or concisely comprehend, though many
of the mechanical nuts and bolts are well worked out and substantiated through
experimental research and naturalistic observation.
Across all fields, then, anthropological relativity
of knowledge shows itself in greater and greater proportions when we step up the
empirical ladder of scientific representation from direct description of
experience to indirect explanation of causality and structural dynamics. At the
higher levels, not even physics, which entertains some precisely formulaic and
mathematically derivable theories, can escape the dilemmas of anthropological
relativity as this applies to theoretical and general interpretation.
It has been one of the main arguments of this work,
and previous excursions, that natural systems theory, if nothing else, provides
a standard frame of reference for the generalistic and structural description of
natural event patterns from a theoretical and comprehensive point of view.
Natural systems that occur at any level of observation and phenomenal event
pattern, exhibit structural similarities of pattern that are not just analogous
nor are they necessarily directly historically homologous. Nevertheless, all
such systems do necessarily conform to what can be called a general
"template" of systemic patterning that is both complex and elegantly
simple to understand.
The superimposition of a natural systems framework is
not intended to force an arbitrary or preconceived framework upon all fields of
knowledge and inquiry. It is intended only to provide a common and shared
semantic-linguistic framework for the interpretation and integration of
otherwise disparate areas of knowledge across disciplinary boundaries.
I do not believe that we can overemphasize the
critical importance of anthropological relativity as a phenomenon of human
reality that is intrinsic to our knowledge, its structure and function in the
world. It is not just that such knowledge creates at times an irreconcilable
sense of parallax about how we construe the world, but that it fundamentally
constrains and limits how we see the world and come to know it upon the most
fundamental levels. We have achieved a remarkable degree of counter-objective
parallax in our scientific knowledge systems, but this knowledge and its realism
was not arrived at overnight--it took a very long time to achieve, after
many dead-ends and fits and starts. Furthermore, its progress remains
incomplete. Science is always unfinished business--however high the next
mountain we surmount, there's bound to be yet a taller one hidden beyond the
horizon.
Relativity
and Complementarity
We may say that relativity implies complementarity of
perspective, and complementarity implies relativity of perspective. We may say
furthermore that complementarity is derivative of relativity, and relativity is
in turn based upon complementarity. These statements are metalogical in the
sense that they imply both a relativity of complementarity and a complementarity
of relativity.
Relativity implies that we can have different modes
or different points of view about the same thing, and that, in our uncertainty,
all alternative modes or points of view may be equally (and partially)
uncertain. Relativity implies the notion of complementarity of perspective or
point of view. Complementarity of view is a form of equivalence which states
that different statements, though inherently contradictory, may simultaneously
be true. Such statements can be said to be mutually exclusive in antecedents,
but lead to the same sets of consequents or conclusions in a manner that is
inherently logical and empirically infallible. We would say that such statements
would present a dilemma or a paradox about knowledge. Complementarity can only
be understood from the standpoint that such statements are part of a
larger dialectical metalemma in the sense that they rely upon the inherent
uncertainty of the problem for their reconciliation or mutual inclusion.
The
Scientific Structure of Reality
Stratification and integration are implied in the
concept of the complementarity of physical structure and behavior.
Stratification entails integration, and integration at different levels entails
a stratification of reality between these levels. Integration is marked
primarily by the emergence of definitive properties that characterize a level of
patterning in reality. Stratification is a mark of the separation between levels
based upon the differences of property and
physical characteristics of systems, particularly their relative
spatial-temporal distribution, their relative size scale, their relative density
and their relative informational patterning that is associated with the
complexity of pattern.
The natural world presents to us a very interesting
set of properties. Size stratification determines that at whatever level we wish
to define, there will always be a smaller level that composes the one we are at,
and a larger level of which our own is probably but one small part. Each
level represents in a sense an integration of properties that is only an
appearance of structure. If we can jump to a smaller level, then we will find
that what seems solid becomes hollow and vacuous, and what was once stationary
seems to be a world of motion and turbulence.
In our reality, there appears to be an upper limit to
the size stratification of physical entities, though this may in itself be only
an appearance of reality if we fail to see a larger pattern of order of which we
are but one small part. At each level of integration, different physical
forces appear to hold sway--electromagnetic
forces operate at a molecular level, and strong forces appear to operate upon a
nuclear or atomic level. Gravitational forces appear to predominate upon a
supermolecular level. The level of size scale upon which each force appears to
operate is inversely proportional to the relative intrinsic force of the field.
Gravitation is the weakest of all known forces, and works over the
widest range. The strong force stands at the opposite end of the continuum: it
is the strongest of known forces, but works over a very small range. Each of these forces appears to carry a kind of field that
defines a force of attraction.
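The inverse relation between intrinsic strength and effective range described above can be sketched with rough, order-of-magnitude values. The relative couplings and ranges below are approximate textbook figures supplied for illustration; they are not part of the original text.

```python
# Approximate relative coupling strengths (strong force = 1.0) and
# effective ranges in metres; values are order-of-magnitude only.
forces = {
    "strong":          {"strength": 1.0,   "range_m": 1e-15},
    "electromagnetic": {"strength": 7e-3,  "range_m": float("inf")},
    "weak":            {"strength": 1e-6,  "range_m": 1e-18},
    "gravity":         {"strength": 6e-39, "range_m": float("inf")},
}

# Gravity is by far the weakest yet acts over unlimited range, while the
# strong force is the strongest but is confined to nuclear scales.
weakest = min(forces, key=lambda f: forces[f]["strength"])
strongest = max(forces, key=lambda f: forces[f]["strength"])
print(weakest, strongest)  # gravity strong
```

Note that electromagnetism, like gravity, has unlimited range, so the inverse strength-range relation holds only as a rough tendency rather than a strict law.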
It may be that there are even stronger and weaker
forces that operate unknown and unseen on levels and in ways we do not yet
comprehend. Gravitation may be in fact a kind of composite field of a set of
forces of diminishing strength and broadening range. Forces associated with a
predicted Higgs boson may prove to be the strongest forces we have yet
encountered.
It is possible, given how prone to speculation Natural Systems science
seems to be, to conjecture that the universe in
fact organizes itself on scales both smaller and larger than we are aware of.
The exponential jumps in scale from one level to another may entail that the
larger scale of which we and our molecules are a part is vast indeed. Black
holes compress matter to such densities that even the strong force is overcome
and matter as we know it disintegrates; we can wonder whether this is not a
kind of formation of structure in the universe upon a new level of
stratification that we do not yet fully comprehend. Similarly, though a quark
structure for nucleons seems self-consistent, is it not possible that these
structures are themselves composed of even more infinitesimal
"entities"?
In this enlargement or reduction of size scale we can
only conjecturally and hypothetically determine if there is some fundamental
lower limit or some universal upper limit to this process of stratification. It
is entirely possible that we may never be able to prove this one way or another,
regardless of our scientific advances. There is a sense that not only do things
become too small or too large to "see," whether with the naked eye or
aided by some optical instrument, but at a certain scale, both large and small,
light itself becomes no longer effective in revealing the mysteries of the
unknown. In other words, we are inherently limited by fundamental properties of
light, beyond whose limits our ability to observe the universe fails.
It may be possible to develop new means of observation, for instance, indirect
observation through the presupposition of predictive cause
and effect, or, alternatively, by an alternative energy form such as
gravitational energy.
I propose a theory of a non-zero and open-ended state
universe. This theory predicts that reality is always:
1.
Reality is constitutive (constituent, or componential): at whatever level we
specify, there is always a smaller and a larger level to take into account.
2.
Reality is discretely stratified at each level,
and each level of this stratification is largely self-consistent. In other
words, properties pertinent to one level of stratification do not necessarily
apply to any other level of stratification, and each
level of stratification is more or less independently integrated.
3.
To reconcile the apparent contradiction between 1 (constituency) and 2
(self-consistency), we must hypothesize that reality is at every level
complementary to the system of reality as a whole. In other words, each level is
distinct to itself, but also is
part of a larger system of organization, both smaller and larger, that, as a
universal entity, has its own integrity and self-consistency as a system, even
though it is infinite.
The interesting aspect of reality appears to me to be
the concept of functional differentiation and integration that occurs at each
level, such that there are synergistic properties that emerge at that level
which are not apparent upon any other level of the larger system. This aspect of
the stratification of reality appears to me to be fundamental to the structural
description of this reality. The challenge of science is to describe how levels
occur and are organized, and how one level can come to be constitutive of
another. At each level of integration and stratification of physical reality,
there appears to be a unique and holistic set of properties and traits that are
distinctive to that level and that level alone, and yet, at the same time, each
stratified layer appears to participate in and be part of a yet larger level.
Is it possible, for instance, that the discrete
nature of an individual hydrogen atom or even nuclei might be lost to some
extent when it becomes part of the solar plasma of the sun, such that a stellar
system exhibits certain properties of mass and energy that cannot be clearly
measured by the mere summation of its component entities? It is apparent that
gravitational energy may be relative to the density of the system that is at its
source, and it is apparent that other aspects of energy may also scale
non-linearly with the size, density and nature of the
system. We see these emergent properties of physical reality readily in
distinctions and phase transitions between gases, liquids and solids.
In this, we can divide the natural stratification of
reality into the physical, the biological and the human (or anthropological).
The pattern apparent at each level
is unique to that level, though this stratification is hierarchical. Biological
systems are made up of physical components,
but exhibit life-patterns unique to such systems. Similarly, human systems also
are made up of biological components, but these systems exhibit patterns of
communication, culture and symbolic cognition that are not apparent in other
biological systems. The cultural patterns apparent in human systems are not
available to other biological systems--only chimpanzee groups to date have
demonstrated rudimentary patterns of cultural acquisition, differentiation and
transmission.
At each of the significant levels, the physical,
biological and human, we can further sub-stratify into discrete sublevels. We
can identify with physical systems what can be called the sub-atomic, the atomic
and the molecular, or inter-atomic. At the subatomic level we may in fact find
a number of lower levels of stratification, though only a host of subatomic
particles is available, directly or indirectly, for observation. These levels
compound to form mass objects of visible size scale, and of a very large and
grand scale of distribution. With biological systems, we can distinguish between
the microscopic (organismic), the metascopic (organismic systems), and the
macroscopic levels (super-organic system), with distinct patterns occurring at
each of these levels. With human systems, we can distinguish between the
individual, the small group, and the larger social system. At each level, we can
also refer to the "total physical metasystem" and the "total
biological metasystem" and the
"total human metasystem" as these encompass all subsystems together.
We may outline in a formal manner the main structural
stratification of physical reality in terms of the informational patterning
distinctive to each sublevel:
Natural Levels                    | Natural Properties
I. Physical Level                 | Physical Metasystems
  1. Subatomic level              | Fundamental entities/forces, complementarity
  2. Atomic/Nuclear level         | Nuclear Structures, Elements, Isotopes, Electron Shells
  3. Molecular level              | Intermolecular Bonding Forces, Phase structures
II. Biological Level              | Biological Metasystems
  4. Micro-organismic level       | Cellular structures & functions
  5. Metaorganismic Systems level | Multi-cellular Organisms in Social Context
  6. Macro-organic level          | Interspecific Community Ecosystems
III. Human-type Level             | Anthropological Metasystems
  7. Individual level             | Symbolic Cognition & Behavior
  8. Intermediate group level     | Group Culture & Language
  9. Species Level                | Emergence of Historical-Civilizational Patterns
We may speculate that in the model above there occur,
at each primary sublevel of each main level (1, 4, 7), what can be
referred to as multiple intermediate levels, defined by hybrid
precursors between one level and the next higher level. In
sublevel one for instance, we can
say that the subatomic level may be
further subdivided into any number of more fundamental levels of patterning that
we are unable to observe. In sublevel four, at the micro-organismic level, we
must understand that cellular growth and reproduction depends upon the
availability in the environment of basic geo-chemical nutrients, both macro and
micro nutrients necessary for cellular metabolism, as well as a vast array of
complex organic molecules that are either manufactured by the cell or obtained
from other cells. We can see at level seven that there are many relatively
large- to intermediate-brained mammals and other creatures that deserve study as far as
their mental functioning is concerned--rats and dogs have been observed to dream
in ways similar to human beings, etc.
Another way of stating this is to observe the overlap
between levels, and to observe how discrete entities can be isolated at each
level that are not necessarily a part of a larger level of stratification. Each
next level contains by definition all levels below it, though the set contained
within each higher level is only a subset of the total at each lower level.
Each next level is marked by increasing multi-level complexity of pattern.
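The containment relation just described, in which the members of each higher level form a subset of the entities at the level below, can be written out with plain sets. The categories and members are illustrative examples, not drawn from the text.

```python
# Every biological entity is also a physical entity, and every human
# entity is also biological: each next level is a subset of the last.
physical = {"rock", "star", "bacterium", "oak", "chimpanzee", "human"}
biological = {"bacterium", "oak", "chimpanzee", "human"}
human = {"human"}

# Strict nesting of levels: human < biological < physical.
assert human < biological < physical
print(len(physical), len(biological), len(human))
```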
In spite of speculation about possible infinitudes of
reality, it is apparent that from an evidentiary standpoint, it is difficult to
prove or talk about those things we cannot see or demonstrate through
experimentation to exist. Therefore, based upon evidence available to us at this
time, these levels described above define the intensive limits of informational
stratification of natural systems in our reality. As mentioned above and
emphasized below, this framework can very well change overnight with new
discoveries and new theories about how reality is integrated.
Metasystems theory provides the conceptual and
operational basis for the symbolic unification of human knowledge and
information systems at multiple levels. The basis of this comprehensive
unification is both scientific and ideological in a technical and formal sense.
It provides a common paradigmatic framework for the interdisciplinary
unification of different fields of knowledge within a common theoretical
framework of understanding.
In general, a system may be understood as any set of
interacting components that make up a whole functional entity within a
background context that relates this system directly or indirectly to other
systems at multiple levels. A system thus has a holistic design and patterning
of state path behavior or translational structure that is more than the sum of
the individual components that compose such a system. A system is always part of
a larger framework or context within which that system normally occurs.
All natural systems are by definition open systems.
The representation of a natural system as a closed circle is a
misrepresentation, though such representations of functionally synchronic
systems are common in the literature. A correct abstract representation of such
a natural system is as a non-linear control system, that can spiral outwardly in
growth or inwardly in loss, or that can fluctuate in some random or periodic
manner. A straight-line representation of such a system through time would also
be a misrepresentation, as such a system would essentially be a static or
non-dynamic system. Even an alleged line of equilibrium for a system would
essentially be non-linear in form,
as all natural equilibrium is in essence dynamic equilibrium.
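The point that natural equilibrium is dynamic rather than a straight line can be illustrated with the logistic map, a standard minimal non-linear system. The example is a generic sketch chosen for illustration; it does not appear in the text.

```python
# Logistic map: x' = r * x * (1 - x). Depending on r, the trajectory
# settles toward a fixed point, oscillates periodically, or fluctuates
# chaotically -- never a straight line through time.
def trajectory(r, x0=0.2, steps=200):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

settled = trajectory(r=2.8)[-1]    # converges near (r - 1) / r
chaotic = trajectory(r=3.9)[-20:]  # keeps fluctuating within bounds
print(round(settled, 3))
```

Even the "settled" case is a dynamic equilibrium: the state is re-produced at every step by the system's own rule, not held fixed from outside.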
There is no real or naturally occurring system that
is totally or perfectly isolatable as a system.
All systems are subsystems of larger systems, and all
systems are themselves supersystems containing smaller subsystems.
Systems are by definition complex composite entities
the function of which is more than the operation of the individual components
that make up the system. Such systems therefore exhibit what can be referred to
as holistic, synergistic or emergent properties that are unique to that system.
Thus, to understand any system in nature, we must seek to analytically discover
how the component parts of that system interact in ways to induce such holistic
patterning, and how the state-path trajectory of the system is influenced by the
dynamic interaction of its components.
We seek to describe all systems as maintaining some
sense of equilibrium over the period of their life-spans. This equilibrium
determines the state-path trajectory, which may take a number of alternative
pathways that describe a paradigm of alternative states for such a system. We
also seek to understand how this equilibrium is established, maintained and
altered by the interactions of the components of the system and by the system as
a whole upon the individual components.
In our descriptions and explanations for the behavior
of natural systems, we must realize and deal with a basic metalemma that
whatever system we circumscribe, this will be part of a yet larger system that
will have an influence upon this system in different ways. All systems are yet
part of even larger systems, and are composed of yet smaller
subsystems, and there is no avoiding the complexity that this natural
situation presents to us.
The only closed systems are abstract systems that are
definable conceptually and mathematically. Even alternative, human-made systems
are in their realization inherently open systems in some manner. If we build an
automobile engine, with all its component subsystems, we still need to input
fuel and air and output exhaust and heat. These inputs and outputs define such a
system as fundamentally open to the world. Thus we may say that all alternative
systems, as real systems, are open as well, even if they are artificial and
non-natural systems.
All open systems are by definition subject
mechanically to the laws of thermodynamics. For such a system to maintain its
structural integrity as a system, it must therefore perform some complex set of
functions that are definable as work. Work can be said to be the utilization of
energy to maintain the functional organization and state-path behavior of a
system.
All systems achieve functional integration by
maintaining a functional boundary between themselves and their
surroundings. Such a boundary defines the system as semi-closed and partially
determined as a system. In general, the boundary of a system can be said to be a
set of implicit limits of tolerance in the relational organization and interaction
of the components of a system. These limits are essentially periodic,
interharmonically oscillating mechanisms that control the behavior of the
components of the system across a tolerable range of conditions external to such a system. These
relations are furthermore determinable in a functional manner as rules that
govern the system.
Natural systems encompass all real systems, but it
is not clear that abstract systems are completely subsumable as a subset or a
special class of natural systems. Without a biotic basis of natural
intelligence, we could not conceive of abstract systems, thus they would not
exist. At the same time, though, it is possible to argue that such
systems would in theory exist whether or not we could conceptualize them.
A perfect triangle would continue to remain so regardless of whether any human
beings were alive to conceive of such an ideal form or not.
At this time, given our state of knowledge, we cannot
clearly answer this kind of question one way or another.
All natural systems are also, by definition,
self-organizing systems. They arise stochastically as the result of isotropic
trends and patterns affecting relationships between entities. The
self-organizing character of natural systems has not been given enough
consideration, although it has spawned chaos theory and the theory of non-linear
systems. Self-organizing systems are by definition of their openness
semi-determined or only partially
determinable systems. In other words, all natural systems are functionally
underdetermined systems. They maintain boundary conditions, but these conditions
are never static, continuous or
total in their control.
By contrast we may distinguish real alternative
systems as systems that are partially determined but that are
essentially non-self organizing in nature. They have been designed and
their relations predetermined by the nature of their design and fabrication.
Another distinction to make of such systems is
between biotic and abiotic natural systems, or what I refer to as the
distinction between physical and biological systems. The vast majority of
natural systems appear to be physical systems; indeed, all systems are
physical on a basic level. Biological systems as we know them
represent only a very tiny subset of the total number of physical systems that
occur. The critical distinction between a biological system and a physical
system can be said to be the self-organizing process of a biological system that
perpetuates its own design through generational reproduction and that is subject
to some form of evolutionary development. In other words, biological systems are
self-replicating systems, whereas we can say that though physical systems are
self-organizing, they are not in general self-replicating. Physical systems are
produced as an outcome of a combination of forces and elements that leads to a
significant reaction and a product.
Biological systems harness such combinations and
replicate such reactions and products in a controlled and continuous manner. A
distinctive subset of biological systems is what can be
referred to as natural cybernetic systems. These are what can be called,
loosely and generally, "intelligent" systems.
Human systems are the epitome of this general class of systems, and therefore
deserve special attention and study as such, though it is increasingly apparent
that they are not the only possible or existing intelligent system in the
universe. Intelligent systems can be said to give rise to a new level of
functional organization of systems, and to the creation of new, real systems
that are artificial and non-natural. This definition of an intelligent system
departs from the classical definition of intelligence as
"problem-solving," but it is more consonant with a systems-theoretic
approach. The conception of what is a "problem," and therefore what is
a solution, depends upon the ability to recognize and conceive of the problem in
the first place, as well as the ability to solve the problem in some logical
manner. Behind this capacity is the ability of a system to recognize and
organize "experience" or information in some coherent or consistent
manner. In other words intelligence allows for the creation and design of new
systems beyond its own design template. It permits a form of state-path behavior
that can be described as "problem solving" on the basis of the design
and development of alternative systems.
Relative classification of systems includes the
identification of the natural hierarchy of determinations that serve to define
and limit the behavior of systems. We can distinguish for a system at any one
level a context or set of surroundings that embraces a supersystem framework, a
set of alternative systems that function at a comparable level as the system in
question, and a set of subsystems that compose the system. The alternative
systems will define a range of possible forms that a system may take, the total
range of which will define the system in a classificatory framework in relation
to other systems. It can be seen in the relative classification of systems that
any system is at once a subsystem of some other system, and at the same time a
supersystem to a set of subsystems that compose it.
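The relative classification just described, in which every system is at once a subsystem of its context and a supersystem to its components, maps naturally onto a tree structure. The sketch and its names (ecosystem, organism, cell) are hypothetical illustrations, not the author's formalism.

```python
class System:
    """A system that is at once a subsystem of its supersystem
    and a supersystem to the subsystems that compose it."""
    def __init__(self, name, supersystem=None):
        self.name = name
        self.supersystem = supersystem
        self.subsystems = []
        if supersystem is not None:
            supersystem.subsystems.append(self)

    def alternatives(self):
        # Systems at a comparable level: siblings under the same context.
        if self.supersystem is None:
            return []
        return [s for s in self.supersystem.subsystems if s is not self]

ecosystem = System("ecosystem")
organism = System("organism", supersystem=ecosystem)
rival = System("rival organism", supersystem=ecosystem)
cell = System("cell", supersystem=organism)

# "organism" is a subsystem of "ecosystem" and a supersystem of "cell".
print(organism.supersystem.name, [s.name for s in organism.subsystems])
```

The `alternatives` method captures the set of comparable systems that, in the text's terms, defines the range of possible forms a system may take at its level.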
Systems may also be classified on a non-relative or
absolute scale, and the basis for this absolute scale I believe to be that of
the finite size a system comprehends on a scale of size measurement. Size can be
measured in different ways, upon different scales, but the delineation of size
for a system defines that system in
a total framework of dimensions that ranges between the infinitely small to the
infinitely large. Size can be given a discrete and discontinuous value, and
hence can serve to locate a system on a total scale of measurement compared to
other systems of different sizes.
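An absolute, size-based classification of the sort proposed here can be sketched as a zero-based logarithmic scale. The characteristic sizes below are rough order-of-magnitude figures in metres, added for illustration.

```python
import math

# Characteristic sizes in metres (order-of-magnitude estimates only).
sizes_m = {
    "proton": 1e-15,
    "atom": 1e-10,
    "cell": 1e-5,
    "human": 1.0,
    "earth": 1e7,
    "galaxy": 1e21,
}

# Every real system has some size strictly greater than zero;
# log10 locates each on one total scale of measurement.
scale = {name: math.log10(s) for name, s in sizes_m.items()}
ordered = sorted(scale, key=scale.get)
print(ordered)
```

The scale is zero-based in the text's sense: zero size is the abstract reference point that no real system ever attains, so every entry has a finite, non-zero value.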
The paradox of the size-scale of natural and real systems is that the scale
itself may be defined mathematically as infinite--both infinitely small and
infinitely large in size. We set such a scale to a standard abstract value of
zero, or of no size, and we define any object,
however infinitesimal, as of some value greater than zero. We can say that a
non-relative system of measurement must by definition be zeroed, or
zero-based, even if there is no system that can be said to be of zero size. This
notion of zero in the universe has important implications for our
understanding of the structure of physical reality. If we hypothesize that
physical systems may only approach zero in some manner, as for instance kinetic
energies of such systems that always approach Absolute Zero, but never reach
zero, then we can hypothesize that such systems are constrained by a
zero-determinant in some fundamental way that is essentially non-linear.
Advanced
Alternative Systems
Advanced alternative systems will increasingly depend
upon the power of information technology and processing to achieve sophisticated
integration and complex articulation with the environment. Information
processing systems have made tremendous advances in the last couple of decades,
and remain at the forefront of the applied sciences. Artificial intelligence is
the name we give for this rapidly
developing and multifaceted domain of information sciences.
The conventional criterion for the evaluation of
artificial intelligence has been the Turing test, together with Searle's Chinese
Room critique of it--implicit in this criterion has been the model of human
intelligent functioning as the goal of artificial
intelligence development. This kind of standard is inherently difficult
to apply in an objective manner, and, because it embraces the inherent issues of
anthropological relativity, it does not transcend the basic dilemmas inherent to
human knowledge and intelligence in the world.
Furthermore, it is quite apparent that machine
intelligence has as well certain critical non-human constraints that are inherent
to its design and functioning in human-made machines. These constraints are the
following:
1. All machine intelligence exists, or functions, in a closed world. This
world is one that is built, managed and operated by human beings. Intelligent
pattern that is the result of machine intelligence is a product of meaningful
design, and may be employed in the
production of meaningful design, but it does not by itself produce meaningful
design.
2. All machine intelligence exists, or functions, in a manner that
processes information in a linear manner. It processes strings of information,
in series that occur in sequential order. Even parallel processing architectures
are essentially the cofunctioning of multiple strings.
3. All machine intelligence exists, or functions, in a manner in which
there is no duality of patterning--the signal string contains the information,
and the information conveyed by the string is a part of the string itself. In
other words, machine intelligence exhibits no duality of patterning in its
signal pattern.
4. All machine intelligence exists, or functions, in a dead, or
non-living state. It cannot be attributed the essential synergistic features of
living biological organisms, or of what is referred to as "life." A
dead state is one that cannot change itself except entropically. Thus,
intelligent machines perform a certain or general kind of work, involving energy
transfers and heat as a by-product, that results in the manipulation and
production of meaningful pattern. Again, meaningful pattern is merely
a by-product of this work.
5. All machine intelligence exists, or functions, in a manner that can be
said to lack awareness, either of the self or of the sense of surroundings.
These constraints all occur
at the same time, and are interrelated to one another in the design of
machine intelligence. These kinds of constraints are inherently
non-anthropomorphic, as there is no implicit comparison or contrast to human
intelligence in their determination.
Technical reductionists would argue that human
intelligence can be analytically reduced to the brain wave functioning of
neurons that have an electro-chemical basis. This would not be an incorrect
analysis to make. In other words, by this reductionist model our own
intelligence is machine-like just as much as any computer's,
and therefore ought to be subject to the same kinds of
design constraints as are intelligent machines. Indeed, too, human intelligence
is not unconstrained by basic design features and limitations. A brain too large
for instance, or overactive, might face a fundamental problem of heat
dissipation.
But, also in a technical way, each of these points
can be used to contrast human intelligence with machine intelligence. Human
intelligence does not exist in a closed world. It functions in an inherently
non-linear manner. It has duality of pattern in its signal processing
characteristics. It is a living machine, and it can be said to have an advanced
form of awareness of both the self and the world in which the self is situated.
It follows that if these are the basic kinds of
constraints that predetermine the possibilities of design for intelligent
machines, then the design of more intelligent machines will proceed from
understanding and as much as possible circumventing or nullifying these kinds of
constraints. We measure the quotient of machine intelligence in terms of the
degree of sophistication achieved in its functioning and existence along each of
these five sets of points.
We can go further: if we wish to adopt a more
anthropomorphic model of machine intelligence, then there are further criteria
that we might wish to apply, as human intelligence exhibits several other features
of design that appear for the most part unique to our species:
1. We are capable of the symbolization of experience, which is the
symbolic definition of experience. Indeed, symbolization is such an inherent
aspect of our intelligent design, that we cannot not symbolize experience except
in the most rudimentary and impulsive of ways.
2. We are capable of generalizing knowledge from one area or domain to
another, and thus devising means of applying this knowledge to alternative
domains from which it was not directly derived.
3. We are capable of creative concatenation of experience and knowledge,
to derive new patterns that have no precedent.
4. We are capable of the linguistic transmission of information that
conveys such experience from one person to another. Hence, we are capable of
learning new experience based upon the experiences of other people.
These secondary criteria of an anthropomorphized
machine intelligence appear to be most useful to the extent that they involve a
human interface in a manner that permits the adaptation and mediation of human
communication and activities upon multiple levels. I therefore consider these to
be extrinsic criteria versus the intrinsic criteria of the design constraints
listed above.
The dilemma of designing and developing more
intelligent machines then is the challenge of trying to overcome fundamental,
intrinsic and extrinsic constraints of design, that ultimately cannot be
overcome in any known manner or by any known means. What is really accomplished
in any simple mode is merely an ELIZA-style parlour trick. Only by means of
supercomplex programming and data-base structures might these constraints be
approached in any meaningful manner. The challenge is that we do not have a firm
idea in any detail of what kinds of designs these may entail, or that may lead
us finally beyond the boundaries that conventional machine-like intelligence set
for us. One of the best examples of a limited application is in chess and other
game-playing machines, which have increased in sophistication to
approach the game-playing capacity of the masters, and even to exceed it
in exceptional circumstances. This is a set-piece type of problem, with
finite search-solution spaces. The kinds and number of possible moves to be made
at each turn are finite and fully determinable, though the number of alternative
pathways that can thread through the entire system approaches an astronomical
number. This kind of machine-intelligence solution to a limited and
deterministic problem set was not achieved easily, but only by
a long period of development and application that led to refinement and
sophisticated streamlining of the protocol. To apply a similar kind of complex
solution to every deterministic kind of problem set that we can encounter, in
whatever area or field of applied knowledge we wish to consider, exceeds by many
degrees our greatest supercomputer capacities. This is much more the case if we
take into consideration an even broader range of problem sets that do not have
deterministic-type solutions, but remain relatively underdetermined in
character.
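The "astronomical number" of pathways in chess can be made concrete with the standard back-of-envelope estimate of game-tree complexity (roughly 35 legal moves per position over a game of about 80 plies, Shannon's classic figure); the numbers are textbook approximations added here for illustration.

```python
import math

# Rough game-tree estimate for chess (after Shannon): about 35 legal
# moves per position, over a typical game of ~80 plies (half-moves).
branching, plies = 35, 80
game_tree = branching ** plies  # number of distinct complete pathways

# Finite and fully determinable at each turn, yet astronomical overall.
magnitude = int(math.log10(game_tree))
print(f"~10^{magnitude} possible games")
```

The contrast the text draws is exactly this one: each local move set is small and enumerable, while the space of complete pathways vastly exceeds any supercomputer's capacity for exhaustive search.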
It seems in this regard that machine intelligence in
conventional problem solving is most successful if focused upon narrowly
definable goals, and if it proceeds gradually in time from the ground up. The
only top-down approach that we can take at this stage is to define a
machine-based system of information processing and problem solving that extends
the capabilities beyond component machines to incorporate a vast network of
machines that interdigitate and articulate with one another in a organic manner.
In the construction of such a model, a great deal of unknown problem-solving
needs to be subsumed within a critical-path flow-chart that allows an
object-oriented and functional partitioning of the general system into a minimal
number of component subsystems.
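The critical-path partitioning described above can be sketched as a dependency graph over component subsystems, ordered so that each is tackled only after the subsystems it depends upon; the subsystem names here are hypothetical illustrations, not taken from the text:

```python
from graphlib import TopologicalSorter

# A hypothetical partition of a distributed system into a minimal set of
# component subsystems, each mapped to the subsystems it depends upon.
dependencies = {
    "communication": set(),
    "storage": {"communication"},
    "processing": {"communication", "storage"},
    "coordination": {"processing"},
}

# A topological order gives one workable critical path through the design.
order = list(TopologicalSorter(dependencies).static_order())
print(order)  # e.g. ['communication', 'storage', 'processing', 'coordination']
```

The point of the sketch is only that object-oriented, functional partitioning makes the unknown problem-solving tractable one subsystem at a time, while the dependency structure preserves their interdependence.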
Each system and subsystem must be tackled both separately and interdependently.
Each presents its own complex problem set that can only be solved partially and incompletely. Within a larger system,
there will occur deterministic components that define the operational efficiency
and intelligent capacity of the system as a whole, though such key components
may not be easily or readily identifiable as such.
This type of system puts a premium upon the
communicative capacity between machines and operating systems. The information bottleneck based upon the speed at which processors can perform operations is matched by a communication bottleneck that permits different machines to transmit and receive processed or raw information only
at certain speeds or rates. Generally, in our current state of the art, machines have to be physically connected through transmission lines, and this has posed severe restrictions upon the ability to communicate. The alternative has been a kind of amplitude and frequency modulation of electromagnetic signals. Communicative capacity between machines is as much a challenge of devising a language of mutual intelligibility, one permitting the transmission of signals synonymous with the kinds of signals occurring within the operating systems of computers themselves. In other words, the encoding of communiques
between devices should be in the same programming language as the computer
normally operates in anyway. There should be little requirement for translation
interfaces or mediation to be interposed between different systems.
The challenge of constructing a distributed
information processing system is in solving the communication needs at various
levels and in various areas simultaneously. Communication distribution can be
seen as a kind of hypergrid, distributed multidimensionally, each dimensional
unit having its own channel capacity for communication, separate or at least separable from the streams of other dimensional units.
Just as computer processing streams are linear, so
also do communication streams tend to be linear. Making communication streams multi-linear is one way of attacking the problem, as is broadening the
transmission breadth of the communication signal. A combined stream that mixes
multiple signal carriers within the same grid unit, to be filtered separately by
each receiving grid, is an alternative solution to this kind of problem. Such a filter need be nothing more than an embedded sequence of key identifiers that recognizes, for instance, every nth point of reiteration. Within hardwired systems, this problem is readily solved by merely multiplying the number of separate lines interconnecting the various components of the system.
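The mixed-carrier stream and every-nth-point filter described above can be sketched as simple time-division multiplexing, in which each receiver keeps only the samples at its own offset; this is a toy model under the stated assumptions, not a description of any real protocol:

```python
def multiplex(streams):
    """Interleave several signal streams into one combined stream."""
    combined = []
    for samples in zip(*streams):
        combined.extend(samples)
    return combined

def filter_nth(combined, n, offset):
    """A receiving grid's filter: keep every nth sample from its offset."""
    return combined[offset::n]

a = [1, 2, 3]
b = [10, 20, 30]
mixed = multiplex([a, b])       # [1, 10, 2, 20, 3, 30]
print(filter_nth(mixed, 2, 0))  # [1, 2, 3]   -- receiver A recovers its stream
print(filter_nth(mixed, 2, 1))  # [10, 20, 30] -- receiver B recovers its stream
```

Each grid unit thus shares one channel while remaining separable, at the cost of each carrier receiving only a fraction of the channel's capacity.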
The challenge of intelligent communication is
therefore the challenge of constructing complex systems of non-wired
transmission based upon some range or set of ranges of electro-magnetic
radiation, either focused as in laser systems, or broadcast.
A distributed system can be said to be a remotely
connected supercluster of multiple processing
systems interconnected by communication lines based upon broadcast transmission
of signals of various forms. Clusters and subclusters of such a distributed system are hard-wired, integrated multiple-processing systems within the larger supercluster grid, presumably performing generalized, specialized, or hybrid sets of functions in coordination with other operating clusters. Thus, an internet system such as the
world wide web, that connects mostly through telephone lines, is largely as yet
a kind of cluster network that is not a truly distributed system. On the other
hand, infrared based transmissions connecting office equipment with computers
may be considered to be a distributed system. The scale of the system is not so
important, I believe, as is the structural design of the system we are dealing
with at whatever level. One of the means for a distributed system to achieve a
degree of partial openness is through the development of an effective form of
broadcast transmission between units. It can be demonstrated anthropologically
that human systems and human intelligence could not have arisen outside of the
framework of open linguistic communication.
Wireless systems have developed in relation to
satellite communication, and these have grown increasingly sophisticated and
powerful, as well as with decreasing degrees of noise and static, though they
are far from meeting the standards that would be required of a genuinely distributed system.
It follows that strategies of heuristic design are of
paramount importance in the consideration of top-down distributed systems in
which the theoretic components
exist in complementary manner to the achieved technology. In other words, even if present state-of-the-art technology is primitive and crude relative to the challenges and goals of any given problem set, it is in the meeting of ground-up practical solutions with top-down design configurations that progress will be defined.
It is something of a paradox that devising distributed, wireless-based systems on the criterion of relative openness may also depend upon solving several other sets of primary constraints in computing. Duality of patterning of a limited form is achievable in distributed systems if these systems can interconnect via a common input-output interface, and if this interface also includes feedforward or feedback loops providing effective environmental monitoring on one hand, and effective motor articulation with the environment on the other. I am not referring to the conventional anthropoid robot that walks and talks independently of some human controller. Rather, I am referring to robotized systems that function independently to achieve a limited range of functional tasks in relation to their environment--such machines can take any form and perform practically any task. The desire to give these machines human form is as much a reflection of our own anthropocentrism regarding intelligence as anything.
Achievement of a standard of duality of signal
patterning can arise when a common communicative interface can be utilized in
alternative contexts to achieve a range of different functional applications by
independent and remotely connected machines. It entails the creation of a
generalizing symbolic language in intermachine communication that can be adapted
to fit a wide and open range of possible applications. This achieves a kind of
limited duality that is based upon practical application of general terms to
varying contexts. This trend is normally opposite from what is expected with duality of patterning, especially if we adopt a strong psycho-linguistic model of language structure and patterning, though I believe it more accurately replicates the actual parameters of communicative design in human language. It emphasizes the social aspects of
language function as a communicative system around which cultural and
psychological meanings can be built. In this alternative viewpoint, it is the
intermediative function of language as a communicative system that is emphasized
over the subjective meaning building aspects of any particular language system.
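A generalizing symbolic language of the kind sketched above--a small shared vocabulary of general terms, each interpreted against a machine's own local functions--might look like the following; every name and response here is a hypothetical illustration, not a real protocol:

```python
# A shared, general vocabulary of verbs; each machine binds them to its
# own local behavior, so one message format fits many contexts.
GENERAL_VERBS = {"report", "adjust", "halt"}

class PrinterNode:
    def handle(self, verb):
        assert verb in GENERAL_VERBS
        if verb == "report":
            return "pages_remaining=42"
        return None  # other verbs omitted in this sketch

class SensorNode:
    def handle(self, verb):
        assert verb in GENERAL_VERBS
        if verb == "report":
            return "temperature=21.5"
        return None  # other verbs omitted in this sketch

# The same general message yields context-appropriate responses.
for node in (PrinterNode(), SensorNode()):
    print(node.handle("report"))
```

This is the limited duality in miniature: the general term "report" carries no fixed content of its own, but acquires a functional meaning in each receiving context.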
The challenge therefore of building a distributed
network supercluster of machines that can perform a wide range of
information-based functions in limited dimensions, is two-fold. It is a challenge of constructing an effective system of wireless communication that will permit the long-distance transmission both of large quantities of information at very fast rates, and of a broad range of different kinds of information transmitted simultaneously or in tandem. It is also the challenge of constructing hard-wired systems as clusters and sub-cluster networks that fit within this
multi-dimensional grid structure and that are capable of performing a wide range
of alternative information-processing functions simultaneously.
A third challenge arises with the issue of control
and coordination structures, in both hard and soft information architectures,
that will be heuristically effective in incorporating the entire grid structure
in a systematic and synergistic manner. I see such control and coordination as
being decentralized and itself distributed at various levels in such a system.
Control and coordination remains ultimately a human endeavor, except to the
extent that a sense of relative autonomy of function and design can be designed
into the architectures of such systems themselves. Self-replication of
structure, learning and modification of architectures to fit alternative
frameworks would be standards to achieve in
such control structures. Machine systems that are capable of running and
managing themselves, with the fewest possible human inputs, and are even capable
of building and repairing themselves, seem to be distant science fiction goals
of intelligent design.
There is a sense in this issue, when viewed from the
top-down, of a central strategic problem, a general or even universal problem
set, that once articulated and fully defined, will lead by deduction and logical
inference to the solution of a great many different kinds of problem sets. I do not believe there exists as yet any universal programming language that is capable of encompassing all possible logical chaining structures that are typical of intelligent systems. Machines capable of handling such languages
would also have to be designed and built, and I do not believe this has yet been
accomplished either.
The problem and challenge of constructing an
intelligent distributed supercluster involves
an entire range of problem sets at multiple levels, each of which must be
addressed separately, as well as in relation to the entire structure. We do not
know yet what the best or most streamlined design or set of designs would be for
the construction of such a system. It is apparent that no single kind of
programming system, whether neural networks, or object oriented programming, or
Lisp or Prolog programming, will completely address every dimension and aspect
of the entire problem. It entails putting together the common and conventional
approaches in Artificial Intelligence research, in the various applied and
theoretical areas, into a common problem set that defines a single advanced
distributed system. Thus the challenge of visual pattern recognition and vision is as much a part of the general problem of building such a system as would be the problem of voice recognition, or of symbolic dependency, or learning, or decision making, or robotic manipulation or locomotion.
There occurs a higher-level criterion for these kinds of systems. This has to do with the achievement of a degree of generalization of
worldview and of self awareness, and what can be called the emergent pattern of
mental functioning from mechanical signal transmissions. Grossly, and in an
unqualified way, we can refer to this as "consciousness" and we can
say that a computer system, however sophisticated in design, lacks intrinsic
consciousness. We can attribute a
sense of consciousness to mice and rats, as well as to humans and dolphins. We
might even attribute some kind of limited consciousness to insects and other
non-mammalian animal forms. But we do not attribute a state of consciousness to
Deep Blue or to any other supercomputer we have built. The critical question to
be answered is "why."
Integration proceeds at different levels and in
different ways in the construction and design of distributed architectures.
Functions are not completely separable from one another, and there occurs a
great deal of overlap that, from the standpoint of informational efficiency,
represents a load and a form of noise intrinsic to an underdeveloped and partially unintegrated system. Components must replicate similar kinds of
procedures in the course of normal operation. In the best of possible worlds,
each procedure would only need to be performed one time by one machine: the
results of this procedure would then be stored and made
available for use by any other machine further down the road. Often,
there are diminishing returns if retrieval of stored information, or the storage
of information itself, requires a more informationally expensive procedure than
reiteration of the original procedure in the first place.
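The trade-off described above--store a result once versus recompute it on demand--can be made concrete with a simple cost comparison; the cost figures below are arbitrary illustrations:

```python
def should_memoize(compute_cost, store_cost, retrieve_cost, expected_reuses):
    """Storing pays off only when storage plus repeated retrieval is
    cheaper than simply recomputing the result on every reuse."""
    memoized_total = compute_cost + store_cost + retrieve_cost * expected_reuses
    recompute_total = compute_cost * (1 + expected_reuses)
    return memoized_total < recompute_total

# A cheap procedure with expensive storage: recomputation wins.
print(should_memoize(compute_cost=1, store_cost=50, retrieve_cost=5, expected_reuses=3))    # False
# An expensive procedure with cheap retrieval: storing the result wins.
print(should_memoize(compute_cost=100, store_cost=10, retrieve_cost=1, expected_reuses=3))  # True
```

The diminishing returns the text describes correspond to the first case: when retrieval or storage is more informationally expensive than the original procedure, reiteration is the cheaper design.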
There is a fundamental trade-off it seems, between
the problem of integration on one hand, that combines subsystems into a single
hard-wired "cluster" and the problem of distributed processing, which
serves to link different systems or clusters into a coordinate network. It seems
that we can improve systems integration through hardwiring, but only at the
expense of maintaining truly and remotely distributed networks. On the other
hand, if we wish to extend distributed networks to encompass broader ranges,
then the price we pay is in our ability to integrate systems as a single
operational unit. In a sense, with the problem of distribution, the
challenge of effective communication between different systems becomes
paramount over the challenge of processual integration into a single system.
The concept of unit operations is an important
approach to take in applied metasystems and in the design and coordination of
different systems. Operational units define unit operations as basic common
functional denominators, and thus provide a shorthand for the design of more complex
systems. A limited number of basic operations, for instance, can be recombined
in a countless number of ways to achieve alternative complex systems.
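The recombination of a small set of unit operations into many alternative complex systems can be sketched as function composition; the particular operations chosen here are illustrative:

```python
# A limited set of basic unit operations...
def double(x): return 2 * x
def increment(x): return x + 1
def square(x): return x * x

def compose(*ops):
    """Chain unit operations into a single complex operation."""
    def combined(x):
        for op in ops:
            x = op(x)
        return x
    return combined

# ...recombined in different orders yields different complex systems.
print(compose(double, increment, square)(3))  # (2*3 + 1)^2 = 49
print(compose(square, increment, double)(3))  # 2*(3^2 + 1) = 20
```

With only three unit operations and pipelines of length three, there are already twenty-seven orderings, which is the combinatorial leverage the unit-operations approach trades upon.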
Possible Systems and System Possibilistics
Hypothetical Possibilistics
Random Statistics, Entropy, Chaos Theory & Stochastic Process
Any system that is unknown is a potentially possible
system. A possible system is one that exists hypothetically, rather than as
theoretically or practically demonstrated. A possible system is essentially an
unknown system, and the only means we have of realizing the possibility of a system is through exploratory discovery of the possibilities.
We lack a means of systematically investigating the
possible, or of easily distinguishing what might be possible from what must
remain always impossible. The trouble with the unknown is that we cannot tell
what is merely unknown from what is ultimately unknowable.
The problem with our ignorance, and the unknown, is
that we do not and cannot know beforehand what is truly possible and what might
be ultimately impossible. The idea that there might be systems of anti-gravity
or possibly of faster-than-light travel, remain concepts that we can entertain
but cannot determine the possibility of one way or another. Our dilemma becomes
that if we dismiss the possibility of some idea, however seemingly impossible,
we automatically preclude the possible search and discovery of what may in fact
prove to be possible. Perhaps faster-than-light travel is ultimately impossible,
and this may be why alien civilizations more technologically advanced than our
own have not visited us, but this remains conjecture we cannot yet prove one way
or another.
It has been shown that, metaphysically and naturally, chaos underlies and is more basic than order, and all real systems tend, over the long run, to return to a state of greater disorder. It is
worthwhile therefore to take into more careful consideration the problematic
that the notion of chaos implies for our understanding of advanced systems.
In an abstract sense, absolute systems, as for
instance, mathematical systems are only possible if they have an implicit and
antithetical counterreference to absolute disorder and chaos. We can say that
they achieve their coherence by the absolute determination of their values and
relations, leaving no room for uncertainty. Thus, in such a world, uncertainty
is excluded to a domain of the implicit. Underlying this is a sense that
uncertainty and disorder are inherently and ideally disordered. Consider trying
to generate a list of random numbers off the top of your head--can you be sure the list of numbers you generate is completely random? If you think about them,
and attempt to rearrange them so that they appear to exhibit less patterning,
might you not be imposing some sense of order upon them?
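The difficulty of producing genuine randomness "off the top of your head" can be probed with a simple frequency check; a chi-square statistic well above its expected value hints at hidden patterning. The sample digits below are an invented illustration of a common human bias, the avoidance of extreme digits:

```python
from collections import Counter

def chi_square_uniform(digits):
    """Chi-square statistic against a uniform distribution over 0-9.
    For truly random digits this hovers near 9 (the degrees of freedom);
    much larger values suggest imposed patterning."""
    expected = len(digits) / 10
    counts = Counter(digits)
    return sum((counts.get(d, 0) - expected) ** 2 / expected for d in range(10))

# A hypothetical "random" list written down by hand, which quietly
# avoids 0, 1, 8 and 9 in favor of the middle digits.
human = [2, 5, 3, 6, 4, 7, 5, 2, 6, 3, 7, 4, 2, 5, 6, 3, 4, 7, 2, 5]
print(chi_square_uniform(human))  # 14.0 -- above the ~9 expected if uniform
```

The rearranging the text describes--shuffling numbers so they "look" less patterned--tends to push such statistics further from their random baseline, not closer.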
Much of probability theory can only be construed from
the standpoint of a hypothetical "null space" that is defined by total
randomness and randomization, which is itself an ideal state that is never
attained in nature or real systems. Indeed, the entire structure of mathematics
as we know it could not exist without the central notion of zero as a common
point of reference. Without the notion of zero that implies nothingness and
hence disorder, we could not have equations or perform many operations that are
common to mathematics.
To try to treat disorder in a systematic manner, to
deal with it in terms that are complementary and integral to systems theory, is
to try to put a handle upon a significant aspect of reality that influences
every real system that exists. The outcome of chaos theory is that even highly complex systems can be based upon relatively simple operations, and relatively
simple formulas can generate highly complex and unusual outcomes. But not all
disorder is capable of being patterned--in any system, there should always be
some residual sense of true disorder that cannot be accounted for by any means.
I hope to demonstrate thereby that there can be found
order in disorder, and we can superimpose a sense of system upon a sense of
disorder itself. We do it not out of some strange pathological compulsion to
minimize uncertainty and chaos. We do it rather out of necessity in our
theorems. If we don't, then there remains a residual possibility that, in
failing to deal adequately with the tasks at hand, these issues will somehow
creep into our formulas and undermine our ability to functionally extend our
theories to real systems.
In attempting to do this, I am not so interested in
stochastic theory and probability, as I am interested in a system of
possibilistics that must underlie any kind of stochastic estimation. Before we
can judge the odds, we must know the playing field we are dealing with in a
manner that allows us to make such choices. However uncertain, our knowledge
must somehow move from remaining remote and unknown to being proximate and at
least inferable.
If we wish to derive some kind of sample, whether representative or random, then we must at first understand the possible sample spaces or regions that are available to be sampled, and that are defined as
those that are interesting by the criteria of our theory and its
operationalization. But often we cannot know beforehand the possible sampling
spaces that might be important to our operational procedures.
Much that might be of value to us in possible sampling domains must
remain unknown--this is part of the reason we sample in the first place. It
represents a kind of exploration of unknown areas. Similarly, if we seek some
solution to a problem, we are at first confronted with a potentially infinite
number of possible choices and alternatives. We must pick and choose a pathway
based upon some series of choices that will lead to a successful solution to the
problem. Often we not only cannot know the correct choices, but cannot even know the possible range of choices to begin with. In complex problems, we may construct
initially complex search tree structures, but none of the possible outcomes may
necessarily lead to the correct solution.
The question of possibilistics therefore leads
directly into the problems of operationalization of procedures, an issue that
will be undertaken in the next part. It
deals especially with the heuristics of problem solving, and many issues
broached in this chapter relating to initial problem definition and
identification will be taken up further in the second part. At this point, all I
wish to do is to elaborate a form of continuous and nondiscrete statistics, or
variable statistics, that can be used to conceptualize alternative possibilities
for any given problem. In this regard, the problem is especially the issue of
the constructive representation and application of alternative metasystems
models to real working models in any number of different areas. This in itself
creates a large space of possible alternatives that should be considered part of
the issue.
If we start with a simple 10 by 10 matrix, such that
any slot within the matrix may be filled with a number 0-9, and our job is to
create all possible combinations or permutations of strings occurring in rows or columns of the matrix, we quickly find that we have an overwhelmingly complex number of possibilities, on the order of 2 x 10^11 (twenty strings, each with 10^10 possible values). What
appears at first as a rather simple square matrix of a very manageable size,
quickly zooms to astronomical complexity when we begin searching for its
solution. If we built a computer program to generate by recursion or reiteration
all these possible strings, it would require a very long running time, and would
be liable to consume the working memory resources of the whole computer.
As one computer science teacher told me, consider
trying to realistically represent and explain the orbits of all the lunar bodies
of the solar system about their planets, as these spin about the sun, and then
try to fit this into a larger pattern of motion of the sun within the galactic
system it occurs in. Though the motions are elegantly described by mathematical equations and are sublime when observed through telescopes in the night sky, actually plotting these complex astronomical ballet moves is virtually impossible.
We could impose rules upon our matrix problem to
narrow its search space. For instance, we could specify all strings that are
only of a certain length, or that have a certain initial order, say 999. Doing
so would limit the total space of possibilities considerably.
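The arithmetic of the matrix example, and of the prefix constraint just described, can be checked directly without any enumeration:

```python
# 10x10 matrix, each slot a digit 0-9: each row or column is a
# 10-digit string with 10**10 possible values.
strings_per_line = 10 ** 10
lines = 10 + 10                      # 10 rows plus 10 columns

total = lines * strings_per_line     # 2 * 10**11 possible line-strings
print(f"{total:.0e}")                # 2e+11

# Fixing an initial order such as '999' removes three free slots per
# string, shrinking each string's space by a factor of 1000.
constrained = lines * 10 ** (10 - 3)
print(f"{constrained:.0e}")          # 2e+08
```

This is the sense in which a recursion formula is preferable to a complete solution set: the counts above are exact, yet no string ever needs to be generated.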
It is perhaps part of the project of possible
statistics, or possibilistics, to be able to get an idea of the inherent
complexity represented by any problem set, without having to reach a complete
solution to the problem. In other words, if a simple problem proves to have an
astronomically complex solution set, then it is better to represent the problem
as some kind of recursion function than as a complete solution set of
alternative sample points. This is clearly the case in most of the sciences,
even on very basic levels. We always prefer the correct formulas to the actual
solutions to any particular problem.
It is often the case that even rather straightforward statistical problems require strict randomization criteria that prove almost impossible to meet, especially with large sample sets. It is the epitome of wisdom in such cases to systematically restrict the problem set down to some narrow range of possibility within the larger spectrum in order to achieve more control and accuracy of the results. It is a case in statistics that the law of large numbers does not necessarily apply unless you have a genuinely random sample--you can have the largest sets possible, but it would mean nothing if they were not randomly selected.
Many theories, especially in statistics, rest upon a
presupposition of purely random samples. Often, this is taken for granted, or
fudged, when in fact it is truly difficult if not completely impossible to
create a truly random sample, especially with people. Determinisms creep into
our database in many different ways, often without our understanding. But this
in itself is not necessarily a bad thing. Some kinds of surveys that can
generate deep knowledge and understanding are not necessarily contradicted by
the presence of bias in samples. It is possible that even with great bias,
samples remain true to life and representative of the reality they purport to
explain.
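The point that sample size cannot substitute for random selection can be sketched with a small simulation: a large biased sample misses the population mean that a far smaller random sample recovers. The population and the bias here are invented for illustration:

```python
import random

# A hypothetical population: values 0-99, each repeated 100 times,
# with a true mean of 49.5.
population = list(range(100)) * 100

# A large but biased sample: only values below 50 ever get selected.
biased = [x for x in population if x < 50]   # 5000 observations

# A far smaller, genuinely random sample.
random.seed(1)
rand = random.sample(population, 50)         # 50 observations

print(sum(biased) / len(biased))  # 24.5 -- large, yet far from 49.5
print(sum(rand) / len(rand))      # close to the true mean of 49.5
```

Whether such a bias ruins or preserves the representativeness of a sample, as the text notes, depends on whether the excluded region matters to what the survey purports to explain.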
Probability theory has been well worked over, as many people have come to depend upon it. But possibility theory remains
something of an unsolved mystery, and therefore is something more worthy of
understanding for its own sake. I would call possibilistics a form of statistics
that comes before description. Perhaps it can be called observational
statistics. It does not necessarily presume randomness in the research design.
Rather it presumes only a natural self-organization of pattern irrespective of
our own observational biases we may introduce into the sample organization.
In attempting to get at abstract systems and general
theories of natural systems that have universal validity, it is important to
resolve a basic consideration having to do with the inherent complexity of
naturally occurring systems. Multi-variable nonlinear differential equations are
insufficient to express the entire problem set at even rudimentary levels of
naturally occurring phenomena. Such equations remain beyond proof or even
solution, but it remains possible, within a paradigm of possibilities that are
defined by basic parameter variables, to create a generative system of
differential equations that are interdependent upon one another in terms of
their input and conditional values and variables.
The consequence and possibility is to be able to
produce a parsimonious mathematical model of complexly developing systems
without having to resort to working out solution sets for every differential
equation that is encountered. Such equations would be derivative and based upon
other equations, which in turn would be based upon and derived from yet other
equations, which eventually would resolve to single variable, soluble equations
that operate within a possible range of discrete or continuous input values. A
complex natural system would then be expected to be solved in terms of an
interrelated set of complex differential equations that would be framed within a
paradigm of possibilities, a matrix that is composed of simpler sets of
equations, and so on. The solution we would seek would be in terms of a
simulation of the pattern based upon the set of governing equations that we
define for that system. The degree of achieved detail and representative
accuracy would be a measure of the degree of closeness of fit between the real
system and the artificially contrived one. It becomes possible to express a
complex theory of systems accurately in terms of a single general differential
equation that can be unpacked by its systematic qualification of variables as
derivatives of nested differential equations within a matrix hierarchy.
It would be necessary as well to build into such
equations the uncertainty factors that would provide for the under-determination of structure that all natural systems exhibit.
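The nested-equation paradigm sketched above--higher-level equations driven by lower-level soluble ones, with an uncertainty term built in--can be illustrated with a toy coupled system integrated by Euler steps. The equations, coefficients, and noise level are invented for illustration:

```python
import random

def simulate(steps=1000, dt=0.01, noise=0.01):
    """A toy nested system: z is driven by y, y by x, and x by a simple
    soluble law (exponential decay), with a small stochastic term
    standing in for the under-determination of real systems."""
    random.seed(0)
    x, y, z = 1.0, 0.0, 0.0
    for _ in range(steps):
        dx = -0.5 * x            # base level: soluble on its own
        dy = x - 0.2 * y         # next level: driven by x
        dz = y - 0.1 * z         # top level: driven by y
        x += (dx + random.gauss(0, noise)) * dt
        y += (dy + random.gauss(0, noise)) * dt
        z += (dz + random.gauss(0, noise)) * dt
    return x, y, z

print(simulate())
```

The solution sought is the simulated pattern itself, not a closed form: only the base equation is soluble in isolation, while each higher level resolves through the values the level beneath it supplies.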
It has been shown that metaphysically and naturally,
chaos underlies and is more basic to order, and all real systems tend, in the
structure of the long run, to return to a state of greater disorder. It is
worthwhile therefore to take into more careful consideration the problematic
that the notion of chaos implies for our understanding of advanced systems.
In an abstract sense, absolute systems, as for
instance, mathematical systems are only possible if they have an implicit and
antithetical counter-reference to absolute disorder and chaos. We can say that
they achieve their coherence by the absolute determination of their values and
relations, leaving no room for uncertainty. Thus, in such a world, uncertainty
is excluded to a domain of the implicit. Underlying this is a sense that
uncertainty and disorder are inherently and ideally disordered. Consider trying
to generate a list of random numbers off the top of your head--can you be sure
the list of numbers you generate are completely random. If you think about them,
and attempt to rearrange them so that they appear to exhibit less patterning,
might you not be imposing some sense of order upon them?
Much of probability theory can only be construed from
the standpoint of a hypothetical "null space" that is defined by total
randomness and randomization, which is itself an ideal state that is never
attained in nature or real systems. Indeed, the entire structure of mathematics
as we know it could not exist without the central notion of zero as a common
point of reference. Without the notion of zero that implies nothingness and
hence disorder, we could not have equations or perform many operations that are
common to mathematics.
To try to treat disorder in a systematic manner, to
deal with it in terms that are complementary and integral to systems theory, is
to try to put a handle upon a significant aspect of reality that influences
every real system that exists. The outcome of chaos theory is that even high
complex systems can be based upon relatively simple operations, and relatively
simple formulas can generate highly complex and unusual outcomes. But not all
disorder is capable of being patterned--in any system, there should always be
some residual sense of true disorder that cannot be accounted for by any means.
I hope to demonstrate thereby that there can be found
order in disorder, and we can superimpose a sense of system upon a sense of
disorder itself. We do it not out of some strange pathological compulsion to
minimize uncertainty and chaos. We do it rather out of necessity in our
theorems. If we don't, then there remains a residual possibility that, in
failing to deal adequately with the tasks at hand, these issues will somehow
creep into our formulas and undermine our ability to functionally extend our
theories to real systems.
In attempting to do this, I am not so interested in
stochastic theory and probability, as I am interested in a system of
possibilistics that must underlie any kind of stochastic estimation. Before we
can judge the odds, we must know the playing field we are dealing with in a
manner that allows us to make such choices. However uncertain, our knowledge
must somehow move from remaining remote and unknown to being proximate and at
least inferable.
If we wish to derive some kind of sample, whether
representative or random, then we must first understand the possible sample
spaces or regions that are available to be sampled, and that are defined as
interesting by the criteria of our theory and its operationalization. But often
we cannot know beforehand the possible sampling spaces that might be important
to our operational procedures. Much that might be of value to us in possible
sampling domains must remain unknown--this is part of the reason we sample in
the first place: it is a kind of exploration of unknown areas. Similarly, if we
seek a solution to a problem, we are at first confronted with a potentially
infinite number of possible choices and alternatives. We must pick a pathway
based upon some series of choices that will lead to a successful solution.
Often we cannot know the correct choices, or even the possible range of
choices, to begin with. In complex problems, we may construct elaborate search
tree structures, and yet none of the possible outcomes may lead to the correct
solution.
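The kind of search over an unknown choice space described above can be sketched as a depth-first backtracking walk over a tree of choices. The branching alternatives, depth limit, and goal tests below are hypothetical illustrations added here, not the author's example; the point they demonstrate is that exhausting every branch does not guarantee that any path reaches a solution.

```python
# Sketch: depth-first backtracking over a tree of choices. The goal
# tests and the binary branching are hypothetical; the search may
# exhaust the entire tree without ever finding a solution.

def search(path, depth, choices, is_solution):
    """Return the first path satisfying is_solution, or None."""
    if is_solution(path):
        return path
    if depth == 0:
        return None  # dead end: this branch held no solution
    for c in choices:
        result = search(path + [c], depth - 1, choices, is_solution)
        if result is not None:
            return result
    return None

# A target that exists somewhere in the tree is eventually found...
print(search([], 3, [0, 1], lambda p: p == [1, 0, 1]))  # [1, 0, 1]
# ...but an impossible target exhausts the whole tree and yields nothing.
print(search([], 3, [0, 1], lambda p: sum(p) > 5))      # None
```

The second call is the failure mode the text warns about: a fully constructed search tree whose outcomes all fall short of a correct solution.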
The question of possibilistics therefore leads
directly into the problems of the operationalization of procedures, an issue
that will be taken up in the next part. It deals especially with the heuristics
of problem solving, and many issues broached in this chapter relating to
initial problem definition and identification will be taken up further in the
second part. At this point, all I wish to do is elaborate a form of continuous,
nondiscrete statistics, or variable statistics, that can be used to
conceptualize alternative possibilities for any given problem. In this regard,
the problem is especially the constructive representation and application of
alternative metasystems models to real working models in any number of
different areas. This in itself creates a large space of possible alternatives
that should be considered part of the issue.
If we start with a simple 10 by 10 matrix, such that
any slot within the matrix may be filled with a number 0-9, and our job is to
create all possible combinations or permutations of strings occurring in the
rows or columns of the matrix, we quickly find that we have an overwhelmingly
complex number of possibilities, something of the order of 2 × 10^10. What
appears at first as a rather simple square matrix of very manageable size
quickly zooms to astronomical complexity when we begin searching for its
solution. If we built a computer program to generate all these possible strings
by recursion or reiteration, it would require a very long running time, and
would be liable to consume the working memory of the whole computer.
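The blow-up can be made concrete with a short Python sketch (an illustration added here, not the author's program): the count of possible digit strings is a closed-form power of ten, while exhaustive enumeration is practical only at toy sizes.

```python
# Sketch: counting vs. enumerating digit strings of a given length.
# Each row or column of the 10x10 matrix is a length-10 string over
# the digits 0-9, so each admits 10**10 possibilities -- countable in
# an instant, but hopeless to enumerate one by one.

from itertools import product

def count_strings(length, alphabet_size=10):
    """Closed-form count of all digit strings of the given length."""
    return alphabet_size ** length

def enumerate_strings(length, digits="0123456789"):
    """Exhaustive enumeration -- only practical for small lengths."""
    return ["".join(s) for s in product(digits, repeat=length)]

print(count_strings(10))          # 10000000000 -- ten billion per line
print(len(enumerate_strings(3)))  # 1000 -- manageable only at toy sizes
```

Generating all ten billion strings for even one line of the matrix, rather than merely counting them, is exactly the memory- and time-consuming recursion the text describes.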
As one computer science teacher told me, consider
trying to realistically represent and explain the orbits of all the moons of
the solar system about their planets, as these spin about the sun, and then try
to fit this into a larger pattern of the sun's motion within its galactic
system. Though the motions are elegantly described by mathematical equations,
and are sublime when observed through telescopes in the night sky, actually
plotting this complex astronomical ballet is virtually impossible.
We could impose rules upon our matrix problem to
narrow its search space. For instance, we could specify only strings of a
certain length, or only strings that begin with a certain initial sequence, say
999. Doing so would limit the total space of possibilities considerably.
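The effect of such a rule can be sketched as follows (a hypothetical illustration using the prefix 999 mentioned above): fixing k of the 10 digits divides the space of possibilities by 10^k, so the restricted strings can even be generated lazily rather than counted in the abstract.

```python
# Sketch: restricting the search space with a rule. Requiring the
# fixed prefix "999" shrinks the 10**10 strings per line down to
# 10**7 -- a thousand-fold reduction.

from itertools import product

def constrained_strings(length, prefix, digits="0123456789"):
    """Lazily generate only the strings beginning with the prefix."""
    free = length - len(prefix)
    for tail in product(digits, repeat=free):
        yield prefix + "".join(tail)

# The reduction is computable without enumerating anything:
full_space = 10 ** 10
restricted = 10 ** (10 - 3)      # prefix "999" fixes 3 of 10 digits
print(full_space // restricted)  # 1000 -- thousand-fold reduction
```

The generator form matters: even the restricted space of ten million strings is better walked one string at a time than materialized all at once.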
It is perhaps part of the project of possible
statistics, or possibilistics, to be able to gauge the inherent complexity
represented by any problem set without having to reach a complete solution to
the problem. In other words, if a simple problem proves to have an
astronomically complex solution set, then it is better to represent the problem
as some kind of recursive function than as a complete solution set of
alternative sample points. This is clearly the case in most of the sciences,
even at very basic levels. We generally prefer the correct formulas to the full
enumeration of solutions to any particular problem.
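The preference for the formula over the materialized solution set can be sketched with a recursive generator (a hypothetical illustration added here): the generating rule stands in for the set itself, costing constant memory however large the set it describes.

```python
# Sketch: representing a solution set by its generating rule rather
# than materializing it. The generator is the "correct formula"; a
# list of all 10**10 strings would be the infeasible "solution set".

def all_digit_strings(length, digits="0123456789"):
    """Recursive generator: the rule that defines the solution set."""
    if length == 0:
        yield ""
        return
    for head in digits:
        for tail in all_digit_strings(length - 1, digits):
            yield head + tail

# We can inspect points of the space without ever holding it whole:
gen = all_digit_strings(10)
print(next(gen))  # 0000000000 -- one point, no ten-billion-item list
```

This is the sense in which a recursive function is a better representation of an astronomically complex problem than its complete enumeration of sample points.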
It often happens that otherwise straightforward
statistical problems unintentionally require strict randomization criteria that
prove almost impossible to meet, especially with large sample sets. It is the
epitome of wisdom in such cases to systematically restrict the problem set down
to some narrow range of possibility within the larger spectrum in order to
achieve more control and accuracy in the results. It is a truism in statistics
that the law of large numbers does not necessarily apply unless you have a
genuinely random sample--you can have the largest sets possible, but they would
mean nothing if they were not randomly selected.
Many theories, especially in statistics, rest upon a
presupposition of purely random samples. Often, this is taken for granted, or
fudged, when in fact it is truly difficult if not completely impossible to
create a truly random sample, especially with people. Determinisms creep into
our database in many different ways, often without our understanding. But this
in itself is not necessarily a bad thing. Some kinds of surveys that can
generate deep knowledge and understanding are not necessarily contradicted by
the presence of bias in samples. It is possible that even with great bias,
samples remain true to life and representative of the reality they purport to
explain.
Probability theory has been well worked out, as many
people have come to depend upon it. But possibility theory remains something of
an unsolved mystery, and is therefore all the more worthy of understanding for
its own sake. I would call possibilistics a form of statistics that comes
before description. Perhaps it can be called observational statistics. It does
not necessarily presume randomness in the research design. Rather, it presumes
only a natural self-organization of pattern, irrespective of the observational
biases we may introduce into the sample organization.
Blanket Copyright, Hugh M. Lewis, © 2009. Use of this text governed by fair use policy--permission to make copies of this text is granted for purposes of research and non-profit instruction only.
Last Updated: 08/25/09