http://www.lewismicropublishing.com/
Chapter Four
Systems Methodology & Modeling
General systems may largely be called a theory in
need of a methodology, or a set of methods as well as a general set of
operational instructions in the deployment and articulation of methods. If it is
to be more than a theory of everything, then general systems must also become a
methodology for all possible problems for all seasons. If this is asking too
much from any single paradigm, perhaps this is so, but at the same time we can
expect no less than a comprehensive set of applied methodologies from a
purported comprehensive framework of general science.
Science based upon a general systems paradigm will
not come fully of age unless and until its own distinct set of methods or
methodology can be more carefully worked out and made to work in a practical
manner. One of the most important aspects of developing a general systems
methodology that is a method for everything is to come to terms with and deal
with the problem of the anthropological relativity of our own understanding of
systems, large and small. This comes into play especially, I think, in our
identification and definition of "problem" sets with which we must
deal, when we speak of applying always limited means to virtually unlimited
possibilities.
To understand general systems methodologies, we must
seek general purpose methodologies that are appropriate to a wide range of
different kinds of systems. We must also seek to understand the general nature
of the kinds of problem sets to be solved by such methodologies. The purpose of
methodologies is primarily to conduct research through the solution of complex
problems.
A problem may be defined as an unresolved question or
condition of reality that requires a solution at some reasonable level of
acceptability. A problem exists as a discrepant state of affairs between
existing states or conditions and ideal or desired states and conditions, seen
primarily from a human or anthropological standpoint.[1]
General systems methodologies, then, are concerned with
solving problem sets in a deliberate and systematic manner, a problem set being
whatever a person or group of people construes as problematic based upon
some calculus of ends, whether such a calculus of ends is explicit or left
unstated.
Methodologies come into play as a set of possible
means when the calculus of ends creates a search-solution space for the
resolution of problem sets identified by these ends. This is a complex way of
saying that methods attempt to systematically marry means to ends in problem
resolution. We work with the understanding that, especially with complex
problems, solutions, though hopefully simplifying, are unlikely to be perfect or
simple.
There are some sets of methods and methodologies that
seem pertinent for consideration of "general systems methodologies." I
would designate two general classes of pertinent methods: the first set I would
call general systems methodologies and these are a set of methods that apply
generally to a broad range of systems, but which are not necessarily designative
of any particular kind of system. The second set I would call special system
methodologies, and they are sets of methods that are appropriate to a certain
class or kind of system, but not necessarily to any other class or kind of
system.
To list the set of general systems methodologies, I would include:
1. symbolic representation & strategic planning;
2. design modeling & heuristic simulation, especially involving computing and supercomputing;
3. nonlinear dynamics and set-theoretic representation & manipulation;
4. intercorrelative analysis;
5. experimental prototyping of designs.
I believe that for applied systems, this model automatically leads to a
production or processing sequence, as well as to issues of recycling and
repair/replacement of systems, and to systems growth and regeneration.
Thus I have elaborated a basic development cycle for general applied systems
within which theoretically any form of applied system may be developed.
To list sets of special systems methodologies, we
need first to categorize general types of systems in some kind of logical or
natural schema based upon natural stratification. In general all methodologies
that are deployed in the normal sciences at each level of systems stratification
are pertinent and appropriate to that level or sublevel of system, albeit
usually in a fairly specialized manner. Any or all tools of the trade of any
particular scientific discipline or field of inquiry are pertinent methods to be
employed within the area of stratification of natural systems, even though some
kinds of methods may be more relevant and generally deployable than others.
General systems methodologies therefore encompass fully the range of analytical
and investigative methods that are deployable across all fields of science.
We can generalize a methodology to a framework of
applied systems of all kinds, with the recognition that all applied systems will
have at least their physical, biological and human components, as well as their
outcomes and consequences for the larger world. We recognize that the problem of
the anthropological relativity of systems, and its influence in determining
problem solving frameworks, need to be taken into account in the defining of
possible search-solution spaces and the realization of alternative solutions in
a system. The frame of reference we adopt in defining a problem and a
methodology of solution will determine the range of possibilities and thereby
predetermine and constrain the outcome of the process.
Applied general systems therefore seems to entail a
multipurpose design development framework that is capable of taking a project
through a series of steps in its development as part of a larger design cycle.
It should also be capable of starting and maintaining multiple
design-development cycles simultaneously, and upon different levels,
interlinking these cycles or the components of these cycles in a meaningful
way. The design-development project cycle for any single system or kind of
applied system therefore represents a general methodology for the solution to
the problem set that is related to that system or kind of system. It provides a
manner of constructive application and work that allows us to investigate
alternative systems and explore the possibilities for their developmental
refinement and evolution as adaptive systems.
Within such a framework, specialization of systems or
subsystems would follow on the heels of the development of the basic applied
design-development cycle, and would represent the elaboration of such a cycle
and its splitting into multiple subcycles. We can imagine therefore as well the
higher level organization of such a framework of cycles within cycles as a
single comprehensive metasystems framework by which all projects and programs
are interrelated to one another and made coordinate in their development.
Any measure of reality we may adopt may be said to
be, if nothing else, discrete and therefore arbitrary. This is because our
reality is anthropologically constructed in terms of symbols that are by design
and of necessity discrete and arbitrary. Measures, to be useful
collectively, objectively, "intersubjectively," must also be
consistent (i.e., standard), or else they are merely idiosyncratic constructions.
We may in a sense look at our words that we speak and write as collectively
shared measures of meaning, somehow pointing, however indirectly, to some form
in the real world, or else some imaginary form.
Collective meaning can only be created through
language and the communicative sharing of meaning, and hence we can make a
claim, a very serious claim, that meaning and semantics are linguistically
relative. It is the translatability of human language that allows us to come to
mutual agreement on common forms and measures of meaning in reality, largely
because, no matter what the modulations of any particular pattern of speech, we
share the same fundamental language (speech production/recognition) apparatus,
and because all symbols, even words, are ultimately arbitrary. Europeans have
meters and litres and Americans have yards and quarts, but because these are
standardized units of measure, we can apply simple mathematical formulas to
translate one into the other, and back again.
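The translation between standardized measures amounts to a fixed multiplicative mapping. A minimal sketch (the conversion factors are the standard defined values; the helper names are my own illustration, not from the text):

```python
# Standardized measures translate into one another by fixed factors,
# precisely because each unit is a consistent shared convention.
METERS_PER_YARD = 0.9144          # exact, by international definition
LITERS_PER_QUART = 0.946352946    # one US liquid quart, exact by definition

def yards_to_meters(yards):
    return yards * METERS_PER_YARD

def quarts_to_liters(quarts):
    return quarts * LITERS_PER_QUART

print(yards_to_meters(100))   # ~91.44 meters
print(quarts_to_liters(4))    # ~3.785 liters (one US gallon)
```

The inverse mapping is simply division by the same factor, which is what makes the translation lossless "and back again."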
Through the sharing of measures of meaning, largely
defined symbolically, human beings arrive at a collective worldview, a common,
standardized frame of reference, by which the arbitrary design of symbols
becomes overlaid by convention and common agreement. All of human culture, which
is largely behavioral and cognitively based in symbolically organized behavior,
may be said to consist of shared conventions, whether these are explicit, in the
form of meters and yards, or in the form of laws and rules, or remain implicit
and indirect in our common behavioral constraints. This in fact is an
empirical, experimental, working definition of culture that allows us to take
our presuppositions to the field and form conclusions about observations of
behavior. It forms the basis for an empirical science of human systems and human
behavior.
Conventional constraint therefore overlays arbitrary
and ultimately idiosyncratic organization of symbolic reality, and comes to
demarcate a common field of shared cultural meaning by which people can organize
themselves on a social basis into institutional systems. Conventional constraint
with underlying arbitrariness of meaning entails a built-in flexibility of our
received symbolic systems that enables them to be easily carried, transmitted
and transformed over space and time. At the same time, conventional constraint,
ultimately arbitrary, becomes reified and naturalized as if non-arbitrary and
habituated as if automatic and even reflexively instinctual. It becomes
ingrained and embodied, even upon a physical level of our being, such that we
are conditioned and quite comfortable with such conventional constraint, and
rendered quite uncomfortable without it. Conventional constraint takes on a
certain inertia and momentum in terms of its direction, rate and conservative
resistance to change, and many anthropologists have confused this with issues of
natural speciation and natural selection, which it is not.
The societies that we are born into, raised with, and
become members of have a momentum, a mobility, and an institutional,
"larger than life" presence that is greater than ourselves, and upon
which we come to depend for our very survival and well being. Conventional
constraint is not arbitrary. It is agreed upon, a consensus, and often also, a
conflict of competing interests. It is not natural, genetic or instinctive,
either. Being founded upon arbitrary principles of symbolic design, it is
ultimately constructed by a process arrived at through compromise, coordination
and cooperation of a group of people through time.
To reiterate, symbols mark our meaning, parsing up
our phenomenal experience of the world in discrete and therefore comparable
quantities or entities. In fact, we depend very much upon this symbolic process
to achieve adaptive success in our lifeworlds, and without it our world would
be chaotic indeed. The symbols that we arrive at and are compelled to accept and
use, are done so not from personal choice, but as the product of social process,
group agreement, and continuous articulation and rearticulation in social
contexts.
We may say our symbols, to be effective, must be
achieved with consensus and agreement. They must be received in our social
setting, or else they fall on deaf ears and hence are of no use beyond some
psychologically solipsistic interest or need. Schizophrenics appear symbolically
bound up in just this way. They are unable to use effectively the social
symbolisms that are the standard coinage of the larger system, and are instead
entrapped within a private and narcissistic symbolic world of their own
construction, one that is transparent from without but opaque from within.
If our terms, used to give reality an objectified
sense of structure, to provide it a place in a shared symbolic universe of
meaning, are our measures of reality, then the relationships we hypothesize
between our terms are used to build a symbolic universe onto which we can map
our conceptual systems of reality in a coordinate and understandable way. We are
aided greatly in this endeavor by the fact that natural systems, for all their
self-organization, tend to be naturally organized into shared patterns that fall
into larger categories and groupings that allow us to label and generalize
across sets of systems, and even to arrange sets of systems in relation to other
systems.
Each tree in a forest may be individually unique in
terms of its exact physical characteristics and measures, but fortunately our
understanding of the forest is greatly aided by the fact that all the trees may
belong to only a handful of groups of trees bound by homology and analogy, by
common descent, shared form as a function of common adaptation, etc.
We may thus categorize and label all the trees of the
forest by the several types that are found to occur there and to characterize
such a forest biome. And so it seems to be with all reality. Reality is
organized not only upon the level of individual systems, but in terms of sets of
similar kinds of systems, either homologically (as a result of common origin) or
analogically (as a result of common function). It is from the classification and
understanding of these natural sets and the generalizations that are implicit to
them that apply to all members of the set, that we arrive at what we refer to as
natural laws that are the basis for our theories of reality.
The natural laws that apply to one set of systems
upon one level of observational analysis, do not necessarily apply to other sets
of systems at other levels of observational analysis. In general such laws may
be said to be general statements about the periodic patterning we associate with
the members of a common set, and this periodic patterning is associated with the
typical or characteristic organization of the prototypical member of the set,
and the emergent properties that are the consequence of this organizational
patterning.
At the same time, sets of systems do not occur in
nature in isolated or pre-grouped form, and it is most often the case that
different sets, at different levels, overlap and interpenetrate one another in
terms of shared space and time and the relationships that may occur between
different but interacting members of distinct but overlapping sets. This has
been the cause of much academic equivocation, especially in fields like biology
and the social sciences, when the exact homological relationships between
taxonomic sets, or taxons, for instance, cannot be determined in a precise or
conclusive manner, or when for instance we articulate theories of natural
selection based upon the speciation of populations, though in natural context we
find interacting individuals of different populations with ambiguous
reproductive boundaries, often with pure chance and happenstance playing a large
part in selective processes.
Heterogeneous metasystems, or systems of
individually distinct and different subsystems, emerge in reality with their own
characteristic properties. All ecosystems tend to be complex and heterogeneous
metasystems in this manner. The earth itself may be said to be a complex
heterogeneous geophysical metasystem, composed of a variety of elements that to
some extent interact with one another in regular ways. It has an iron core, and
different hydrologic, plate-tectonic, and atmospheric nutrient cycles maintain a
fragile framework for the biosphere.
We symbolically group and parse up our experience of
reality, and attempt to organize the totality of our phenomenal knowledge of
reality, in terms of broader groupings on the basis of generalizations that we
apply to all members of groups. Working with groups, instead of with
individuals, is a way of simplifying otherwise complex realities and dealing
upon a level of general analysis in an expanded frame of reference that leads to
the formulation of worldviews and general principles about reality.
This leads to the question of alternative frames of
reference for understanding the same kinds of observational phenomena. Eleventh
Century Europeans saw a sun rise and set upon the earth, thinking that the earth
was the center of their known universe. We see now the earth as traveling around
the sun, as the earth spins daily on its axis, and even though we still refer to
the rising and setting of the sun, we do so with a much clearer view of the real
system than did our 11th Century counterparts.
If one is a member of a non-literate and fairly
superstitious culture, then one is unlikely to view a diurnal eclipse of the sun
by the moon as a natural event, and more likely to attribute it to supernatural
forces at play. One would be, in terms of the logic of one's own symbology, no
less correct than one's modern counterpart, only less realistically accurate.
Pure mathematics offers examples of abstract systems in
which the relational identity of all known values is founded upon the basic
idea of equality. The equal sign permits us to assume that the value on
one side is either the same as, or different from, the value occurring on the
other side, and we can perform common reductive operations which demonstrate
this equality in terms of reflexive identity, or that demonstrate inequality in
terms of basic difference.
We can even perform manipulative operations, as long
as we perform them equally on both sides of the equal sign, in order to solve
the "problem" of simplifying equality. We sometimes substitute
comparative signs (greater than or less than) for the equal sign, but this is
usually the extent of our relational activities; even such signs always
allow for a clear dichotomous resolution of the implicit problem. The
transformations we make on both sides of the sign we use are otherwise guided by
the pure deductive logic that informs mathematics in terms of the axioms, laws
and their corollaries that we employ.
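The rule of performing identical operations on both sides can be sketched as code. This tiny helper (my own illustration, not from the text) solves a*x + b = c by subtracting b from both sides and then dividing both sides by a, preserving the equality at every step:

```python
def solve_linear(a, b, c):
    """Solve a*x + b = c by performing identical operations on both sides."""
    # Subtract b from both sides:  a*x + b - b = c - b   ->   a*x = c - b
    c = c - b
    # Divide both sides by a:      a*x / a = (c - b) / a  ->   x = (c - b) / a
    return c / a

print(solve_linear(3, 4, 19))  # 5.0, since 3*5 + 4 = 19
```

Each step is justified only because the same transformation is applied to both sides, which is exactly what keeps the equal sign true throughout.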
This is the same form of positivistic, two-value
logic that we find with formal logical philosophy. In fact, logical
philosophical positivism was derived from the logic implicit to mathematics,
based as it has always been on dichotomous (true/false) values. Logical
positivism or syllogistic two-value logic only works in natural language to the
extent that we can clearly restrict the basic meaning of terms to dichotomous
(true/false) values. Often, in such operations, conventional meaning of truth is
substituted for what is presumed to be natural truth: "common sense,"
being nothing but the operation of conventional meaning, takes over. We do not
question whether the sky is really blue, the ocean is deep, or roses are
red. We simply say, modus ponens style: if all roses are red, and this flower is
a rose, then this flower is red. The fact that we do not normally, naturally
think this way seems to have little to do with the status enjoyed by logical
positivism in academia.
So how do we really think? We think symbolically, but
without the necessary logical constraint of dichotomous truth value, except in
very practical, common, everyday terms and applications. Our logic is less
precise and more bound to the relative semantics of psychological/behavioral
context, innuendo and association, whether this is conventional or arbitrary. We
think rationally with a form of logic that is not constrained by twovalue
choices and that can move in more than one direction. We commonly employ a form
of analogical association in which like is compared to like, and there is
presumed similarity on the basis of proximity, co-occurrence, or pre-occurrence.
We may tend to act in dichotomous terms, and even
delude ourselves that we are right in thinking in black and white truth, but we
tend in fact to think in looser terms that replace the equal sign found in
abstract models of relationship with alternative signs designating similarity,
one-to-one correspondence, approximation and equivalence without the necessary
constraint of the law of absolute identity.
What does this entail for our general understanding
of systems? In our scientific models and symbolic representations of reality, we
typically employ mathematical formulations that are based upon logical
positivism and that are derived from the basic relationship of identity or
equality. In chemistry, the equal sign is typically changed to a reaction arrow,
or set of reaction arrows in systems with equilibrium, but we are always
balancing the energy/number/mass budget on both sides as if it were an equal
sign. In physics, equations seem to work really well primarily because we are
dealing with energy pure and simple, and we know that energy always balances; it
cannot be created or destroyed. We can of course reduce everything to chemical
and physical reaction terms, and hence transform all event structures in reality
into nice mathematical equations, but this would indeed become quite tedious.
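The "balancing the budget on both sides" of a reaction arrow can be made concrete by counting atoms on each side. A sketch (helper names are my own, not from the text) for the reaction 2 H2 + O2 -> 2 H2O:

```python
from collections import Counter

def atom_count(side):
    """Total atoms per element on one side of a reaction.

    side: list of (coefficient, {element: atoms_per_molecule}) pairs.
    """
    total = Counter()
    for coeff, formula in side:
        for element, n in formula.items():
            total[element] += coeff * n
    return total

reactants = [(2, {"H": 2}), (1, {"O": 2})]   # 2 H2 + O2
products  = [(2, {"H": 2, "O": 1})]          # 2 H2O

# Both sides carry 4 H and 2 O, so the number budget balances
# just as it would across an equal sign.
print(atom_count(reactants) == atom_count(products))  # True
```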
This is not necessarily so when we deal with
macrobiological systems or social systems. We can of course apply demographics,
population measures and formulas, and other statistical measures and devices to
our models, and we frequently do to great benefit. But we recognize basic
limitations in these approaches at this level of integration of natural
phenomena.
For instance, if we have two piles of stones, seven
stones to each pile, we can proceed to treat each pile as if the piles were
completely identical to one another, even if each stone is actually
unique in terms of its exact physical characteristics. And because the stones do
not act spontaneously (they are not living) and especially they do not talk back
to us and behave in contradictory ways, we can treat them in our counting games
as if they are in fact the same.
We may easily do the same with many living organisms,
such as amoeba, dogs, trees, and even ourselves. But at some point we must come
to recognize a couple of limitations to our formulations especially when it
comes to living organisms, and especially thinking organisms. Even if we tend to
define evolutionary processes of speciation upon a group population level, the
actual selection, transmission and mutation occur effectively upon the level of
the individual organism. Organisms of a common set, a common gene-exchanging
population, must vary continuously upon a genetic level, otherwise they will not
evolve, and they will thus lose out in the long run. Treating all organisms of a
common populational set as identical therefore does not solve our basic problems
of understanding the fundamental mechanics of speciation.
Beyond this, if individual organisms are enmeshed in
complex webs of ecosystemic relationship with other species, then the simple
classification of these organisms into their populational groupings will not get
at the dynamics of metabiotic organization and interaction that lead to certain
fitness and selection regimes.
It is even more the case with human populations,
complicated as these have been by culture and human civilization and all the
weaknesses associated with these phenomena. There are numerous instances and
times when it has been of great value to treat people in a quantitative way in
statistical manipulations, but so far very few, if any, universal laws of human
nature or human social systems have been derived in this manner.
So the "hard" scientist, used to the
comfort of working with numbers and equal signs, will advocate throwing the
human sciences out as "soft." This does not really come to terms with
the central problem, because human systems are natural systems in their own
right, at their own level. The theory of emotion is a good example to finish
with. If we say that he is angry, and it is his anger that made him do it, and
we then generalize that all people who do similar things do so because they are
angry in the same way, we have reached a kind of hypothesis generalization based
upon certain presuppositions. But in doing so we do not ask if the emotion of
anger is a clear and universally shared feeling or even what it is as a feeling,
or if other circumstances may co-occur to predispose a particular individual to
commit a certain act, or if the sense of anger shared by all people is the same,
for the same reasons, of the same quality or intensity, or may be different and
even unique for different people. Upon further investigation, we may discover
that in fact different people do the same sorts of things for very different
sets of reasons, and the reasons are not always one and the same. There may be
precedents and precursors of behavior resulting in similar consequences. Nor do
we even really ask if similar kinds of acts, all lumped together, are really in
fact the same acts, committed for the same sets of reasons, or perhaps different
sets of acts, committed for different sets of reasons.
So, in such cases, of which there are far too many to
count, do we simply throw out the problem as being somehow unscientific, or do
we amend our scientific view and methodological approach to reality to be able
to better account for the problem? I will only answer by stating that, in
general, as we progress up the hierarchy of emergent properties associated with
different kinds of natural systems, we move from strictly logical, mathematical
equations to more linguistic, generally verbal generalizations in the form of
basic statements based upon often imperfect classification and terminological
systems. Even our understanding of physical systems and realities cannot be
completely couched in purely mathematical formulas without reference to
generalized verbal expressions.
Operationalizing Systems
I propose a set of methodological procedures that is
rooted in basic presuppositions of metasystems science and natural systems
theory. I do not prescribe the same set of operations for every area of knowledge
at different levels of natural stratification. Certainly the use of these
procedures in dealing with human systems is fundamentally different from the use
of similar or related procedures upon a biological or physical level of
abstraction and analysis. I do maintain, upon a fundamental level and in terms
of metasystems theory, that there are basic abstract and mathematical models that
are pertinent to all classes of real or naturally occurring systems. For
instance, the theory of automata describes all classes of linear forms of
digital computing, at least. Whether this theory, which incorporates Turing
machines, is sufficient for the description of natural intelligence or naturally
occurring information systems is not yet clear, and I doubt it is, at least in
any unadulterated form. In this regard we must distinguish between information
theory on one hand and intelligence theory on the other, and what is natural or
innate, and what is artificial and preprogrammed in some arbitrary way.
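As a minimal concrete instance of automata theory, here is a deterministic finite automaton, the simplest machine in the hierarchy that culminates in Turing machines. The example is my own illustration, not from the text: it accepts binary strings containing an even number of 1s.

```python
# A deterministic finite automaton (DFA): a finite set of states, an input
# alphabet, a transition table, a start state, and a set of accepting states.
# This DFA accepts binary strings with an even number of 1s.

def accepts_even_ones(s: str) -> bool:
    state = "even"                      # start state, also the accepting state
    transitions = {
        ("even", "0"): "even", ("even", "1"): "odd",
        ("odd", "0"): "odd",   ("odd", "1"): "even",
    }
    for ch in s:
        state = transitions[(state, ch)]
    return state == "even"

print(accepts_even_ones("1001"))  # True  (two 1s)
print(accepts_even_ones("1011"))  # False (three 1s)
```

A DFA has no memory beyond its current state, which is precisely why richer machines (pushdown automata, Turing machines) are needed for richer classes of computation.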
Furthermore, to address theoretically in any
exclusive sense the informational aspects of naturally occurring systems is to
thereby ignore the energetic considerations of such systems as naturally
occurring machines. A mechanistic model that is construed in a conventional,
Newtonian manner is found to be insufficient for all classes or levels of
systemic functioning that involve some form of energy exchange dynamics. Energy
exchange dynamics in natural systems upon different levels, as well as in
artificially created systems, can be demonstrated to include non-Newtonian
mechanics. The conventional example of course is Heisenbergian uncertainty of
quantum mechanics, but similar kinds of uncertainties exist at other levels, and
in other forms. We have not yet fully modeled, for example, gravitational
dynamics, and we may be quite surprised at how this form of energy exchange
defies even our conventional modes of thought about quanta.
It is clear that the informational problem
represented by all classes of natural and metasystems is separable analytically
from the energetic considerations of such systems as real systems. These
fundamental differences in natural systems theory in general reflect the
mind/body duality or the material/ideal dichotomy that is typical of all western
rationalist thought. In this case, both informational and energy dynamic aspects
of systems can be represented in an analytic manner that is quite similar to one
another, almost to the extent that they can be considered analogous or at least
as two sides of the same coin. We know for instance that energy exchange without
some kind of informational constraint results in random or chaotic processes. We
also know that there can be no sense of informational constraint or quality
within a system without some sense of energy quantities or dimensions that
represent such constraint.
The real challenge of such system models is figuring
out the pattern of integration that they may achieve or follow, and the
principles that underlie these integrative patterns. Furthermore, this question
of integration of systems leads to other questions of determining in an accurate
if general manner the contextual relationships such systems have to larger
systems of which they are a part, and how intersystems regulation occurs in
natural process. Furthermore, such theoretical problems of natural integration
also lead to questions about the alternative pathways any given system or set of
systems may follow in their differential state-path trajectories. In other
words, systems are to some extent underdetermined systems, and to the extent
that they are underdetermined, no two systems will be exactly identical, nor
will any two systems follow the same exact patterns of historical resolution.
Finally, such questions also lead to broad and more general problems of
developing a typology and taxonomy of systems in a manner that is realistically
representative of the natural distribution and relational patterning of systems
in a general and comparative sense.
It can be said that if all systems are by definition
underdetermined, then any system will be unique in an exact sense, and will
demonstrate some minimal degree of possible variability of change pattern. Even
systems in nature that we hold to be fundamentally stable, such as the atomic
system of the periodic table of the elements, which is held to be true under
all normal conditions on earth, must be suspect as a kind of typology that hides
some degree of minimum variation of its elemental classes. It is known that
the isotopic composition of elements varies to some extent, which confers
differential atomic weights; tabulated atomic weights are therefore estimated
averages. Any particular sample of any particular element or molecule may be
more or less the molecular weight that is predicted by the periodic table, with
some minimal degree of isotopic variation.
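The point about tabulated atomic weights being abundance-weighted estimates can be illustrated numerically. Chlorine's isotopic masses and abundances below are well-established reference values; the helper function is my own sketch:

```python
# A tabulated atomic weight is an abundance-weighted average over isotopes.
# Chlorine: Cl-35 (mass ~34.969 u, ~75.77%) and Cl-37 (mass ~36.966 u, ~24.23%).

def atomic_weight(isotopes):
    """isotopes: list of (isotopic_mass, fractional_abundance) pairs."""
    return sum(mass * abundance for mass, abundance in isotopes)

chlorine = [(34.969, 0.7577), (36.966, 0.2423)]
print(round(atomic_weight(chlorine), 2))  # 35.45, the periodic-table value
```

Any real sample whose isotopic mix departs slightly from these average abundances will have a correspondingly different effective atomic weight, which is exactly the minimal variation the typology hides.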
We tend to assign relatively discrete mass
measurements to nucleonic particles, and energy measures to electrons inhabiting
what are known to be discrete orbital levels. It is possible that these
measurements of mass and energy of discrete entities, which are themselves more
in the nature of energy-entities, may be continuously fluctuating about some
normal
distribution, and that they may even on occasion jump between levels. At a
quantum level of measurement, we may even say that such measurements are in fact
statements of a certain kind of probability, of likelihood, of finding a
particular entity in a given state in a particular instance in time.
We can therefore modify even our initial statement
that all systems, by definition of their underdetermination, will be unique in
an exact sense, by saying that each system will tend to be instantaneously
unique and variable as a function of time; in other words, systems will be
unique states at any discrete instant in time, and will be variable through the
longer continuum of the duration of time. This is a basic change principle:
1. No two systems are exactly alike in time or across
space.
2. No single system is exactly like itself through
time.
3. All systems are underdetermined, and hence are
dynamic.
4. The only absolute about such systems is the
dynamic of change.
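These four principles can be given a simple numerical illustration. In the sketch below (purely illustrative and not part of the original argument; the "systems" are just vectors of simulated measurements), repeated observations generated under identical nominal conditions never coincide exactly:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def observe_system(n_measurements=5, mean=1.0, spread=0.01):
    """One instantaneous observation of a system: a handful of
    measurements that fluctuate minimally about a nominal value."""
    return [random.gauss(mean, spread) for _ in range(n_measurements)]

a = observe_system()
b = observe_system()        # a "second system" under identical conditions
a_later = observe_system()  # the "same system" observed again later

assert a != b        # principle 1: no two systems are exactly alike
assert a != a_later  # principle 2: no system is exactly like itself through time
```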
If we are to get at the fundamental principle of why
all systems are inherently underdetermined, we must at some level come to the
problem of the relative structure of systems that is a function of their
inherent complementarity. Complementarity suggests that any system may exist at
any particular instant in more than one possible state with a given distribution
of probability. Complementarity suggests furthermore that it is possible for the
same system to exist in more than one possible state simultaneously in any
particular instant, depending upon how this system is being observed. The nature
of the observation affects the instantaneous state of the system, and reflects
as well the basis for such distribution in the universal relativity of all
systems.
We can put this another way and say that no system
exists in an exactly or precisely discrete sense. All systems are inherently
distributed and continuous, i.e., they are fundamentally nondiscrete. Their
sense of being discrete is a function of our observational constraints that we
superimpose upon such systems, and are thus a result and residue of the fact of
observation. To a great extent, the determination of discreteness for any
system, of its exact instantaneous state, is a function of the precision of our
instruments of measurement, their resolution and accuracy. It is also a function
of the relative units of analysis and scale of observation that we select. It
turns out that a second, or even a femtosecond, may be to an atom what a
lifetime is to a human being upon a much larger scale. If a human observes a small
microbe through a light microscope within the frame of a minute or two, chances
are that microbe will be construed as an instantaneous event structure that has
not changed during the entire period of observation. The countless numbers of
biochemical transformations and processes occurring within the cell, too small
to be seen even with a high-powered light microscope, may be missed by the
careful observer and therefore be discounted. In general, we see change process
in such microbes within the span of generation time, be it twenty minutes or an
hour, or for eukaryotic cells, within a twenty-four-hour cycle. Generally, if we
seek to understand processes on a molecular level within the cell, it is
necessary to perform procedures leading to the death of the cell as an entity
and its isolation as a momentary event structure that is arrested in time.
The complementarity of structure of all systems is
due to several related properties of such systems:
1. All systems are stratified and relative to other systems upon some level,
and usually multiple levels, of interaction.
2. All systems are by their basic subsystemic structure continuous and
nondiscrete at multiple levels of analysis.
3. All systems are by their energetic exchange dynamics situated within a
relative surrounding environment and thus are partially open within that
environment. Furthermore, the surrounding external environments are by
definition a part of a larger encompassing system of relations.
From an energetic standpoint, we may invoke the basic
laws of thermodynamics for most mechanical systems involving energy and matter,
though this may not subsume the entire class of systems or energy exchange
relationships that compose such systems. Basic evidence suggests strongly that
the laws of thermodynamics are covering law models that are part of a larger
energy dynamic system. Thus, energy dynamics, however imperfectly understood,
form fundamental mechanical constraints in the functioning of basic systems that
result in inherent change and variability of all systems in time and space.
These mechanical constraints can be understood in either a quantum or a
classical manner with the same end results.
On a quantum level, basic phenomena can be explained
that appear to violate thermodynamics upon a classical level, as for instance
the phenomenon of superconductivity or the tunneling of electrons through a
substrate. Furthermore, these same energy dynamics appear to occur in all
systems that are classifiable as real systems, no matter what the level of
integrative functioning or scale upon which they occur. We may characterize
biological systems in such a manner, in terms of their fundamental molecular and
atomic dynamics, and we may furthermore characterize even brain-based mental
systems with a similar kind of model, though the latter set of systems is as yet
incompletely described or understood.
It can be demonstrated though that the
characterization of biological or brain-based systems by means of molecular or
atomic models is inherently insufficient to the full scientific or naturalistic
description of such systems, as levels of integration are complex by many orders
of magnitude in such systems, leading to new sets of intrinsic properties
characteristic of such systems. Systems that are integrated upon supercomplex
levels can be said to exhibit both intrinsic functional properties and extrinsic
state-path properties that are emergent from the integration of the system and
that are, as a class, distinct from the kinds of properties of the subsystems
that compose them.
In understanding the integrative stratification of
systems in reality, we can make the following kinds of statements:
1. Functional stratification is based upon relative
differentiation within systems (between subsystems) and without systems
(between supersystems), a differentiation that results from the continuous
variation of such systems.
2. Functional stratification leads to increasing
levels of integration that exhibit the following characteristics:
a. exponential complexity of relational patterns
b. increasing underdetermination
c. increasing alternative variation of resulting
patterns
d. increasing emergent properties associated with
such systems
3. We may distinguish in reality between forms of
intensive stratification, or intensification, of natural process, and extensive
stratification, or extensification of natural process, associated with systems.
a. All systems will exhibit some degree of both
continuous intensification and extensification.
4. Such processes of intensive and extensive
stratification lead to emergent forms of integration between systems at one
level to create entirely new systems at another level.
5. Because such processes in nature are fundamentally
underdetermined, we may say that all such processes and patterns of integration
are fundamentally stochastic and unpredetermined. However unlikely such systems
may be, all naturally occurring systems emerged as a result of chance
distribution and occurrence without any a priori controlling force or sense of
predetermination.
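The exponential complexity of relational patterns claimed in point 2a can be illustrated combinatorially. In the sketch below (an illustration, not a derivation from the text), pairwise relations among n subsystems grow quadratically, while the number of possible relational patterns, taking each pairwise link as simply present or absent, grows exponentially:

```python
from math import comb

def pairwise_links(n):
    """Number of distinct pairwise relations among n subsystems: C(n, 2)."""
    return comb(n, 2)

def relational_configurations(n):
    """Each pairwise link may be present or absent, so there are
    2**C(n, 2) possible relational patterns -- exponential in the links."""
    return 2 ** comb(n, 2)

for n in (3, 5, 10):
    print(n, pairwise_links(n), relational_configurations(n))
# 3   3 links              8 patterns
# 5  10 links          1,024 patterns
# 10 45 links 35,184,372,088,832 patterns
```

Even this drastically simplified picture (binary links, no stratification, no dynamics) already exhibits the underdetermination the text describes: the number of alternative patterns rapidly outruns any possibility of exhaustive specification.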
In making our language of description match, in an
empirical manner, our level of observation, and in designating our units of
analysis, we are thrust upon the horns of a dilemma: we must deal not only
with the physical relativity of natural systems in terms of our observational
experiments, but also with the anthropological relativity of our language and
knowledge in terms of the designated units of analysis and description that we
apply to our observations.
On a naïve level, basic descriptors derived from "natural classes" in
any language appear to be sufficient to the tasks of basic qualitative
description. We have mathematics, the language of science, to come to our rescue
especially when we are referring to basic and "average" physical
processes, as for instance those entities represented by the periodic table and
those energetic event structures described within the framework of classical
mechanics. But even upon a microbiological level the language of mathematics
and its inherent logic begin to break down under the sheer weight and
complexity of the problem of natural description.
The
function that mathematical language serves upon a biological level is
fundamentally different from the function it serves upon a physical level. A
strong case can be made that mathematical description breaks down almost
completely upon the even more complex human level of analysis, except in the
form of applied statistics and rather gross and concrete numerical descriptors.
But even upon the fundamental level of physical analysis and observation, resort
strictly to mathematical description is inherently insufficient to the inclusive
problem of descriptive explanation. Most physical properties or laws that govern
systems upon these fundamental levels are defined in terms of linguistically based
variables or logical syllogisms that are held to be generally if not universally
applicable to all cases, and most such properties, principles or laws were
arrived at through empirical observation and experimentation in conjunction with
deductive reasoning that is applied to the evidence at hand.
In such a context, mathematics as used in the
theoretical or applied sciences takes on a basic applied function that is
distinct from its abstract articulation in pure mathematical theory. In such a
case, as demonstrated for instance through statistical description and
manipulation, mathematics is applied to natural data sets or samples or
populations of "points" whose discrete point determination, as
referred to previously, is inherently problematic from a linguistic and
observational point of view. Dealing with natural sets of data points defined
experimentally or observationally is fundamentally different from dealing with
abstract sets of numbers or points defined arbitrarily or by means of logic. If
we hold to our initial presupposition that all entities and event structures
are inherently underdetermined and continuous, then the application of discrete
and discontinuous labels or attribution to these sets must be on some basic
level fundamentally problematic.
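The problem of attaching discrete labels to continuous variation can be shown in miniature. In this sketch (the thresholds and label names are arbitrary inventions for illustration, not anything proposed in the text), two nearly identical continuous values receive different discrete classifications with equal apparent certainty:

```python
def classify(value, boundaries=(0.5, 1.5), labels=("small", "medium", "large")):
    """Force a continuous value into one of a few discrete labels.
    Values just either side of a boundary are labeled as confidently
    as values far from it -- the discreteness is imposed, not observed."""
    for boundary, label in zip(boundaries, labels):
        if value < boundary:
            return label
    return labels[-1]

print(classify(0.4999))  # small
print(classify(0.5001))  # medium -- nearly identical value, different class
```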
We can often proceed, as with many covering law
models, on the basic assumption that the degree of continuous variation is
negligible or can be discounted and that our data sets are, for the limited
purposes that they are used, sufficient in a substantive and theoretical manner.
Science could not otherwise proceed in a normal manner unless we make these
heuristic leaps of faith regarding the basic reliability and validity of our
data sets. And even when such presuppositions become extremely suspect,
especially with human systems, we still like to invoke mathematical models
and formulas in a general and usually overly simplistic manner, and with
the consequence or intention of simplifying theoretical explanation. We hold
inherent complexity temporarily in check, as it might be, in order to build our
model or defend or rationalize our argument. We assume the units we describe to
be relatively discrete, and often ignore the relativity of our analytical
indiscretion.
This problem of anthropological relativity leads us
directly to the fundamental challenge in all the sciences of building reliable
and empirically consistent taxonomies and typologies that allow us to
systematically compare and relate different systems at different levels. The
archetype of such a model is of course the periodic table of the elements in
chemistry. A system of subatomic classification of fundamental particles has
emerged, though its systematic definition is still incomplete. Increasingly
sophisticated biological taxonomies are emerging, all fundamentally based upon a
modified Linnaean system that is explained in terms of an evolutionary tree model
rooted in Darwinian theory. It is recognized, though, that upon this level major
classes and categories of biological patterning are not taken into account, and
there is a deep-seated desire among many biologists, who feel the insufficiency
of their concatenated system, for a new kind of "synthesis" that will
integrate the many subdisciplinary foci of the overall field. A call is
sometimes heard for a systematic scheme for classifying ecotrophic niches in
ecological models, though this has not yet been accomplished due to the enormous
variability found at this level of integrative analysis. The study of human
systems, at whatever level, is even less satisfactorily organized under any
comprehensive framework of systematic classification, typology and taxonomy. So
much is this lack of synthetic unity the case in the human sciences, that there
are entire disciplines that are essentially in competition with one another over
basic definitions of units of analysis and classes and nomenclature, much less
the systematic relations that these descriptors imply.
Cross-Correlational
Systems as Heuristic Models for General Scientific Description and Explanation
The quest therefore in natural systems theory and
metasystems science is for a generalized operational system that will permit
integration and synthesis of knowledge upon a number of representative levels,
and across a wide range of different fields.
I propose the general use of cross-correlational
systems, based upon advanced number, measurement and set theory, as a sufficient
heuristic model for the general description and explanation of phenomena in the
sciences. These systems, in variant and modified forms, appear to have general
applicability and functional utility in most if not all scientific fields of
endeavor, and they lead as well to the description and explanation of real
alternative systems, investigation of hypothetical systems, and the development
of abstract and artificial systems as well. I do not claim that this is the only
or necessarily the best set of operational procedures to be used, but I do claim
its general validity and broadbased reliability.
In the delineation of cross-correlational analysis, I
recognize six levels of abstraction that are involved:
1.
Number theory deals primarily with mathematical languages, principles and
problem sets. Advanced number theory attempts to work with complex numbers that
are represented only or primarily as relative variables. I am concerned in
relation to advanced number theory primarily with the systematic use of variables
that are inherently dynamic and composite.
2.
Measurement theory is based conventionally upon descriptive and predictive
statistics, but involves as well the basic issues of deriving data sets and
their manipulation based upon descriptive inference. Generally the criterion of
measurement is relative objectivity that is achieved by the superimposition of
some conventional standard or unit of analysis that is relatively nonarbitrary,
and the explicit and systematic uses of these standards in descriptive
observation.
3.
Set theory concerns two interrelated dimensions, the language of types and
labels and the problem of the classification of things or events into some
comparative framework. Set theory conventionally leads to the use of deductive
and inductive inference in the construction and at least the implicit comparison
of multiple sets. Hence sets are generally constrained by the terms and rules of
logic that we apply to such systems, and logical inference forms the basis by
which we construct and manipulate sets in relation to taxonomic frameworks or
typologies. Generally, a taxonomy will imply some kind of logical system of
inference that underlies the construction of the taxonomy.
4.
Relational theory is the basis of cross-correlational analysis, and concerns
primarily the systematic comparison and interrelation of different or
multiple sets, or of the pattern of variation of the same set over time, in such a
manner that we can explain processes of integration or disintegration that occur
at different levels. Relational theory is concerned primarily with the
scientific explanation and description of change in and between systems upon
multiple levels. Therefore it is concerned with the dynamics of variation of
systems, and with the ranges of alternation available to such systems over time.
It is concerned as well with the problem of integration of sets into systems,
and the integration of subsystems into supersystems.
5.
Heuristic modeling theory concerns the use of the results derived from cross-correlational
analysis to generate or construct systemic or mechanistic models of systems that
permit some degree of pattern prediction and simulation under controlled
circumstances. Modeling theory is primarily heuristic and experimental in
orientation, but it leads secondarily to the application of model systems for
solving real problem sets in a systematic and controlled manner. Heuristic
modeling theory can be said to encompass most of what is received as the
conventional scientific method, and it leads to the formulation and testing of
competing alternative hypotheses about the structural explanation of reality. In
general, successful scientific models have not only results that are predictive,
but that also can be simulative and even creative in the sense that they lead
directly to the development of new and alternative kinds of systems. Models from
this standpoint can be said to be theories or exemplary representations of
reality in a simplified and condensed form. They can be said to be prototypical
or archetypical of the full range of phenomena that they theoretically subsume.
A successful model can be thought of as a correct solution for a given problem
set that, when applied under universal conditions, will lead to the same
results.
6.
Advanced Systematic Taxonomy depends upon the development of realistic and
predictive models for the construction of larger taxonomic systems of
classification based upon the principles derived from the model. A valid
theoretical model should lead to at least a partial taxonomic construction; the
more comprehensive the model, the more complete the taxonomic framework. The
taxonomy provides the general frame of reference for the definition of the
supersystem, and therefore the taxonomy comes to embody and express through its
structure the theoretical model upon which it is based.
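As an illustration of level 5, heuristic modeling in its most elementary form can be sketched as fitting a simple model to observed data points and then using it as a predictive instrument. Everything below is illustrative: the data points are invented, and ordinary least squares stands in for the general notion of model construction:

```python
def fit_line(points):
    """Ordinary least-squares fit of the model y = a + b*x:
    a minimal 'heuristic model' derived from observed data."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# Invented observations, roughly following y = 2x with small deviations.
observations = [(1, 2.1), (2, 3.9), (3, 6.1), (4, 8.0)]
a, b = fit_line(observations)

def predict(x):
    """The fitted model used as a predictive instrument."""
    return a + b * x

print(round(predict(5), 2))  # 10.0
```

The model is a simplified, condensed representation of the data's pattern; its adequacy is judged by the fit between its predictions and subsequent observations, exactly the feedback loop described later in the chapter.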
The basis of cross-correlational analysis is the
systematic comparison of relational complexes that occur between different data
sets. No degree of interdependence is necessarily presumed to exist between
different sets, though there is an assumption of dependence existing between
components within sets. This intradependence is not assumed to be complete, but
only partial. It is also not assumed to be static but dynamic. It exists in no
particular instance of an event or an entity, but is distributed throughout,
unevenly and in different ways, across all possible events or entities.
The assumptions in which cross-correlational analysis
is rooted include the following:
1. For any given system or set of systems, there are
three analytical levels that must be specified: i. Subsystems composing a
system; ii. The System in itself; iii. The Supersystem of which the System is a
subsystem.
a.
This designates the general order and suborder of systems in reality.
2. Any given System at any given level of analysis
can be characterized in three ways: i. As a System in itself; ii. As a Subsystem
of a surrounding supersystem; iii. As a Supersystem containing subsystems.
a. Higher order systems demand analysis that is more
general rationally and less precise empirically.
3. A System at any given level of analysis is
subsumed by all higher Supersystems, and subsumes all lower subsystems to
which it is directly related.
a. Systems become increasingly complicated and
underdetermined with the increasing order of the system. The more complex the
system, the less inherently determined it will be.
4. For any given System at any given level, there
will be an open class of higher and lower order systems that can be said to
exist contemporaneously with that system and which can be said to be indirectly
related to that system as a part of the intensive surroundings.
5. For any given System at any given level, there
will be an open class of alternative systems that can be said to exist
contemporaneously with that system, and which can be said to be indirectly
related to that system as a part of the extensive surroundings.
6. All systems are minimally connected upon one or
more analytical levels, however indirectly, hence all systems contain some
minimal degree of relational similarity with other systems upon at least one
level.
7. The descriptive characterization of any system is
always assumed to be instantaneous and continuous, subsuming an inherent degree
of variability that leads to error and uncertainty (parallax) in relation to
knowledge about that system.
8. A system as a conceptual model represents in
abstract form the hypothetical structures (redundant or reiterative patterns)
that are observed or alleged to exist in the phenomenal pattern of experience.
9. The objective of scientific inquiry is the
explication and explanation of such models in a manner of increasing correctness
of fit between the conceptual model and the experiential patterns that it refers
to and subsumes, and a corresponding decrease in the relative uncertainty or
probability of error associated with that pattern.
10. All systems, at any given level, have a
life-cycle trajectory and are subject to rules of random and regular change. All
systems have a beginning, an indefinite intermediate period or set of periods,
and an ending.
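The three analytical levels of assumption 1, and the triple characterization of assumption 2, can be sketched as a simple linked structure. The example below is an illustrative toy, not a formalism from the text; the names "ecosystem", "organism", and "cell" are merely convenient placeholders:

```python
class System:
    """A system that is simultaneously a system in itself, a subsystem
    of its supersystem, and a supersystem of its own subsystems."""
    def __init__(self, name, supersystem=None):
        self.name = name
        self.supersystem = supersystem
        self.subsystems = []
        if supersystem is not None:
            supersystem.subsystems.append(self)

    def order(self):
        """How many levels of supersystems stand above this system."""
        return 0 if self.supersystem is None else 1 + self.supersystem.order()

# One illustrative chain of subsumption (assumption 3).
ecosystem = System("ecosystem")
organism = System("organism", supersystem=ecosystem)
cell = System("cell", supersystem=organism)

print(cell.order())  # 2: the cell is subsumed by two higher levels
```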
Before proceeding with this discussion of operational systems and their
application to general problem-solving procedures at various levels in systems
science, it is important to go down one or two other tangents.
Scientific
Description as Rational Explanation
Scientific description is an attempt to
linguistically represent the patterning of reality in a reliable and faithful
manner. Such description can proceed at different levels, in alternative
circumstances, and may lead to different kinds of results. As mentioned
previously, description brings us to the problem of language parallax, and
largely, the problem of anthropological relativity of the knowledge that such
language entails.
We may say in general that the goals of scientific
description are to lead to explanation in the shortest and most succinct route
possible. Therefore, explanation is in a sense inherent and a part of scientific
description, and should be a logical outcome of correct description. We see as
well that preconceived views or models about reality can have the influence of
channeling our description metaphorically in certain directions that may or may
not reflect the actual patterning of reality.
Description is not necessarily to be confused with
explanation. We may say that it is appropriate to separate the two problems
analytically, as in a lab or field report. But we can say that description and
what gets described and how is as often as not preconceived by the explanatory
models we may have or want to have, and that at some point the two levels may
come into dialectical conflict, in terms for instance of frame disruption, error
and frame repair, or they may come into a kind of convergence, as in the case of
constancy of perception that allows us to see what we want or at least think we
are seeing.
The selection of descriptors and the sentential
construction of a description refer to the direct perceptual response to
empirical experience and observation. It connotes a studied approach to
information.
Explanation refers only indirectly to the
observation, or to the phenomena involved in a general sense, but refers
primarily back to the description that we have formulated in relation to the
observation. Observation, especially when this is constrained experimentally by
systematic measurement, is itself a form of deliberate description, or at least
the selective perception upon which such description is based.
Explanation carries the entire process one step
further, and depends upon a deliberate "distanciation" or alienation
from the source of the information, as well as upon the reliability of the
descriptive information that was derived from the source. Explanation
furthermore is concerned with the logic or coherence of the resulting statements
concerning the prototypical patterning, or structure and its validity.
It can be seen that the primary preoccupation of
description is consistency and reliability, while the primary concern of
secondary explanation is coherence and validity of the models that are derived.
It can be said in a reciprocal way that explanation is really a form of
secondary or derivative description that takes description from a specific or
methodological level of analysis to a general or theoretical level of synthesis.
Again, the feedback nature of this process must be emphasized, as the
development of theoretical explanation will in turn condition our initial
responses and observational frameworks, and will lead to refinement of our
descriptive informational background.
In a general sense, we can say that description leads
us, by systematic steps based upon inference, from the particular to the
universal, and from the analytical to the synthetic. It leads us from
descriptive information to explanative understanding, and this continuum can be
said to form a knowledge system that is defined by a certain order and kind of
information upon which it is based. We develop explanatory models to organize
our descriptive data sets, or information, in ways that are coherent and make
sense, either from our own preconceived or arbitrary standpoint, or from a
standpoint that can be said to be relatively independent of our own a priori
judgement.
I have sidetracked in this essay about scientific
description and explanation because, upon a fundamental level, operational
systems in metasystems science occur and work in this framework of
understanding of a feedback loop in dynamic information systems, from empirical
description to rational explanation leading back to exemplifying or experimental
description under rationally controlled conditions. Our knowledge is locked
perpetually within such a feedback loop between our descriptions and
explanations of reality, and we are always testing new frames of reference with
new units of analysis to achieve some level of systemic equilibrium and sense of
coordination if not control over such knowledge systems in general.
In general, it can be said that science, as opposed to
ideology, does not privilege any particular explanatory frame of reference that
might lead to a preselection or conditioning of our descriptive units of
analysis in terms that are inflexible or constrained. It tends to privilege
descriptive units of analysis rooted in observational experience before it
privileges explanatory frameworks, however rational or rationalized.
Paradigmatically it can be demonstrated that scientific theory can frequently
smuggle back into its explanation of reality ideological conceptions that may
become inadvertently privileged or in a sense a posteriori to the data, but at
least in science the ultimate reference points are supposed to be the empirical
observation of data that is descriptively defined in as clear and careful a
manner as possible.
Abstract
Frames of Reference and Concrete Units of Analysis
The basis of number theory is strictly arithmetic and
mathematical. Number systems and their manipulations are considered purely
theoretical and abstract. My point of departure for metasystems theory in
relation to number theory is to propose a class of complex number in which a
number stands as a mixed heterogeneous variable that may be used differentially
in a number of different kinds of systems. Each number then would be an
indexical reference/inference marker, representing a complex variable, that
could stand for a large number of subsets of numbers or variables, while at the
same time, standing for itself, and standing as part of a larger system as well.
It may seem that this is a way of rendering
mathematical systems extremely unwieldy and overcomplicated. To get at the
issue, we must go to the basic meaning of what a number is and what it
represents in reality beyond its own logical representation. Generally, we count
things in sets. If we count a set of five pennies, we can assign the number one
to each penny, and the number five as a denominator to the set as a whole,
especially if we recognize a five cent piece as a whole unit of which a penny
can be considered to represent a proportion of that set.
Alternatively, we can say the following:
1 + 1 + 1 + 1 + 1 = 5
1/5 + 1/5 + 1/5 + 1/5 + 1/5 = 5/5 = 1 nickel
We can then simplify the equations by multiplication:
1 x 5 = 5
1/5 x 5 = 5/5 = 1 nickel
All other manipulations from this follow, for we can
subtract or divide one or more pennies from the whole to define what some number
of pennies represents in relation to the entire set.
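The arithmetic above can be restated computationally. In this sketch (the individual weights are illustrative inventions), five physically distinct pennies are nonetheless treated as identical, interchangeable units of count and of value:

```python
# Physically distinct pennies, given slightly different illustrative
# weights in grams: no two instances are exactly alike.
pennies = [2.500, 2.498, 2.503, 2.501, 2.499]

# Yet as numerical units they are conceptually identical and interchangeable.
count = len(pennies)               # 1 + 1 + 1 + 1 + 1 = 5
value_in_nickels = count * (1 / 5) # 1/5 + 1/5 + 1/5 + 1/5 + 1/5 = 1

print(count, value_in_nickels)  # 5 1.0
```

The physical variation of each penny is simply discarded by the logical operation of counting; only the abstract equivalence class "penny" enters the arithmetic.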
The question that I ask is what the assumptions are
when we count pennies and compare a set of pennies as equivalent to a nickel.
For all intents and purposes, each individual penny may not, and probably will
not, weigh exactly the same, but conceptually we treat them not only as equivalent to
one another, but as mathematically identical and interchangeable within the set
that can be subsumed by the name of "penny." The variation of weight
and size of any particular instance of a penny is irrelevant to its estimate of
value from a monetary standpoint. I do not wish to go into the symbolic
dimensions of money and value, but there is a strictly logical operation
performed upon the penny in which it is assigned a discrete numerical value and
is classified at the same time with all equivalent pennies sharing the same
value. Pennies in this case become interchangeable as numerical units, and they
are used in precisely this way in the exchange of money. We could perform the
same numerical operation if we count out a set of pebbles, however oddly shaped
and composed, in a pile. We treat each pebble, however different, as numerically
equivalent as discrete units. Anything that can be counted in this way is
defined as something that is discrete as a unit, and equivalent to other similar
units, no matter what the variability actually subsumed by the class.
We would say that the set of pennies or the set of
pebbles (or oranges, apples, flies, etc.) are simple sets that are defined by
their countability and conceptual equivalence. Any one orange would be as good
as the next, no matter what their individual virtues or faults. We are
essentially treating a set of real objects as if they are representatives of
abstract sets, allowing thereby their mechanical manipulation in terms of
abstract operations.
If we can say that simple numbers in general define
simple sets, then we can say that complex numbers define complex sets. We can
therefore learn what a complex number is by the kind of sets that they form. If
countability is at least one of the abstract operational procedures
characteristic of simple sets, then it strikes me that a complex set would be
one that cannot be characterized by the procedure of counting. We cannot simply
add up all the units of the set and say that the set is of size N. There may
be a number of different reasons for this noncountability of complex sets. In
this regard the kind of sets I am after are those that are defined by
nondiscrete entities, continuous rather than discontinuous variables, unlike or
nonequivalent members, open sets, noninterchangeable members, relational
complexes and sets that are composed of other sets that are themselves complex.
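The contrast between simple and complex sets can be sketched as the difference between an enumerable collection and a set specified only by a membership condition. Everything here is an illustrative stand-in; the predicate below merely gestures at "the set of all birds in Australia" discussed shortly:

```python
# A simple set: enumerable, interchangeable members, closed under counting.
simple_set = {"penny1", "penny2", "penny3", "penny4", "penny5"}
print(len(simple_set))  # 5 -- countability is built in

def is_member(candidate):
    """An intensional definition: membership is decided by a condition,
    not by enumeration. Nothing here tells us how many members exist,
    where the set's boundaries lie, or how to list its members."""
    return (candidate.get("kind") == "bird"
            and candidate.get("location") == "Australia")

# We can test any particular candidate for membership...
print(is_member({"kind": "bird", "location": "Australia"}))  # True
# ...but there is no len() of "everything satisfying is_member":
# the set is open, continuous in time, and not directly countable.
```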
It may well be asked, what good are sets that cannot
be counted, as it would appear that from the beginning such sets are not
amenable to basic arithmetic operations or manipulation. How can we determine,
for instance, the size of a set of air or a set of sea water if we cannot
determine the number of molecules contained in our set of air and sea water?
Linguistically, it makes no sense to call an area of air a set if we cannot
count its fundamental units in any obvious manner. It defies the covert
semantic categories by which we distinguish between count and noncount
values. We can count pebbles and rocks, however small, but we cannot count dirt
or mud.
We can of course measure the mud out in a number of
buckets, or the sea water in a number of jugs, or the air in a number of
balloons, and then count the buckets, jugs, and balloons as countable units of
mud, water and air. But this is not solving the central problem of identifying a
complex set; it is rather systematically transforming a complex set into a
simple one that can then be counted. This is what we do in scientific method,
and this issue will be dealt with in measurement theory, but it begs the
question of identifying and dealing with a complex set.
I would say that a complex set can be treated
essentially as an unknown set. Its dimensions, as given, are uncertain and
undescribed. We may not know its boundaries or its limits. We may say for instance the
set of all birds in Australia, not knowing the full range of bird fauna there,
the extent of any one species, or the possibilities of flight by different birds
from and to different surrounding land masses, or migration patterns. We do not
know, for instance, the rates of death or birth of different bird populations in
Australia. On the surface, "the set of all birds in Australia" is
conceptually very simple, but if we try to determine or specify this set in any
exact sense, we quickly run into enormous difficulty and complexity. It is the
nature of complex sets, I believe, that if we try to solve them in any direct
mathematical procedure in terms of their component entities, then we quickly run
into an exponential increase in complexity of component variability and
relationship. Take for example the following kind of set: suppose that a set is
composed of 5 variables (x's), each x is a composite variable of (y, z)
variables, each y variable is a random number between 1 and 100, and each z is
yet another subset of two more variables, one of which is also a random number
between 1 and 100. It can be seen that even if we eventually came down at some
level to purely countable numbers, the number of operational procedures that
would be required to determine the solution, or the range of possible solutions
for such a complex set becomes quickly astronomical, requiring probably the
assistance of a computer.
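A minimal Python sketch may make the combinatorial point concrete (the dictionary structure and the names y, z1, z2 are illustrative assumptions, not the text's notation):

```python
import random

def sample_complex_set(n_x=5):
    """Draw one concrete realization of the nested set described above:
    each x is a composite of (y, z); y is a random integer in 1..100, and
    z is itself a pair whose first member is another random integer in
    1..100 (its second member is left unspecified, as in the text)."""
    return [{"y": random.randint(1, 100),
             "z": {"z1": random.randint(1, 100), "z2": None}}
            for _ in range(n_x)]

# Even before the unspecified z2 values are resolved, the space of
# distinct realizations grows multiplicatively: 100 choices for y times
# 100 choices for z1, per x, raised to the power of 5 x's.
print((100 * 100) ** 5)  # 10**20 possible configurations
```

The point is not the particular numbers but the multiplicative structure: each added layer of composite variables multiplies, rather than adds to, the work required to determine the set.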
We can state in a basic way that the scientific
operation is to determine a set of measurements that will simplify a complex set
to a simple set that can be somehow counted and thus manipulated. Until we can
perform such an operational procedure, we can say that a complex set is a kind
of problem with an unknown solution or method for solution.
A complex set is a problem set of uncertain
dimensionality and unknown solubility. Theoretically and methodologically,
complex sets are the stuff of scientific research. Science attempts to apply
systematic means to reduce complex sets without known solutions to simpler sets
with known solutions.
Now that I have identified a complex set in a
negative sense, we have yet to ask what it is in a positive sense. We can say
that while a simple set is characterizable by countability, or what we can call
the cardinality of simple numbers, we can say that a complex set is likewise
characterizable by noncountable computability, or what we might refer to as the
cardinality of complex numbers. A complex set is therefore characterizable by
the complex numbers that its components subsume or represent. So, then, hedging
the question a little further, what is a complex number?
Suppose for instance we have two odd assortments. The
first assortment is of 10 eggs, 2 chickens, a rooster, a farmer, five flies and
three ducks. The second assortment is of 10 cars, 2 trees, 25 mice and an old
tire swing under one of the trees. How can we systematically compare these
different kinds of sets? We can simplify the problem and count the items in each
set, and say that the first set has 22 assorted items and the second has 38
assorted items. But this is, I believe, comparable to our buckets of water, in
that we are lumping into the term "item" a connotation of countability
and thus interchangeability and equivalence that ignores the obvious and
pronounced differences between the items being counted. "Item" in this
example obviously disguises more than it simplifies.
Alternatively, in this example, we can say that the
first assortment has six subsets of equivalent items of different components, and
the second set has four subsets of equivalent items of different components. In
this kind of solution, we are typologizing our sets in subsets, and essentially
creating a kind of matrix for each set by which to compare it to the other.
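The two strategies just described, counting undifferentiated "items" versus typologizing into subsets, can be sketched in Python (the type labels are our own):

```python
from collections import Counter

# The two assortments from the text, typologized into subsets of
# equivalent items (each key is one subset of interchangeable members).
first = Counter({"egg": 10, "chicken": 2, "rooster": 1,
                 "farmer": 1, "fly": 5, "duck": 3})
second = Counter({"car": 10, "tree": 2, "mouse": 25, "tire swing": 1})

# Simplifying everything to "items" hides the differences:
print(sum(first.values()), sum(second.values()))   # 22 and 38

# Typologizing instead keeps a matrix of subsets for comparison:
print(len(first), len(second))                     # 6 subsets vs 4 subsets
```

The Counter is effectively the comparison matrix the text describes: each set is represented not by one gross count but by a profile of subset counts.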
We can go the other direction and claim that a
complex number is an uncertain number with an unknown solution. It may or may
not have a possible solution; we just do not know. But as with the
characterization of an unknown complex set, we cannot define a complex number
only by what it is not, rather than by what it is.
Therefore, I will venture a definition of a complex
number, and say that a complex number is a polynomial of variables, each of
which is composed of an unknown subset of other variables, which may themselves
be discrete or polynomial. At some point in this reductive analysis of our complex number, we
may come to a known simple number as a constituent of the variable. In this
case, we are sort of systematically chasing out what is unknown about a complex
number by making it more complex than it already is, and thereby possibly
factoring out as many discoverable values as simple numbers. We are factoring
the problem in the hopes of obtaining a solution to it.
We can say then that a complex number, like a complex
set, is an inherently undetermined and possibly undeterminable number. Any
complex number remains to some extent underdetermined as a number, and any
complex set remains inherently underdetermined as a set. Complex numbers are
therefore capable only of partial determination through factorial analysis, and
complex sets can be resolved only partially.
We may risk a generalization then, and say that a
complex number is always some composite number. It is a number composed of other
numbers, some of which may be known or knowable, and others of which will remain
unknown. If we call the number 60 simple, then it is designated by one and only
one value, however written. We could write it as 15 x 4 or as 120/2 or as 240/4
or as 10 x 6 or just as plain old 60. It would remain simple because each
expression is reducible to the same single value. But what if our complex number
sixty were really the composite polynomial XY = 60, in which both X and Y could
be any number in relation to the other? We end up with an almost infinite number
of possibilities for X and Y if
we consider not only whole numbers but fractions. If we could perchance
determine one of the variables, say X, then the determination of the other
variable Y could be achieved by rapid mathematical deduction. The equation XY =
60 represents therefore a kind of complex number without clear solution, while
any of the other examples represents simplified numbers or equivalents of 60.
The complex number above would only increase in
complexity if we split its variables further, as for instance XYZ = 60
or WXYZ = 60. Then the number of possible combinations, and the required
combinatorial space, jumps up exponentially.
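A short Python sketch can make the growth of this solution space visible, if we restrict the variables to positive integers (a simplifying assumption; as noted above, admitting fractions makes the space effectively infinite):

```python
import math
from itertools import product

def integer_solutions(target, n_vars):
    """Enumerate ordered positive-integer assignments whose product
    equals target, by brute-force search over all combinations."""
    return [combo
            for combo in product(range(1, target + 1), repeat=n_vars)
            if math.prod(combo) == target]

# XY = 60 over the positive integers admits one ordered solution per
# divisor of 60; fixing X immediately determines Y:
print(len(integer_solutions(60, 2)))  # 12

# Splitting into XYZ = 60 multiplies the solution space:
print(len(integer_solutions(60, 3)))  # 54
```

Note also the cost of the search itself: the brute-force enumeration for three variables already examines 60^3 = 216,000 candidate combinations, illustrating the text's point that solving complex sets quickly demands the assistance of a computer.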
Does a complex number exist in reality? No, not
really, but then neither does a simple number, which exists only as an
abstraction. Just as countable objects exist that can be characterized by simple
numbers in simple sets, so too do noncountable things appear to exist that can
be characterized by complex numbers in complex and unknown sets. Science deals
with these kinds of sets all the time, indeed most of the time. Biology is
replete with examples that do not go conceptually far past the oversimplistic
statement of "the set of all birds in Australia" without really being
able to discretely identify this complete set. Such a set therefore represents a
complex and unknown set of complex and unknown numbers that can at best be only
partially factored out and simplified. And when we really try to crunch numbers
in biological systems, we quickly run into astronomical complexity and high
levels of uncertainty, which strongly suggest that we are playing at least
conceptually with complex numbers of things we do not fully know.
The problem set "of all birds in Australia"
can be said to represent a kind of conceptual solution in itself, one that
symbolically summarizes the problem in a gross descriptive manner without
solving it, though from the standpoint of a scientific solution it represents
an oversimplification of the problem. Oversimplification by conceptual
definition is not always a wrong recourse, and I believe much of the theorization
at the level of biology and social science relies upon such conceptual
strategies in a generalistic solution to problems where exact kinds of solutions
would be impossibly complex and underdetermined.
We may say that a complex set poses a problem that
entails a combinatorial explosion of possible solution space. It is interesting
in this regard that only in certain computer languages can such combinatorial
explosions be handled in a logical manner that can solve finite puzzle-type
problems, however complex, in just a few lines of symbolic code.
The contrast of a simple to a complex number may seem
in itself oversimplistic, or perhaps, unnecessarily complicated, but I believe
it gives us a direct handle on what can be considered a fundamental designative
dilemma in normal scientific operations: the determination
of units of analysis among unknown variables that will permit
some degree of manipulation, even systematic comparison, of these units. Indeed,
scientific method is about taking complex realities and systematically
simplifying them down to relatively simple solutions. And this is done by
factoring out the knowns from the unknowns with the hope of eventually reducing
the unknowns to a smaller and smaller subset of the knowns. There are many
natural systems, of all classes and kind, that can be characterized as complex
sets as I have defined this term.
A complex number can characterize a range of possible
simple number solutions. We can say that any complex number will be solved by
more than one possible alternate simple number, and usually by a complex
combination of simple numbers.
Information Theory and Mechanical Systems
In terms of energy transactions, all real systems that have a physical
existence can be considered to be mechanical systems of some kind and order. We
may deploy different "machine" models to describe different kinds of
mechanical systems. The machine system can be analyzed in terms of its
components, and it can be studied in terms of the patterning of interaction
between components. Studying the mechanics of systems in terms of energy
transactions that involve work and relative efficiency of some kind, and some
degree of entropy, invites a theoretical model of the informational correlate of
the machine, as an order producing, order maintaining system that has a capacity
for information and that has a certain measure or degree of noise associated
with that system. To put this in short form, where we find work in systems, we
find order and information about that system.
Reductionist theory would claim that the mechanics of any system of any
kind is reducible to the fundamental laws of physics governing such systems.
Antireductionists, or as von Bertalanffy called them,
"perspectivists," would assert that such fundamental laws are insufficient for a full accounting
for the behavior of the system. "…The presently existing laws of physics
and chemistry may well turn out to be inadequate in the description of the
living system for the same reasons that the laws of Newtonian mechanics were
inadequate in dealing with the interior of the atom." (Gatlin, 1972: 16)
According to Michael Polanyi, reductionist explanation in terms of
fundamental laws of physics ignores the laws of information theory governing any
information producing machine: "all objects conveying information are
irreducible to the terms of physics and chemistry." Any machine is an
information producing machine as well as a working machine; the order required for
work produces information. Any such machine cannot be understood in terms of its
information processing capacity by a mere description of its hardware.
Information producing machines, or real systems, are the result of higher order
operational principles governing their design and function that cannot be
deduced from the analysis of their hardware regardless of the accuracy and
precision of its physical measurement. Any such machine is furthermore
controlled primarily by its boundary conditions, and the operational principles
and boundary conditions constitute a more sufficient and fundamentally relevant
explanation of a machine than the systematic accounting of its hardware and
mechanical operation. Higher operational principles within the hierarchy of
determinations of stratified natural systems determine the boundary conditions
that serve to define any information processing machine.
Mathematical knowledge is applicable, in some as yet undefined form, to
any kind of natural or real system. This application of a body of mathematical
knowledge becomes, if successful, a part of the theory that is used to explain
this kind of system as a general model. The description of any system
mathematically becomes accurate and precise when this description becomes an
explanation by laws of physics, when mathematical description of the system is
"so exact in numerical terms that quantitative prediction of experimental
fact inevitably follows" (Gatlin, 1973: 19). Anything less than this,
anything only approximate and less exact, does not constitute a covering law of
the physical description of the system, but only a symbolic and hypothetical
interpretation of the system.
The information content of any possible system is defined by the number
of alternative informational units or states that compose the system. Binary
systems of digital computers use a bit system based upon values of 0 or 1. In
any information processing system, we can denote the size of its informational
content by determining the capacity of each unit and the total number of units
in the system. For discontinuous or discrete data, we can ask how many bits or
units of information for the total system; for continuous and indiscrete forms
of data, we can ask how much information is in the total system.
Information theory hinges on the definition of entropy we adopt. In the
case of living systems, it is apparent that with the evolution of organisms,
there has been a corresponding increase in the negative entropy of such
organisms. In the most general sense, entropy may be defined as the degree of
uniformity or sameness or redundancy in anything. Entropy comes from the theory
of thermodynamic systems, and it is defined as the degree to which the energy in
a closed thermodynamic system or process has ceased to be available energy. In
reversible processes, entropy in systems remains the same, but in natural
irreversible processes, the entropy increases. Entropy is said to be increasing
for the universe as a whole. Thus, put an ice cube in a room at normal
temperature and it will melt slowly. Put an ice cube in a hot room and it will
melt more rapidly. Put an ice cube in a walk-in freezer, and it will not melt.
We do not expect to see an ice cube freeze in a warm room, for otherwise we
would expect a violation of fundamental principles regarding the order of
natural systems. An ice cube melting in a warm room is an example of an
irreversible process of heat gain from the environment into the ice.
Entropy is the measure of the alternative states a system may assume, and
in communication theory, it was developed as a measure of information in a
system.
Any mechanical or physical system will under constant
conditions approach equilibrium with its environment if the heat exchange
between the system and the environment is irreversible. Equilibrium is the
natural state of a system that maximizes the entropy of the system at constant
energy, consistent with the constraints of the system.
Natural processes, such as ice melting, always proceed in the direction
of equilibrium, and are irreversible physical processes. Unnatural processes are
impossible processes that move in the opposite direction, towards greater
disequilibrium, and hence, never occur. A reversible process is an idealized
natural process passing through a continuous sequence of equilibrium states,
depending upon changing conditions between the system and the environment. If
the temperature of the room in which the ice cube is melting suddenly turns to
freezing, then the ice cube will cease melting, and whatever water was produced by
melting will begin freezing once again. Work is accomplished in systems by
slight changes in system state variables or boundary conditions that result in
reversible processes.
The entropy function, S, is introduced in relation to natural and
reversible processes in the heat flow of systems. Lowercase q is the measure of
the heat flowing into the system from its surroundings, and T is the absolute
temperature of the system. Thus:
1. dS > q/T for a natural change, and dS = q/T for a reversible change.
2. The entropy of system S is the sum of the entropies of all the parts of the
system, such that:

S = S_1 + S_2 + S_3 + …
Unlike energy, entropy cannot be conserved. Increased work in a system
increases the entropy of the system. The work, the conversion of energy into
heat, is 100 percent efficient, and work increases the entropy of the system.
Converting work into heat is an irreversible natural process; it is
impossible to reverse this process and convert heat from the environment into
work in a system without making changes to the environment. This is the Second
Law of Thermodynamics. In a cyclical system,
heat can be converted to work through a system so that the system will return
periodically to its initial state, but the efficiency of this process cannot be
100 percent, with a portion of the heat being lost. The lost energy results in
the degradation of the original energy state. If a restoration system is used to
restore lost energy to its original form, this system of restoration degrades
the energy even more. Therefore, all mechanical processes occurring in the
universe result in an overall increase in entropy and a corresponding
degradation of energy. While the energy of the world is always conserved and
therefore constant, the entropy always tends towards its maximization.
Understanding systems upon a fundamental level of the atomic theory of
matter, the increase of entropy towards its maximum value at equilibrium
corresponds to the change of the system toward its most probable state, its most
mixed or most random possible state, consistent with its constraints. Mixing
includes configurational mixing of particles, as well as the diffusion of energy
over the particles being mixed, as for instance, in the expansion of gases into
one another or over a given three dimensional space. Friction spreads energy
over constituent particles. Energy-spread entropy is not always compatible with
configurational entropy within a system, and a compromise state of dynamic
equilibrium must be obtained.
Any substance at finite temperatures has an absolute entropy. At zero
temperature, entropy vanishes from a system. Any thermodynamic state of a system
at a finite temperature corresponds to many microstates of the molecular
components of that system that undergo continuous rapid transitions during
observation, and the entropy of the system corresponds to the logarithm of the
number of available microstates. The state of the system as a whole, over all
its molecules, is referred to as the macrostate; the number of possible
microstates of the molecules of a macrostate is written as W and is referred to
as the thermodynamic probability of the system.
At zero temperature, the thermodynamic state corresponds to a single microstate.
Higher entropy entails higher numbers of microstates characterizing a
system, and hence higher configurational variety of the system, which entails as
well greater freedom of movement of elements, greater freedom of choice, and
greater probability of error in the prediction of outcomes of random sampling
procedures. On the other hand, greater constraint in a system results in greater
reliability or fidelity and hence concomitantly reduced sampling error.
Entropy is used as a measure of information by its probability
characteristics. Absence of information about a given instantaneous situation
corresponds to an uncertainty (H) associated with the nature of the situation.
This uncertainty is the entropy of the information about a particular state or
situation of a system, such that:
H(p_1, p_2, …, p_n) = -Σ_{k=1}^{n} p_k log p_k

where p_1, p_2, …, p_n are the probabilities of mutually exclusive events, the
logarithms are taken to an arbitrary but fixed base, and p_k log p_k is taken
to equal zero if p_k = 0.
In this formula, if p_1 = 1 and all other probabilities (p_2, …, p_n)
are zero, the situation is completely predictable and the entropy of the system
is zero, because there is no uncertainty about the state. In any other case,
entropy will be a positive value and the system in a partially uncertain state.
In terms of an information space, a source of information is described by its
entropy H in bits per symbol. The system's relative entropy (H_r) is the ratio
of the entropy of the source to the maximum rate of signaling that it can
achieve with the same signals. The quantity 1 - H_r is the redundancy of the
source.
Shannon's entropy function in information theory is referred to as
redundancy and is composed of two parts, D_1 and D_2. Any
sequence of symbols has a redundancy that must be characterized by two
independent numbers, one defining the amount and the other the kind of
redundancy of the sequence. For any problem, the amount of increase or decrease
in entropy must be determined, and the kind of entropy must also be determined.
Information represents potential knowledge about the order or
organization of a system. Information can be defined operationally like energy
as the capacity to do work, as the capacity to store and transmit meaning or
knowledge, not the meaning or knowledge itself. In defining information
operationally, we always calculate the numerical value of its capacity, and not
the qualitative value of its content. Shannon's entropy function is the measure
of this capacity in information systems.
In short, entropy measures the randomness of a system, which is
determined probabilistically. The individual outcome of a random event cannot be
predicted or predetermined in any other way except chance, but as a related
member of a group of events that are not always identical, a random phenomenon
leads to a group of outcomes that fit a natural or Gaussian curve of
probabilities, allowing statistical prediction based upon likelihood. A random
event is a single, particular outcome of a random phenomenon that is amenable to
statistical description and prediction because its relative frequency of outcome
approaches in the structure of the large and the long run a stable limiting set
of values that define the probability of the random event. The random phenomenon
is potentially an infinite, or open-ended, series of events approaching its
limiting values. The limiting value of a coin toss is 50% heads or tails, but it is only
by a very large number of coin tosses that the sample space of all possible
tosses begins approaching this limiting value.
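This long-run approach to the limiting value is easy to simulate; a minimal Python sketch (the seed is chosen arbitrarily so the illustration is reproducible):

```python
import random

random.seed(42)  # fixed seed for a reproducible illustration

def heads_frequency(n_tosses):
    """Relative frequency of heads in n tosses of a fair coin."""
    heads = sum(random.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# The relative frequency may wander far from 0.5 for small samples and
# settles toward the limiting value only in the long run:
for n in (10, 1_000, 100_000):
    print(n, heads_frequency(n))
```

No single toss is predictable, yet the sequence of relative frequencies converges toward the stable limiting value that defines the probability of the event.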
A set, denoted by braces, is a collection of things of interest, in
which the identity of the individual elements or members contained in the set
defines the value of the set, whatever the relative frequencies of members or
their order of occurrence. A space is a set that is in principle complete,
including only and every member that belongs to the set. A sample description
space is the set of all possible outcomes of random phenomena, with each element
being the elementary random event. Every element is assigned a number between 0
and 1 representing the probability of its event, and this is referred to as a
finite probability space if the number of elements is finite.
Independent random events are those in which the probability of the
occurrence of one event does not affect the probability of the occurrence of the
other event. Two random events are independent if the probability of their joint
occurrence is the product of the probability of their separate occurrences, or:
p(ab) = p(a) p(b)

where a is event 1 and b is event 2.
Two random events are dependent if the previous occurrence of one alters
the probability of the occurrence of the consecutive event. The subsequent event
is a conditional probability of the first event. For two dependent random
events, the probability of their joint occurrence is the probability of the
first event multiplied by the conditional probability of the second event given
the previous occurrence of the first. Hence:
p(ab) = p(a) p(b|a)
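These two multiplication rules can be checked with exact fractions; a brief Python illustration (the dice and card examples are our own, chosen for familiar numbers):

```python
from fractions import Fraction

# Independent events: two fair dice; p(ab) = p(a) p(b).
p_a = Fraction(1, 6)   # first die shows a six
p_b = Fraction(1, 6)   # second die shows a six
p_joint_independent = p_a * p_b
print(p_joint_independent)   # 1/36

# Dependent events: two aces drawn without replacement from a deck;
# p(ab) = p(a) p(b|a), since the first draw alters the second.
p_first_ace = Fraction(4, 52)
p_second_ace_given_first = Fraction(3, 51)
p_joint_dependent = p_first_ace * p_second_ace_given_first
print(p_joint_dependent)     # 1/221
```

The card example shows the conditional structure plainly: once an ace is removed, only 3 aces remain among 51 cards, so the second factor is a conditional, not a marginal, probability.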
Any system has a state of relative entropy, and can vary between
relatively high entropy to lower entropy. A system of high entropy can be
characterized qualitatively as random, disorganized, disordered, homogeneous,
mixed, characterized under random sampling procedures by a high frequency of
equiprobable and independent events, high configurational variety, high
uncertainty of statechange or outcome, high error probability, high potential
information, and a high degree of freedom of choice, if choice can be said to be
involved. A system that is low entropy is said to be highly determined,
structured, organized, ordered, separated, heterogeneous, diverging from
equiprobability (D_1) and diverging from independence (D_2),
with restricted arrangement or configuration, high constraint, reliability of
pattern, high fidelity, and much stored information.
A system with high entropy contains high uncertainty and high probability
of error in guessing the outcome of any particular elementary event. Constraining
and ordering a system somehow reduces the entropy and increases the reliability
of the system and reduces the probability of error.
Systems maintain equilibrium about some
asymptotically stable point within limiting constraints that keep the ratio of
entropy to order in that system in a relative balance, within acceptable
boundary conditions. If a system has an ordering force that arranges its
elements into relationships of interdependency, then that system has lowered
entropy and high determinacy.
A state of maximum entropy is characterized by
equiprobable, independent elementary events.
A state of minimum entropy (maximum determinacy) is
characterized by maximum divergence from equiprobable (D_1) and
independent (D_2) elementary random events.
For any given macrostate, we may write:
S = KW

where S is the entropy of the macrostate system, W is the thermodynamic
probability of the system, and K is an arbitrary constant.
Entropy of one system may be additively combined with
the entropy of another system, such that:
S_x + S_y = S_xy
Because the combined number of microstates of two
conjoined systems is a multiplicative and not an additive function, the
properties of the previous two equations are joined according to Boltzmann's
definition as:
S = K log W
The entropies for both systems may be written as:

S_x = K log W_x
S_y = K log W_y

These entropies, if additively recombined, become:

S_xy = K log W_x + K log W_y

Simplifying:

S_xy = K log (W_x W_y) = K log W_xy
In statistical thermodynamics, all microstates are equiprobable. The
probability of each individual microstate of such a system becomes:
p_i = 1/W, or W = 1/p_i

If we substitute for W in the previous expression S = K log W, then we have:

S = K log (1/p_i)

And since log 1 = 0, this becomes:

S = -K log p_i
This expression permits the expression of entropy in
terms of probability rather than in terms of a large number such as W, which is
often impossible to determine. Entropy can be expressed also as a statistical
average of a system, or its expectation value, which is the sum over all
possible outcomes of the probability of each individual outcome multiplied by the
numerical value of the individual outcome, for any numerically valued random
phenomenon. This may be expressed as Shannon's formula:

H = -K Σ p_i log p_i
in which, for every arrangement of the system, there is an associated numeric
value, -K log p_i (the Boltzmann variable), and the probability of each
arrangement is p_i. This follows the general form of an expectation value:

E(X) = Σ_i p_i n_i

where n_i is the numerical value of the ith outcome.
This formula may even be used when all microstates
are not equiprobable, and serves to render the concept of entropy a part of
general probability theory rather than just a function of restricted
thermodynamic settings. Its value is its generality, referring to the
probabilities of any elementary events defined by any sample description space.
When K is equal to 1 and base 2 logarithms are used, the unit of entropy
is the bit, the most generally used unit.
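With K = 1 and base-2 logarithms, the entropy function and the redundancy 1 - H_r can be computed directly; a small Python sketch (the probability values are our own illustrative choices):

```python
import math

def shannon_entropy(probs):
    """H = -sum(p * log2 p), in bits per symbol (K = 1, base-2 logs);
    terms with p = 0 contribute zero by convention."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A completely predictable source has zero entropy:
print(shannon_entropy([1.0, 0.0]))   # 0.0

# A fair binary source reaches the maximum, 1 bit per symbol:
print(shannon_entropy([0.5, 0.5]))   # 1.0

# A biased source falls in between; its redundancy is 1 - H/H_max:
h = shannon_entropy([0.9, 0.1])
h_max = math.log2(2)                 # maximum entropy of a binary source
print(round(1 - h / h_max, 3))       # 0.531
```

The filter `if p > 0` implements the convention stated earlier that p_k log p_k equals zero when p_k = 0, so fully predictable outcomes contribute no uncertainty.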
We may describe systems by two qualities: divergence
from equiprobability (D_1) and divergence from independence (D_2).
In the first case, D_1 is the maximum
value H can have in a system minus its actual value:

D_1 = H_1^max - H_1 = log a - H_1
In the second case, D_2, divergence from
independence, is the difference between the entropy state of the event if it
were independent (H_2^Ind) and the entropy state of the dependent event
(H_2^D), or:

D_2 = H_2^Ind - H_2^D
The sum of D_1 and D_2 is called the total divergence from the maximum
entropy state.
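As a rough sketch of how these two divergences might be estimated from an observed symbol sequence (the estimator below, which uses digram frequencies for the dependent entropy, is our own illustrative construction, not a formula from the text):

```python
import math
from collections import Counter

def entropy(counts):
    """Base-2 entropy of an empirical distribution given as counts."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values() if c)

def divergences(sequence, alphabet):
    """Estimate D1 and D2 for a symbol sequence.

    D1 = log a - H1        : divergence from equiprobability
    D2 = H1 - H(next|prev) : divergence from independence, with the
                             dependent entropy estimated from digrams
    """
    h1 = entropy(Counter(sequence))
    d1 = math.log2(len(alphabet)) - h1

    n = len(sequence) - 1
    digrams = Counter(zip(sequence, sequence[1:]))
    predecessors = Counter(sequence[:-1])
    # Conditional entropy H(next | prev), weighted by digram frequency:
    h_dep = -sum((c / n) * math.log2(c / predecessors[prev])
                 for (prev, _), c in digrams.items())
    return d1, h1 - h_dep

d1, d2 = divergences("AABAABAABAAB", "AB")
print(round(d1, 3), round(d2, 3))  # the patterned sequence diverges on both counts
```

The sample sequence diverges from equiprobability (A outnumbers B) and from independence (B is always followed by A), so both D_1 and D_2 come out positive; a long random sequence over the same alphabet would drive both toward zero.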
Measurement Theory
A measure is a standard arbitrary unit or system of
units used to determine by numerical count the dimensions or size or quantity of
a system in reality. While there are many derivative measures based upon the
concatenation of basic measures, like acceleration, velocity, density, volume,
or gas pressure, and while there are many alternative systems of measurement,
there are in fact only a few irreducible basic measures: length or distance,
time or duration, mass or weight, count, temperature, direction.
Sciences depend upon measurement for establishing
quantifiable and hence comparable results that can be duplicated and hence are
considered objective, and the sciences have instituted standard systems of
measurement to reduce the problem of conversion between competing standards.
In all scientific research methodology, there is a
premium placed upon both precision and accuracy of measurement, the two values
being interrelated but not the same. A large portion of research budgets is spent
on acquiring instrumentation that allows for the most exact or precise
measurement possible, for there is an inherent dilemma in all measurement. Even
though it is critical to be as precise and exact as possible in our measurement,
all measurement has a degree of residual error that creates uncertainty of
measure, which is based upon the smallest unit of measure available. Any
instrument of measurement is only as good as the smallest unit of measure it
allows for, and any measurement that is smaller than this smallest unit creates
imprecision and inaccuracy of measurement, leading to uncertainty of final
values. There can be no perfect, or exactly certain measurement.
Science deals with uncertainty inherent in
measurement by stating and establishing confidence limits and by statements of
error assigned to any given measure. In other words, science splits the
difference, and in scientific notation dealing with very large or very small
numbers, it applies procedures or rules for rounding.
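"Splitting the difference" can be sketched in a few lines; the following assumes the instrument's smallest unit fixes its resolution and that half of that unit is quoted as the residual error (the function name is hypothetical):

```python
def report(reading, smallest_unit):
    """Report a measurement with its residual uncertainty: an instrument is
    only as good as its smallest unit, so the reading is rounded to that
    resolution and half the unit is quoted as the error bound."""
    half_unit_error = smallest_unit / 2
    rounded = round(reading / smallest_unit) * smallest_unit
    return rounded, half_unit_error

# A ruler graduated in 0.01 units cannot honestly report more digits:
value, err = report(12.3472, 0.01)   # value = 12.35, err = 0.005
```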
Measurement theory deals principally with two sets of problems and a third kind of problem interrelated with the first two. First, how do we accurately measure process and distribution in reality, and, for any given kind of pattern that we might encounter, what are the best instruments of measurement that we may use? Of course, selection of the best instruments
invariably hinges upon the question of the purposes to which we wish to put the data we collect. Generally, research resources are limited, and this imposes constraints
on the kind and amount of data we can collect, so we must be selective and set
priorities for research that tend to leave out many possible avenues of
information for the few we prefer. Of course, we may be mistaken in this regard,
and find that serendipity and intuition in information gathering often carry the day.
The second set of problems is related to the first,
and concerns the methods of analysis that we put the data through that we do
manage to collect. Analysis by statistical techniques has become a standard norm
in most scientific endeavors, par for the course, and it represents a second
level of measurement that is derivative of and based upon the first level of
actual data collection. We end up with a wide variety of possible secondary data sets (averages, Z scores, correlation coefficients, regression equations) that cannot be found anywhere in the data itself, but are implicit in the data as it was collected.
We all acknowledge that there are no 3.4-person families in America, but that may well be the average, and this average is no less real or valid (nor any more real or valid) than the raw counts upon which it was based. The second problem of measurement theory is like the first, therefore, in that analytical research budgets are also circumscribed by limited resources, and we must pick and choose what kinds of tests we wish to subject our data to.
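The point that secondary measures are implicit in, yet absent from, the raw data can be illustrated with a tiny sketch (the household-size figures here are invented for illustration):

```python
import statistics

# Raw counts: household sizes from a hypothetical small survey.
sizes = [2, 3, 4, 3, 5, 2, 4, 3, 6, 2]

mean = statistics.mean(sizes)        # a "3.4-person family" kind of value
stdev = statistics.stdev(sizes)      # sample standard deviation
z_scores = [(x - mean) / stdev for x in sizes]

# The mean appears nowhere in the raw data, yet is implicit in it.
assert mean not in sizes
```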
Between the first level of actual measurement and the
second of data analysis, there is a third kind of problem characteristic of
measurement, and that has to do largely with the results of the dichotomization
of the two sets of methods such that, by the time we analyze our data, we cannot
go back to the conditions of the original experiment to retrieve or reevaluate
any of the information we first collected. We can conduct a repeat experiment,
but the informational value of the original experiment will be mostly lost. It
goes without saying that the quality of our analytical results will be directly
dependent upon the quality of the data we collected, but it is probably less
obvious to assert that the quality and kind of data we collect may indirectly
depend upon the kind of analytical models or constructs we have created for
ourselves or that lie dormant somewhere in the back of our small heads.
The third kind of problem is important to consider as
well as the first two. The dichotomization of data between collection and
analysis is important and most often necessary. It is in effect, like any cause
effect relationship, unavoidable. Some would argue, rightfully so, that one
should not mix methodological metaphors in field situations. At most it is
valuable to conduct preliminary analysis of results, but full analysis must
await complete samples and finished data bases.
A great deal of scientific progress has hinged
critically upon the invention and development of new methods of observation and
measurement. Almost any field of scientific inquiry has been made possible only
by the refinement of such instrumentation that permits independent replication
and nonarbitrary observation. Avogadro's number in chemistry has been vital to the unification of the field. The development of the microscope and optical
density devices have been critical to an understanding of microbial life and its
patterning. Carbon 14 dating techniques have resulted in a revolution in the
paleontological and archaeological sciences, before which such fields were
dominated by relativistic frameworks of chronological interpretation.
Undoubtedly, a great deal of what remains unknown to us about reality is so
because it remains essentially unavailable to us observationally or analytically
because we have not yet devised adequate techniques or technology.
Measurement parallax begins with inherent inaccuracy
of our measuring instruments, and the inherent variability of standards and
inconsistency between procedures. Furthermore, there are both quantitative and qualitative degrees of freedom in the innate complexity of the pattern being measured. Measuring six atoms in a discrete manner is not equivalent to measuring discretely six ripe oranges or six successive days at the same
location. Measuring complex event structures or composite phenomena is not as
straightforward a proposition as weighing a gram of calcium carbonate or marking
out the length of a pencil line to the nearest quarter of an inch.
Measurement parallax addresses what can be called the
fallacy of measurement, which can be stated as a habitual or intentional
predisposition to record and report measurements, and to think subsequently
about such measurements, as if they were in fact real or reified units in and of
themselves, and not just derivative and reified artifacts of our own conceptual
devices. Measurement fallacy leads to the denial or ignoring of inherent
variability of patterning in all natural or real phenomena and inherent error of
all measurement used in analyzing and describing those phenomena.
Measurement theory becomes interesting, I believe,
when it reaches a problem of having to measure in some realistic or
representative way a complex set as I have defined this above, in which the
total number or even types of variables may be unknown. Such a set is by
definition open and incomplete in terms of the known determinants that define
the set, as we cannot specify a finite limit to its size or composition without
greater information about the set. In this case, the best we can do it seems is
to "sample" the set as much as possible within our limited research
resources. When we sample the set, we usually use some hypothesis or theory to
define our sampling error or selection priorities. We look for certain kinds of
patterns, probably ignoring others, without being certain in any absolute way
that the patterns we choose or observe are the optimum or best possible.
Such sampling may be analytically driven by our
statistical models that we will employ in their selection, as in highly
developed medical research designs that target select types of population, or it
may be more encounter-driven and directively oriented, such as when an archaeologist
purposefully conducts a preliminary surface survey to determine the viability of
digging in a certain area.
Either way, we are never 100% clear as to the total
size, limits and structure of our sample, and even presuppositions of randomness
are only loosely approximated by any randomization procedures we may superimpose
upon our sampling. Such problem sets tend to be context-based systems, and they
are structured by the unknown variables more than by known factors.
Possibilistic statistics is rooted in advanced measurement theory in the operational problem of defining and determining what
can be called complex sample sets as predeterminants of the unknown complex sets
that they represent.
Possibilistic statistics is proposed as an intrinsic
part of measurement theory as a means of providing a way of systematically
dealing with complex sets of data that are partially factorial.
In this case, the object of possibilistic statistics
is to try to determine:
1. The range of variation of alternative possible
sets that may be represented by any given complex set, this being given as an a
priori unknown. In other words, to attempt to define the possible limits of the search-solution space that would be theoretically required to solve the problem.
2. The hypothetical "normal" distribution
of the alternative possible patterning within a paradigm of a complex set in
order to establish criteria of significance and for a null hypothesis. Within
this framework, anomalies can be determined that can be rejected as
nonrepresentative of a complex set, though if such anomalies are discovered to
occur they have to be given special consideration.
3. How to break down a complex problem set into a
number of different subsets that may be more completely factorially determined
than the whole set.
4. How to factorially determine each subset in as
complete a manner as possible, part of which factorization depends upon the
relational similarity with other coterminous subsets.
5. How then to define the means by which these
subsets interrelate and may be put back together to further determine the
entire set.
In other words, possibilistic statistics is a
prescribed technique of sampling large, unknown sets of undetermined complexity
and size, borrowing a basic heuristic from computer science. Take big
and complicated problems, and break them down analytically into small problem
sets, solving each as one goes, and then interrelating the solved subsets back
to the total problem. In other words, break a large problem into the smallest
manageable units possible.
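That heuristic can be sketched generically; this is my own minimal illustration of the divide-and-conquer advice, not a prescribed algorithm, and all five parameters are caller-supplied:

```python
def solve(problem, is_small, solve_small, split, combine):
    """Generic divide-and-conquer: break a large problem into the smallest
    manageable subsets, solve each, then relate the sub-solutions back to
    the whole problem."""
    if is_small(problem):
        return solve_small(problem)
    parts = split(problem)
    return combine([solve(p, is_small, solve_small, split, combine)
                    for p in parts])

# Toy usage: summing a large list by recursive halving.
total = solve(list(range(100)),
              is_small=lambda p: len(p) <= 2,
              solve_small=sum,
              split=lambda p: [p[:len(p) // 2], p[len(p) // 2:]],
              combine=sum)
# total == 4950
```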
We must recognize that complex variability of
patterning of complex sets implies an order of integration that is unknown, and
complexity that defies simplistic description. In this case, presuppositions of
randomness or of descriptive accuracy are possibly not even relevant to our
understanding of the problem. We proceed on the assumption that all problem sets
are minimally integrated and maximally variegated, and hence we seek to find
both the fullest range of pattern variation and the fundamental substrate of
relationship within this range.
The problem is that we have no presumed
"baseline" from which to start in our differentiation of samples. The
point of possibilistic statistics is, in a sense, the continuous reiteration of clustering distances to determine the best fit between multiple possible data sets.
The aim and purpose of such a procedure is to define a probabilistic
"baseline" from a derivative model of the problem set, from which we
can then operate using more conventional probabilistic statistics. We would
actually generate multiple alternative models from complex sets, each of which
would then be subsequently tested for likelihood of best fit. We end up with not
a single whole set, but with a fractionated ratio of a partial set among a
range of alternative sets.
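The reiterated clustering described here can be sketched as a one-dimensional k-means; this is my own minimal illustration, under the assumption that candidate baselines are taken to be cluster centers:

```python
def cluster_baselines(values, k=2, iters=50):
    """Reiterated clustering in one dimension: assign each value to the
    nearest of k centers, recompute the centers, and repeat. The final
    centers serve as candidate "baselines" for subsequent probabilistic
    analysis. A deliberately minimal sketch, not a full method."""
    centers = sorted(values)[::max(1, len(values) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            nearest = min(range(len(centers)), key=lambda j: abs(v - centers[j]))
            groups[nearest].append(v)
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return sorted(centers)

# Two well-separated regimes yield two well-separated baselines.
lows_and_highs = [1.0, 1.2, 0.9, 1.1, 9.8, 10.2, 10.0, 9.9]
print(cluster_baselines(lows_and_highs))
```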
In the following, what do we choose as a baseline by
which to define a "normal distribution" and the limits of our
distribution for any given complex sample?
2x/15, 10y/25, 1xy/5, 3y/10, …..
How can we know where to draw the line in our sampling, such that our number represents a significant proportion of the unknown whole?
First, though a complex set is open and undetermined
as an unknown problem set, we may say that all complex sets are analytically
finite and hence minimally determined sets. If an analytically solvable set is
finite, then we can predict that in general, though we cannot know where to
determine the final limits:
1. A larger sample set is better than a smaller one,
as long as the larger one is unbiased and within scope of our sampling
procedures.
2. A maximally variable sample set is better than a less variable one, more so for a smaller than for a larger system.
3. The range of countable variability within a sample
set of unknown size may be partially determinable by the ratios of repeatability
of different variable sets or sequences.
The third statement above has to do with the definition of noise and information in systems. We want some noise, but not too much, and some nonrandom order, but not too much. Any natural system is expected to have both noise and order. A noisy system will have less nonrandom variation of pattern, however complex, but may exhibit greater simple chance variations that are the result of simple stochastic probabilities. Say we flip a penny on successive trials, 0 for heads and 1 for
tails, and we do this ten times, coming out with the following order:
0, 0, 0, 1, 0, 0, 0, 0, 1, 1
Without being able to repeat our experiment again,
and without any other knowledge of a system of flipping pennies, we would have
to assess the unknown probabilities of turning a head or a tail on each turn.
Knowing nothing more about a penny, we might assume that the odds of turning a head versus turning a tail are 7/3. If we ran our experiment again, with ten flips, we might come out with a completely different ratio, of perhaps 6/4 or 2/8. The baseline that we are searching for would of course be 5/5, but this might only be discovered after a very large number of ten-flip series. Nothing would prevent us in the long run, after 10 such series, from coming out with an average that reflected 4/6 or 6/4 rather than 5/5. Instead, if we ran one long 100-flip series, we might find our overall average to more closely approximate 5/5, though it may still only approximate 3/7 or even 2/8. Knowing
the real probabilities involved, we would know that after 100 times, the
probability of turning a ratio of 5/5 is much greater than the probability of
turning a ratio of only 1/9, and if we did it a thousand times, our inferable
probability would be much much closer to 5/5 than any other ratio value.
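The claim about the relative likelihoods of these ratios can be checked exactly with the binomial distribution; a short sketch assuming a fair coin (`ratio_probability` is my own name):

```python
from math import comb

def ratio_probability(heads, flips, p=0.5):
    """Exact probability of a given heads/tails split in `flips`
    independent tosses of a coin with heads-probability p (binomial)."""
    return comb(flips, heads) * p**heads * (1 - p)**(flips - heads)

p_5_5 = ratio_probability(5, 10)   # 252/1024, about 0.246
p_1_9 = ratio_probability(1, 10)   # 10/1024, about 0.0098

# A 5/5 split is roughly 25 times more probable than a 1/9 split.
assert p_5_5 > p_1_9
```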
This simple situation exemplifies well the
requirements and types of procedures necessary for possibilistic statistics to
be used. The quest of possibilistic statistics is the derivation of an estimated
probability that can serve as a baseline for subsequent analysis and
measurement. If we go into the entire coin-flipping affair knowing that on any
independent flip our odds are always 50/50, which are good odds in the betting
world, then we are likely to risk the bet that the next flip will be in our
favor. Of course, most problems from a possibilistic standpoint are not so
simple as this. A two-by-two matrix or a three-by-three decision tree would yield
exponentially complex odds. It can be said that possibilistic statistics is a
kind of decision theory, and a kind of game theory that is applied
systematically to complex sets of possible outcomes.
Another problem in possibilistic statistics is
defining the range and probable limits of variation in a system. For instance,
if we were using a six-sided die, not knowing how many faces the die had, how many times would we have to cast the die before we could reasonably decide that the die had a range of six possible numbers, equally distributed? Suppose
for instance, that we generated dice tosses after ten trials with the following
values:
1/x, 5/x, 1/x, 3/x, 5/x, 1/x, 1/x, 2/x, 2/x, 1/x
How would we analyze the results? We might conclude, even though we didn't pull up a four, that the die had five sides. Alternatively, because we pulled up five ones, we might conclude that the die was in fact eight-sided, with four of the nonadjacent sides bearing a one.
Repeating our experiment over 100 tosses, we may be
able to conclude, for instance, that there are indeed six sides, even if we
pulled up only a handful of sixes out of a hundred. We may not know the exact relative distribution of numbers, and would increase the number of trials to 1000 before we could generate a reasonable probability of 1/6 odds for any number 1–6 on any given toss.
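How surprising a missing face is can be stated exactly; a minimal sketch, assuming a fair die (the function name is my own):

```python
from fractions import Fraction

def p_face_never_seen(faces, tosses):
    """Probability that one specific face of a fair `faces`-sided die
    never appears in `tosses` independent throws."""
    return Fraction(faces - 1, faces) ** tosses

# After 10 tosses of a fair six-sided die, a missing face (like the
# absent four above) is unsurprising; after 100 tosses it is strong
# evidence that the face does not exist.
p10 = p_face_never_seen(6, 10)     # about 0.16
p100 = p_face_never_seen(6, 100)   # about 1.2e-8
```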
In this kind of exercise, the only nonrelative kind
of information we may have are the partially defined real factors that are known
or discovered to exist within any given sample or samples, and the inferable
relationships we may derive on the basis of their co-occurrence, consequence and distribution between different sample sets.
The interest and deliberate intention of
possibilistic statistics is:
1. Systematic nonrandom pattern recognition against
a noisy background, presuming that:
a. nonrandom pattern will have inherent noise that
may appear random
b. background noise will have inherent pattern that
may appear nonrandom
c. random and nonrandom pattern may interact
2. from this kind of analysis, we would infer a
probable likelihood of order in the patterning over multiple samples or event
structures, presuming that:
a. nonrandom pattern will be recurrent between
successive or over multiple event structures.
b. random pattern will tend to cancel itself out over
the metastructure of the long run and the large.
It follows that possibilistic statistics is concerned
centrally with the problem of stasis and change in complex problem sets, the
range of variation of such sets being definable more as a function of time than
of spatial distribution. We can infer that stable structures will recur over
time with given rates of expectation, while nonstable structures will shift
over time with given rates of expectation. We are not attempting to make
predictions with possibilistic statistics, but only to state accurate
expectations from our knowledge of systems from which we can then derive
stateable and testable expectations within known parameters. We are attempting
to narrow by focus the range of possible variation in pattern in order to more
selectively make decisions regarding the "unfactored" remainder of our
systems. The possibilistic baseline is the starting point for secondary
probabilistic analysis utilizing more conventional statistical procedures, and
not the end point.
Of course, the examples used were very simple and
straightforward to conceptualize. We quickly approach exponential complexity in
the conceptualization of even slightly more variegated types of patterns. We say
that in general, complex sets tend to be multiply determined, and this multiple
determination of such sets is the cause of the inherent variability of pattern.
Such sets are also by definition open sets, and their openness is the cause of
increased random variability of the background pattern. The aim of possibilistic
statistics then is to partially determine such complex sets by sample factoring
of the possible determinants that may define such a set.
It is apparent that with possibilistic statistics
applied to very large and complex systems, our profile of possibilities will
tend to be continuously shifting with the addition of new information. As in the
case of the hominid fossil record, for instance, where the evidence is sparse, fragmentary and far between, and the gaps of the unknown loom large on any index-horizon, each new discovery tends to have a significantly great impact on the
understanding of the whole. This is indicative of the relative lack of knowledge
relating to this fossil record, a function of its potential size and complexity.
In other words, if we have very small sample sets to
infer about very large and complex real sets, then each new bit or variable of
information added to our knowledge is likely to have a proportionately greater
effect in restructuring our estimates of variability about the system as a
whole. The next "nth" thing found in a complex system is more likely
to be unlike any previous thing found than like. If this is not found to be the
case, then it can be presumed that the larger system is in fact a simple and
relatively stable one.
It appears that, in spite of much synchronic
variation, the hominid pattern through time was quite stable and its rate of
change rather slow. This lends greater credibility to the tendency for lumping
versus splitting of the hominid fossil record. If new fossils are found that
show significant differences from previous sample sets, it is likely that the
fossil record will prove to be much more transitory and variable over time and
place than is currently inferable from the record. It does appear that there were episodes of sympatric speciation during certain periods of this record, with side-branches, presumably more niche-specialized, eventually coming
to an end. The main line, otherwise, or trunk of the hominid family tree,
appears rather stable and steady in its transition characters.
If a large sample is accumulated, with an emergent
degree of order in the pattern that is recognizable, and then a completely
anomalous specimen or data point is discovered that does not fit the pattern,
then the stability conferred on the entire system is not thereby jeopardized or
compromised. If such anomalies are entirely unique, the possibility of a random fluke exists; if such anomalies are rare but recurrent enough, then it suggests that these occupy a special subset in an important relation to the larger set we
have already accumulated, and that together these are subsets of a larger and
even more complex "metaset" the nature of which has not been fully
described or measured in a possibilistic manner.
Measurement of complex sets depends upon our ability
to partially factor such sets into relative subsets. This type of partial
measurement is relative measurement and is context dependent. We are
essentially, systematically deriving and segregating the knowns from the
unknowns in any given set, while preserving the information about their
relationships.
From this standpoint, the following complex set:
5x, 2y, 10z, 20w
can be said to be partially factored when we convert
the known factors to fractions with a common denominator, and then apply the
principle of algebraic distribution to the set as a whole. Thus, for the
previous set, the following can be said to be the partially factored set:
1/20….(5x, 2y, 10z, 20w….)
(5/20x, 2/20y, 10/20z, 20/20w….)
(5/20, 2/20, 10/20, 20/20….) + (x, y, z, w…..)
We can say that there would be more relative
information in the first subset than in the second, and more potential
variability in the second subset than in the first. This kind of set can be
factored out or partially determined in more than one way, giving, for instance,
the following:
(1/4, 1/10, 1/2, 1/1…) + (x, y, z, w….)
(.25, .1, .5, 1.0…) + (x, y, z, w….)
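This partial factoring can be mirrored mechanically with exact fractions; a sketch assuming, as in the example, that the largest known coefficient serves as the common denominator:

```python
from fractions import Fraction

def partially_factor(coefficients):
    """Split a complex set's known coefficients away from its unknown
    variables: express every coefficient as an exact fraction of the
    largest, leaving the unknowns (x, y, z, w, ...) as a separate
    subset. Mirrors factoring (5x, 2y, 10z, 20w) into
    (5/20, 2/20, 10/20, 20/20) + (x, y, z, w)."""
    denominator = max(coefficients)
    return [Fraction(c, denominator) for c in coefficients]

factors = partially_factor([5, 2, 10, 20])
# The known subset, reduced: 1/4, 1/10, 1/2, 1/1.
```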
In this case, we have expressed 1/20 as a relative
cardinality factor of the entire known set, and we may predict that the value of
1/20 is important to the system as a whole, but if we discover on the very next
event that the value is not within the range of 1/20 but 1/27, then we will have
to redistribute the values of the entire set and we would have to readjust its
relative cardinality to reflect this redistribution.
Each additional value affects the relational values
of all the variables together, not necessarily because each successive event is directly dependent upon the values of previous events, but because all of the events together can be said to be underdetermined by the same shared structural variables, which remain unknown and complex. In the examples above, the structural variable of our penny-flip experiment was the 50/50 odds of landing either heads or tails, and this could be distributed independently throughout every successive flip-event. Likewise, in our dice-toss experiment,
the 1/6 odds of landing any whole number between 1 and 6, inclusively, is the
shared distributional structure underlying all possible toss-events and
therefore determining the structure of each event.
We can say that in a complex set, the derivative
cardinality value is relative to the instantaneous event structure of the system
as a whole series or distribution of sets, and this variability or stability is
a relative measure of the overall variability or stability of the system as a
whole.
Finally, in conclusion, we may say that possibilistic
statistics has the aim of determining from a plurality of complex sets the
instantaneous cardinality values relative to all the sets, and therefore the
hypothetical system that these relative values define for each and every similar
or related set.
The distinction between similar and related sets is
an important one to make. Related sets may not appear similar, and similar sets
may not in fact be related. Similar sets on the other hand may be interrelated,
or indirectly related, and related sets may be similar. Possibilistic
statistics can be said to have the aim of determining the relative similarity
between different sets in a relatively precise manner, in the hope of stating an
expectation of some direct or indirect relationship between alternative systems
that defines a larger paradigmatic structure, or hypothetical model, defining
such systems. In general, it can be stated that related sets will share basic
underlying cardinal structures, while similar sets will only share surface
patterning that are possibly shaped by external factors. In the latter case,
similarity can be said to be the result of nonrandom patterning that is
relationally spurious between the sets being related. Convergent evolution in natural history is a sufficient example of similarity that is nonindicative of genetic relationship, as for instance the parallel wing structures
of bats, birds and pterosaurs, and genetic divergence in the genetically related
structures of sea mammal flippers and mammal feet and hands. The periodicity of
the elements in their specific groups with shared chemical properties is another
example, I believe, of a form of similarity relationship that is a function of
the same number of electrons in the outer orbitals.
In general, it should not matter whether we are
dealing with genetic relationship or similarity of sets, except that the
underlying structures governing these different kinds of patterns may be
fundamentally different. With genetic relationship, we expect systematic and
continuous variation, or divergence of common structures. With similarity relationships, we expect convergence of different structures due to similar
underlying cardinal properties.
One of the key techniques in advanced measurement
theory and possibilistic statistics is in the definition and application of an
arbitrary analytical frame of reference to create comparable or differentiable
units of analysis with complex sets. As was mentioned at the start of this
section, standards and instruments of measurement have greatly facilitated and
made possible the advancement of science. As was mentioned previously as well,
it is not always possible to determine what the appropriate frame of reference
might be, given a variety of alternative possible frames to deal with. In a
sense, the determination of the baseline by means of deriving the instantaneous
cardinality of a factored sample representative of a more complex set is the
manner proposed for developing an appropriate frame of reference for applying
units of analysis in differentiation of subsequent samples. As was implied, chemistry didn't advance very far in a numerical sense until Avogadro's number allowed reliable estimation of the number of atoms per mole of any given substance. This may be accomplished qualitatively rather than numerically, and abstractly rather than concretely, if for instance roundness were the cardinal of our complex set of all round things in Ireland. As the example below shows, we risk oversimplification. There may be only 10 round bowls of a certain kind in all of Ireland, but millions of round common bowls, wheels, windows, and cups. We might also discount all round balls or spheroids, which may also number several millions, if we distinguish strictly between what is round and what is spherical.
Set Theory
Measurement theory involves the accumulation and
definition of sample sets derived from systematic observations made of patterns
in reality, with the aim of deriving what can be induced as significant pattern
structure from a theoretically noisy background. The result is a paradigm of
limited possibilities, inclusive of possible exceptions or anomalies, by which
we can conduct further experiments, and devise new means of analysis.
At some stage in this process, if it is successful, a
point should be reached where there will exist multiple sample sets that require
arrangement in some kind of order or frame of reference, and which may need to
be partially integrated or related to one another in the definition of the kind
of metasystem that the structure of the sets exemplifies.
At what point does a set of sets, or a series of
sets, or some kind of set distribution, become a system, and, inferentially, a
kind of "metasystem" from which we can determine the underlying
predictive structures that theoretically account for our observations? We are moving by a series of steps from description and measurement analysis to metaset construction and hypothetical system development.
Set theory concerns the abstract definition of sets,
the logical interrelation of sets, and the formation of metasets.
In general, we can say that a simple set is defined
by its cardinality, or by the shared determinant representing each member of the
entire class of members of the set. The size of a set is determined by the
population of its members. In simple sets, we select some key attribute or set
of attributes by which to characterize the set as a whole. In this sense, simple
set theory can be found to be implicit in most typologies and taxonomies, when,
for instance we can say that a Beagle is a kind of dog, or a representative of a
set of dogs.
We can pick key determining traits from what can be
called polythetic sets, which are sets whose membership is defined not by any single kind of trait, but rather by a number of interrelated traits that may be more or less apparent in any one member of a
class. A member of such a polythetic class that is defined by five key traits
may in fact only possess two or three of the defining traits, but nevertheless
be represented in that class. We may thus interlink different polythetic sets
together, for instance, if they share members between different sets, and in
complex kinds of set patterns, it is possible that such interlinkages between
sets extend indefinitely or across a very wide field of systematic variation.
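A polythetic class of this kind can be sketched as a threshold test over traits; the trait names and the threshold here are invented for illustration:

```python
def is_member(traits, defining_traits, threshold):
    """Polythetic membership: an item belongs to the class if it shows at
    least `threshold` of the defining traits, even if it lacks the rest."""
    return len(traits & defining_traits) >= threshold

# Five hypothetical defining traits of the class "dog".
dog_traits = {"barks", "four_legs", "fur", "tail", "domesticated"}

# A member may possess only some of the five defining traits.
beagle = {"barks", "four_legs", "fur", "tail", "domesticated", "floppy_ears"}
basenji = {"four_legs", "fur", "tail"}   # famously does not bark

assert is_member(beagle, dog_traits, 3)
assert is_member(basenji, dog_traits, 3)
assert not is_member({"scales", "fins"}, dog_traits, 3)
```

Interlinkage between polythetic sets then amounts to nonempty intersections of their memberships, which can chain indefinitely across a field of variation.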
Set theory has important implications for both
semantics, or the structure of meaning, that can be said to be symbolic, and
hence culturally determined, as well as for abstract systems of mathematical
quantization and logical relation that is rationally systematic. The definition, organization and interrelation of sets permits us to perform fundamental logical functions, and permits on a basic level a systematic unification between meaning and its correspondence within one-to-one type
quantized systems. Advanced set theory therefore involves this kind of
relationship with especially metasets that are derived through samples of
complex sets.
A state can be said to be an abstractly objective
relational set that is the partial instantiation of a system. A metastate can be
said to be some hypothetical description or theory of a state or set of states
composing some metasystem. A metastate is always partial to the whole
metasystem.
The total set of a metasystem would in theory be the
total number of instantaneous state transitions between the time of origination
to the time of ultimate disintegration of the system as a system. In fact, the
total set of such a metasystem would be a continuous set of alternative
state-vectors, each of which would constitute a subset of the total. To view the
system synchronously at an instantaneous point in time would be to view the
subsets of the system in a way that is distinct from the way we would view each of the state-vectors of the system from the point of their initiation to their
respective terminus. We could plot this on a matrix in which the horizontal axis
represents the temporal vector of the system, and the vertical axis represents
the spatial vector or distribution of points. It can be seen that from one
instantaneous interval to the next, that the distribution of points of the set
would not necessarily be the same.
To understand set theory in terms of our metasystems
model, it is therefore necessary to construe sets as dynamic entities. Dynamic
set theory would derive from a nonlinear topology, and would lead to
continuous intercorrelational matrices. There are, I believe, many implications
in this model, and it demonstrates as well the character of applying basic
mathematical theories to the model of a metasystem.
Set theory deals with the abstract organization of
collections, or sets, of entities. All
systems are composed of or compose sets or collections of things that are
identifiable in some abstract sense. Because of the paradox that any set or
collection of things must necessarily be both a subset of some larger collection
and also a set containing subsets of smaller systems, we must be careful in our
specification and identification of things that determine their order and
relation to other things.
All pattern in nature, if it is recognizable as such,
exhibits some sense of "order" that is symbolically resonant with our
understanding of reality. Often we observe complex phenomenological
events and see no pattern or sense of order whatsoever. We construe only what
appears to us to be somehow random, or at best some subliminal sense of pattern
that we do not notice and construe as only part of the background.
Our ability to recognize pattern in natural
phenomena, or in the larger sense, in our phenomenological experience of
reality, is directly contingent upon the preconceptions and gestalt frameworks
of symbolic attention and understanding that we bring to bear upon such
experience. We will not see in natural order what does not accord with our prior
knowledge structures, and which, also paradoxically and somewhat systematically,
in turn derives from our previous experiences.
In a sense, as we peer through a telescope or through
a microscope, or we just peer out a window onto the outside world, we embrace
the whole of the structural patterning of nature, indeed, the basics of all
reality, in a single instant. This would be true if we understood clearly
what we were looking at and what to look for in the patterning of what we
observe. Ascetically, we could develop the whole of a very successful
natural science based solely upon our ability to look out of a single window
onto the natural world, at least in theory. Technically, we could claim this to
be hypothetically true, because everything is connected somehow to everything
else, and thus the infinite set of all things is indirectly inferable from any
finite set of small things it contains. The only requirement, again, is that we
know what to see, or how to see what we are observing.
But our history of science and current sense of
scientific worldview did not arrive full blown in a single vision from some
window, nor did it come overnight in a single passage of the moon. It was built
slowly, with many stops and starts, over a long period of the accumulation
of experience and observation by many different people from many different
points of view. It arrived where it is today only after a long struggle with
alternative arguments and different points of view. It marched with falsehood
and folly as much as it cavorted with truth and wisdom. And except perhaps for
Kepler and Galileo, few scientists have also been saints.
But it has clearly arrived at the doorstep of the third
millennium with a self-conscious awareness of its own resolving and inferential
capabilities. In the structure of a subatomic particle it is viewing the entire
universe; in the structure of the nucleus of a cell it is viewing all of
life; and in the structure of a simple book or poem, it is viewing the structure
of all human reality. This is its power and its sublime elegance: that in all
the confusion and apparent chaos of our reality, as infinite and open-ended as
this is, there reigns a supreme and supremely simple sense of order. And, except
for the admonishments of Einstein, if we have science, we almost do not need God
any longer. Of course, I say "almost" in an agnostic rather than an
atheistic manner. I will not go so far as Kierkegaard, Marx or McLuhan to claim
that "God is dead."
It is the effort of this third chapter of this first
part to attempt to reconcile our limited understanding of set theory,
especially as this underlies much of what we do in mathematics and in the
scientific organization of knowledge, with our equally limited understanding of
patterns of natural order, simple and basic or complex and elaborated,
especially as these are encountered apperceptively and apprehended immediately
in our phenomenological experience, unconstrained by the preconceptions and
points of view we bring to every event.
Hopefully, in the process of this reconciliation
between abstract theory and concrete experience, we can transcend the
limitations of both forms of knowledge, to arrive at a transcendent sense of
order that is both synthetically holistic and analytically systematic.
It is quite clear to me that if we are to move
forward with our metasystems models based on mathematical symbolisms and
symbolic mathematics in nontrivial ways, then we must achieve such
reconciliation.
Implicit to the preceding argument is the sense that
the application of set theory to our apprehension of phenomenological order is
greatly conditioned by the sense of order we bring to such experience. If we
dichotomize our abstract systems of meaningful identification, hence of
definition and accounting, from our experiential systems of meaning and pattern
recognition, then we are sundering what is in fact a unity of experience and our
sense of reality. Reality is necessarily dichotomized only if we make it so, and
only if we emphasize difference over unity of experience.
If we construe this process of pattern recognition and conceptual
construction as interdependent, as part of a knowledge system itself involving
dynamic feedback, then we are able to step beyond the boundaries implied by such
a dichotomization between the real and the ideal.
But it is also quite true that not all the patterning we
construe in nature, especially upon very basic levels of apperception and
response, is necessarily "preconditioned" by our own preconceived
constructions. Many response patterns are direct and rooted in our nature, and I
am sure as well that there are basic universal patterns of perception that we
are born with and that form a substrate, however unconscious, to our meaning
systems. But at the same time, it is in the selection and interpretation of
experience, beyond the mere fright reactions, natural curiosity and inchoate
feelings we bring to our experiences, that we find the work, necessarily, of our
cultural and conceptual constructions.
Implied in this kind of understanding is of course
the basis for an argument about the validity of a gestalt approach to scientific
phenomenology and theoretical construction. Consideration of formal set theory
and its applicability to real systems, and consideration of the limits and
facets of our sense of order in natural phenomenal patterning, upon which our
inferential abilities and our sciences are based, is the beginning move toward a
systematic explication of abstract metasystems.
Nature seems to organize things in one way, and
abstractly ideal entities are organized in some related, but not exactly equal
way. The fundamental disparity between our abstract systems and systems of
realization are essentially measurable or determinable in terms of the basic
identities or thingness of groups or collections and the relationships between
things and groups. Thus, set theory is really a theory about grouping and
groupability, or the ability to sort and arrange things into groups. It is in a
sense foundational to our ability to organize reality in some coherent way that
makes sense, whether abstractly or realistically.
A great deal of abstract set theory is implicit to
most of mathematics. I will construe what is technically known as a mathematical
series as an implicit and special kind of set. Indeed, it seems to be the case
that our ability to deal with things at all in any general sense is based on our
ability to group and form sets and to relate sets and things of sets to one
another. It furthermore provides us with the means of relating our abstract
notions and ideas, or rather our generalizations, with naturally occurring sets
of things that are alleged to be representative of our generalizations. A
generalization can be construed as being at least an implicit set, or an
explicit statement about an implicit set, that is made explicit through
systematic definition. Systematic definition would proceed through both the
application of a mathematical mechanics to the description of real systems, and
by means of an elaborated symbolic calculus that serves to integrate the sense
of reality in a gestalt framework pertinent to such a system, as a hypothetical
metasystem.
Technically, set theory refers to the mathematical
study and description of collections and sets. In a larger sense, in terms of
logic and semantics, it deals with taxonomy and the systematic organization of
knowledge based upon relational properties, similarities and differences. Thus,
it is very important to science on a number of levels. It is easy to find the
role of taxonomic organization of knowledge in many different areas, for
instance, in biology. Evolution or the engine of natural selection would make no
sense and demonstrate no apparent order or dynamic outside of an understanding
of natural taxonomic systems. Indeed, a natural taxonomy as framed by Carolus
Linnaeus had to come before the development of a realistic theory
of natural evolution. Also, we cannot understand natural history in any deep
sense if we do not have the common reference-inference framework that our
natural taxonomic system provides. Of course, the natural taxonomy of biological
life is imperfect and many arguments still rage about what group is related to
what. But we couldn't have developed biological sciences, especially not in any
comprehensive sense, without such a taxonomic system being constructed in the
first place. And once such a taxonomic tree was consistently, and mostly
correctly constructed, the theory of evolution was implicit to its structure and
sense of order. The relational similarities and divergence of species could only
be explained by some mechanism of change as applied to such a system of
classification.
Taxonomic classification is implicit to all our
knowledge, especially as this is organized scientifically and systematically to
serve functional purposes in our world. Evidence indicates that children are
creating their own taxonomic classifications of their lifeworld long before
they begin learning to apply the rules of language to it or act within it in any
meaningful way.
Set theory underlies in an ideal and abstract sense
all our systems of classification and taxonomy. A set is a collection of any
kind of objects that may be denoted by a variable, say Z.
Set Z may be formed by identifying a property (P) that is possessed by
certain elements of a given set X. Z would be the set of elements of X with the
property P.
That p is an element of X with property P is
designated by the following:

p ∈ X

Therefore:

Z = {p | p ∈ X and p has property P}
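As an illustrative sketch, this set-builder definition maps directly onto a set comprehension; the particular set X and the evenness property P below are invented examples, not drawn from the text.

```python
# Forming Z = {p | p in X and p has property P}.
# X and P here are illustrative assumptions: X is the digits 0-9
# and P is the property "p is even".
X = set(range(10))

def P(p):
    return p % 2 == 0

Z = {p for p in X if P(p)}
print(Z)  # the even digits of X
```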
The Z set can be said to be defined by
determinative properties that characterize its membership. But those properties
are also characteristic of the Z set as a whole irrespective of what its
elements are in any exact sense. In set theory, a basic property assigned to all
sets in a hypothetical sense, what can be called a metaset, is the property of
cardinality.
"Cardinal" derives from the Latin root cardo, meaning
hinge, and refers to that on which something turns or depends. In reference to
the property of cardinality in set theory, it refers to the basic sense of the
chief, principal, primary or fundamental properties that are definitive of
a set, or upon which the definition or collection of a set depends. The dependency
implicit to the term also implies the notion of a functional and
determinant relationship that defines the set as such. A cardinal number is one
that answers the question "how many." Thus, a cardinal measures the
membership of a set. More technically and mathematically, cardinality has a more
exact denotative definition of one-to-one correspondence as this is construed within
a system of positive integers or absolute numbers. This has important
applications and implications in the extension of set theory to advanced systems
analysis.
Technically, two sets are said to have the same
cardinal, written C(A) = C(B), if there is a one-to-one correspondence
between the elements of A and the elements of B. In other words, both sets are
relative to the same cardinal number system by virtue of their one-to-one
correspondence. The two sets are said to be matched along the cardinal property
C, which is the shared or common determinant or denominator of both sets.
In finite sets this implies the notion of equal-sized
sets, such that we can say set A has the same number of elements as set B. It implies
in a loose symbolic form an exact quaternary analogy between sets A and B. Two
symbolic sets can be said to be analogically cardinal if for each symbolic
element of set A there is a corresponding analog in set B.
For infinite sets the application of cardinality
yields interesting consequences. If A equals the set of integers and B the set
of odd integers, then the function f(n) = 2n − 1 establishes the one-to-one
correspondence showing that C(A) = C(B). This can be interpreted to mean that an
infinite set A may have the same cardinal (functionally defined) as its proper
subset B. The cardinality of an infinite set A and its subset B
suggests the polynomial expandability of infinite sets. This paradox has
interesting implications, for instance, in its application to the understanding
of the physical structure of the total universe, if this is presumed to be an
infinite system.
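The pairing can be sketched concretely: checking a finite prefix of the correspondence f(n) = 2n − 1 illustrates how the positive integers are matched one-to-one with the odd integers, even though the latter form a proper subset.

```python
# The correspondence f(n) = 2n - 1 pairs each positive integer n
# with a distinct odd integer; a finite prefix illustrates the matching.
def f(n):
    return 2 * n - 1

A = list(range(1, 11))        # first ten positive integers
B = [f(n) for n in A]         # their images among the odd integers
assert len(set(B)) == len(A)  # the pairing is one-to-one on this prefix
print(B)  # [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
```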
The notion of subset is intrinsic to this paradox. A
subset A of a set B is one in which each element of A is also an element of
B. Hence, a subset may be smaller than its set, whether finite or infinite, and
any set is a subset of itself. This allows us, among other things, to
subordinate or rank or order properties that are determinative of the same set.
Another way of forming a set Z is to assume that Z is
the set of all subsets of a given set X, such that it can be shown that:

C(X) < C(Z)
On the other hand, the collection of all sets cannot
itself be regarded as a set. If the collection X of all sets were a set, and Z
denoted the set of all subsets of X, then Z would itself be contained in X, so
that C(Z) ≤ C(X); yet by the relation above the impossible ordering would also
exist:

C(X) < C(Z)
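For a finite set the relation C(X) < C(Z) can be checked directly: the set of all subsets of X always outnumbers X itself. A minimal sketch:

```python
from itertools import chain, combinations

# Z, the set of all subsets of X, has 2**n members for an n-element X,
# so C(X) < C(Z) always holds for finite sets.
def power_set(X):
    elems = list(X)
    return [set(c) for c in chain.from_iterable(
        combinations(elems, r) for r in range(len(elems) + 1))]

X = {1, 2, 3}
Z = power_set(X)
print(len(X), len(Z))  # 3 8
```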
If an infinite set cannot be put into a one-to-one
correspondence with the positive integers, then the set is referred to as
uncountable. The continuum hypothesis concerns the cardinality of such sets; it
has been shown to be independent of the axioms of conventional set theory, so
that it can be neither proved nor disproved within them. It remains one of the
famous puzzles of pure mathematics. It is stated thus:

If X is an uncountable subset of the reals R, is C(X) equal to C(R)?

This broaches one of the basic dilemmas of the improper
integration of real, infinite sets. It is a dilemma underlying the application
of ideal and abstract systems to real systems.
The cardinality of set A is said to be less than or equal to that of set B if a
one-to-one correspondence exists between the elements of A and the
elements of some subset of B, such that:

C(A) ≤ C(B)

Any two sets are said to be comparable if:

C(A) ≤ C(B) and/or C(B) ≤ C(A)

The cardinalities of any two sets are equal if each is less than or equal to the
other, such that if:

C(A) ≤ C(B) and C(B) ≤ C(A)

Then:

C(A) = C(B)
Cardinality is established by means of setting up
one-to-one correspondence between two sets by means of ordering the sets. An
order relation is designated by the sign < if the following three conditions
are satisfied for a set X:

1. If x_{1}, x_{2} are two distinct elements of X, either x_{1} < x_{2} or
x_{2} < x_{1}. In this case, any two elements in set X are relatable.

2. x_{1} is not less than x_{1}. In this case, no element is less than itself.

3. If x_{1} < x_{2}, and x_{2} < x_{3}, then x_{1} < x_{3}. In this case, the
relation between the elements is transitive.

Ordering implies a countable series of elements, or a
sequence that is rankable. An ordering of a set is called a well ordering if it
satisfies a fourth condition:

4. Each non-null subset Y of X has a first element. In this case, there is an
element y_{0} of Y such that if y' is any other element of Y, y_{0} < y'.
Well ordering of sets invites theorems about sets
that are considered strange and counterintuitive, and that are frequently used
as "pathological" counterexamples for various kinds of conjectures.
The positive integers are naturally well ordered, but neither the usual ordering
of the integers nor that of the reals is a well ordering. A well ordering for
the real numbers cannot be written down explicitly, but it can be proven that
one exists.
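The four conditions can be checked mechanically for a finite set of integers under the usual <, which is a well ordering; the set X below is an invented example.

```python
from itertools import combinations, permutations

# Checking the four ordering conditions on a finite set X under the usual <.
X = {3, 1, 4, 5, 9}  # an illustrative finite set of integers

# 1. any two distinct elements are comparable
comparable = all(a < b or b < a for a, b in combinations(X, 2))
# 2. no element is less than itself
irreflexive = all(not (x < x) for x in X)
# 3. the relation is transitive
transitive = all(not (a < b and b < c) or a < c
                 for a, b, c in permutations(X, 3))
# 4. every non-null subset has a first (least) element
well_ordered = all(any(all(y0 <= y for y in sub) for y0 in sub)
                   for r in range(1, len(X) + 1)
                   for sub in map(set, combinations(X, r)))
print(comparable, irreflexive, transitive, well_ordered)  # True True True True
```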
Sets may be related to one another in operations of
addition, subtraction, multiplication and mapping. The sum or union of sets A
and B, written (A + B) or (A U B), is the set of all elements in either A or B;
that is:

A + B = {p | p ∈ A or p ∈ B}

The intersection, product or common part of sets A
and B, written (A · B, AB, or A ∩ B), is the set of all elements
of both A and B, such that:

AB = {p | p ∈ A and p ∈ B}

If A and B share no common elements, then they do not
intersect and their intersection is written as:

AB = 0

The difference between A and B is written A − B and
consists of the collection of elements of A that do not also belong to B, or:

A − B = {p | p ∈ A and p ∉ B}
If A is a subset of B, then the difference A − B
is zero. Some Boolean algebraic relations follow from these
considerations:

A + B = B + A

A ∙ (B + C) = A ∙ B + A ∙ C

X − (A + B) = (X − A) ∙ (X − B)

X − A ∙ B = (X − A) + (X − B)
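These operations correspond directly to the built-in set operations of a language such as Python; the sets A, B and C below are invented examples.

```python
# Union (A + B), intersection (AB) and difference (A - B) of two sets.
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

union = A | B         # {p | p in A or p in B}
intersection = A & B  # {p | p in A and p in B}
difference = A - B    # {p | p in A and p not in B}

print(union, intersection, difference)

C = {1, 2}            # C is a subset of A...
print(C - A)          # ...so the difference C - A is the empty set
```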
Boolean algebra underlies a theory of relations and
closely relates set theory to probability and computer circuit design. It
describes combinations of the subsets of a given set I of elements, taking the
intersection of S ∩ T or the union S U T of two such subsets S and T of I,
and the complement S' of any one such subset S of I. Thus, we can write the
following:
S ∩ S = S

S ∩ T = T ∩ S

S ∩ (T ∩ V) = (S ∩ T) ∩ V

S U S = S

S U T = T U S

S U (T U V) = (S U T) U V

S ∩ (T U V) = (S ∩ T) U (S ∩ V)

S U (T ∩ V) = (S U T) ∩ (S U V)
If an empty set is denoted by 0, and I is the set of
all elements under consideration, then:

0 ∩ S = 0

I U S = I

0 U S = S

I ∩ S = S

S ∩ S' = 0

S U S' = I
From these fundamental laws, other algebraic laws can
be deduced. If the logical connectives or, and, and not are substituted for
union, intersection and complementation, respectively, then the same laws hold.
Deductive propositions and assertions also hold when these laws are combined by
the same connectives.
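The laws above can be verified exhaustively for small subsets of a finite universe; the universe I and the subsets S, T, V below are invented examples.

```python
# Verifying a selection of the Boolean-algebra identities listed above.
I = set(range(6))  # the universe of all elements under consideration
S, T, V = {0, 1, 2}, {1, 2, 3}, {2, 4}
Sc = I - S         # the complement S' of S in I

assert S & (T | V) == (S & T) | (S & V)  # S ∩ (T U V) = (S ∩ T) U (S ∩ V)
assert S | (T & V) == (S | T) & (S | V)  # S U (T ∩ V) = (S U T) ∩ (S U V)
assert S & Sc == set()                   # S ∩ S' = 0
assert S | Sc == I                       # S U S' = I
assert I - (S | T) == (I - S) & (I - T)  # X - (A + B) = (X - A)(X - B)
print("all identities hold")
```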
Set X may be transformed into set Y by means of a
transformation function that assigns a point of Y to each point of X. At this
point, sets are representable as matrices. The point of Y assigned to a point x
of X under a transformation function f is called the image of x and is denoted
f(x). The set of all points x sent into a particular point y of Y is called the
inverse of y and is denoted by f^{−1}(y).

The transformation f(x) = x^{2} takes each
real point x into its square. Geometry provides many examples of
transformations. Generally, transformations change the size and shape of an
object. From set transformations, topology can be studied.
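A brief sketch of this transformation, its image, and the inverse f^{−1}(y); the finite domain X is an invented example.

```python
# The transformation f(x) = x**2, its image, and inverse images f^-1(y).
X = {-3, -2, -1, 0, 1, 2, 3}

def f(x):
    return x ** 2

Y = {f(x) for x in X}                                  # the image of X
inverse = {y: {x for x in X if f(x) == y} for y in Y}  # f^-1 for each y
print(sorted(Y))            # [0, 1, 4, 9]
print(sorted(inverse[4]))   # [-2, 2]: two points of X share the image 4
```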
It can be said that each system comprises some
hypothetical matrix structure, and the diagrammatic representation of such a
system can be derived from the compounded matrix that the system represents, and
it can lead to a construction of the implicit structural matrix embodied by the
system. It can be said that such matrices tend to be compound, integrated,
multifactorial, and open. They frequently subsume other matrix structures, and
are part of larger multiple matrices.
The matrix structure that is comprised by any
hypothetical system emphasizes the relational functions occurring between points
at whatever level of analysis we are working upon. I will hypothesize that, just
as there is a single unified space within which to represent all systems in
uniform and comparable ways, this space embodies and expresses an implicit
matrix structure that can be used differentially and alternatively for the
expression of any system.
Just as we can minimally represent most systems in a
two-dimensional plane-geometricized translation, we can minimally represent most
systems by a hypothetical discrimination table of M x (j) rows and columns.

The most minimal representation we can make is a
simple chi-square table that represents the values geometricized over the x and
y axes:
(X, Y)      X+          X−
Y+          +X +Y       −X +Y
Y−          +X −Y       −X −Y
The chi-square type table above is quite common in scientific theory, and is
maximally congruent between idealized and nonparametric values. On the other
hand, it tends to represent the most simplified form possible and therefore
disguises the most variability occurring in any system.
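A minimal sketch of how such a table is populated: classifying sample points by the signs of their x and y values into the four cells. The sample points are invented for illustration.

```python
# Tallying (x, y) points into the four cells of the minimal table above.
points = [(1, 2), (-1, 3), (2, -2), (-3, -1), (4, 5)]  # invented sample
table = {("+X", "+Y"): 0, ("-X", "+Y"): 0,
         ("+X", "-Y"): 0, ("-X", "-Y"): 0}
for x, y in points:
    cell = ("+X" if x > 0 else "-X", "+Y" if y > 0 else "-Y")
    table[cell] += 1
print(table)
```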
Thus, for most systems, it is worthwhile to elaborate
tables systematically by elaborating the dimensional characteristics embedded in
each idealized variable. This is done by the backward-chaining extrapolation of
the functions underlying each data point on some ordered scale of measurement.
In general, I have adopted, and assume for most instances, a cardinal scale of
measurement that is sufficient for both parametric and nonparametric sets of
values. It must be seen that the actual frequency distribution represented by an
actual system may be composed of multiple alternative matrices that would result
in the same distributional pattern.
All possible matrices, which may be infinite in number,
represent the total possibilistic space or potential sample area that the
actual distribution would represent. In such a system, each instantiated point
or event interval is always represented by some translated and interdependent
point upon both the x and the y axes. Each point is represented by a complement
pair (x, y) that is projected from the x and y axes. Each point would therefore
be represented by some complex equational relation with at least x and y values
that would always be expressed as ratios greater than 0 and less than 1. The
total size of the elaborated matrix would be determined by the total number of
points or the sample size. The row-and-column dimensions of the data points
would always be equal, and the matrix would always be square.
The actual data points themselves may have been
derived by another set of dimensions that can be labeled qualitatively and that
may not be square. M in the formula above is usually a complex set of
parametric values that represents both the number of data points and the main
ideal dimensions of the actual matrix. Setting these values to the x and y axes,
respectively, entails that the values of the composite variables represented
by M are minimally differentiable on the basis of some standard equation or set
of equations applicable to all members of the set. These values may be mapped in
common space along the same x and y scales. The actual dimensional
characteristics may be lost in the translation of the sample to the x-y
coordinate system, and these cannot be recovered from the table except by
labeling the individual data points with their dimensional headings.
Understanding the matrix structure of any complex
equation is critical because it determines a great deal that can be done with
the equation. Spreadsheet functions and databases that integrate multiple
matrices in feedback control structures are derivable from these. By extension,
it allows us, among other things, to build and functionally organize computing
functions that enable us in turn to more dynamically model a system in virtual
space.
Matrix theory is extremely important then to the
operational definition of symbolic mathematics as the basis of advanced systems
science. Matrix structures can be hypothesized to occur at every level that we
can analyze. A matrix is in a sense
a translation of any unification space of a common set of points definable
within a Cartesian coordinate system to a common framework of a discrimination
table. Such a table allows us to systematically compare and relate values along
critical dimensions of differentiation that are implicit to the structural
relations that define the identity of the points.
The point of departure here is to hypothesize that
total reality as expressed by the Reality principle, can be represented as a
single complex, composite matrix structure of infinite size and complexity. Any
subset of Reality, at any level, can be represented as a component matrix of the
unified matrix structure, and each specifiable sample of points in reality, can
also be represented as a constituent and derivative matrix of the unified matrix
structure. All occurring or representable matrices are therefore partial
matrices of the unified matrix structure.
It is a central design of symbolic mathematics in
advanced systems sciences that all forms of data that are measurable upon some
scale, are representable within the framework of some kind of matrix that is
defined by the units of measurement. This entails that we may build matrices
representative of all systems at all levels of naturally occurring phenomena.
Furthermore, if we hypothesize that all systems are in fact composite systems of
more basic systems, then we can see all matrices as being composed of, and in
part determined by, the underlying submatrices that compose the data points
upon which the matrix is based. This presupposes that reality is composite,
because it is constituent, and that therefore our analysis of reality is
composite. It also presupposes that we may construct larger and derivative sets
systematically from more basic and smaller sets.
We may build our unifying matrix structure
empirically from the ground up, or we may build it hypothetically from the
abstract top down. Ultimately, in our operational procedures, we must attempt to
do both at the same time, hopefully meeting somewhere in the middle.
Matrix theory is conventionally rooted in a linear
conception of reality. Matrices only really become interesting to advanced
systems sciences when the nonlinear control aspects of their functional
operators are taken into account, and when the derivative structure of embedded
functions underlying matrix stratification and integration is taken into
account. At this stage of their developmental application, computation devices
must be relied upon to generate the solutions for such complex structures.
For definitional purposes, a matrix can be said to be
any rectangular array of numbers or elements with m rows and n columns, such
that any matrix A has a size of m by n, and is representable in the compact form
when the size is given as:
A = (a_{ij})

where a_{ij} is the element in the ith row and the jth column, known as the
typical element of A, with i taking the values 1, 2, 3, ..., m and j taking the
values 1, 2, 3, ..., n.

This describes a table of A that can be depicted as follows:
A        n = 1      n = 2      ...        n = n
m = 1    a(1,1)     a(1,2)     ...        a(1,n)
m = 2    a(2,1)     a(2,2)     ...        a(2,n)
...      ...        ...        ...        ...
m = m    a(m,1)     a(m,2)     ...        a(m,n)
Conventional matrices are useful computational
devices with a number of useful applications in diverse fields of applied
mathematics. They are used in mathematics especially in the study of linear
systems of algebraic equations and linear differential equations. In such
structures, the rows are usually used to represent string formulas that are
aligned in parallel fashion and are of equal size.
If m = n, then A is called a square matrix of order
n. If m = 1, then A is called a row matrix and if n = 1, then A is called a
column matrix. The elements a_{ij} of A, for which each i = j, are known
as the principal diagonal elements. A diagonal matrix is one where a_{ij}
= 0 if i ≠ j. A scalar matrix is a square diagonal matrix with equal
diagonal elements. An identity matrix is a scalar matrix in which the common
diagonal element is the number 1. An n by n identity matrix is denoted I_{n}.
Matrices are regarded as generalized numbers, and
they can be combined in certain definite ways. The matrix operations of
addition, subtraction and multiplication are defined in terms of these same
operations for the elements, and they satisfy some, but not all, the rules of
ordinary algebra.
Two matrices A = (a_{ij}) and B = (b_{ij})
are equal if they have the same size m by n and (a_{ij}) = (b_{ij})
for all i, j. Two matrices of the same size can be added by adding the elements
in the corresponding positions of each matrix together, such that A + B above
equals C = (c_{ij}) and meets the criteria stated above for equal
matrices. Matrix addition is therefore associative and commutative, such that (A
+ B) + C = A + (B + C) and A + B = B + A.
A null matrix is a matrix with zero in every position
and is denoted as 0. A + 0 = 0 + A = A. The matrix −A = (−a_{ij}) is the
negative of matrix A, and it follows that A + (−A) = 0. Subtraction of m by n
matrices is defined by B − A = B + (−A) = (b_{ij} − a_{ij}).
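These element-wise rules translate directly into code; a minimal sketch with matrices stored as nested lists of rows:

```python
# Element-wise addition and negation of matrices stored as lists of rows.
def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_neg(A):
    return [[-a for a in row] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert mat_add(A, B) == mat_add(B, A)              # A + B = B + A
assert mat_add(A, mat_neg(A)) == [[0, 0], [0, 0]]  # A + (-A) = 0
print(mat_add(A, B))  # [[6, 8], [10, 12]]
```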
A matrix B is said to be conformable with matrix A if
B has size n by q and A has size m by n, such that B has the same number of rows
as A has columns. The product AB is defined only if B is conformable with A,
such that the product matrix C = AB is an m by q matrix, and the element in the
i, j position of C is obtained by multiplying the n elements in the ith row of A
by the n elements in the jth column of B, term by term, and adding these
products.
If two matrices are square and of the same size, then
their product is defined in either order, though the two products are not in
general the same. Matrix multiplication is associative, such that if A is m by
n, B is n by q, and C is q by r, then (AB)C = A(BC), and both are m by r
matrices. If A, B and C are of the proper sizes for the operations to be
defined, then A(B + C) = AB + AC and (A + B)C = AC + BC. If A is m by n, then
for identity matrices of the proper sizes, A I_{n} = I_{m}A = A. It may happen
for matrices that AB ≠ BA, and that AB = 0 even though A ≠ 0 and B ≠ 0.
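A sketch of the product rule and of these properties for small square matrices; A, B and C are invented examples.

```python
# The (i, j) entry of C = AB sums the products of the ith row of A
# with the jth column of B, term by term.
def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
C = [[2, 0], [0, 2]]
assert mat_mul(mat_mul(A, B), C) == mat_mul(A, mat_mul(B, C))  # (AB)C = A(BC)
assert mat_mul(A, B) != mat_mul(B, A)                          # AB != BA here
print(mat_mul(A, B))  # [[2, 1], [4, 3]]
```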
The product of a matrix A and a number a is called a
scalar product and is obtained by multiplying every element of A by a. The
transpose of an m by n matrix A is an n by m matrix B in which each column of A
is the corresponding row of B and each row of A is the corresponding column of
B. If the transpose of A is denoted A', and B is conformable with A, then
(AB)' = B'A'. A matrix is symmetric if A = A', and a symmetric matrix is always
square. A square n by n matrix is nonsingular if the determinant of A is not
zero. Otherwise, A is singular.
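The transpose reversal rule (AB)' = B'A' can likewise be checked on conformable matrices; the matrices below are invented examples.

```python
# The transpose swaps rows and columns; for conformable A and B,
# (AB)' = B'A'.
def transpose(M):
    return [list(col) for col in zip(*M)]

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2, 3], [4, 5, 6]]    # 2 by 3
B = [[1, 0], [0, 1], [1, 1]]  # 3 by 2, conformable with A
assert transpose(mat_mul(A, B)) == mat_mul(transpose(B), transpose(A))

S = [[1, 2], [2, 3]]
assert S == transpose(S)      # a symmetric matrix equals its transpose
print("transpose identities hold")
```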
This kind of relative mathematics becomes more useful
when we consider that H stands for some hypothetical state of an implicit system
that is represented by the relations of the matrix M x (j) and H. When we do
this, we can see that the original equation represents a cyclical feedback
pattern that fits our original conception of the operational model. At this
point, we must entertain a nonlinear form of matrix calculus, in which matrices
consist of elements that are functions of one or more independent variables.
The original state matrix, which defines the principal
elements and determinants of the system, becomes articulated n times,
such that each subsequent state matrix is of the same size as the original
matrix. Though a single matrix represents a set of parallel linear equations,
multiple reiterated matrix structures represent a nonlinear function: the
results obtained in the first transformation are outputs that are fed back into
the values of the original matrix, resulting in an intermediate nth state matrix
that begins the reiterative cycle over again.
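The reiterative feedback cycle described above might be sketched as follows. The state matrix M, the clamping nonlinearity, and the start state are all hypothetical stand-ins, since the text leaves them unspecified:

```python
def matvec(M, x):
    """Apply the state matrix M to the state vector x."""
    return [sum(m * v for m, v in zip(row, x)) for row in M]

def iterate(M, x0, n):
    """Feed each output state back in as the next input, n times.
    A simple saturation (clamp) stands in for the nonlinearity, so the
    matrix acts on values that are functions of the prior states."""
    x, history = x0, [x0]
    for _ in range(n):
        x = [min(1.0, max(-1.0, v)) for v in matvec(M, x)]
        history.append(x)
    return history

M = [[0.5, 0.3], [0.2, 0.7]]        # hypothetical state matrix
states = iterate(M, [1.0, 0.0], 5)  # start state plus 5 reiterations
```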
We will also assume what I will call the "almost
closed" system, in which we assume that each system is almost completely
represented by a number of continuous/discontinuous states with a definite start
state and an eventual definite end state.
If we go back to our principle of unification and to
the reality principle, we can state that in the total sense, absolute A stands
for the total unity or total system of reality in some ultimate sense. All other
systems are derivative subsystems of A and are fit together in some complex
composite way to constitute absolute A as a total system. I will state that in
the total system, A will equal 1 or the principle of total unity. But like
absolute zero, total unity cannot be achieved, but will always be expressed as
relative unity, such that:
U = M(u) + H

where

H = U - M(u)

and

M = Z
This same sort of equation can be used for any
system, or any subsystem that is a derivative of the system. In the differential
expansion of our system to encompass subsystems, we must always retain the
original and intermediate values in the successive embedding of the formulas,
such that the original values will always be embedded in N as a derivative.
If we wish to capture the cyclical reiteration of a
system, we can begin by assuming some initial start state, represented as Z_{s}.
We will speculate that eventually some end state, represented by Z_{0}, will be
reached through an n number of intermediate states, represented by Z_{n}, such
that:

Z_{0} = Z_{s} - [Z_{n} - Z_{n-1}]
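Under one reading of this formula, the bracketed differences between successive intermediate states accumulate, telescoping the total change from the start state to the end state. A minimal numeric check, with a hypothetical state sequence:

```python
# Hypothetical sequence of system states from start Z_s to end state
Z = [10.0, 8.5, 7.2, 6.0, 5.1]   # Z[0] is the start state Z_s
Z_s, Z_end = Z[0], Z[-1]

# The intermediate differences telescope: summing every step
# [Z_n - Z_(n-1)] recovers the total change from start to end.
total_change = sum(Z[n] - Z[n - 1] for n in range(1, len(Z)))
assert abs(Z_s + total_change - Z_end) < 1e-12
```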
The interval limits intrinsic to a system define its
constraining or boundary limiting factors. The size of a system is defined by
the degree
The dimensions of the system: Size, Polarity, Parity, Periodicity, Limits,
Inputs, Outputs, Duration, Variance
Taxonomic Systems
Taxonomy is not context bound, it is model driven.
Scientific taxonomies depend upon successful theoretical models for their
construction. They do not depend upon the typological constructs upon which they
may have been originally defined. Typologically defined taxonomies that lack
theoretical unification, in an empirically verifiable manner, are simply
ideological structures that have no scientific validity or efficacy.
Taxonomic systems are the result of successful
construction and testing of hypothetical models, as complex constructs of
reality. Taxonomic systems come to incorporate, and represent in basic terms, a
sense of worldview to the extent that such systems can claim to be universal or
at least general in application. They therefore are a statement about the
conceptual organization of reality that reflects as much as possible the
nonarbitrary divisions that the natural patterning of this reality takes.
Taxonomies are basically defined by implicit rules
governing the order of the relations between the components of such systems. The
theoretical models upon which scientific taxonomies are built define in an
explicit manner the rules by which a taxonomy should be constructed. There is
feedback to typological description, and even to observational technique and
selection, that increases scientific knowledge at a basic level.

The following table frames a taxonomy of system states against three system
types; the cells are left open as a classificatory template:

State                    | Intensive Systems | Extensive Systems | Hybrid Metasystems
Initial States           |                   |                   |
Fundamental States       |                   |                   |
Atomic States            |                   |                   |
Molecular States         |                   |                   |
Intermediate State I     |                   |                   |
Microbiological          |                   |                   |
Mesobiological           |                   |                   |
Macrobiological          |                   |                   |
Intermediate State II    |                   |                   |
Individual               |                   |                   |
Cultural                 |                   |                   |
Social                   |                   |                   |
Intermediate State III   |                   |                   |
Alternative Systems      |                   |                   |
Abstract Systems         |                   |                   |
Automated Systems        |                   |                   |
Final State Systems      |                   |                   |
Numbers and Symbols

Mathematical Mechanics & Symbolic Calculus

Terminological Systems of Functionally Complex Polynomial States
It has been demonstrated that a pure mathematical
system describing a metasystemic model of reality is both trivial and unrealistic
without its hypothetical transformational applicability to any and every real
system. We cannot ever prove this to be so in a nonscientific way, and any
scientific proof can only be at best inductively inferred.
Nevertheless, mathematical modeling plays an
important role in numerous applications in the language and operationalization
of science and in our general understanding of reality. This is primarily
because mathematical modeling approximates a mechanistic view of real systems
and this can be deductively derived in abstract terms. Such systems are known
for the internal coherence of their deductive inference structure, and this is
derivative of their stable and deterministic relational patterns. If these are
referentially attached to real or natural systems in a consistent manner, then
they constitute the most powerful models that science has yet produced. It is
critically important therefore to realistically consider and define the limited
role of mathematics in its application to our understanding and elaboration of
advanced systems of conceptualization if these are to have any hope of
constructing and construing an alternative scientific worldview and praxis.
I offer herein only alternative deductive systems
based mostly on my own limited experiences in anthropological research. They are
only a point of entrance into and an alternative basis for development of
analytical and synthetic operational procedures for advanced systems sciences,
but they are neither the only nor the best alternative systems that may be
developed. In their construction and application, I have attempted to render
them as consonant as I am capable with the theoretical primes I am most
interested in understanding. It is hoped that their application to real problem
sets will be as interesting as they are nontrivial.
Mathematics, as a language of scientific
communication, is a limited system of signification. It achieves its power
through its sense of deductive exclusion and tight terminological definition. I
seek to elaborate a model of mathematics that is inherently more open and
flexible as a system of communication, hopefully without a substantial loss in
its inferential capabilities for our sciences.
At the same time, I seek to elaborate a more rigid
and mathematically restrictive model of symbolic language derived from natural
language models that may serve us better in the theoretical formulation and
formalization of our sciences. In general, this can be achieved through precise
and concise denotative definition of our symbolic primes.
Mathematical or symbolic logic is a point of
departure for this alternative system, but again I see symbolic logic as being
fundamentally "hung up" upon its own dilemma of identity as a
dichotomized truth-value system. Symbolic language structures and mathematical
signification systems are both necessary and complementary in the processes of
scientific generalization, yet alone, both systems have, I believe, shortcomings
that are not intrinsic to their strengths but due to their own unnecessary
restriction or lack of restriction in certain basic ways.
In the applicability of natural language to the
problem of truth-value, we can consider the following philosophical problem. At
what point can we say, in our statements about the truth-value of a rose, that
our answers go from being confirmable by some means of non-arbitrary descriptive
validation, to being ones primarily of prescriptive affirmation:
This is a rose.
This rose is red.
This rose is a flower.
This rose smells sweet.
This rose is beautiful.
This rose represents love.
These types of problems have mainly to do with the
identification and denotation of primes, as variables or values, and their
operational relations. It has as well to do with the natural flaccidity of
symbolic constructs and the smuggling of tacit values into our terminological
definitions and understandings of the world. The association of "truth
value" to our meanings, symbols and their implicatures entails that we must
understand what "truth" is in the first place and how it is attached
and manipulated in our meaning systems. In other words, we are dealing with the
problem of the language of science and general understanding, and how this
constrains and enables our inquiry into nature and reality.
In terms of our natural symbolic language, we attempt
to achieve a form of descriptive explanation of the underlying structures of
complex phenomena, in the form of strong generalizations that have a marked
degree of formalism. We approach systematically such a general theoretical model
by refinement and correction of our terms and their stated and implicit
relations. Such refinement occurs often by default and by lack of critical
self-awareness. I believe it marks out the principle of "perfectness"
in a metaphysical conception of reality that is complementary to the notion of
"correctness" in our puzzle-solving efforts in science. Our theoretic
generalizations, over time, become magically like Mary Poppins,
"practically perfect in every way" in spite of their fundamental
relativity and ultimate groundlessness of truth-value.
What is achieved by this means, I believe, is a
relative degree of fit or coordination of internal frames of inference and
external frames of reference about some central problematic. There appears to be
little or no noise arising from the lack of coordination of these two
frameworks, one abstract and ideal, the other real and natural. Perhaps Charles
Darwin was the master of such argumentation when he framed his theory of
evolution, but even his basic terms, like natural selection, smuggled in some
undesirable if hidden connotations of value.
We cannot render a completely airtight and
unleakable generalization of the natural order based upon natural symbolic
language alone, but we can get a pretty close fit that holds for most purposes.
In a natural language system, the anchor points of
our truth-value are both cultural and natural experiences as these are
symbolically articulated. In a sense, there can be no nonrelative truth in such
systems. Hence, our definitions themselves cannot obtain that molecular level of
descriptive explanation that can be set without equivocation. This appears to be
achievable only in the physical sciences where definitions take on precise
numeric and mechanical descriptions. It appears to be partially true in the
biological fields, especially as this is reducible to biochemical explanations,
but it introduces greater symbolic ambiguity and parallax of meaning when it
deals with naturalistic description of behavioral phenomena and events. It is
especially true in the human sciences that deal with anything other than human
biology.
It is partly true that in our biological and human
sciences especially, we have not arrived at the degree of theoretical closure
and exactitude of definition that is probably desirable. This is directly
proportionate to the difficulty and degree of complexity of the phenomena being
descriptively explained.
To enforce a restrictive model upon descriptive
explanation, especially upon the natural sciences, is perhaps to risk losing
the artistry and power of words to animate discontinuous worlds. But such
restriction, if it can be well done, can also be a source of artistry in our
generalizations, what I will call the consistent matching of words to the ideas
they represent. We cannot eliminate ambiguity completely, but we can
systematically reduce it to minimal proportions by minding our p's and q's.
In regard to metasystems, I have adopted what I
construe as a mechanical model of mathematics as this is applied to the
conceptual validation and demonstration of metasystems and in their inductive
instantiation in terms of real systems, especially those that occur in nature.
Mechanical mathematics can be thought of as an applied mathematics of systems
emphasizing structural integration and functional operation. But the mechanical
model of systems that I seek to employ is itself derivative from a classical and
conventional conceptioning of mechanics, in a form of modeling that I call
nonlinear mechanics. It is therefore unconventional and leads to remarkable
consequences in our understanding of systems.
The heart of a mechanical model of metasystems is the
conceptioning of a machine as a relatively determined system of parts that
cooperate to produce some kind of joint or coordinated effect, usually in nature
an effect involving energy and motion and leading to some kind of meaningful
pattern. Mechanics, I believe, provides the appropriate framework for construing
metasystems as something that is scientifically interesting. One aspect of any
machine is the sense of integration of its components that leads to a causal
patterning of action or reaction between them. I believe that a systems
theoretic approach is fit to a mechanical and mechanistic description of
phenomena in a naturally mathematical way.
We can call a machine nonlinear when its holistic
patterning is not fully describable or predictable in terms of the
reductionistic analysis of the cooperation of its parts. In other words, the
interactions between the components of such a system are not fully determined or
determinable, but only partially so, thus begetting epiphenomenal outcomes that
may be variants within a range or continuum of alternative possibilities. These
may in turn lead back to state changes and structural alterations within the
system itself.
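A standard minimal example of such a nonlinear machine (not one given in the text) is the logistic map: a single deterministic rule whose iterated trajectory is nonetheless practically unpredictable, and whose outcomes vary across a continuum under the smallest perturbation:

```python
def logistic_step(x, r=3.9):
    """One deterministic step of a minimal nonlinear 'machine'."""
    return r * x * (1.0 - x)

def trajectory(x0, steps, r=3.9):
    """Iterate the rule, feeding each output back in as the next input."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic_step(xs[-1], r))
    return xs

# Two almost identical start states diverge: a tiny perturbation
# begets qualitatively different long-run outcomes.
a = trajectory(0.200000, 40)
b = trajectory(0.200001, 40)
divergence = max(abs(x - y) for x, y in zip(a[-10:], b[-10:]))
```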
To the extent that the parts of a system are
definable in terms of their relational identities and properties within that
system, we can say that for any nonlinear system, identity of any part or
element is essentially relative and also by definition "partial,"
within the framework the system provides itself. A theory of partial identity,
or partiality, is therefore in order, which goes something like this:
1. Any thing is never whole to itself, but always a part-whole of something
else. Thus, we have a part-whole relationship within a larger framework of
possible relationships.
Mathematically, we may express this partial identity
as:

A = a{ƒ(X)} + a'{ƒ(X')}

where a is some presumable and significant subset of A, X is something else
functionally related to subset a, and a' is the complement of the subset a, such
that the union of a{ƒ(X)} and a'{ƒ(X')} equals A.
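One set-theoretic reading of this partial identity can be sketched as follows; the universe of roses and the selection function are hypothetical illustrations:

```python
# The whole A, a subset a picked out by a selection function f,
# and its complement a' (everything else in A)
A = {"red rose", "white rose", "tulip", "daisy"}
f = lambda x: "rose" in x            # hypothetical transformation function

a = {x for x in A if f(x)}           # a{f(X)}: the part identified by f
a_comp = {x for x in A if not f(x)}  # a'{f(X')}: the complement

# Partial identity: the union of the part and its complement
# reconstitutes the whole, and the two never overlap.
assert a | a_comp == A
assert a & a_comp == set()
```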
We may approach the problem of partial identity
symbolically in terms of a delimiting system of definition, such that, we may
say something like the following. Given that A represents all possible roses,
then
A
is representable by means of a particular rose (or subset of roses) of a
particular kind (a) that is determinable by a transformation function X (by
color, type, etc.) and all other possible alternative types of rose (and their
associated functional values) and things like roses (flowers, colored things,
plants, etc.).
Then we may say something like what follows:
2. In any system of abstraction, whether mathematical
or symbolic, the partial realized value may stand for and represent the abstract
total value of the whole as long as the operational transformations of
derivation and partition are definable and the complement is assumable and sub-
or superscripted.
3. In any system of application, we may substitute
the sign of the abstract total value for the partial derivative in any
occurrence of the partial, or by commutation, in any system of abstraction, we
may systematically substitute any partial derivation for any abstract value, as
long as the complement can be subscripted and superscripted.
4. In order to perform systematic substitution, we
require some table of reference that allows us to clearly state the partial
derivatives and directindirect complements of each abstract whole.
5. For each mathematically represented set of values,
we can assign one or more relative sets of symbolic terms & their associated
definitions, such that we may substitute the alternative mathematical and
symbolic statements at any point in our explanation.
6. In all real systems, we expect that both the
mathematical and symbolic forms will be used in a polynomial manner that
reflects algebraic abstraction of basic terms, such that for each hypothesized
abstract entity A, there is both a mathematical and a symbolic partial that
co-occurs at the same time (A(Rose)). I will call this the "partial
duality" of our metasystems and their elements.
7. Finally, systematic substitution procedures are
guided by frameworks and rules of inference and reference that are said to
hypothetically underlie and inform the metasystem in question, and in some larger
sense, all metasystems.
For each and every metasystem in question, there are
always at least two sets of governing operational rules that are applicable to
that system:
a.
A core set of universal inferential rules that relate that system and its design
to a larger class of systems.
b.
A derivative and relative set of inferential and referential rules that defines
its pattern of variation and alternation as unique and different from other
systems.
I will call the first (7a) unification rules and the
second (7b) differentiation rules. Finally, I would say that in any given
delimited metasystem, there is a third set of synergistic metarules that are
based upon the interaction patterns of a and b above, and these will be called
integration rules that apply to the metasystem as a whole. From the standpoint
of set theory, integration rules can be construed as the cardinality of a system
as a whole.
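Rules 2 through 4 above suggest a table of reference supporting substitution in both directions, between abstract wholes and their partials with carried complements. A schematic sketch, with hypothetical entries:

```python
# Hypothetical table of reference (rule 4): each abstract whole maps
# to its partial derivatives and their complements.
reference = {
    "Rose":   {"partials": ["red rose", "white rose"],
               "complement": "all other roses"},
    "Flower": {"partials": ["rose", "tulip"],
               "complement": "all other flowers"},
}

def substitute_partial(whole):
    """Rule 3: replace an abstract whole by one of its partials,
    carrying the complement along as a subscript-like annotation."""
    entry = reference[whole]
    return f"{entry['partials'][0]} [complement: {entry['complement']}]"

def substitute_whole(partial):
    """Rule 2: replace a partial by the abstract whole it stands for."""
    for whole, entry in reference.items():
        if partial in entry["partials"]:
            return whole
    raise KeyError(partial)

assert substitute_whole("red rose") == "Rose"
```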
It is apparent that in the description and explanation of
any hypothetical metasystem, whether real or abstract, we are interested
both in the systematic definition of the prime partials and of the prime rules
governing the system. We can say that the partiality of any system is determined
by the relatability of the parts to the whole which always includes some larger
framework.
Since all systems are partwholes of larger systems,
we can say the following:
1. No system is completely whole or independent.
2. All systems are part of some larger systemic framework that is
universal and infinite.
We cannot describe the infinite framework that embeds
any particular partial metasystem, only the primary relationships that
effectively determine that system as both separate and dependent upon that
framework. We subsume and supersume this contextual identity through
subscripting and superscripting our indirect primes.
We seek to outline and detail in our metasystems
framework the possible ranges that statealternation may achieve for any
particular or general system we are describing. We cannot do so in an exhaustive
sense, as indeed, scientific description can never be exhaustive of phenomenal
reality, because it would be infinite. We substitute general explanation in a
way we presume to be consistent with exhaustive description.
In fact, we prefer general explanation, over
exhaustive description, because the results are more interesting and
nontrivial, even if they are wrong, while exhaustive description becomes
quickly tedious and does not resolve anything in the long run. At best it
consumes valuable research resources. We need exhaustive description of course,
as our empirical, scientific frame of reference, but we must impose generalistic
limits to our scientific explanations in order that our explanations remain
parsimonious and not overloaded with trivial detail.
The substitution of general explanation for
exhaustive description is done in a systematic manner that should be regulated
by rules of deductive and inductive inference and by terminological rules of
concise description and definition. But first and foremost it needs to be
externally consistent and noncontradictory to the observed or inferable
evidence. This is not to say that conceptual and symbolic systems cannot handle
contradiction. Indeed, ideology is a system based upon some implicit form of
tautological self-contradiction that is disguised as noncontradiction. This
generally happens when the ideological constructs and their inference structures
are at some level fundamentally dissociated from the external realities they
purportedly represent.
Science as normal praxis and theory can tolerate a
wide margin of error, indeed it thrives on error at all levels, as long as it
can deal with error in a systematic way that allows it to expand and refine its
knowledge system in a more realistic manner. Science often proceeds
paradigmatically in spite of mounting error, so error by itself does not cause
revolutions in science. Errors are only forms of counterevidence that eventually
accumulate and build up to critical levels, and thus represent precursors, or
advanced early-warning signals, that entail that science itself is as chaotic in
the long run as the phenomenal patterns of nature it seeks to understand.
It seems logical to conclude that systematic
inclusion of the possibility of error, and the occurrence of error, into our
formulations about reality, is a good way of assuring that ideological closure
will not occur in our normal scientific activity. But this is easier said than
done. We, as symbolic creatures, prefer closure, even if forced, to chronic
ambiguity and antinomality. We want certainty to such a degree that we are even
willing to sacrifice the realism of our constructs in the name of preconceived
truth.
It is the purpose of this first part, and especially
of this chapter, to outline in as much detail as I can muster alternative
systems of symbolic abstraction that are realistically and hypothetically
applicable and appropriate to advanced metasystems.
Most naturally occurring systems are essentially
nonlinear machines. Humans have tended to conceptualize and construct real
machines that are superficially and ideally linear in design, but the
functioning of which usually also describes nonlinear statetrajectories,
especially over the longterm. In this latter regard, we must understand how
such machines as finite, actual entities composed of and determined by natural
processes, change in their composition and interrelational patterning between
their components as a function of time and operation.
I believe that mathematics is the appropriate
language for such metasystems, whether they are construed as linear or
nonlinear in design, because the relations between the parts, even
indeterministic aspects of these relationships, can always be represented
mathematically in terms of measurable variables and values. These are terms that
are always systematic and deductively ordered in terms of logical operators
occurring within a system. For such a set of conditions to hold, any such system
must be finitely bounded in a discrete and deterministic way as an internally
isolatable mechanism with the caveat that such bounding is never perfect but
always partial.
It can be demonstrated empirically, and I believe,
proven rationally, that no real system can be perfectly ordered in a
"closed" sense. Hence, all real systems will in time show
disintegration and decay of their normal patterns, as systems, and this is an
expected aspect of any real system. Mathematically we should be able to
represent this in realistic ways. The challenge and inherent problem of
mathematics is that it is based on ideal models of closed systems that are
therefore considered to be fundamentally unrealistic. We suffer a loss of
coherence in the application of mathematics to real problem sets: it entails
that we must break mathematical systems apart as systems of symbolic
conceptualization, and apply them piecemeal towards the integral resolution of
complex problem sets.
The mathematical system I am proposing is based upon
primes that are derived ultimately from real (i.e. nonideal) definitions of
identity and relation within a metasystems model. I believe that most linear
models and theories in mathematics that represent ideal systems, can be readily
converted to incorporate nonlinear systems in a homologous way, by means of the
reidentification of the fundamental identities of the primes involved in the
system as partials, derivatives and relatives. At this stage, absolute values
are translated into relative values, with the sense of discrepancy or difference
this involves being explicitly defined as intrinsic to the identity of the prime
itself at every step of its application.
This sense of difference translates into what I
believe to be a set of explicit confidence values that can be associated with
defined value sets in a statistically accurate way. I will not say that it is
nonarbitrary as would be expected in ideally abstract systems. I would say that
the degree of arbitrariness infinitely diminishes to "zero" in a
nonzero reality. Some complex point in our calculations is soon reached beyond
which such difference makes little difference at all. At this stage, science
becomes robust both internally and externally without a sense of ideological
closure or an essential loss of realism of its main lines of argument. It
remains fundamentally open to error and expectable nonlinear deviation of
pattern.
It can be demonstrated from this that metasystems,
when regarded from a mechanistic point of view, are always isolatable and
definable in general terms as such. This process gives hope for our sciences to
the extent that they allow some minimal and relative degree of absolute
abstraction to occur in reference to a finite system or metasystem. This is
always relative to some larger system of reference and inference, but this is
the best that we can do in our sciences. In other words, limited truth is better
than untruth. By heeding and observing the limitations of our science, we can
systematically violate these limits in interesting ways.
The fundamental question therefore becomes how we
delimit truth-value in our conceptual formulations and abstract constructions of
reality. We need to do so in a way that is empirically consistent and yet
remains logically coherent in a rational manner. We know of the fundamental tradeoff
between description and explanation. We know that parsimony of explanation
cannot be served by infinitely extending our linguistic constructs and by
exhaustively describing the minutia of reality. We know also that usually
parsimony of internally elegant conceptual models cannot be achieved without
some fundamental leap of faith beyond which we tend to sweep contradictory
evidence or patterns of variation under the carpet as just so much clutter and
confusion.
What I am proposing is a built-in system of
allocational tradeoffs between opting for empirical consistency and rational
coherence in our model building. This system is built into the very language of
scientific description and explanation itself in several ways.
It proposes relatively tight denotative primes when
it comes to our descriptive language, even involving, of course, quantitative
measures. These are abstractly representable as nonquantitative variables that
define the system or the parts of the system in question. These primes, if need
be, as variables of our metasystem, are expandable in either a qualitative or
quantitative manner (preferably in both ways at the same time). These primes
should be relatively restrictive, especially and even in very complex and
derivative real systems where the identification of such primes usually remains
ambiguous and without clear points of reference.
Thus, a great deal of effort must go into the concise
definition and refinement of the primes at every point. This constitutes the
basis for what I would call Scientific Philology, and this represents a companion
project that I will attempt to undertake subsequent to this work. Of course,
explicit elaboration of denotative primes entails and demands a clear and
concise framework of reference/inference within which its definitions can be
constructed. This of course describes a metasystem of phenomenological
epistemology and metaphysics. The definitions themselves are usually constructed
from looser models of natural symbolic language, and this invites a substrate of
a groundless ground of meaning in our knowledge systems into which a great deal
of essential arbitrary values can be imported surreptitiously or
unintentionally. Elaboration of a systematic framework of reference/inference is
thus a complementary part of such a work in scientific philology.
At the same time, it proposes a relatively
unrestricted identification and application of the primary operational
relations that articulate within any system. Classical scientific methods were
based upon mathematical and logical models that implied, among other things, a
kind of strict causality of implicature and truth-value. This has been clearly
mechanistic in a linear and deterministic sense. It entailed, among other
things, a blanket application of an additive construction of systems in which
there was a clear-cut boundary demarcating parts, sets and samples from one
another. In other words, it imposed
a kind of abstract sense of discontinuity upon systems that were in reality
relatively continuous, and it did so in a manner as to hide the arbitrary nature
of this superimposition.
In embracing the inherent complexity of nonlinear
systems, we must sacrifice the language of description based on finite
unidirectional causes, what might be called a "chemical reaction" view
of natural relations. This is not any great sacrifice, I believe, as the search
for causes has often led us on wild goose chases in our theoretical constructs,
to the implicit foreclosure upon construal of systems functioning as such.
Natural relations appear to be not so much
deterministic, as they are interdependent, and not so much causal, as they are
correlational. If such opening of our models confers upon them a basic sense of
directionlessness, the absolute directedness of time and change comes to our
rescue, and also the notion that most patterning in systems is cyclical rather
than linear. If we confuse cyclical process with linear, time-ordered cause and
effect, we restrict our understanding of such natural processes in a way that
distorts the real relations that occur.
Particularly in need of adjustment in our language is
the search for ultimate causes and prime movers in complex,
multidetermined systems. It can be said that usually there are no clear-cut
prime movers that can be said to account for systemic patterning, except if
these are destructive in their consequences. Most systems can handle some
threshold of change without disintegration of the system being the net
consequence. Even in the cases of catastrophic events, prime movers can be
construed more as the catalysts precipitating systemic crises, rather than as
the efficient cause of such events themselves.
I therefore propose in the spaces of this work to
undertake a revision of this system of mathematical abstraction and mechanical
modeling of reality as much as is possible. I do so with the purpose of making
explicit the ways and points at which arbitrariness enters into the application
of abstract systems to real systems.
There is an important proviso in this. Abstract
mathematical systems are, in the purest sense possible, absolutely nonrelative
constructions. This is the basis of their sublime power and irreducible
truthvalue. But as such ideal systems of abstraction, they are essentially,
unmodified, unrealistic systems that cannot exist in pure form in reality. In
this regard, I propose that there is a fundamental dichotomy between a priori
and noumenal systems of abstraction, which pure mathematics represents, and
essentially a posteriori and phenomenal systems of realization that are
represented by applied mathematical systems. What I propose herein is
essentially an applied system, but one that hopefully transcends this dichotomy
in important ways. I attempt to do so by means of demonstrating as explicitly as
possible the transformational operators necessary to the application of pure
mathematical constructs to real systems. Hopefully in this regard we can retain
a limited sense of the abstract truth value inherent to such ideal systems,
without at the same time sacrificing their applicability and descriptive
consistency with real-world problem sets.
Perhaps this is somewhat of a compromise approach, a
bastard of science that will prove to be an infertile oddity and hybrid.
But even if it is only a freak of an abstract system, it may open the
door to something better beyond that we do not yet understand or know.
Natural language finds its sense of order in the
symbolic-relational structure that the human brain creates within a larger
cultural system. It gains its power by indirect contextual reference to abstract
meaning as well as lived experience. The
power of language is realized in its capacity for reification, for making seem
real what is in fact imaginary.
This sense of order is minimally constrained
internally in terms of its semantic value by loosely implicit principles of
noncontradiction, or what we can call the dialectical contrast of opposites,
and analogical association. For the most part it relies upon its external
reference coordinate system to achieve its degree of realissimum. In essence,
one thing cannot mean its opposite at the same time. This is imaginable and
possible in the symbolic universe, especially in mythology, but it is not
structurally desirable as it creates dissonance within the meaning system it
embraces. Otherwise, almost anything is relatable to anything else, and the
actual deterministic patterns of relationship are included only by progressive
degrees of direct relationship. Thus, in the symbolic structure of natural
thought and language, almost anything can stand for anything else, except the
opposite of that thing. Technically, a thing can come to embrace and stand for
its antithesis, as long as it is marked in an acceptable manner that allows it
to do so within a larger system of symbolization. This is the power and potency
of natural human symbolization, especially as this is articulated and expressed
by natural human language. It is the power to resolve contradiction and
ameliorate "marginal" realities that contradict our knowledge. This is
the basis of the natural symbology underlying most human ideological systems.
Mathematical language is a subsystem of the more
general form of symbolic system. Its main difference is that mathematical
language is internally constrained in ways that normal language is not. Thus,
mathematical language achieves a degree of extreme internal coherence that is
often lacking in natural language. It pays a price for this in not being fully
or sufficiently functional as a natural symbolic system. It lacks the power that
natural language can achieve in its description of reality and in its ability to
resolve contradiction. But it gains a power of internal coherence of structure
that is much greater, and finds a broad range of applicability in precise,
formulaic and scientific descriptions of physical reality, especially in
mechanical systems.
Mathematics is not even a true symbolic system: it is
a system reduced to one of signification that does not depend upon communicative
efficacy. It lacks the duality of patterning found in natural language, but it
achieves thereby the consistency of exact correspondence between terms. It is
true that the functional design of natural language, that of making sense of and
promoting adaptation to the real world, demands an inherent flexibility and
external reference orientation of its linguistic structure that precludes the
possibility of setting up such a restrictive tautological system.
There are several clear implications of this. A
mathematical model of structural linguistics is not sufficient for a full
description of natural human language: it is at best a limited heuristic device
applicable mostly to grammar. Furthermore, to arbitrarily restrict natural
language by the superimposition of rules of relation and definition, is to
curtail and cut short its symbolic capacity. This is not the most desirable
thing to do if we depend upon the full power of our language to describe reality
at any level.
Between symbolic natural language and mathematical
language that is essentially a system of signification lacking many of the
design features of true language, there is a trade-off. Mathematics works well,
especially in mechanical and physical descriptions of reality where measures
predominate and in the abstract generalization of closed models or universal
relations that are essentially mechanistic in nature.
Natural language remains the preferred, indeed,
necessary, mode of communication when it is important to try to encapsulate and
describe complex realities that resist denotative analysis in every way. Of
course, this trade-off is never very clear-cut. Science requires both the
language of natural description and rational explanation, as much as it needs
mathematical formulas for achieving theoretical validation. This is true at
almost every level, and from a scientific standpoint, natural language and the
language of mathematics are not mutually exclusive in theory building, but are
mutually complementary to one another.
Of course, attempts have been made to
constrain natural language and semantic systems in ways similar to mathematical
systems. Mathematical or symbolic logic is perhaps the best and most productive
example of this kind of deliberate deductive constraint. Ideological systems
that are fundamentally closed and symbolically restrictive usually impose some
restrictive constraints upon the language process as this is employed in
ideological articulation, though at some level or other nonlogical leaps of
faith and unquestioned presuppositions are smuggled into the system of
rationalization. The consequence is that if a religion teaches us that two plus
two equals five or six, we are liable to believe this even if it represents an
internal contradiction of formal logic.
Such systems are fundamentally "closed"
systems of rationalization that do not permit a testing of their truth
propositions on any level, either logically or empirically. Even mathematics is
an inherently open system in this regard. Because it is based on deductive logic
alone, it does not require faith for its apprehension or extension in the world.
Accepting mathematically that 2 plus 2 equals 4 is correct from a logical
standpoint, and so does not require any other form of conviction or symbolic
legitimation. It does not require that we agnostically abnegate or publicly
confirm our faith in God or the Devil or in any other form of belief. It only
confirms our own confidence in our objective knowledge.
From this standpoint, objective knowledge has always
two facets: internal and external. Not all knowledge has these two facets
simultaneously. Subjective knowledge, feelings, intuitions, and dreams, do not
need necessarily a set of external reference points or an internally airtight
system of deductive inference. Belief systems have two facets, but the external
facet of belief is conditional upon social sanctioning of the system, and not
upon the validation of phenomenal experience. The internal facet of belief is
conditional not upon the application of deductive logic, but actually upon the
suspension of logic or else the employment of "symbology" that is
relatively unconstrained and at least from one standpoint would be considered
illogical.
It is clear that mathematical language is at its
best, though not exclusively so, in its internal coherence. It is equally clear
that mathematics can be used to reinforce an empirical description of reality at
every point. There is no sense in abandoning what is best about both language
systems in the extension of these systems to advanced systems science,
especially just to offer some mixed system which is specious at best and at
worst trivial and spurious. On the other hand, we should also recognize the
intrinsic limitations of design and applicability of both systems, and try
through our advanced systems science to overcome as much as possible such
limitations.
I propose that we need to try to work towards a
broader paradigm of the limitations and strengths of language in the sciences,
according to something like what follows:
Scientific Language Systems

                                             Math-Restrictive/deductive   Math-Inclusive/inductive
  Natural Symbolic-Restrictive/explanatory   1. Pure mathematics          3. Mathematical Logic
  Natural Symbolic-Inclusive/descriptive     2. Symbolic-Applied Math     4. Symbolic Language
We normally have at our disposal mostly systems of
types 1 and 4 above. Limited
systems have been developed in type 3 and also some type 2 systems can be found,
particularly in the application of math to especially complex derivative
systems. These hybrid-type systems are at best ambitious and at worst
overextended and clumsy, bogging down in their own top-heavy structures.
It is difficult at this point to tell where
descriptive statistics, as a form of mathematical language, would be applied,
but it is a form that is important to the integration of our scientific
languages. Statistics spans types 2 and 3, and is a good
starting point in the elaboration of a procedural language appropriate for
science, but it is not itself without important limitations.
I propose that we need to try to work systematically
to achieve a tight interfunctional integration of all four types of language
systems. We must as well work to elaborate a more realistic and abstractly
integrated system for each of the types if we are to achieve the degree of
functional comprehensiveness that we hope for in our advanced systems sciences.
In the
course of the first part I work towards development of such a broader language
base for our sciences through the development of the ideas and operational
systems of these types in an abstract sense. In the second part, I propose to
work towards the extension and procedural application of our language as
operational systems.
The basis of symbolic mathematics that I propose
herein is to be able to extend a mathematical model to the description of
complex derivative systems without the necessary overloading of variables and
functions that usually characterizes such constructions. Elegance can be
preserved and consistency conserved if we are careful and precise with our
definitions and formulations. We must be careful in this regard to hit with our
scientific hammer the proverbial nail squarely on the head, and not on our own
thumbnails.
The basis of symbolic mathematics is first to provide
a concise formulaic description of the core operational procedures of advanced
systems science. It then provides a means for its systematic extension to the
development of hypothetical and working heuristic models relating to any
possible system. It should be powerful enough to accurately and hypothetically
describe any actual system in a minimally sufficient way such that its complex
event structures can be comprehended in a realistic manner, and its
epiphenomenal outcomes made known such that this knowledge relates the system to
all other systems. The core operational procedures are derived ultimately from a
mathematical model of the scientific knowledge of mechanical systems. They are
therefore considered to constitute a purely abstract system that is based upon
strict hypotheticaldeductive rules of logic and measure, and which is
nonetheless hypothetically and experimentally applicable to all and every system
in a scientific manner.
We must ask to begin with "exactly what is
mathematics" and how is it used and useful in our sciences? The root of
mathematics, manthanein, originally
meant: "to learn, what is learned, or learnable knowledge."
Mathematics is formally defined as "a group of sciences (including
arithmetic, geometry, algebra, calculus, etc.) dealing with quantities,
magnitudes and forms, and their relationships, attributes, etc., by the use of
numbers and symbols." (Webster's Unabridged, 1979)
I will offer a minimal definition of mathematics as
the system of relating quantitative measures of some standard kind in an
internally coherent way that always results in some kind of balanced equation.
Implicit to such a definition is the notion of "measure" as a
definable quantity upon some standard, arbitrary interval-event scale that has
some kind of numeric value that can at least theoretically be assigned to
it. This has important implications in its relationship to science, which is
operationally and theoretically based upon the principle of measurability and
therefore the systematic relatability of constructs and phenomena to one
another.
As will be demonstrated in the course of this text,
defining and superimposing interval scale measures has important theoretical
implications for our knowledge, especially as this relates to our advanced
systems sciences. It allows us, among other things, to generalize and extend our
range of knowledge from a finite and semiordered set of phenomena to
increasingly larger realms of possible phenomena. It allows us then the
capability of testing our theoretical knowledge by the application of the same
measurement devices to other hypothetically relatable sets of phenomena.
Mathematics is not science, at least not in a natural
or applied sense, though scientific methodologies are almost always based upon
some form of mathematics applied to the knowledge contexts of that science. If
mathematics is scientific in and of itself, it is so only in an ideal sense as a
science of abstraction. Attempts are made to philosophically validate
mathematical ideas and knowledge derived from natural sets and relations.
Mathematics is a field of knowledge inquiry unto itself that makes applied
scientific method possible. If mathematics is a science, it is purely a science
of abstract ideas and relations, forms and systems that exist only
hypothetically in an ideational space. In a pure sense, mathematics does not
deal with empirical phenomena or objects in the external, material world, at
least not directly. It deals purely with ideational constructs that are
considered noumenal, a priori and totally abstract. It imagines therefore the
most perfect of possible worlds, whether this world is assumed to be completely
determined or completely random in its foundation.
The internal sense of validity of mathematics is
considered mostly unquestionable and as being fundamentally independent of the
cultural conditions or constraints that normally occur with symbolic knowledge.
Proofs for theorems in mathematics are derived purely by logical deduction, and
strict classical logic based upon the principle of exclusive identity is the
basis for mathematical coherence and validation. Hence it is in its purest form
universal to human knowledge, and often the conception of universal structures
in human patterning is construed within a mathematical form or model, as for
instance, structural linguistics. We hypothesize the psychic unity of humankind
largely on the basis of the ability of people of all cultures to understand and
employ the same mathematical concepts and constructions in the mechanical
ordering of their experiences. Thus, mathematical languages and constructs form
a foundation for an objective but nonempirical basis of science. It permits the
possibility, occasionally realized, of deriving valid scientific theories by
deductive reasoning alone, without initial or final resort to empirical tests.
We can say that mathematically speaking, a mechanical
view of the world that deals with relations, strengths and potentially
observable, hence measurable, values, however indirectly, is inherently
nonsymbolic. Therefore a purely mechanical view of reality is inherently
nonarbitrary except in some minimal sense of the conventional standards of our
measurement or design of our experiment or operational methodology. While the
latter set of considerations is nontrivial for the metaphysical status of
science in reality, it can be temporarily overlooked in consideration of the
neutral and amoral application of a mechanical viewpoint or worldview that is
free of cultural constraint. Mechanical technology has readily crossed cultural
boundaries, such that we can find Moslems, Hindus, Buddhists, Catholics, Jews
and Agnostics all driving the same Mercedes-Benz cars in the world, all with
equal moral indifference about the internal working order of the car they are
driving in. A nonevaluative, asymbolic mechanical perspective on reality
extends directly from an immediate and unconstructed phenomenological experience
of reality. We know this to be true in the fundamental knowledge structures of
our brain and how we construe reality. We cannot afford to process reality in
its original and natural form in any other way, as we would soon be overwhelmed
and overloaded with sensory inputs. Thus, in spite of preconceptual frames, a
mechanical view of the world informs our first selective cut of reality in an
experiential sense.
As a purely abstract system of comprehension,
mathematics is yet its own system of knowledge. It is a primary objective of
this chapter therefore not only to understand the general application of
mathematics to advanced systems sciences, but also to understand mathematics
generally as itself a naturally occurring "possibilistic" system that
is purely abstract in character. In other words, as a pure and independent
knowledge system, it informs our understanding of natural order in basic and
important ways. Indeed, it informs our understanding of order itself in critical
ways, as somehow systemic and nonrandom. The rational order we are capable of in
our mathematical constructs, with such great precision, reflects ultimately the
general patterning of systemic order itself as this occurs at all levels of
phenomenal event patterning in nature.
Of course, natural phenomenal patterning is always a
chaotic, complexly "mixed" and heterogeneous system of relations.
This makes inherently problematic the application of mathematical models to
natural systems.
In mathematics, we can conceive of a pure, ideal
sense of order, and this is contrasted with an implicit notion of absolute
disorder or ideal randomness, just as the principle of exclusive, absolute
identity can be contrasted with its dialectical complement of absolute
nonidentity. In a similar manner, so too can positive be contrasted with
negative and affirmation with negation. And if we look about us in the natural
world, we see symmetrical complementarity of structure at very basic levels of
the ordering of natural patterning.
Mathematics is not subservient to science, and
science is not absolutely bound to mathematics. Mathematicians do not need to be
concerned with science, and some scientists ply their trade without much concern
with mathematics. But from both a theoretical and methodological perspective,
mathematics is the operational language of science in the deepest sense
possible, and therefore it critically informs the structure of our scientific
knowledge at almost every level of its articulation. Symbolic mathematics is the
primary form of communication of science, by which science operates and achieves
transmission and progress in its functional application and theoretical
validation in the world.
There is more than a little epistemological and
metaphysical relativity about this. Just as language not only facilitates and
makes possible thoughts, but also creates new thoughts, so too does the language
of mathematics not only specify and define the concepts and constructs of
science, but it in turn often creates these new ideas and operations for
science.
We can say therefore, from the standpoint of the
inherent anthropological relativity of knowledge, that scientific knowledge is
fundamentally relative to the mathematical frame of reference that it becomes
defined within.
Another way of construing this is to state that if we
are to get at the foundation principle of Reality in a scientific and systematic
way, then we must be able to do so in mathematical terms. Any system evinces
some kind of structure that should be representable in a mathematical form. If a
system cannot be represented mathematically, then it is not a true system
scientifically, but only a fictive one, or at best a hypothetical system lacking
in any precise structural coherence. In other words, we do not understand it
well enough yet, and our theoretical constructs can only be partially correct.
Again, most systems occurring in nature are
phenomenologically observed as inherently "mixed" and heterogeneous
systems. Many systems are in fact complex epiphenomenally derivative systems of
more basic, but still complex patterns underlying them on another level of
analysis. Our observations of phenomena are therefore always inherently
"contaminated" with noise and ambiguity. We seek to understand pattern
in a complex field of apparent disorder, which pattern is always construed
against a background of disorder. Hence our ability to represent this underlying
sense of implicit order in natural systems is often fundamentally compromised,
not only by the noise, but by the inherent complexity of the epiphenomenal
patterning of the system itself which fundamentally defies attempts at abstract
and simplifying mathematical formulations.
All mathematics is symbolic in a strict sense, and
this points up the applicability of mathematics, as a single informational
system that is broad and powerful in scope, to the understanding of systems
whether in abstract and ideal or actual and real forms. I employ the term
symbolic mathematics to refer to the special case of the intentional application
of mathematics to advanced systems science. It encompasses and embodies in its
most basic constructs of identity the inherent duality of mathematics: at once
an idealized construct that can be symbolically represented as an abstract and
exclusive entity, and at the same time a set of actual, measurable realities
that underlie and are represented by that identity. This
inherent duality of knowledge patterning in mathematical formulations can be put
to good use in the operational integration of systems sciences at its various
levels, particularly upon complexly derivative levels where the language of
description tends to resist even accurate definition, much less quantifiable
denotation. In this regard it borrows something from symbolic logic, or what is
known as "mathematical logic," though it does so in a sense that is
more flexible and realistically adaptable to alternative operational
constructs.
It will be stated at the outset that there is a
general progression in a common continuum of knowledge as it moves from more
basic to more derivative constructs in its application to empirical realities.
Scientific knowledge varies along a continuum between what can be called the
strictly mathematical and measurable to the loosely denotational and
fundamentally immeasurable.
We can clearly mark out upon such a continuum where
the human sciences sit versus the biological and physical sciences. The concern
of this model is to point out the unification of perspective that is possible by
means of mutually constraining both mathematical and symbolic language forms by
means of one another, to constitute its own operational system. Thus, however
quantitative we may become in our numerical measurements, we maintain some minimal
attachment to symbolic constructs, such that we never forget the ultimately
arbitrary and anthropological relativity of even our measurements, and these are
always attached in turn to some foundation in empirical phenomena. Similarly, on
the other end, no matter how loosely symbolic we may become in our ideas and
terminologies, some residuum of mathematical precision and measurability must be
preserved in our conceptual formulations and operations.
And it is in formulation, or in the construction and
testing of formulas, that we can find the necessary operational unification for
our advanced systems science. Formulaic thinking underlies the structure of
mathematical inquiry. Mathematical systems of conception are based upon
formulas, which are defined as symbolic strings that are strictly subject only
to specific general rules of composition. Formulas in mathematics are almost
always equations, or at least potential equations or transformations. The same
formulaic structure of inquiry is applicable to the physical sciences as much as
it is applicable, albeit in less precise forms, to biological and human
scientific inquiry.
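The idea of formulas as symbolic strings subject strictly to rules of composition can be illustrated with a minimal sketch. The grammar and function names below are hypothetical, chosen only for illustration: a string counts as a formula only if the composition rules generate it, and a well-formed equation "holds" only when both sides balance.

```python
import re

# Hypothetical toy grammar of formulas (for illustration only):
#   formula -> expr '=' expr
#   expr    -> term ('+' term)*
#   term    -> one or more digits
TERM = r"\d+"
EXPR = rf"{TERM}(?:\+{TERM})*"
FORMULA = re.compile(rf"{EXPR}={EXPR}")

def well_formed(s: str) -> bool:
    """A string is a formula only if the composition rules generate it."""
    return FORMULA.fullmatch(s) is not None

def holds(s: str) -> bool:
    """A well-formed equation holds when both sides evaluate equally."""
    if not well_formed(s):
        raise ValueError(f"not a well-formed formula: {s!r}")
    left, right = s.split("=")
    total = lambda side: sum(int(t) for t in side.split("+"))
    return total(left) == total(right)

print(well_formed("1+1=2"))   # True: generated by the rules of composition
print(well_formed("=1+"))     # False: violates the rules of composition
print(holds("1+1=2"))         # True: a balanced equation
print(holds("2+2=5"))         # False: well-formed, but not balanced
```

The two checks are deliberately separate: composition rules decide what counts as a formula at all, while balance decides which formulas are true, mirroring the distinction drawn above between internal coherence and equational validity.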
Formulaic thinking is based upon deductive reasoning
within an explicit and well-defined system.
Thus, I propose such a deductive system for our
metasystems.
The same standards and style of formulaic thinking are
applicable in advanced systems science as an implicit structure of operational
inquiry at all levels of informational complexity. We reach a level where the
symbolic entities and constructs we are dealing with, as complex variables,
become inherently nonnumeric in structure, though on some level they can be
hypothesized to be reducible to numerically definable entities or measures. We
have no choice but to proceed in such a manner.
We can say that a scientific worldview is inherently
a systematic view of the world, and that a systematic view of the world that is
based upon the hypothetical design of working systems is a fundamentally
mechanical or mechanistic view of the world. Even the abstract ideas of pure
mathematics itself can be said, as ideational as they are, to be structurally
and fundamentally mechanistic in character. The original definition of mechanics
was the study of the behavior of systems under the action of forces. Statics dealt
with systems that were motionless or else motion was considered irrelevant to
the description of the system. Statics dealt primarily with equilibrium or
stable states of "rest." Kinematics has been a special subdivision of
classical mechanics that is concerned principally with the study of motion
itself without concern for explaining the causes of motion in a system. Dynamics
dealt with systemic motions that were the result of forces operating upon or
with a system and that entailed some form of state change or alteration. The
extension of a mechanistic view of science to naturally occurring systems is
fundamental to the operational design and organization of advanced systems
sciences.
We can distinguish between classical Newtonian
mechanics, fluid continuum mechanics or classical field theory, and quantum
mechanics. We can distinguish large order or largescale systems, and small,
microscopic scale systems. Statistical mechanics is applied to dealing with
systems that entail large sample sizes.
The notion of relativity is inherent to a mechanistic
view of reality, and it informs our understanding of systems upon all levels.
The definition of mechanism, as "an assembly of movable parts having one
part fixed with respect to a frame of reference, and designed to produce a
specific effect," embodies the notion of classical relativity. We can say
that two similar but independent systems within the same frame of reference will
produce similar effects. Our scientific methodologies are based upon this
principle. Generally, a mechanism as a working system is defined, in the
broadest sense, as a constituent, self-organized system of parts that
mechanically directs and transforms motions and energies. This is true whether
we are describing the system of the total universe, the system of life
occurring on earth, or the system of human symbolization popping in and out
of the human brain. Natural informational patterning is the result of this
mechanical sense of order and direction, and leads to an understanding of the
implicit structure and natural laws underlying any mechanically definable
system.
Thus, in our scientific and mechanistic view of the
world, we often employ many analogies, whether derived from abstract
mathematical models or actual mechanical systems. These are often simplified
representations of the more complex systems we are attempting to describe, and
such analogies, or "exemplars" are important to the theory building,
testing and comprehension of science. The rootedness of mechanical models in
mathematical relations makes this kind of model building and heuristic
problem-solving possible in the first place.
Classical mechanics dealt with the description of the
states and positions of material objects in space under the action of forces as
a function of time. This was conventionally construed in a nonrelativistic
framework, though it always implied a more general form of relativity. We know
that in natural patterning, few systems are purely linear in the sense
represented by classical mechanical models, but we can also understand that such
linear models are subsets of more complex, larger, nth-scale nonlinear systems.
The models of classical mechanics were based on mathematical description and
utilized symbolic logic to derive a precise explanation for any observable
system of classical motion. It defined the basis for the derivation of
subsequent fields of physics. Many mathematical formulas that were derived purely
by internal logic, and which, by themselves, appeared to have no direct
foundation in empirical reality, were subsequently found to be useful in the
elaboration of nonclassical physical theories of reality. Often they became
applicable as working mathematical analogies that described in precise ways the
functional patterns and attributes of physical systems.
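The classical-mechanical mode of description sketched above, states and positions as functions of time under the action of forces, can be made concrete in a few lines. This is an illustrative sketch, not a model from the text: constant gravity integrated with the explicit Euler method, with step size and step count as arbitrary assumed parameters.

```python
# Illustrative sketch: classical mechanics describes the state of a body
# (position x, velocity v) as a function of time under a force. Here the
# force per unit mass is constant gravity g, integrated with explicit Euler.

def simulate_fall(steps=100, dt=0.01, g=-9.8):
    """Return (position, velocity) after steps * dt seconds of free fall."""
    x, v = 0.0, 0.0
    for _ in range(steps):
        x += v * dt      # position changes according to current velocity
        v += g * dt      # velocity changes according to acceleration
    return x, v

x, v = simulate_fall()
# Analytically, after 1 s: x = -4.9 m, v = -9.8 m/s. The Euler approximation
# comes close (x is about -4.85 here) and converges as dt shrinks.
print(x, v)
```

The point of the sketch is methodological rather than physical: given a frame of reference and a rule of force, the state at any time is mechanically derivable, which is exactly the kind of precise, formulaic description the paragraph above attributes to classical mechanics.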
The point of departure for understanding the role of
symbolic mathematics in advanced systems sciences is therefore to make the
following kind of statement. Reality (the Reality Principle) is inherently
problematic, whether we want to solve it or not. If we choose to construe
reality as fundamentally unproblematic, then we are living in a world with
intelligence but without using our intelligence. Since intelligence is
functional in a problem-solving manner, it is impossible to live in a world
without applying our sense of intelligence to somehow solve its problems. Even
the abnegation of responsibility to define and solve problems in reality is a
kind of minimally intelligent solution. The inherent aspect of our
anthropological relativity to all our knowledge is the problematic nature of our
reality, especially in any shared or collective sense.
How does science solve problems systematically in
reality, and in an objective manner? It adopts standards of measure that are
ultimately numerical in character. Only by such a means can it achieve an
objective frame of reference that is external to the subject knower, or a
collection of subject knowers, in a nonarbitrary manner.
Mathematics is a powerful system of constrained
symbolic signification that can be said to be truly internally nonrelative to
itself, though it is applied relativistically to external contexts in reality in
the descriptive explanation of mechanical systems. Hence pure mathematics is
noumenally independent in reality, and is based only and exclusively upon its
own achieved internal coherence for its validation. This is derived, I believe,
from the natural, internal countability of discrete things in reality. That so
much that is so basic to our reality and our sense of reality, can be
demonstrated in rather pure and basic mathematical terms, demonstrates the
degree to which naturally self-organizing systems follow and must obey in their
mechanical design fundamental mathematical precepts.
Pure mathematics is almost entirely based upon
principles of constrained internal coherence that are inviolable. Applied
mathematics, upon which science has successfully constructed its operational
methodologies, has been based not directly upon the internal consistency of its
mathematical constructs, but on their generalizability and consistency with
external experience. In the scientific use of mathematics, internal coherence is
usually always implicit to the use of these formulas, but their efficacy is
based upon their external consistency to empirically measureable realities and
to their appropriateness in leading to successful teleological applications and
predictions.
The foundation of mathematics I believe to be the
presupposition of absolute identity, such that something at any one time and
place can only be itself, and not something else. This is also basic to
classical mechanical identity of things in physical reality. This is not to say
that we cannot have composite entities that are more than one thing at one time.
But in an absolute sense, we can at least say something like the following: one
equals one, and not two or any other value. Classical two-valued truth logic
derives its strength from this same presupposition when it is applied to
qualitative or nonquantitative values. Hence we can say the following: blue is
blue and not red or any other color. We can say that in a fictive world, blue
can be red and one can equal two, but in reality this kind of statement violates
something fundamental about our basic sense of identity, and thus must be
rejected as inherently false or fictive.
Derivative from this principle of identity in
mechanical reality are the basic arithmetic computational formulas in
mathematics that are built mechanically upon the principle of addition. One plus
one equals two (and not three or some other number). Logically, we say that blue
and yellow make green (and not red or some other color). All other computational
operations, subtraction, multiplication and division, are elaborated extensions
of our ability to make one and one always equal to two.
Up to this point, standard logic and fundamental
mathematics are closely tied, but beyond this level they diverge and go their
separate paths. Logic, dealing with semantic meaning that is inherently
qualitative, hence subjective, quickly breaks down in the face of the inherently
symbolic values of natural language and discourse. Mathematics, dealing with
ratiocinative values that are inherently and fundamentally quantitative, hence
nonsubjective, leaps to the next level of algebraic abstraction involving basic
principles defined by substitution, distribution, and association, as well as to
geometric analysis of basic forms and shapes. From here it leads ultimately to
extremely complex and sophisticated permutations and elaborations in analytical
geometry, trigonometry, calculus, non-Euclidean geometries, probability and
statistics. Mathematics has been highly successful, so successful in fact, that
we could not have had science without it.
The beginning of understanding the role of symbolic
mathematics in the operationalization of advanced systems science is to get at
the fundamental philosophical aspects of mathematics and how this relates to
reality, and especially to our scientific understanding of reality. In a sense,
it can be said that all mathematics is fundamentally symbolic in at least a
restricted sense that attaches value to some coordinate sign system. Mathematics
would not survive as a successful system of ratiocination if we loosened its
standards to embrace the symbolic aspects of natural human language, for
instance. It would be reduced to a trivial system of notation that
oversimplifies reality.
If we go back to Kuhn's critique of science, we can
understand that what sets science apart from other forms of knowledge is its
"puzzle solving" character. Science identifies and defines problems
that have, at least in theory, some definite solution that is correct for
that problem. They are thus like puzzles, and less like the dilemmas of meaning
and value that we encounter in literature and literary critique. The measure of
success and progress of any scientific endeavor is the extent to which it is
capable of solving complex puzzles that scientists come to ask methodologically
about reality.
Thus, if we are to posit an alternative variety of
symbolic mathematics as somehow nontrivial and operationally useful to the
functional integration of advanced systems science, then we must define it in a
clear and concise way. This precise definition allows it to identify the
problems encountered in our understanding of reality in such a way as to be
"puzzle-posing" and hence "puzzle-solving." If we cannot
accomplish this in some minimal way, then we should stop before we start.
I believe that in one limited and limiting sense
symbolic mathematics selectively and potentially encompasses all the areas of
mathematics, both pure and applied. It organizes all the areas of mathematics in
terms of the comprehensive functional integration of systemic problem solving.
The use of mathematics as a procedural language for advanced systems science is
not spurious or superfluous. It has been designed with the idea of permitting
computational and programmatic integration across all the mathematical fields,
and in terms of its possible applicability to any system in whatever area or
field it is identified within. It is necessary to the structure of this approach
in order to render it procedurally systematic. If terms and events cannot be
expressed clearly in mathematical language with measurable and assignable
values, then it is likely both that we do not understand the systems in question
sufficiently, and that we are therefore also unable to
"operate" upon the system, whether experimentally or through
alternative application.
It requires therefore an encompassing grasp and
command of mathematical theories, formulas and principles. It is not my
intention in the course of this work to elaborate all of mathematics, which
would be a voluminous and lifetime affair. It is safe to say that as long as we
understand the basic principles involved, we can put our skeptical trust in the
capacity of computers to do a great deal of mathematical processing for us. This
is not only a time-saving issue, but an issue of fostering a system that is of
greater efficiency both in terms of work and in terms of its informational
capacity.
The point of departure of symbolic mathematics is to
attach all possible measures, hence all potential numeric values, to some
symbolic system of nonquantitative denotation within a standard relativistic
framework that reflects ultimately the relativistic foundation of our reality
and our knowledge of reality. In a sense, algebra already does this to some
extent, as a direct extension of basic arithmetic equations to embrace
nondiscrete variables.
Underlying this is the fundamental principle of unity
of identity, such that on a basic level there is no difference between
qualitative and quantitative, but they are inherently alternative aspects of the
same physical identity. Hence, if we are going to identify something occurring
in reality as distinct in some qualitative sense, we must also isolate that
"thing" as somehow distinct in some quantitative sense as well. Hence,
we should not talk about blueness in a qualitative way unless we can offer up
some mathematically quantitative description of blueness, such as a
range of light on the electromagnetic continuum. We can say one blue thing, and
also one green thing. We can add the two things together, as things, but not as
two blue-green things. We can say: one blue thing plus one green thing equals
two things that are blue and green respectively.
When we apply mathematical formulas to real world
descriptions, we are always assuming some state of ideal equivalence of discrete
value between objects that is not necessarily or exactly so. This is especially
problematic in statistical descriptions of large populations of things. Reducing
complex sets to simple count numbers often conflates and disguises a great deal
of intrinsic/extrinsic variability between things. We treat a classroom of forty
men as all essentially equivalent in our experiment, both qualitatively and
quantitatively, for the purposes of solving our basic problem. We cannot proceed
otherwise in reality without superficially overcomplicating things to an
inordinate and disagreeable level.
Thus, in order to generalize between events or
entities in reality, we must assume some minimal degree of finite equivalence
and discreteness occurring between these events or entities. The elaboration of
empirical reality otherwise leads to infinite differentiation and
particularization between separate events and entities.
In probability theory, which is applicable especially
to physics, we adopt standard terms that describe elementary entities, outcomes
or events as fundamentally isolatable and indivisible constructs, or units, that
we call sample points. Compound events or entities are usually described in
terms of set theory, and defined in terms of our conceptual experimental model
in relation to some possible problem set. The total set of sample points,
united and differentiated on the basis of their relative identity to the
theoretical constructs, is referred to as the sample space. Every compound
event or state is represented by an aggregate of sample points that are regarded
as relatively equivalent or synonymous in terms of the set theory we are
employing. This is the basis for our scientific generalization and operational
procedures.
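The vocabulary above can be made concrete with a small sketch. The example below is my own hypothetical illustration (a two-die experiment is not in the text); it shows sample points, a compound event as a subset of the sample space, and a probability derived from the aggregate of points.

```python
# The sample space: every elementary outcome (sample point) of rolling two dice.
sample_space = {(a, b) for a in range(1, 7) for b in range(1, 7)}

# A compound event is a set of sample points united by a shared property;
# here, "the faces sum to seven".
sum_is_seven = {pt for pt in sample_space if pt[0] + pt[1] == 7}

# Treating the sample points as equivalent and equally likely, the event's
# probability is the ratio of its aggregate of points to the whole space.
p = len(sum_is_seven) / len(sample_space)
print(p)  # 6/36, about 0.1667
```

Note that the equal weighting of sample points is exactly the presumption of equivalence discussed above: each (a, b) pair is treated as interchangeable with every other.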
Now it can be seen that in reality any set of events
or entities that are hypothetically equivalent are actually differentiable on
some level as realistically nonidentical. To presume otherwise is to violate a
basic principle of physics that says that the same thing cannot be in two
different places at the same time. Thus on some level, scientific generalization
commits itself to a basic fallacy of spurious equivalence of identity between
discrete state-events. This is especially true in the more derivative sciences
of biology and anthropology, but it even happens regularly at the basic levels
of physics. Indeed, it is at the level of physics that we find unusual
properties operating based on Bose-Einstein statistics. We need this fallacy
for the sake of preserving parsimony in our theoretical generalizations; for the
most part this works well enough if we do not become too picky with our data
points.
Thus, scientific generalization normally depends upon
the categorical conflation of data points in the sampling of reality and in the
generalization of its concepts and their relations. To proceed otherwise is to
quickly overwhelm our procedures with a great deal of spurious and nonessential
complexity.
But to completely ignore what can be considered as
spurious to our constructs and to conflate inherently complex realities as
simplex samples is to commit ourselves to a basic kind of error in our sampling
procedures. Indeed, it is in sampling error, usually the result of inherent
variability of our sample points, that we usually and unexpectedly learn
something interesting about the inherent structure of reality as this is
different from our constructs. This is especially true in nondeterministic and
stochastic sampling errors that arise from what can be considered in our
constructs as "random" error.
Underlying this kind of sampling error
is the implicit presupposition of the equivalence of sample
points. Because they are equivalent, they are considered to be interchangeable
with one another. Hence they are considered to occur fundamentally independent
of one another, and thus they occur in what are considered to be essentially
randomized sets. This underlying assumption of perfect randomization
of equivalent sample points is rarely realized in reality. It is safely presumed
on basic levels; otherwise most of what passes for statistical evidence would fall
through the screens of biased sampling procedures. The presumption of ideal
randomization of a sample set is attached to the idea of a perfect descriptor
for an entity in reality, and the conflation of variation within the sample.
Indeed, scientific learning and progress largely arises as the result of the
violation of these presumptions in our data sets. It is the deviation of
patterns from the ideal parameters of the sample that yields the ability to
detect nonrandom deterministic relations underlying the sample. This
forces, at some point, a revision of theory to take the nonrandom pattern of
determination into account.
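A minimal sketch of this point, under assumptions of my own: we pretend a sample is a set of interchangeable, independent points, but a hidden deterministic drift (the 0.002 * i term, a hypothetical choice) violates that presumption, and a simple comparison of sample halves exposes the nonrandom pattern.

```python
import random

random.seed(0)

# Hypothetical sample: nominally 1000 interchangeable, independent points,
# but a hidden deterministic drift violates the randomization assumption.
n = 1000
points = [random.gauss(0.0, 1.0) + 0.002 * i for i in range(n)]

# Under ideal randomization the two half-means should agree to within
# sampling noise; the drift makes them diverge systematically.
first, second = points[:n // 2], points[n // 2:]
mean = lambda xs: sum(xs) / len(xs)
gap = mean(second) - mean(first)
print(gap)  # a gap near 1.0, far beyond ordinary sampling noise
```

This is the mechanism described above in miniature: the deviation from the ideal parameters of the sample is precisely what reveals the underlying deterministic relation and forces a revision of the model.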
If we can generalize from a random collection of
relatively discrete data points to a sample of a set of such points, we can also
generalize from a set of samples to a larger abstracted sample that is a
compound aggregation of the sample sets themselves. We can even generalize from
very large sets of numbers to a very large sample set that is, at least in
theory, infinite and unbounded. This is theoretically accomplished in
probability theory by limiting procedures that define intervals as aggregate
point sets, rather than defining points as the limit of an infinite sequence of
contracting intervals. A probability of relative zero is assignable to each
individual point.
The law of large numbers is derivable from this
kind of limiting procedure: the relative frequency of
alternates tends toward its natural expected frequency as the sample size
"n" tends towards infinity, and this aggregate event has a probability
outcome of relative one. This is a basic situation in measurement. If we begin
with a set of basic events, or intervals, to which we attribute probabilities,
then by simple and natural limiting procedures probabilities can be assigned to any
broader class of events by applying set-theoretic operations (union) to the
intervals. To each event-interval there corresponds an associated probability that
is greater than or equal to zero. The total probability for the larger class of
events is merely the summation of the probabilities of the aggregated intervals,
or the measure of the Borel field of the interval set.
Thus:

P{A} = ΣP{A_{i}} = 1

Where A = the union of the mutually exclusive event intervals A_{1}, A_{2},
A_{3}, ...
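Both claims can be checked numerically in a short sketch. The particular partition of the unit interval below is my own hypothetical choice; the point is that the probabilities of mutually exclusive event-intervals sum to one, and that observed relative frequencies approach the interval measures as n grows, per the law of large numbers.

```python
import random

random.seed(1)

# Mutually exclusive event-intervals partitioning [0, 1), each carrying
# its measure (length) as its probability.
intervals = {"A1": (0.0, 0.2), "A2": (0.2, 0.5), "A3": (0.5, 1.0)}

# Additivity: the union A of the exclusive intervals has total probability 1.
total = sum(hi - lo for lo, hi in intervals.values())
assert abs(total - 1.0) < 1e-12

# Law of large numbers: relative frequencies tend toward the measures.
n = 100_000
counts = dict.fromkeys(intervals, 0)
for _ in range(n):
    x = random.random()
    for name, (lo, hi) in intervals.items():
        if lo <= x < hi:
            counts[name] += 1
freqs = {name: c / n for name, c in counts.items()}
print(freqs)  # each frequency close to its measure: ~0.2, ~0.3, ~0.5
```

With n = 100,000 the frequencies land within a few thousandths of the assigned measures, which is the "relative one" convergence described above realized at a finite sample size.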
We can see that in our sampling procedures it
technically may make no intrinsic difference to have large samples or small
samples if we can always assume perfect randomization of data sets. But in the
real world a larger set of data points tends to minimize the adverse effects of
nonrandom patterns of variation not accounted for by the theory (limiting
procedure such that P for any particular point equals relative zero). At the
same time, optimization of the positive effects of random exclusive
event-intervals, such that the probability of expected frequency patterns of the
total set equals relative 1, can be accomplished by the presupposition of
continuity and the extension of the addition rule from finitely to infinitely
many summands.
These are important considerations that affect both
the realism of our constructs and our ability to generalize based upon our
samples. It is easy to see that how we define our conceptual constructs directly
determines how we identify our data points and how we limit and constrain our
samples as event-intervals and sets. That this is so frequently overlooked in
the design and evaluation of statistical projects in our "sciences,"
particularly in our social sciences, is simply amazing. It points to the degree
to which any pure or applied science, lacking in either world vision or
operational efficacy, becomes the servant of political controlling structures.
Therefore, I have made the point of departure for
symbolic mathematics as a procedural language and set of operations for advanced
systems sciences the central issue of the presumed and differentiable realism of
particularistic data points as complex compound event-intervals, upon whatever
level we define our sample. This is intrinsic to the definition of our sample
points at whatever level of generalization we choose to operate at, or in
whatever area of application or level of derivative phenomenal distribution.
Thus, I impose a uniform set of terminological and relational variables as
intrinsic/extrinsic derivatives and alternatives operating implicitly on each
level of analysis and synthesis upon which we define our samples.
The intrinsic disparity between the idealized data
set represented by our conceptual definition of our sample as a randomized set
of exclusive eventintervals, and the realized instantiation of the actual data
points representative of and by the sample, is made in every expression
structurally explicit and intrinsic to the definition in the first place in a
systematic way. If we choose at any level to expand the formula through
systematic differentiation and substitution, a process I call "functional
object embedding," then we have built into the design of the procedures
a means for doing so. On the other hand, if we wish to replace differentiated
chains of values with a sample set that is ideally defined by a single variable,
we still carry subscripted with that variable the possibility for its
elaboration.
The point of departure for symbolic mathematics from
other forms of mathematics is the realization of a model of a mathematical
system of transcription that is symbolically defined by and defining of complex
polynomial states. These polynomial states are implicitly embedded in the
definition of the key variables at whatever level of sample generalization we
are operating upon. These states at least purportedly represent the hypothetical
underlying event structures relating to any particular system or set of
hypothetically related systems and that would be normally conflated or
systematically excluded in our simplifying procedures.
I believe this is accomplished within the following
kind of framework:
1. Assignment of absolute values of zero and unity (absolute 1) as the
relative limits to any system.
2. Representation of all hypothetical systems within the same
hypervolumetric space called the unification space.
3. In such a system, all discretely occurring values are converted into
ratio values by means of systematic procedures in which they are transformed
into "ideal numbers."
4. Representation of all states as complex polynomial variables that are
always differentiable into a composite of at least three nonabsolute
derivatives:
a. A numerical value based on some scale of measure or set of measures or
scales.
b. An instantiated variable that may itself be a complex polynomial.
c. A derivative that represents the difference between the idealized
statevariable, and its instantiated values and variables.
5. Representation of all relations as complex events differentiable into
alternative sets of determination.
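Point 3 of this framework can be sketched in code, under my own assumed reading of it: raw measurements are rescaled into ratio values strictly inside the (0, 1) unification space, so that no transformed value ever attains absolute zero or absolute 1. The function name and the margin parameter are hypothetical choices, not constructs from the text.

```python
def to_unification_space(values, margin=1e-6):
    """Rescale raw measurements into ratio values in the open interval (0, 1)."""
    lo, hi = min(values), max(values)
    span = hi - lo or 1.0  # guard against a degenerate constant sample
    # Shrink by a small margin so the absolute limits 0 and 1 are never attained,
    # respecting the requirement that no value equal absolute zero or unity.
    return [margin + (1 - 2 * margin) * (v - lo) / span for v in values]

ratios = to_unification_space([12.0, 47.5, 30.0])
print(ratios)  # smallest near 0, largest near 1, middle around 0.507
assert all(0 < r < 1 for r in ratios)
```

The open-interval constraint is what makes the space non-Euclidean in the sense described later in the chapter: values may approach the boundary but never cross it.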
Important to this procedural system at the same time
is the discontinuous determination of key variables and their associated discrete
or expected values in any system set, and the capability of contextually
relating this set to the supersystems or subsystems to which it is hypothetically
related. In other words, we require some grander sense of a universal
inference-reference coordinate system that is defined at least on a general
conceptual level in terms of ideal, discontinuous variables with relatively
concise functional explanations.
Another way of looking at this is to say that the
normal procedures of an applied symbolic mathematics cannot really occur outside
of appropriate theoretical contexts that define the conceptual parameters of its
operation in a hypothetico-deductive and empirical-inductive sense. We can
advance a relatively "pure" model of such procedures in an abstract
way, but it has little value unless and only until we can apply it to real and
generalizable problem sets.
To this end, our analysis of systems must be
intentionally contextualized within an abstract frame of reference that is
general, that metalogically constitutes advanced systems science as a whole, and
that operationally constitutes the unified framework of a mechanical-systemic
approach to all phenomena. We can relate all phenomena as parts of some system
for which the appropriate units of analysis, or interval measures, are definable
by virtue of their positioning within the overall framework.
All systems are part of a larger, total universal
system that is most basic and derivative of systems. Furthermore, we can specify
scientifically a rather precise order or stratification of systems in natural
classes or categories depending upon their level of derivation.
I attempt to set up this kind of generalistic frame of reference in the
second and third parts of this work, with an eye to showing the operational
systematization occurring at all levels and in any area. In the last part I
return to issues of functional integration in advanced systems science,
especially as this deals with issues of applied and artificially constructed
systems, demonstrating how the basic operational procedures can be used in
alternative ways.
In a more fundamental way, we may say that the
language of mathematics, especially in its purer forms, is equipped only to deal
with ideal states, and that it achieves its systematic coherence only when it
can assume some degree of equivalence or correspondence with ideal states.
Applied mathematics must deal with the issue of the translation of the ideal
procedures and coherence of math to the description of real sets of events. This
works well enough for physics especially, which is usually based on a fairly
mechanistic set of relationships between fairly quantifiable forms of data. It
also works well in engineering, which deals with forms of mechanics
derivable from physics, but this kind of mathematical language tends to break
down and become spurious when we deal with complex derivative phenomenal
patterns in biology and in the social sciences.
We bring advanced systems of statistics to aid us in
the extension of mathematics to these levels of phenomenal complexity, and take
great care in the definition of our data types and their implications for our
procedures. But even these are usually inadequate to cope with the intrinsic
scope of complexity embodied in such systems, especially when we wish to deal
with issues that are synthetically significant and not analytically
overreductionistic.
Symbolic mathematics has been designed, therefore, from
the point of view of allowing us to model complex realities more realistically,
without the risks of oversimplification that are rooted in presuppositions of
ideal mathematical descriptions. If it is done well, it should permit us to
systematically generalize from data points and sample sizes large and small in a
manner that achieves simplification while retaining a sense of empirical
realism. Ultimately, this should lead to more accurate statements of
expectability of frequency distributions and prediction of deterministic
outcomes of nonrandom event structures.
I presuppose first a common hypervolumetric space
within which any and all hypothetical event structures occur, and we can model
these event structures mathematically within this space. I call this the space
of total unification. All terms, variables and values within the framework of
symbolic mathematics are set to occur within this single space. The entire space
encompasses what I call total unity, and reflects the principle of unification
and the Reality principle. Unity is depicted as absolute 1, and disunity is
depicted as absolute zero.
We may conceptualize this event space in
n-order dimensions. Each dimension would have a total unity value of one plus
its complement of negative one. Any form of possible event may be represented as
occurring in this possibilistic space, and any kind of mathematical procedure
can be represented within this space once necessary transformational operations
have been performed.
I call this space the unification space, and it
constitutes the basis for the procedural unification of all mathematical
constructs within the framework of advanced metasystems. It is a space that is
inherently differentiable. Its boundaries or limits can never be passed;
hence functions can never be completely linear except in narrow intermediate
ranges within its limits.
The space is reversible, such that unification at
absolute 1 can be represented by the origin of the x-y axes, or else the origin
can be used to represent absolute zero. It is useful to construe the space as
reversible, because, I believe, it represents a fundamental complementarity of
order and disorder. Ordered systems can be considered to be represented in the
reversed direction, such that disorder occurs at the limits of the system.
Ordered systems are seen from the "inside out" in the nonreversed
view.
The D axis represents the temporal dynamic dimension
of the system. It can be represented in two- or three-dimensional systems as
reiterated diagrams that represent transformation. We can superimpose these
transformations within the same space, especially if we are to consider it as
presentable within the space of a computer screen.
We can arbitrarily represent the D dimension as
either occurring cyclically in the spinning of the system in a clockwise
direction, or as a straight line that suggests the temporal reiteration of the
time-arrow. Its only constraint is that it is always unidirectional, or only
clockwise in orientation. We can specify a negative D dimension that would be
represented by an arrow in the opposite direction or a counterclockwise turn of
the knob. This, in a sense, is most closely approximated by our imagination of
history.
In this unification space, there is no need usually
to represent Nth dimensions. I have set them to potentially rotate in a
counterclockwise direction, in order to fundamentally segregate them from the
temporal dynamic.
We can imagine the entire universe flowing in a
backward direction in some fundamental way, even though it appears to be moving
forward temporally, or else moving or changing in some way that we do not
comprehend or immediately apprehend. Nth-order dimensions exist only as
hypothetical or possible dimensions, and suggest the co-occurrence of multiple
realities. The actual existence of such realities is at this stage only
conjectural.
This is not exactly the same notion as the
contemporaneous existence of parallel universes. Such universes could be
construed to exist within the same metatemporal dimension in fact. This is
analogous to the synchronous existence of two independent people, who
nonetheless occupy the same temporal frame. Each additional dimension represents
some strange form of reiteration of the lower dimensions, as a unified system.
We cannot say what these dimensions might be.
It can be clearly seen that in just this depiction of
unification space, we have represented a great percentage of what appears to be
most basic about any system. This suggests that the functional integration of any
system always occurs at least in terms of such a potential unification space.
The presupposition of this kind of space as the basis
for all mathematical modeling brings up an important relationship between
mathematics and graphic representation, or what I would call geometrical
modeling. Any mathematically ordered system should be describable as an
orthogonal translation in some form of geometricized space. This entails that if
all science is mathematically expressible, it should also be geometrically
describable.
Minimally speaking, though we may lose a great deal
of information in the translation, we can depict any three-dimensional system as a
two-dimensional topographical transformation. Within the context of this work, all
diagrammatic representations are essentially two-dimensional. Two-dimensionality
of a single construct is the minimal integrational requirement for any system.
Less than this and we deal only with straight lines which are construed as
fundamentally unrealistic and functionally useless to our system. Thus any
system should be minimally representable in terms of plane geometry, though most
systems can be projected and translated into terms of spherical geometry.
Ideally, though, it is intended to be used to represent functional descriptions
of curvilinear relationships that are based upon the application of analytical
geometry.
Though we may represent Euclidean systems by this
space, and it is itself essentially Euclidean, the basic requirements of all
values within this space are that none can equal absolute 1 or 0. This sets the
space to be essentially non-Euclidean in design. No line of any kind may
actually pass the perimeter or boundary of the system. The boundary of the
system is representable as either a perfect circle (three-dimensional sphere) or as
a square (or cube) that is either contained within the circle (or sphere) at its
vertices or that contains it at its midpoints. The implication of whether
the square contains the circle or the circle contains the square is important, I
believe, to our ability to represent with certainty any system, especially
infinite or else infinitesimal systems. I will speculate at this point that the
former condition represents the outer limit of uncertainty and the latter
condition represents the inner limit of certainty, and the perimeter of the
circle itself represents the midpoint of no return, or vanishing point, at which
certainty and uncertainty become essentially equal.
My first presupposition in the construction of this
procedural system is to state that:
All possibly occurring values are presentable within
this Borel unification field. Any scale or type of measurement may be defined in
terms of this space.
Another way of looking at this is to state that
whether we are dealing with a hypothetical space of some expected probabilities
or frequency distribution patterns, or with an actual space of realized
phenomenal event patterns, we are always also dealing with a finite sample that
is somehow and in some way a part of a larger system of relations. It is in the
largest sense infinite, and to some unknown extent prestructures and influences
the system we are dealing with, real or ideal.
In order to relate this hypothetical unification
space to mathematics in general and to our operational procedures in advanced
systems sciences, it is necessary to define some standard terms of notation
relevant to this system. X, Y, Z, D and Nth have already been utilized as
reference terms naming the principal axes and dimensions of our unification
space. Lower case x, y, z, d and nth will be used to represent any discrete
instantaneous ratio values that are attachable to any variable in a system.
I will represent absolute zero in the system as the
standard O, with lower case "o" representing the concept and derivative
value I call relative "o," which can be defined as:

O = o/O
I will use A to represent the value of absolute
unity, or 1, in the system, and similarly, lower case "a" to represent
relative achieved or instantaneous unity within any given system.
I have reserved U to represent uncertainty and
"u" relative uncertainty. I use S to represent some hypothetical
original Start state or initial state, and "s" some actualized or
inferred beginning state. F is used to represent some hypothetical end state, or
final state, and "f" is some actualized end state. S and F can also be
used under subscripted conditions to represent "success" or
"failure." P is used as a standard probability value associated with
any possible event, and "p" the actual estimated probability of that
event.
I have selected the variables J and H to represent,
arbitrarily, any given global variable. J would be the primary variable, and H
would be the third derivative associated with J. J would be a variable that is
partially dependent upon H in its derivation. M stands for any numeric or
measurable or parameter value that may be associated with either H or J in their
derivation. Lower case h, j and m all represent instantaneous actualized
derivative values of these systems.
A second presupposition to impose on this operational
system is to state that:
Any discrete or nondiscrete variable or term is in
fact always a trichotomous term that contains at least three intrinsic
derivatives.
I presuppose in this the notion that for any given
hypothetical system, we can define at least one state that is approximately
discrete and that is at least partially determinable upon some "numerical
scale" of measurement. Thus, any variable represents a complex polynomial
that has mixed numeric-symbolic values. Symbolic values are nothing but labels,
and in computers, also addresses for storage. The presupposition is that these
entities can be mixed in a systematic way without the symbolic variable having
to be ultimately determined numerically or parametrically, but can be
relativistically determined in a discontinuous and nonparametric way by the
principle of relational self-identity. All variables or terms always have at
least some derivative numeric and nonnumeric value, as well as some residual
value that makes up the difference between the derivative and the ideal value.
1. Any term encompasses some value/variable and can
be expressed as some systemic derivative.
Hence, for any given variable J, we can have at least the following variametric breakdown:

J = M(j) + H

where M subsumes some complex derivative numerical value or weight assignable to (j), which is some particular instance or delimited set representing J, and H is some other complex polynomial construct representing the differential between J and its actualized derivative M(j), hence:

H = J - M(j)

and

1 = (M(j) + H)/J = (J - M(j))/H
If we hypothesize that H is also a similar complex polynomial, we get:

H = M_h(h) + H_m

where H_m is some derivative nth value of the difference between H and M_h(h), and M_h is some other numerical weight or value associated with the derivative of H.
This set of equations is meant to demonstrate only
the complex algebraic and polynomial structure of symbolic mathematics that
combines numeric and symbolic components in the same model. We can imagine that
each variable and value is complexly determined by some other set of variables
that are themselves complexly determined, and so on ad infinitum. It can be
clearly seen that this kind of formula is applicable directly to the modeling of
our operational systems developed in the Introduction, if we consider the J
variable in the original formula to be some hypothetical state, and the M(j) + H to be the polynomial expansion or differentiation of this state in some subsequent or alternate state or in some theoretical construct of that state.
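Read in contemporary computational terms, the decomposition above can be sketched as follows; the decompose helper and the numeric values are hypothetical illustrations, not part of the notation itself.

```python
# Illustrative sketch only: the decompose() helper and all numeric values
# are hypothetical, mirroring J = M(j) + H and the expansion H = M_h(h) + H_m.

def decompose(ideal, measured):
    """Split an ideal value into (measured part, residual)."""
    return measured, ideal - measured

J = 10.0                               # hypothetical ideal value of J
Mj, H = decompose(J, measured=9.2)     # J = M(j) + H
Mh, Hm = decompose(H, measured=0.7)    # H = M_h(h) + H_m, expanding the residual

assert abs(J - (Mj + H)) < 1e-9        # J recovered from its decomposition
assert abs((Mj + H) / J - 1.0) < 1e-9  # the identity 1 = (M(j) + H)/J
```

Each residual can itself be decomposed in the same way, which is the "ad infinitum" regress of derivatives the text describes.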
The original complex derivative polynomial M(j)
can be thought of from an artificial intelligence language standpoint as
representing a basic CAR/CDR relation where the address points to some numeric
value stored there. We can thus talk about intrinsic polynomial expansion such
that M will be able to be designated by some set of subsets each with their own
(j) values. H always stands for some complex set of relative residuals that are
attached to the system by virtue of its relation to the hypothesized ideal
system.
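A minimal sketch of this CAR/CDR reading in Python terms, with hypothetical labels and values, might look like:

```python
# Hypothetical CAR/CDR sketch: the CAR is a symbolic label (an "address")
# and the CDR is the numeric value stored under it. The labels "j1", "j2"
# and the values are illustrative only.

def cons(car_part, cdr_part):
    return (car_part, cdr_part)

def car(pair):
    return pair[0]

def cdr(pair):
    return pair[1]

# M designated by a set of subsets, each with its own (j) value:
M = [cons("j1", 3.5), cons("j2", 1.2)]

assert car(M[0]) == "j1"   # the symbolic address
assert cdr(M[0]) == 3.5    # the numeric value stored there
```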
The point of symbolic mathematics is to emphasize
that any discrete state or value is always representable as a complex
derivative. There are no absolute values in this system, only values that are
relative to the derivative functions. Thus symbolic mathematics is ultimately,
as I conceive it, an entirely relative system. In this system, there is absolute Zero but it exists always as an ultimate end-state that cannot be reached. Hence Zero is expressed by the same kind of equation as above in the following form:

O = M(o) + H

where

H = O - M(o)
Another presupposition of our operational system is
to state that in any and every given system:
3. Any relation subsumes a range of varying
relational determinations and can be expressed as some systemic alternative or
set of systematic alternatives.
A relational value between points or sets always
assumes a parenthetic embedding of these points or sets in some relatively
differentiated way.
In mathematics, formulas normally circumscribe
symbolic strings that are ordered systematically by means of statable and
precisely ordered logical relations. These are considered to be "rules of
composition" that order the symbols, usually in a manner that expresses an
equation or else a transformation. Formulas are considered applicable to defined
sets of points that are part of a population of possible points in reality. A
point in this sense can be considered a particularized or particularistic
event-interval or entity-interval that has some kind of relatively discontinuous quality that is considered elementary and fundamental within the general or
standard frame of reference being employed. It implies among other things, a
kind of "instantaneity" or instantaneousness of its phenomenal
occurrence.
The test of a formula, for its generality, is that it
is hypothetically applicable or relevant to any particular instance or point
event of any class that the formula defines. Thus, all the points of the set
should be, at least in theory, susceptible to the uniform application of the
same formula or set of formulas that are contingent upon that definition of a
set. In a sense, the formula therefore defines a hypothetical or ideal set of
relatable and relatively equivalent points that is generalized on some level,
and in the larger sense, is held to be universal if the validity of the formula
is claimed to be universal.
It occurs in reality that exact equivalence cannot always be presumed for members of a common set, nor that the formulaic operations, or "functions," that apply to the members of such a set apply in an exactly equal or undifferentiable manner to all members of the set.
In the most ideal view of science, we would have a minimum paradigm of universal
laws that underlie and explain all phenomena, and by deduction result in all
other general and covering laws that are valid within the system. Science has
not yet obtained that point of comprehensive integration or theoretical
unification, and it will never reach the point where it can proffer, unequivocally and with unquestionable certainty, a paradigm of a few universal laws of reality underlying all sciences. But this does not
mean that Science cannot or should not, at least in theoretical construction,
progress toward such a goal. Neither does it mean that there is no place for
differentiation of multiple scientific applications in reality, or that these
themselves cannot be brought under a common umbrella of functional integration.
In this system, we have already the expression of the
four basic arithmetic operations of addition/subtraction and
multiplication/division. Addition and subtraction imply a system that is a composite of subsystems that are relatable in complex ways. Among other things, these relational signs imply an essential equivalence between members of a common set or sets. Thus addition and subtraction subsume, I believe, a variety of possible interactions between subsystems. The signs themselves, (+) or (-), would take on alternative relational significances (conjunction, disjunction).
From a set-theoretic standpoint, we can talk about union and intersection of sets, which imply disjunction and conjunction respectively. We can also talk about the multiplication of sets if we consider sets to stand for matrix structures.
In the foregoing basic equation, we may also express what can be called relative dependence/independence. We can say, in the original form of the equation, that J is a term that is relatively dependent upon H, which is itself relatively independent in a complementary way, such that if we return to our third equation above:

1 = (M(j) + H)/J = (J - M(j))/H
Then we get:

1 - H/J = M(j)/J

and

1 - M(j)/J = H/J

or

1 + M(j)/H = J/H

and

J/H - 1 = M(j)/H
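These ratio identities all follow from H = J - M(j); a quick numeric check, using arbitrary hypothetical values for J and M(j), confirms them:

```python
# Numeric check of the ratio identities implied by H = J - M(j).
# The values of J and M(j) are arbitrary illustrations.

J, Mj = 10.0, 9.2
H = J - Mj

assert abs((1 - H / J) - Mj / J) < 1e-9   # 1 - H/J = M(j)/J
assert abs((1 - Mj / J) - H / J) < 1e-9   # 1 - M(j)/J = H/J
assert abs((1 + Mj / H) - J / H) < 1e-9   # 1 + M(j)/H = J/H
assert abs((J / H - 1) - Mj / H) < 1e-9   # J/H - 1 = M(j)/H
```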
The arithmetic functions of multiplication and division express relational values of integration and distribution. Any implicit multiplication sign subsumes an implicit matrix in the formula, such that in the first equation above:

J = M(j) + H

the M x (j) would represent the dimensions of a matrix subsumed by J and of which H is a differential derivative. This implicit matrix describes a range of alternative derivative values/variables that are encompassed internally by J, plus the range of other alternative derivative values subsumed by H, which would itself be some matrix. Thus, in the equation above, M(j) comprises a size dimension of the intrinsic matrix that is implicit to J minus H.
I propose a set of transformational operations to be
performed for all numerical values. I will call these relational numbers.
Essentially, any discrete numerical value x will be derived as 1/x.
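This transform can be sketched trivially; the function name and sample values below are hypothetical illustrations:

```python
# Sketch of the proposed "relational number" transform: a discrete value
# x is re-expressed as its reciprocal 1/x. Applying it twice recovers x
# (for nonzero x); the sample value 4.0 is arbitrary.

def relational(x):
    return 1.0 / x

assert relational(4.0) == 0.25
assert relational(relational(4.0)) == 4.0
```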
In
specifying a terminological basis for our metasystemic understanding, I believe
it is necessary to answer the following basic questions:
What is a thing (or an entity, a part, an element, a component, a point, a state, an interval)?
What is a limit (or a boundary, or constraint)?
What is a relation (or an operator, a dependency, a function)?
What is a set (or a sample, a collection, a matrix, a group)?
What is a string (or a formula, a series, a vector)?
What is a system (or a machine, a mechanism)?
What is a framework (or a context)?
What is a size (or a dimension, a magnitude)?
What is a space?
Science cannot descriptively account for all
phenomena that occur in reality. Scientific knowledge can only represent a
selective subset of the total reservoir of possible knowledge of reality, and
yet that subset should at least in theory lead to and be able to account for all
possible knowledge of reality. In the allocational tradeoffs between rational
coherence of our explanation and empirical consistency of our observational
descriptions, some middle ground has to be marked out. We can speak of the
selective procedures that lead to the systematic simplification of scientific knowledge, which represents a generalized substitution for phenomenological knowledge of reality. We seek this form of simplification in both our mathematical and linguistic-symbolic constructs.
It can be demonstrated that scientific praxis is
based upon the superimposition of selective constraint upon our observations and
our conclusions derived from our observations. This constraint is progressive in
the sense that it leads to greater and greater resolution of the problems
inherent to a scientific worldviewi.e., the systematic excoriation and
explanation of the structural relations implicit to and deterministically
accounting for the observed phenomenal patterns of nature.
If we could not selectively limit our knowledge base
in rational and interesting ways, we could not have a science. Ultimately, we
would like our scientific theories to be expressible in rather elegant and
simple formulas or grand equations that can be expressed in abstract
mathematical terms, or else in as few words as possible. But if we cannot
achieve such elegance, especially in our depiction of inherently complex
nonlinear systems, which all naturally occurring systems can be demonstrated to
be, our science is thereby not fundamentally weakened or rendered imperfect.
Symbolic calculus begins at the other end of the continuum of mathematical mechanics. The paradox of the comparison of abstract mathematical systems and natural language symbol systems is that mathematics enables us to express infinitudes and the notion of continuous variation in quite clear terms. Natural symbolism that is based on the positing of discontinuous entities as if concrete makes the conception of infinitudes and continuities between things seem inherently paradoxical and problematic. The obverse of this conditionality of our knowledge, which I take to be a form of linguistic relativity of different systems of description, is that in some vague sense the detailed and accurate description of finite realities in mathematical terms becomes quickly overcomplicated. At the same time, natural language that is constrained by a sense of realism is very robust in this task, and in the task of articulating and describing inherently complex but discontinuous systems.
I have proposed a kind of symbolic calculus as the
complement of a mathematical mechanics. I would propose symbolic calculus as a
kind of systematic integration of infinite and continuous change states in
reality in terms of differential integration of discrete states that are defined
symbolically in natural categorical terms. It is like narrative description that fosters the illusion of a motion-picture projector. If mathematical mechanics
contributes uncertainty values and weights to our basic formulas, then symbolic
calculus is intended to coordinate and make consistent the use of symbolic terms
and definitional meanings in the articulation and elaboration of such formulas.
Systems Modeling
I propose metasystems theory as the basis for the
integration of sciences upon a new level of articulation, or for the elucidation
of what I would call metascience, which would comprise the methodologies and
knowledge stock of metasystems theory. The basis for metasystems theory and
metascience rests upon the inference that all things in reality are
interconnected, however remotely, upon one level or another, and this
interconnection between things is the basis for the integration of reality. It
is the regular and recurrent nature of these interconnections, as well as the
variant processes of change that occur within such interactions, that
constitutes the basis of knowledge and metasystems science. The disparate nature
of knowledge in different scientific domains has tended to occlude what can be
considered an interdisciplinary approach to natural and real world problem sets
in reality, much of which by nature demands input from a variety of different
disciplines and perspectives. What is occluded I believe is not only a coherent
and comprehensive worldview that can be called scientific, but also, and more
important, a general operational approach to the understanding of reality that
rests upon such comprehensiveness of perspective. If reality is an
undichotomized whole, if real systems that occur within it happen in a naturally
integrated manner, then it stands to reason that the knowledge systems we derive
from and bring to bear upon this reality might also be similarly integrated and reflect this holism and comprehensiveness of perspective.
Science has proceeded upon foundations that have been
empirically and methodologically strong, but theoretically and conceptually
weak. It has been weakened in part by the lack of an overarching worldview that
can be considered to be scientific. This central and general weakness pervades
all fields of science, more or less. It is not so much the case that human
beings are creatures with limited conceptual abilities, so much as it is the
symbolic form and function that human conceptuality takes, and the inherent
constraints placed upon conceptual systems by the fact of their symbolization.
Symbolization involves more than metaphorical encapsulization or linguistic
expression. It also entails a level of organic embodiment of the symbolisms such
that they seem real. Such concretization of symbolizations tends to obscure the
facticity of their abstract character and origin, the result of which is the perpetuation of certain kinds of informal fallacies of reason and an undue, un-self-critical attachment to received points of view. This creates the foundation, as Kuhn remarked, for making scientific thought paradigmatic and for its constructive reification.
It comes to me as a paradox perhaps, that it is often
the case that scholarship in the humanities and affiliated social sciences tends
to achieve a much stronger conceptual foundation and prowess than in the
sciences, though the former disciplines by their nature lack a strong empirical
or methodological orientation that is comparable to the sciences.
It is the case as well that conceptual systems, and the languages that encode them in the sciences, tend toward a strong mathematical model that constrains conceptual abstraction in certain ways, lacking the flexibility that symbolization and a concern with a looser systemic logodaedaly permit.
The strength of conceptual development rests in
several parameters:
1. A strong and detailed knowledge of facts and
realities.
2. A critical and reflexive approach to all such
knowledge.
3. The capacity to construct alternative systems to
fit realities.
4. The critical development of such systems and their
reality testing.
This approach is not fundamentally different from a
general form of scientific method that incorporates heuristic problem solving
and hypothetico-deductive experimentation. Indeed it is not, except that it tends, I believe, to be looser and more powerful on the abstract end of things than are the received realities of scientific theorization.
My concern in the development of a general
metasystems approach for the sciences is twofold at least. First it is my
desire to offer to the general sciences a means for developing conceptual
systems that are at once stronger and more flexible both because they are less
prone to the ideological and paradigmatic conundrums of their own facticity as
constructions, and because they offer a more powerful means of conceptual
construction than that afforded by a strict reliance upon mathematical
description. Secondly, it is to provide for general science an actual set of
conceptual constructions that stand as a set of alternative constructs for
further development of ideas surrounding central issues in the sciences.
The Greek philosophers realized a form of conceptual development that was far stronger and more powerful than in any other period of human history. They used largely a critical approach to naturalistic
observation, combined with a rigorous logic tied to language and a notion of
"truth" that permitted them to construct models of their world that
were far in advance of their actual technological state. We find in Leonardo da
Vinci and in Albert Einstein a similar conceptual prowess of mind, and in
Charles Darwin a realization of this prowess for the biological sciences.
I believe that it is Einstein's analogy of attempting
to figure out the mechanisms of a watch by the external examination of a pocket fob that provides us the clue to the understanding of a natural systems-theoretic approach. In this, the role of both inductive inference in the face of empirical uncertainty, and hypothetico-deductivism in the midst of rational uncertainty, is critically important as a way of logically deriving and evaluating different kinds of conclusions.
Often it seems that ideas and theories surrounding
reality are set in the stone of social consciousness, with a sense of commitment
and investment into them that is all too humanly real. Conceptual systems are
nothing but framing devices that can be applied for best fit to anything we want
to use them for. They can be concocted and constructed for almost any context or
situation that we wish to deal with. They permit insight, as beyond the face of the pocket fob, and they permit understanding of hidden realities beyond the face, leading to a form of vision with the mind.
The methodological/operational basis of systems theory and method is the development of coherent representational models, in a variety of forms, that serve to accurately represent structural patterns,
properties and principles of real systems. It is through the construction,
development and refinement of representational models that we gain greater
understanding of the structural patterns of systems of all kinds, and it is
these models that are eventually applied in the development of new systems or in
the progressive control of change and moderation of established systems.
All models are primarily conceptual and symbolic constructs in our minds that are worked out in some form in reality. The basis of
all art and artistic creativity in human systems is in the development of
representative models of reality, in some media or set of media, that are tied
to conceptual models and frameworks of understanding or seeing the world.
Modeling and heuristic representation of real or
ideal systems in the form of models provides an exploratory and experimental platform for the development of alternative systems by means that are relatively
economical and efficacious in terms of cost of resources input into the creation
of such systems, and the potential heuristic outcomes and benefits coming from
such systems. Construction, prototyping and testing of models is a standard
practice in most engineering efforts, and is always a precursor to the actual
development of a real system.
Supercomputing has permitted a level of authentic
virtual representation of extremely complex systems in a manner that is true and
reliable, and has itself constituted a major technological advancement for the
sciences, especially in those areas dealing with intrinsically complex datasets
and systems, like meteorology or ecology.
We may recognize certain design principles that might
be appropriate to the construction and development of systemsbased models
relevant to our further understanding of real or ideal systems. We must
distinguish in this regard between what can be referred to as general design
principles that are appropriate across and for all kinds of systems, and what
might be referred to as "system specific" or particular design principles that are appropriate to only a given kind or particular system to which we are referring.
Clearly it is the case that scientific domains have
largely emerged around a distinctive body of knowledge and
technical/technological methods used to access and augment this knowledge. We
cannot conceive of the field of microbiology without a microscope, and we would
be hard pressed to articulate a meaningful astronomy without access to even a
rudimentary telescope. We must learn to recognize and appreciate the unique
differences and specialized assets relevant to each field and domain of
scientific research, and to consider these as a part of a larger collection and
body of tools available to extend our knowledge of reality in systematic ways.
It is equally clear as well that principles and theoretical models that are appropriate for one area of knowledge or domain of scientific research do not necessarily translate very well into other areas or domains of scientific endeavor. The models that apply upon physical levels of stratification in natural systems are completely different from the systems-based models that apply upon biological or human systems levels.
All systems that we can think about are essentially
knowledge systems that are symbolically constructed. The natural systems they
represent are in and of themselves inert and incapable of self-reference or a sense of identity in the world. They are by themselves, without the human intelligence component, systems in which trees fall silently in a forest without notice and in which stars collide and burst on a regular, semi-random basis without further mention of the deed. We say that natural systems are implicit to
the patterns in terms of the redundancy and stochastic structures that these
patterns reveal to the human observer, or rather in terms of the information
they yield upon observation. And no observation is or can be conducted in a
completely naïve apperceptive sense without the automatic and built-in
filtering processes that are the result of our conscious awareness and the
conceptual models and understanding that we bring to our organization of
experience and to our making sense of our awareness of the world. This is to be
aware of the world, of the experience of reality, in terms fundamentally
different in kind than that of a dog or a rat or a bird or a fish. It is to be
not only consciously self-aware in the world, but reflexively so. It is to be
aware not only of the world but of one's own awareness in that world, moment by
moment, breath by breath. And we may say even when we are wide awake we are
never fully or completely aware or conscious of our world, but we always
perceive it, and conceive it, in a partial and partly distorted form. But
however imperfect and incomplete, this kind of human awareness is enough to
effect a kind of transcendence of existential context, of biological imperative,
that I would call symbolic.
Thus all systems as knowledge systems are symbolic in
organization and reflect the human being as both knower and articulator of
knowledge in the world as well as the general lifesituation of that human
being. We like to call them rational but they are in fact as much rationalized
and rationalizing as they are actually logical or factual about the world. They
represent symbolic models we have of the world, or of parts of the world, and
these models are built from parts and pieces we define and the relationships
that we decide to interconnect the many pieces with.
It is our dilemma as human beings that we have no
choice but to see the world in this way, with our symbolic models, in a manner
that gives order to our relations and apprehensions about the world. These
models are hardly static affairs, but are continuously changing and developing
depending upon the changes in the relationships and patterns of response we
maintain and are capable of carrying on with the world. Even if we attempt to
deliberately suspend the influence of these models, they remain unconsciously
embedded, not only in the subconscious background of our own brains, implicitly
prestructuring how and even what our experiences with the world are, but they
are also similarly embedded in the field of social relationships and the sense
of order we bring into the world and shape the world by. Even if we could rid
ourselves of our own preconceptions and biases in this regard, it proves
virtually impossible to rid other people of theirs, especially if they are not
even cognizant, much less willing, of a need to do so. And so when it eventually
comes to pass that we must interact with such people, as life always constrains
us to do, we are forced to reshape and yield our own models, however
independently achieved, in order to do so.
In this way we must see all systems, as general,
abstract theoretical systems, as knowledge systems that are representational and
explanative in function, and as ultimately constrained by the symbolic-cognitive relativity of the human subject as central knower and articulator of these systems. This I call the anthropological relativity of all knowledge systems, and hence of all systems we are capable of knowing in however objective and scientific a manner.
First, Second, Third and Nth Order Systems & Relational Theory
We refer to systems complexity in a relative sense of the position and level at which systems occur in a larger metasystems framework: relative to encompassing systems these systems become subsystems, and they in turn become supersystems for the subsystem components that are encompassed within the boundaries of their definition. As we proceed from one level to the next, either ascending or descending in the hierarchy, it is clear that the order of complexity that we encompass in our metasystems framework increases exponentially. We cannot describe this exponential increase in clear and certain numerical terms. We cannot assume there to be a doubling, trebling or quadrupling of complexity; thus we must leave the exponent as well as the main term as variables. We can write an expression for this exponential increase of complexity of order in a system in the following manner:
((X^(x))^y)^z
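The tower of exponents collapses by the ordinary power rule, which is why composed levels of order multiply rather than add; a quick check with arbitrary illustrative values:

```python
# The nested tower ((X**x)**y)**z collapses to X**(x*y*z): composing
# levels of order multiplies the exponents. Base and exponents here
# are arbitrary illustrations.

X, x, y, z = 2, 3, 2, 2

assert ((X ** x) ** y) ** z == X ** (x * y * z)
```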
We have to have a way of handling the terms, and we know from logarithms that exponents are added together or multiplied. We can address any system in the following manner. For any given level, there is at least one higher order of generality or abstraction, which should represent an order of magnitude of simplification. We would address this kind of model, indicating ascending superordination and descending subordination, in the following manner:
^{c(b(a))}X_{((x)y)z}
For the same level, there is always also one lower
order of increasing differential specification which should represent a
corresponding order of magnitude of complication. We can say as a rule in
general metasystems that generalization implies specification, and
simplification implies complication. As a consequence, we may identify 3, 5, 7
or even 9 or 11 orders of magnitude to comprehensive metasystems, and we find
that expert knowledge sometimes attains these levels, at least descending if not
always in the ascending comprehension of systems. We would thus identify a 3-level stratified system as a first order system, a five-level stratified system as a second order system, a seven-level stratified system as a third order system, and so on.
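The mapping from stratification levels to system order just described can be stated as a simple formula; the helper name below is hypothetical:

```python
# The text's mapping from stratification levels to system order:
# 3 levels -> first order, 5 -> second, 7 -> third, i.e.
# levels = 2 * order + 1, so order = (levels - 1) // 2.

def system_order(levels):
    return (levels - 1) // 2

assert [system_order(n) for n in (3, 5, 7)] == [1, 2, 3]
```

This also recovers the 9- and 11-level metasystems mentioned above as fourth and fifth order systems.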
We must understand that the variable terms themselves
would represent what could be called complex nonlinear instantaneous state-values. In other words, the central term X would denote in most natural
systems not a single value or variable, but a set or matrix of multiple values
or variables that would be related by some function. At the same time, it is
assumed that the exponential values are related to the central variable in terms
of some functional set of derivatives or integrals. We would state that the
ascending terms would represent integrals of the system, and the descending terms would represent derivatives of the term, derivatives and integrals being defined in an instantaneous manner. In the application to a calculus of space-time dynamics, this model represents simultaneous systems that co-occur independently upon the same levels of stratification in accordance with the cosmological principle.
I have coined the term relational theory in reference primarily to the
understanding of the structure of human symbolic systems in order to get a
handle on the structural aspects of naturally occurring metasystems. In
relational systems it can be said that there are no a priori primes or starting
values, but each term is definable in reference to some set of other terms
within the system. There are thus no anchor points by which to ground the system
or upon which to build the system. I believe metasystems as these naturally
occur in reality represent such relational structures. We assign to these
relational structures properties and values that are associated with a given
level of specificity/generality in such a system, but we cannot designate in a
nonarbitrary manner the upper or lower limits of such a system. I would state
anthropologically, from the standpoint of the anthropology of knowledge and
anthropological relativity, that this central paradox of reality is as much an
artifact or consequence of our own knowledge or way of understanding reality, as
it is anything intrinsic to reality itself. We are referring to a set of
limiting conditions at which epistemological and metaphysical considerations
converge. We do not say that this patterning is intrinsic to the order or
patterning of reality in and of itself. We only infer this sense of order from our own knowledge frameworks and filters. Reality in and of itself, divorced from the experience of human knowledge, is non-self-aware. It can be said to
contain information in an implicit and theoretical sense in its patterning and
organizational structures that it assumes, but this patterning is stochastic and
ultimately blind.
The paradox in a physical sense though is that physical reality appears
to reflect and embody this kind of relational patterning, and all physical
aspects of reality can be said to constitute a grand relational metastructure
within which there are no fixed or predetermined coordinate reference systems.
In other words, we must contend not only with the paradox of anthropological
relativity of knowledge systems about reality, but we must contend as well with
extrinsic limits to this knowledge in terms of the physical relativity of our
systems of understanding and our capacity to observe naturally occurring systems
without influencing these systems by means of our observation.
Equilibrium & Supersystems
It may be said that naturally occurring systems that
exhibit redundant and consistent properties upon an organismic level attain a
certain relative equilibrium of structure that permits us to refer to them as
systems that are at least partially closed and partially self-determining. This
equilibrium exists as a kind of dynamic balance that is maintained through
self-organizational patterning within the frameworks in which the system exists in
the first place.
Equilibrium can be said to be complex, dynamic and
inherently underdetermined. In nature it is almost always nonlinear in its
patterning, and hence its equilibrium is used to account for its state-path
trajectory, or developmental patterning, within a larger metasystemic context.
In short form, we refer to a system of natural
patterning as a "system" because it exhibits a relative structure that
we associate with a set of properties that we refer to as emergent or synthetic
to the system. When we analyze such a system, breaking it down into its
definitional or componential primes, which we treat as if given and
non-relative, the emergent properties of the higher order suddenly disappear, and
we attempt to determine the network and transition structures that occur between
the component parts without the benefit of a holistic integration of the system
in terms of its transcendent properties. This represents a basic dilemma of
scientific theory and explanation, between analytical reductionism of the system
into its component parts and synthetic generalization of the interaction of the
component parts in relation to the system as an integrated whole.
There are certainly properties that are evident upon
one level that are not fully accounted for by the terms and relations of the
underlying levels. Thus analytical explanation frequently falls short of its
intended aim of full comprehension when it is done without the aid of synthetic
theoretical hypothesizing about the system as a whole and its metasystemic
provenience in a larger scheme of things. This constitutes what I refer to as
the scientific dialectic, which switches continuously back and forth between
analytical explanation on the one hand and synthetic generalization on the
other.
Emergent properties associated with metasystems are
the consequence of the operation of the metasystem upon a transcendent level of
integration. These properties depend greatly on the fidelity of order of the
underlying system upon which the emergent properties are based. Emergent
properties really can be seen only as the sensible qualities that are available
to our knowledge at some level, by which we understand systems and their
composition in the first place. Emergent properties can be seen as dependent
upon the integrity of the underlying system, and these properties are those
primarily that we attribute to such systems. Emergent properties define systems
in a stratified sense and entail that a system is integrated with its surroundings
in relation to other parallel systems, forming together what can be called a
supersystem.
Nature is thus organized at multiple levels of
integration, each level exhibiting its own independent sets of properties, and
yet each based upon the systems resting beneath it. The stratification of nature
was not achieved in an instant; it probably represents the result of a series
of highly unlikely events, each of which can be described as an occurrence of change
within a situational context. That this stratification exists is undeniable, and
yet there is what can be considered a central dogma of this
stratification: all systems tend toward increasing size and
scale of complexity in their integration. This integration is achieved on a
basically physical and mechanical model, at all levels. The organization of
emergent properties at different levels, or their stratification and ranking
between levels, is a derivative consequence of this physical integration of
natural systems. It follows that the basis of scientific explanation is always
physical, and that this explanation will grow increasingly general as we move
from the physical to the higher emergent orders of natural systems. The degree
of complexity of such systems can be seen to expand exponentially as well, such
that we can consider the following kind of model:
…(V_{3}(W_{2}(X_{1}(Y_{0})^{z})^{z})^{z})^{z}…
where Y_{0} is the starting point (zeroth entity), the superscript z is the
relative power or exponent of increased complexity, the subscripts represent
successive orders of levels, and …V, W, X represent increasing emergent
properties associated with the subsystems.
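This exponential expansion of complexity across levels of integration can be sketched in code. The following is my own illustrative reading of the model, not the author's; the function name and sample values are assumptions.

```python
# Illustrative sketch: each successive order of integration raises the
# accumulated complexity of the level beneath it to a power z, as in the
# nested model ...(V3(W2(X1(Y0)^z)^z)^z)^z...

def nested_complexity(base: int, z: int, levels: int) -> int:
    """Complexity after wrapping a base entity in `levels` successive
    orders of integration, each raising the running total to the power z."""
    c = base
    for _ in range(levels):
        c = c ** z
    return c

# A base entity of complexity 2, exponent z = 2, three levels of
# integration: 2 -> 4 -> 16 -> 256, i.e. growth as a tower of exponents.
print(nested_complexity(2, 2, 3))  # 256
```

Even with a tiny exponent, a few levels of integration produce enormous complexity, which is the point of the model: higher emergent orders rapidly outrun constitutive description.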
All scientific explanation begins in and leads back
to the explanation of the physical processes that underlie and account for the
basic emergent properties that are associated with any given level of
integration of reality. Secondarily, scientific explanation is concerned with
the problem of the derivative or resultant systems that emerge or are developed
as a result of the interactions of physical processes in some kind of order.
All naturally occurring systems exhibit emergent properties upon discrete
levels of stratification; there is no natural system that is not so endowed,
or that is fully comprehensible in a completely constitutive manner. The
emergent properties of all natural systems are an indication of the fundamental
relativities of such systems, both physically and anthropologically, in the sense
of our knowledge and understanding, and even our observation, of such systems.
Natural systems theory breaks down and stratifies
reality in this manner into natural and logically ordered sets occurring upon
different levels of superordination-subordination. In fact natural systems
stratify in terms of a spectrum ranging from purely physical phenomena on one
extreme to purely symbolic and metaphysical phenomena upon the other extreme,
with biological systems ranging somewhere between these two extremes. We can
range along this spectrum from one end to the other and notice discontinuity
only in terms of the emergent properties that are associated with a particular
level of the spectrum. If we sought a purely analytic approach, we would find
for instance that this emergent discontinuity of systems breaks down, and systems
appear more or less continuously reducible in terms of components, and components
of components, and so on ad infinitum.
We can say that the most comprehensive natural system
is the physical system, and of the physical systems the most comprehensive is
probably the fundamental unified field system that encompasses the total
universe as a metastate and possibly multistate system. At the same time, when
it comes to the emergent properties of energy, of various forms of elemental
matter, and of organic molecules, cells and biological systems, each of these is
a subset of the larger and more basic system in which it rests. We arrive at
human systems, which relate ultimately to other possible intelligent systems in
the universe, at the other extreme, as a form of natural system that
is capable of automaton self-awareness, or consciousness, and to some extent a
measure of self-determination that is relatively non-stochastic or non-random.
Abstract States & Natural Orders
The Systematics of Identity, Property, Relation & Inferential Structures
The concept of metasystems implies set theory, as
well as a number of other related theories that deal with the organization of
elements and the relations between elements. Exactly how set theory and other
related mathematical theories might be implicated in the understanding of
metasystems theory is dealt with in this chapter. We can say that a metasystem
implies one or more transformable sets. We can consider that each instantaneous
state transition that we measure or mark off for a system constitutes a subset
of the total set that the metasystem comprises.
Any metasystem in theory has a start state, or
beginning, and an end state, or terminus. In actuality, it can be demonstrated
that in natural systems, there is rarely a clear-cut line that marks a beginning
and an end of a system. It is more a question of descriptive shorthand and the
need to impose a sense of discontinuous boundary upon systems that are otherwise
continuous and in their essence unending.
We impose some qualitative definitional shorthand of
life upon an organism. We say that a human being has a beginning at the moment
of conception and an end at the moment of its final expiration. If we look more
closely, though, we can see that conception was preceded by the life-forms and
processes of the parents, and represents this essential continuity of process in
life. Even in death, we can neither mark the exact moment of final expiration,
in which the system quits all at once, nor say that the system, in its return
to nature, does not re-enter some larger event-cycle of nature, which it clearly
and always does. But from the standpoint of talking about that distinct,
individual entity as a living person, we must mark the boundary as such. During
that period, it constitutes its own unique system, independent of the
systems that come before or after it, or that encircle it in every way. We mark
this uniqueness by our discontinuous superimposition of definitional boundaries.
We can hypothesize that, for any metasystem, there is
some original start-state, A_{S}, and some final end-state, A_{F}.
There is an indeterminate range of intermediate states that are describable by
the state-transformations from the start-state to the end-state according to some
complex nonlinear transformational function, such that we may write:
A_{S} → ƒ(A_{S} → A_{F}) → A_{F}
The end-state may not be the direct transformation of the
start-state, but rather the indirect by-product of a whole series of transformations of
intermediate states. The arrow implies one thing: change over time. It is
irreversible, and unequal.
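The schema A_S → ƒ(A_S → A_F) → A_F can be sketched as a simple time-ordered iteration, with the end-state reached only as the by-product of a chain of intermediate transformations. The function names and the sample nonlinear map below are my own illustrative assumptions, not anything given in the text.

```python
# Sketch: a start-state carried to an end-state through a series of
# intermediate state-transformations, recording the whole state-path.

from typing import Callable, List

def run_metasystem(start: float,
                   steps: List[Callable[[float], float]]) -> List[float]:
    """Apply each intermediate transformation in time order and return
    the full state-path trajectory from start-state to end-state."""
    trajectory = [start]
    for f in steps:
        trajectory.append(f(trajectory[-1]))
    return trajectory

# A simple nonlinear transformational function (a logistic-style map),
# applied three times; the arrow of time runs left to right.
path = run_metasystem(0.5, [lambda x: 3.2 * x * (1 - x)] * 3)
print(path[0], path[-1])  # start-state A_S and end-state A_F
```

Note that the iteration is not reversible: from the end-state alone one cannot recover the start-state, which echoes the text's point that the arrow connotes irreversible, unequal change rather than equality.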
It appears that in our model of metasystems, the
notion of equality, or the equal sign, is an ideal and absolute that connotes a
static system. It thus cannot connote the change of systems as these occur in
reality. The closest we can come, I believe, is to impose an equivalence sign, such as
≈, denoting that one hypothetical entity is approximately the same as, or
remains relatively equal or unchanged in relationship to, another entity or to
itself, in time.
We can say that all metasystems are time-ordered
systems. Equality is reserved in our denotations for the specification or ideal
identification of entities and their partial values, for when we impose
substitution upon systems in a way that allows their relational embedding and
differentiation in the abstract terms of other systems.
We can say that the time arrows in our formulas are
arrows of change and difference, and therefore imply an additive or
subtractive comparison of values, or state-differential. We can say that they
represent definite "intervals" of transition between
"states."
In metasystems, we refer to "states" as
complex entities. We can infer a kind of "state-theory" that perhaps
shares many aspects of set theory and order theory in mathematics. A state is a
kind of subset of a metasystem. Metasystem models must therefore elaborate
state-theory as somehow relevant to their abstract representation of real systems.
We can describe for any metasystem a hypothetical metastate that is the series
of all subsets of the state. Series implies a form of union that occurs in both
time and space, what I will call state-integration.
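One hedged formalization of a metastate as the series of all subsets of a state, with state-integration as the union taken over that series, might look like the following sketch. The names and example values are mine, not the author's.

```python
# Sketch: the metastate as the ordered series of all subsets of a state,
# and state-integration as the union over the whole series.

from itertools import combinations
from typing import List

def metastate(state: frozenset) -> List[frozenset]:
    """All subsets of the state, ordered by size -- the 'series'."""
    elems = sorted(state)
    return [frozenset(c) for r in range(len(elems) + 1)
            for c in combinations(elems, r)]

def state_integration(series: List[frozenset]) -> frozenset:
    """Union over the series, in the sense of integration in time and space."""
    out = frozenset()
    for s in series:
        out |= s
    return out

s = frozenset({"a", "b", "c"})
series = metastate(s)
print(len(series))                      # 8 subsets for a 3-element state
print(state_integration(series) == s)   # integration recovers the state
```

The integration recovering the original state illustrates the claim that the metastate, though elaborated as many partial subsets, remains a representation of one underlying whole.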
A state is a sequentially ordered subset. It is also,
necessarily, a spatially ordered subset. Any real system that exists in time
must occupy some kind of discontinuous space. As such, its position is always at
least implicitly definable within a larger "metamatrix" of
alternative states. We can refer to pre-states, post-states, super-states,
sub-states, and alter-states, which we can designate in a disjunctive way as
either (right-hand) or (left-hand) states.
Each state would have some direct or indirect
relational function with our "center-state." The minimal construct for
a metasystem model can be seen to be an orthogonal projection of a
four-dimensional reality onto a three-dimensional spatial representation.
Each state is a unique subset of a
"metastate." I will also impose what I refer to as the hypothetical
"zero-state," which can be considered to be a non-state. A zero-state
can be defined as any state plus its complement, less its metastate. The
complement of a state is therefore all alternative states, and this complement
defines the matrix structure and implicit reference-inference framework of the
state.
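Read set-theoretically, the zero-state so defined is always empty, which is presumably why it is called a non-state: a state joined with all its alternative states just reconstitutes the metastate. The following sketch is my own formalization, with assumed names and example states.

```python
# Sketch: the zero-state as (state + complement) - metastate, where the
# complement of a state is the set of all alternative states.

def complement(state: frozenset, meta: frozenset) -> frozenset:
    """All alternative states: every element of the metastate
    that is not in the given state."""
    return meta - state

def zero_state(state: frozenset, meta: frozenset) -> frozenset:
    """State plus its complement, less the metastate -- always empty
    (a non-state) whenever the state lies within its metastate."""
    return (state | complement(state, meta)) - meta

meta = frozenset({"pre", "post", "super", "sub", "center"})
s = frozenset({"center"})
print(complement(s, meta))   # the four alternative states
print(zero_state(s, meta))   # frozenset() -- the non-state
```

The complement here plays the role the text assigns it: it fixes the matrix structure around the center-state, since knowing a state and its complement together determines the whole reference-inference framework.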
Each center-state constitutes its own point of origin
in the larger metastate framework, and simultaneously an extension of an
infinite number of other origin points in alternative states.
We may characterize a state as an instantaneous or
momentary set of interrelated points, or as some extended
set of such subsets that constitutes a discrete interval of a larger metastate
or metasystem. Ultimately, all states are continuous, and therefore our
superimposition of interval measures or discrete momentary
"snapshots" is possible only in an abstract sense.
A state has been described as a relational subset of
a system. Its identity and composition as a state, its sense of integrity, is
defined relationally by the transformational functions that are pertinent to
that state. This is almost always supercomplex and multiply connected at
several levels of analysis. We must identify what we can call the principal or
prime relational cardinality of any state as the minimal set of relational
determinants that can be used to relate and describe the greatest number of point
values for a given state. The degree of integration of this set of functional
determinants can be said to be the extent to which they can be successfully
unified within a single transformational equation. If it requires two or more
separate sets of transformational formulas to describe a system, we can say that
the state is heterogeneously underdetermined by that number of degrees.
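The idea of counting degrees of heterogeneous underdetermination can be sketched as follows. This is my own illustrative construction, a greedy covering of point values by formula sets, not a procedure given in the text; all names and example data are assumptions.

```python
# Sketch: if one set of transformational formulas determines some of a
# state's point values and further sets are needed for the rest, the
# number of sets required is the state's degree of underdetermination.

from typing import List, Set

def degrees_underdetermined(points: Set[str],
                            formula_sets: List[Set[str]]) -> int:
    """Greedily count how many separate formula sets are needed to
    cover all point values of the state."""
    uncovered = set(points)
    used = 0
    while uncovered:
        best = max(formula_sets, key=lambda s: len(s & uncovered))
        if not best & uncovered:
            raise ValueError("some point values are determined by no formula set")
        uncovered -= best
        used += 1
    return used

primary = {"p1", "p2", "p3"}   # values fixed by the primary function
residual = {"p4"}              # values fixed only by the complement function
print(degrees_underdetermined({"p1", "p2", "p3", "p4"}, [primary, residual]))
```

A state fully covered by one formula set would return 1 (fully integrated in the text's sense); the example needs two sets, so it is underdetermined by two degrees, matching the primary-function/complement-function split described just below.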
Since any nonlinear state can always be said to be
only partially or imperfectly determined, we can hypothesize that for any
real system there are always at least two or more basic sets of equations that
determine the values of that state. The primary function can be said to be the
set of those deterministic relational functions that determine the greatest number
of values of the system. The complement function can be considered to be the
residual set of nondeterministic relational functions that determine the
remainder of the values of the system.
While we can speak of positive functions that
determine the ordering of a system, it is difficult to imagine what can be
called negative complementary functions that "determine" the relative
disorder of a system. That disorder may be somehow represented in an ordered
manner, or that chaos may be somehow determined in a functional manner, seems
self-contradictory and presents something of a paradox in our understanding of
reality. We can say that just as there can be no perfectly ordered states or
systems, there also cannot be perfectly disordered states or systems. Hence, we
can describe some kind of improper integral function for the state of relative
disorder that hypothetically characterizes any real state or system.
The problems we encounter in the abstract
representation of real systems are precisely the kinds of problems encountered in
the recording of living realities by means of movie cameras and still-frame
photography. With any metasystem, we can adopt only one point of view at a
time by which to configure the metasystem. This is so because we cannot adopt
the point of view of the center of origin for any metastate or alternative state
within a metasystem. Any metasystem presents to us the possibility of an
infinite number of alternative points of view, and there is no single correct
set or number of points of view that is best or exclusive to a valid
representation of the system.
We are rescued from this daunting form of relativism
when we consider that every and any point of view is approximately equivalent to
any other point of view. There is no single best or worst point of view, though
some may be relatively better than others, especially in terms of the functions
they are serving.
This has a great deal to do with our knowledge and
descriptive explanation of complex systems. My wife was perusing an old medical
anatomy book from my father's medical school days in San Francisco. The detail of
the book was amazing. It presented numerous points of view of the body from
different angles and at different levels, some highly schematized and others
highly realistic, including actual photographs. To a great extent, which points
of view were included was primarily determined by the purposes the book was
intended to serve in the larger structure of the text itself.
The human body, as a metasystem of nature, presents
to us the possibility of an infinite number of viewpoints that focus on an
infinite number of center-states as alternative and equivalent points of
reference. We cannot say that any one overall viewpoint is best, or that
any single point of view is without value.
We really have no way of proceeding otherwise in our
abstract representations and descriptive explanations of systems, other than the
elaboration of alternative center-states from a variety of "angles"
and different points of view, depending upon the functional purposes to which
they are put.
This digression about our relative knowledge, what I
will call the representational state-relativity of metasystems, is important to
our consideration of state-theory. I believe it demonstrates clearly that we
cannot adopt any point of view that does not serve some extrinsic functional
purpose, one not inherent to the metasystem under inquiry. Not only can we
never describe any metasystem in its entirety, in a complete or exhaustive
manner, but we can never describe any metasystem in a completely non-arbitrary
or a priori way. Our understanding of any metasystem remains always tied to the
functional framework within which we ourselves are embedded. It serves us well
in our descriptive explanation of metasystems always to remember and mention at
least in passing some sense of the functional rationality underlying our
description. This is clear in anthropological fieldwork, but it is not so
readily apparent in the telescopic observation of distant stellar systems.
It is clear that in our scientific explanations, we
seek to impose a set of standards and explicit limitations upon our descriptions
such that we are able to abstractly represent any metastate or metasystem in a
minimally sufficient manner. This cannot be done by means of exhaustive
elaboration of alternate states. We seek a metastate of metastates, a
description of the order and relation that underlies the metasystem in its
entirety. We hypothesize the existence of some underlying sense of order of
relations that governs a system, of which any particular statedescription is
but one imperfect and partial representation.
Scientific theorization and generalization are forms
of systematic simplification of metastates, used to functionally explain
metasystems as these occur in reality. Any operational procedure we may apply to
our descriptive explanation of states and systems must lead to a simplification
rather than an elaboration of a system. If we seek to elaborate some point of
view in detail, it is in the interest of applying this particular description to
alternative state-descriptions, as an example. Simplification rules are based
upon the notion of relative equivalence and substitutability of states and
metastates, such that we may derive refined abstract models that are
representative of most alternative states occurring for any given system.
[1]
How a problem will be understood, or even what problem occurs, will be
largely a condition of the frames of reference adopted by the problem
solver. What may seem problematic about reality for one person may not be so
problematic for another individual or group of people. We may therefore
distinguish also between primary or direct problem sets that deal with
immediate, instantaneous conditions of reality, and secondary or indirect or
derivative problem sets that are the consequence of the differential or
parallax of perception of primary problems or other secondary problems. I
would also distinguish what I would refer to as "tertiary" problem
sets that are distinguishable as "pseudo" problems or false
problems that arise as the result of error of processing or recording, the
transmission of misinformation, or erroneous apprehension of either direct
or indirect problem sets.
Blanket Copyright, Hugh M. Lewis, © 2005. Use of this text governed by fair use policy; permission to make copies of this text is granted for purposes of research and nonprofit instruction only.
Last Updated: 08/25/09