**Cloud Cosmology**

I am today sharing with you, on my 65th birthday, the unpublished second part of my paper published in AIP's Physics Essays, June 2010. I will, if invited, consider submitting it to a professional journal, although I consider this venue sufficient for active review and for inviting comment and the interesting questions that can enhance the presentation. In fact, I invite suggestions and input. The few equations did not post, so you may wish to have me email you a clean copy; ask by emailing me at arclein@yahoo.com. This work leads to a slew of additional conjectures that are worthwhile cataloguing, and I do not mind assigning attribution here even when they are already obvious to me or already thought of.

As a teenager, I understood that the important theoretical question that needed to be answered was the melding of a foundational mathematical model of particle physics into Einstein's foundational mathematical model of General Relativity. This was Einstein's core agenda. Of course, I entered the field at about the same time that this effort was being abandoned by theorists and physicists generally in favor of the empirically rewarding quantum theory, which is a Ptolemaic approach rather than foundational mathematics.

Regardless, I received valuable guidance and insight, and have spent the succeeding years defining and investigating many problems of interest. This blog is part of that ongoing process. As well, I received insight that the results published here could best wait until I turned sixty-five. There are a couple of reasons that this might be true. To start with, the present approach has utterly bogged down into a theoretical quagmire that is only now being admitted. It is time for change. The second and far more important reason is that our computer power was far too weak to properly run the simulations that this work begs. There really was no rush.

**Claim 1** The advent of **generalized cyclic functions** into the field of mathematics allows us to expand our mathematica hugely into higher-order solutions. An additional tweak of symbolic logic also holds out the plausible proposition that a great deal of our core mathematica can be mapped and confirmed by computer simulation. In short, it is possible to tidy up the foundations of our mathematica and get a convincing computer assist. This leads to a conjecture.

**Conjecture:** Gödel's Theorem can be voided.

I will be interjecting additional comments and interpretations throughout this essay that are not part of the original text; they will normally be in the form of conjectures. This will hopefully make this essay easier to read. However, the work on the particle pair in the earliest going is excessively rigorous, as it has to be. You can expect to get a headache as you master it, because no word is unexamined, nor can any be slid over easily. Go carefully and take your time.

**Claim 2** The first creation of the particle pair **fp** described herein is sufficient to create the cosmos and all of time and space, or the space-time manifold, and thereby unites Particle Physics and General Relativity.

**Conjecture:** A review of the empirical data is sufficient to suggest that the foundational **fp** described in this paper is the **neutral neutrino**, and that it is a sufficient explanation for the content of the cosmos and of **Dark Matter** in particular. It may also be necessary and unique, and in fact subject to mathematical proof. The lack of perfect packing for tetrahedrons strongly suggests just that.

**Conjecture:** The creation of a **neutral neutrino** is offset by the simultaneous creation of an **anti neutral neutrino**, formed as a twist and counter-twist. From that point they break apart and cannot cancel each other out unless they are able to realign. This is extremely unlikely unless tight packing is achieved. This framework produces our cosmological content without a net input of effort beyond initiation.

**Claim 3** The theory generates a superior cosmology, which we will now name **Cloud Cosmology**. The act of creation produces an expanding sphere of partially bounded curvature geometries that decay readily into particle pairs (**fp**s) when conditions are sufficient. This produces a cosmological cloud of **neutral neutrinos** and a resultant background radiation reflecting the decay process. The **neutral neutrino** does not produce gravity, but that changes as assembly takes place and decay produces much larger particles and more energetic radiation. Assembly is the result of random perturbation and of correct alignment leading to attractive combinations.

**Conjecture:** Self-assembly produces a background of proto-particles that will ultimately decay into all the particles that we recognize, and related radiation. These particles are recognized as **cosmic radiation and hydrogen**. Hydrogen is formed as two neutrons self-assemble and then decay to produce our hydrogen atom.

After all that takes place, general gravitational effects will produce the rest of our present cosmos as understood.

As I state in the paper, my only assumption is that we exist. The act of creation itself is an act of imagination that includes a simple twist and plausibly its counter-twist to initiate a space-time manifold and the first particle pair(s) **fp**. Everything else follows at the internal speed of light. What is truly unimaginable is that we can imagine this.

**Claim 4** **Neutral neutrinos** and **anti neutral neutrinos** will self-assemble to form proto-assemblages that will decay into geometric solids recognized in part through our **standard theory**. The electron is plausibly a solid holding about 600 such neutrinos. There is ample room for variation here, and for neutrino sharing. Simulation will need to investigate a range of options to determine stability. We can extend this same idea to assemble electron/anti-electron pairs, or something similar, to form the neutron, which is central to our perspective of matter. We do not assume that these geometric solids are perfect.

**Conjecture:** A neutron contains over 720,000 neutrinos of both types. Our metric allows us to map the net induced curvature inside and outside this assemblage for any point we choose. This will allow an appreciation of the power of the metric and of the computational challenge involved in mapping the induced fields. It is now possible.

To put the computational problem into perspective, each point that is sampled for a neutron will require over 7,200,000 convergence calculations that need to be consistently stable in an environment that is deteriorating as the inverse of distance from the center. Sooner or later it becomes unstable and crashes.
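A back-of-envelope sketch puts that cost in perspective. The per-point figure comes from the text above; the grid resolution and machine throughput are invented assumptions for illustration only:

```python
# Back-of-envelope cost of mapping the induced curvature of a neutron.
# The 7,200,000 convergence calculations per sample point come from the
# text; the grid size and machine throughput are invented assumptions.
calcs_per_point = 7_200_000
sample_points = 100 ** 3          # assume a 100 x 100 x 100 sample grid
rate = 1e9                        # assume 1e9 convergence calcs / second

total_calcs = calcs_per_point * sample_points
hours = total_calcs / rate / 3600
print(f"{total_calcs:.1e} calculations, roughly {hours:.0f} hours")
```

Under these assumed figures the full map runs to about 7.2 trillion calculations, on the order of hours on a single modern machine, which supports the point that the simulation is demanding but now possible.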

It
is also clear that if we can achieve that, it will be possible to
generate all possible smaller assemblages and their induced
curvature.

**Paper Begins**

Before we begin our discussion of the implications of the generalized cyclic function for physics, a few remarks are in order about the mathematics and about what we are attempting to achieve in the balance of the paper; these also provide a bit of a road map for the reader. At least you will know what I think is important and what is mere speculation at this time.

The mathematics is a direct expansion of the idea of complex numbers through the application of a non-reducibility rule for the nth root of -1 for all n greater than 2. An immediate and direct result is the generation of the generalized cyclic functions of order greater than two. That encouraged the further development of the Pythagorean identities for orders three and four. It is also clear that higher orders of the Pythagorean identities can be developed to order n with the application of significant time and effort. The calculations are simple, as demonstrated in the worked cases of the third and fourth order.

The existence of Pythagorean identities of order higher than two is new to mathematics and previously unimagined. I make the conjecture that they represent a powerful new tool allowing the orderly development of exact solutions to higher-order differential equations. An obvious problem of some fame is the three-, four-, and n-body problem in mathematical physics. The new Pythagorean identities provide us with the tools needed to attempt a resolution.

I do not expect an easy or even a complete solution to arise, but I do know that by first reframing the historic work on the two-body problem in terms of the second-order Pythagorean, it is plausible to extend the resultant ideas into the three-body problem with some confidence. Much of this has been done, and it will be valuable to establish the best notation and a general framework before this program is launched.

The generalized cyclic functions themselves are congruent in the sense that the curves maintain the same shape inside an envelope formed by the exponential function over +/- Y, while the cycle length (or wavelength) for n > 2 declines as the inverse of the number of cycles. This is not yet formally derived, but it is apparent as the individual curves are mapped from the spiral function. Natural congruence of the curves for all n implies that, when applied to notions in physics, a change in n will produce only a small incremental change in the derived results.

Of more immediate importance to physics, we discovered from the development of the mathematics that any derived construction must be inherently even-numbered, because odd numbering is immediately divergent. That directly implies that the act of particle creation must be by pairs. It was that discovery that allowed me to finally construct a satisfactory thought experiment regarding fundamental particles. I had pursued the idea itself for many years with unsatisfactory results prior to this insight.

What this odd-numbered divergence might mean in terms of natural objects in space is much harder to illumine, but it certainly serves as a warning that all apparently stable cyclic systems are potentially far more divergent than common sense would imply. For example, it can be inferred that an incoming body could possibly trigger the removal of a comparable body from an apparently stable orbital system without overly disturbing the rest of the system.

On a personal note, I come to this problem through training focused in the field of applied mathematics, rather than through the lens of present-day thinking in physics. In fact, I am also trying to initially avoid in this paper most of the present work in physics, and even the language of physics, because it is premature for the purpose of this paper and is certain to confuse the reader. I would dearly love to associate the fp pair in the next section with some identified particle in today's nomenclature, except that is premature. I am avoiding the language of physics, including even the idea of energy, because all these words carry baggage and hard-to-shake assumptions that we all share.

I included Rektorys' Survey of Applicable Mathematics in my references not for a specific item, but because the book is an attempt to inventory by statement, without proofs, every theorem having value in applications. I had a close formal encounter with each and every theorem listed in the book's thirteen hundred pages, save a handful, sufficient to satisfy my appetite.

More critically, I was privileged to attend a one-year course on General Relativity conducted by Hanno Rund and David Lovelock on their work that eliminated the unsatisfactory mathematical assumption of linearity from the General Theory. Much discussion took place regarding how particle physics might be synthesized in terms of the General Theory. It was apparent to me, and perhaps to others, that any such synthesis had to be a function of the metric itself. Yet no such metric presented itself or even came close.

Therefore, __what I am conducting in the next several pages is a thought experiment that is informed by the idea of the existence of such a metric and the imposed mathematical necessity of pair-wise creation in a particle-based universe.__ The next several pages represent a rigorous attempt to describe the creation of a particle-based universe as far as it is possible to proceed without simulation.

I emphasize that this is a thought experiment aimed at constructing a universe in the classical sense, specifically avoiding for now, as much as possible, a possibly premature attempt to associate results with known empirical results. In particular, the additional non-acceptance of the equivalence of the inverse of infinity to zero causes me concern when addressing other such models that do not or cannot make that distinction.

Importantly, the mathematics of the cyclic function is sufficiently robust that it is plausible to map the effect of a particle out to many cycles, and also to construct assemblages of such particles and map their effect on each other to a distance of many cycles. This can be done at a high level of resolution, possibly over thousands of cycles, by taking advantage of the algebra I developed in the paper for this purpose.

Therefore, not only can we simply imagine the structures and their possibilities, we can also calculate their effect at a distance and simulate experiments on them with the technology available today.
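As a toy illustration of calculating an assemblage's effect at a distance (this is not the paper's algebra; the waveform, its 1/r falloff, and the source layout are invented for the sketch), each source can be treated as emitting a push-pull spherical shell, and the combined field sampled at any point:

```python
import math

def curvature_at(point, sources, t, wavelength=1.0, c=1.0):
    """Toy push-pull curvature at `point`: a superposition of decaying
    sinusoidal shells emitted by each source. The waveform and its 1/r
    falloff are illustrative inventions, not the paper's algebra."""
    total = 0.0
    for s in sources:
        r = math.dist(point, s)
        if r > 0.0:
            total += math.sin(2 * math.pi * (r - c * t) / wavelength) / r
    return total

# Sample the combined field of a small assemblage (tetrahedral layout)
# at a few distances along the x axis.
sources = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
for x in (2.0, 4.0, 8.0):
    print(x, round(curvature_at((x, 0.0, 0.0), sources, t=0.0), 4))
```

Even this crude superposition shows how a geometric arrangement of sources imprints structure on the surrounding field, which is the kind of calculation the proposed simulations would perform with the proper algebra.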

This
means that it is possible to construct geometric objects whose
actions and reactions can be identified as possible analogs to the
particles of present day thinking in physics. This is also another
good reason to avoid any modern terminology until calculation has
caught up to our imagination.

You will find my understanding of time in the particle framework to be unusual, and it will need some time spent on it. I am informed, for example, that a finite object reflects a finite time function directly related to the number of fp pairs involved. I assume that relationship to be simply linear. I also do not expect the reader to be immediately comfortable with these ideas.

One further conjecture suggested by the quasi-crystalline nature of the fp pair and the implied tetrahedral construction geometry is that polyhedral solids have physical meaning. One such object has 600 vertices and thus 1200 fps. If one then makes the conjecture that two such objects can adjoin and share one fp pair, we obtain a coincidental analog to proton-neutron pairing. The remaining smaller polyhedral objects, in complete form or missing one or more fp pairs, are also worth modeling as plausible derivatives of the breakup of the larger object. In any case, we have a framework for simulation and, with the metric, a method of establishing comparable mensuration over a range of polyhedral solids.

I believe other writers have occasionally proposed a pseudo-crystalline structure for the particles of physics, but that is not where I started from at all. Rather, I started with a careful thought experiment that ended up with a quasi-crystalline structure with important time inferences.

The next section is a rigorous thought experiment whose aim is to imagine a particle universe constructed from a single unique fundamental particle created in pairs, and then modeled by application of the generalized cyclic function of order n, where n is the number of particles involved, between two and N, the number of particles in the universe.

Once
that is complete we discuss the issue of gravitation and the Dirac
hypothesis and show the Euclidean metric as a first approximation.
We also discuss the inferences of the generalized metric in light of
the non zero nature of the inverse of infinity in our model.

Please Note: This model universe is meant to be understood as a thought experiment that may, on the application of the implied calculation, have physical meaning, but it should not be applied as an analog to the standard theory until then. I also drop the n in my notation for cyclic functions when N, the total particle count of the universe, is implied.

**Defining the fundamental particle**

Firstly, space *S* exists, and is conveniently described as a classic four-dimensional space-time manifold. Any object, whatever the word object may mean, contained in space *S* will induce curvature on space *S*. We leave the concept of existence to the purview of the philosophers. *Our postulated fundamental particle fp is bounded in three spatial dimensions by δd, which presents as a constant within the universe of fundamental particles U.*

We
see no reason at this point to assume that there is more than one
such type of particle.

The particle fp must also signal its existence. We can imagine this happening as the turning on and off of a light switch. In this case it is reasonable to assume that the particle snaps in and out of existence over a time constant *δt*. This process continuously induces curvature on space in a cyclic manner, transmitting the information of the fp's existence. *A visual metaphor for a fundamental particle is a strobe light. By flashing on and off, it informs surrounding space of its location at light speed.*

We now have two options. Either the fp is at rest and only moves relative to available curvature, or alternatively, it recreates itself in the direction of curvature. In the first instance, the particle can establish variable velocities while transmitting curvature at *c = δd/δt*, a natural constant. In the second instance, the fp is itself traveling at *c = δd/δt*, but it changes direction every *δt* in response to the curvature signals emitted by adjacent particles in particular and by the universe in general. I accept the second case, which puts the movement of all fps and their transmitted curvature at the speed of light and makes the first option an unnecessary assumption and complication.

This second case also gives us an intuitive definition for inertia, since motion for a finite set of fps cannot be changed smoothly as an external force F is continuously applied. Of course, the incremental change element *δd* is so small that the observed effect is practically the same as with the calculus concept of mathematical continuity, which depends on the concept of mathematical infinity. Infinity is something that the universe has no need to know, and we can rigorously exclude it as an assumption.

In either case, the constancy of the speed of light is a direct result of the existence of the fp. This is comforting. Can we now construct our universe from this minimalist beginning?

The fp's effect on curvature can be imagined as a push-pull event, not dissimilar to a sine wave. The scaling unit *δd* will establish the apparent wavelength of this changing curvature, at least close to the origin. A fp may be imagined as a bounded partition of space *S*. *S* can be imagined as an unimaginably large sphere described by the expanding curvature surface generated by the creation of the first fp. This directly implies that every fp contained in space *S* is simultaneous, since space *S* does not have a time clock independent of the time clock impressed by the fps contained in *S*. This also means explicitly that if and when a new fp is created, all other fps will 'know'. Time and distance, as we understand them, are a metric induced on space *S* by the universe of existing fps, and are not an external condition for *S*.

Since we are not inclined to let our fundamental particle fp slow down or sit still, we now need to construct a thought experiment for 'at rest' phenomena. Now, a single fp traveling in a straight line will generate a trailing cone of curvature. This is interesting but not useful. If instead we imagine two fps in close proximity, we can readily imagine a roughly synchronized dance in which the fps switch direction every *δt* cycle. The direction change is determined by the other fp's curvature signal. The general shape of the configuration is tetrahedral, and it should converge to a fully synchronized dance in the absence of any external curvature change. In this thought experiment the fp pair traces a path around the tetrahedron, hitting each apex in turn while separated by one apex. *An fp pair is created by the twisting of space to form two bounded and adjacent fps. Each fp follows a step-wise path in which its direction is determined principally by the curvature wave previously generated by its partner. The only stable configuration possible is the tetrahedron.*

Without externally applied curvature, this pair is naturally at rest. Observe that the equivalent of the centre of mass will switch back and forth over a distance of *δd/√2*, thanks to the geometry of a tetrahedron.

A good visual analogy is to imagine a balloon receiving a half twist to form two spheres, then untwisting and disappearing, reappearing immediately and repeating the procedure at right angles to the first configuration. This cycle is then repeated over and over.
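The √2 factor follows from the geometry alone: the midpoints of opposite edges of a regular tetrahedron sit edge/√2 apart. A quick numerical check (the cube embedding is just a convenient choice of coordinates):

```python
import math

# Regular tetrahedron embedded in the cube with vertices (+-1, +-1, +-1).
verts = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
edge = math.dist(verts[0], verts[1])          # edge length, 2*sqrt(2)

def midpoint(a, b):
    return tuple((p + q) / 2 for p, q in zip(a, b))

# The centre of mass shuttles between the midpoints of opposite edges.
m1 = midpoint(verts[0], verts[1])
m2 = midpoint(verts[2], verts[3])
hop = math.dist(m1, m2)

# Scaled to an edge of length delta-d, the hop distance is delta-d / sqrt(2).
print(abs(hop - edge / math.sqrt(2)) < 1e-12)  # True
```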

A more correct visualization is to imagine space twisting into existence on one edge of the tetrahedron, creating two fps, and then twisting back out of existence over the duration of *δt*. Thereupon the exact same process is reenacted on the opposing edge as a result of the curvature generated by the first event. This process repeats continually.

One immediate result of such a configuration is the generation of directional curvatures *δc* in the four directions corresponding to the edges of the tetrahedron, consisting of a repetition of *010101*. This sets up and permits the possibility of additional pairs being drawn into the dance and linked together in a highly symmetric, crystalline-like formation. We can postulate the formation of stable geometric formations continuously emitting strong directional curvatures, as well as establishing a general curvature on the surrounding space reflecting the fp content of the construct. These structures will be referred to as fp constructs for the purpose of this paper. An appropriate symbol could be *P*_{2m}, m being the number of fp pairs.

This 2-fp construct has all the implied symmetry of a tetrahedron, as well as two opposite axes of rotation for the motion of the fps. This immediately implies that larger constructs will vary in regard to the axis of rotation chosen. Specifically, the tetrahedron has four axes of symmetry associated with each apex and the center of the opposite side, and two axes of symmetry through the centers of opposing sides. These axes are not uniquely different, inasmuch as an observer will be unable to separate the individuals in the two types of symmetry. However, the movement of fps from axis to axis occurs in either a clockwise or counterclockwise direction. These are separable to an observer. A 2-fp pair can thus present six unique aspects to an observer and to another pair.

These are:

- Apex on, clockwise path
- Apex on, counterclockwise path
- Face on, clockwise path
- Face on, counterclockwise path
- Edge on, clockwise path
- Edge on, counterclockwise path

This
construct can obviously be further modeled and evaluated in the event
that it is globally rotating as a result of externally applied
curvatures.

*An excellent intuitive conceptualization of the creation of an fp pair is to imagine taking a balloon and giving it a half twist, thereby producing two spheres with a minimum of effort. This may also be a useful conceptualization of the first act in the creation of the universe. An ideal symbol for such a pair is, ironically, . Direction change, rather than being thought of as from apex to apex, can now be better thought of as moving from edge to edge over the distance δd/√2.*

A
program to investigate the likely configurations requires extensive
computer based simulation, which is now possible. Intuitively, we
anticipate that it will be possible to construct objects that are
analogs to our known particles. Inherent symmetry is built in and we
see an affinity for directed curvatures that will provide an obvious
analog for the forces holding the constructed objects together.

Conceptually
we postulate the creation of directed curvatures that will have a
stronger effect at short range than the balance of the generated
curvature. Recall that during each cycle the fp generates a sphere
of expanding curvature traveling at light speed. The directed
curvature is a small partially bounded subset that has combined two
fields in a straight line. We postulate that this line of curvature
can link with the line of curvature of another pair and that this
combination tends toward reinforcement and stability. This gives us
an analog to a resonating waveform anchored on the two fp-pairs.
What this will look like when we attempt to mathematically model it
is unknown at this time.

One other intuitive concept that emerges from our thought experiments is that directed curvatures, or waveforms, or photons have an affinity to combine and resonate where conditions permit. The rules of such events are only hinted at. More positively, our fp pair model establishes ground rules for the generation of these formations and will allow us to create working models that permit an understanding of the nature of their linkages and boundedness.

The linkage of two 2-fp constructs is possible on any of the four edgewise axes, and perhaps on the other axes presented, generating a signal **sig** equal to *010101…* In line, we have a combined **sig** ladder between the two 2-fps that tends to keep them in place. If they move slightly out of position, then the next passage of the **sig** will jostle them back into position. Oscillation is likely the norm. The distance between the two 2-fp constructs can vary greatly, with binding strength obviously increasing as proximity improves.

Extending this logically, we can postulate the formation of a larger tetrahedral construct consisting of a central 2-fp with four 2-fps linked at the apexes. Varying linkage distances and synchronicity generate a variable 10-fp construct with multiple acceptable unique configurations to be tested by computer simulation.

Taking this thought experiment further, we must consider the existence of assemblages of **sig**s being emitted and absorbed by the fp constructs. Such assemblages must react with the contained fps in the construct for both emission and absorption. These will perhaps be the analogs to photons and binding forces. *Photons are semi-bounded assemblages of curvatures moving in one direction. It is possible that a sufficiently energetic photon is the precursor to a fundamental particle fp. Photons do not impress new curvature on space S in the same sense as an fp, outside that curvature already contained by the photon, but they may react with each other.*

What becomes clear from the foregoing discussion is that we can readily model n-fp constructs of large n. Stability will vary depending on the level of internal oscillation and shell completeness. Whether we can now build out our current knowledge of particle physics using these simple foundations can only be answered successfully through extensive computer modeling.

**Time inference**

*These n-fp constructs will impress their own general curvature and inherent time constant on their surrounding local space L. L-space is defined as a domain containing a countable subset of fp pairs.*

We have postulated, by a simple axiom of existence, that each fp reflects a *δt* which is a universal constant perceived as dependent only on the number of fps in the universe. It is also clear that any n-fp construct, or more generally any subset of fps, will generate its own internal time scale **tau**. This is an inevitable inference from our definition of S, in which the universe of all fps impresses time and space dimensions on the universe. This means that any subset of fps contained within the universe has the same set of rules applying as is true for the whole.

We are stating that both the universe and any defined subset will exhibit a time value **tau** that is dependent on the fp content. This is not to say that a subset will not also be part of the universal **tau**; it obviously must. Rather, for a given subset, the effect of the universal **tau** is small enough to be ignored. I anticipate that for particle physics, the **tau** may be an analog of spin. I have no particular suggestions on how to construct a derivative equation before enough modeling is done to put us firmly on track.

I am saying, however, that constructs of varying fp content will exhibit an internal clock that will vary with the fp count.

This puts the classical understanding of time on its head. Our philosophy has informed us that time is uniform and somewhat independent. We are saying instead that our sense of time is a direct result of the impress of the existence of a universe of simultaneous fundamental particles that are bounded regions of space described by *δt* and *δd*. In addition, we are saying that for any subset of fps, it is convenient to treat that subset as a small separate universe with an internally consistent resultant time component tau.

**Large-scale constructs**

Without the benefit of direct modeling, we are far out on a limb at this point. We can intuitively speculate from the foregoing that large assembled n-fp constructs could have the ability to absorb and re-emit photons while effectively expanding or contracting. We can intuit the existence of a strong induced curvature around the multi-proton construct of an atom that will behave similarly to a series of shells, permitting the holding of an electron analog anchoring its photon ring. We intuit that our push-pull induced curvature is quite capable of creating the known electromagnetic forces. Without large-scale modeling, we cannot know precisely how.

We can deduce that any form of asymmetric curvature, whether internally or externally induced on the construct, will result in physical spin. This will have a large effect on the form of the induced curvature and on the form of the construct's interaction with other constructs.

We
have described the likely characteristics of a fundamental particle
and its derivative constructs. The design principle is clearly rich
enough to permit success in modeling the full universe of particles
and forces known to physics. And we have done this without
introducing a single new law of physics outside of insisting that a
fundamental particle exists and executing a thought experiment about
its nature.

I
suspect that these particular characteristics can be proven
rigorously as both necessary and sufficient for the existence of the
universe. The one remaining task left to us is to assign a
mathematical function to describe the fps in the universe in order
for us to generate useful information and predictions.

Mathematics throws up a wide range of potentially useful cyclic functions. Besides the old standbys of sine and cosine, we have Taylor series and any number of less convenient non-functions and similar constructs. None of these functions reflects the fp content of the universe; they will be at best a stopgap for the sake of modeling the theory, and could possibly lead to serious error and/or simple distortion.

Fortunately, we have the perfect function(s) at hand in the form of the generalized cyclic functions we have just introduced. These functions behave conformably for all *n > 2*. The shape of the curve and the apparent wavelength are extremely similar for any large *n >> 2*, and certainly tend to converge to extreme similarity for both large n and large x. This permits stability in the overall structure of the universe and its laws. A shift in the fp content of the universe will have a minute effect on the universe and its apparent laws under these equations.

**Mathematics of the fp**

We
have described the fp as a strobe light generating shells of
alternating positive and negative curvature on space. We have also
pointed out that each fp reflects the total number of fps in the
universe and that this is true simultaneously.

In
the case of a universe of fp pairs, it is consistent to model their
behavior with the sine function. In the case of four fps, it is
consistent to model their behavior with

*C(4,0)*, which we discussed earlier. We can apply this approach in our modeling for any n-fp construct.
More
generally we can model all fp particles in the universe with the
cyclic function

*C(n,0)*. For our use, we know that there exists a large number of fps in our universe, believed to number around 10^{78}. It is therefore convenient to think of the fp as being described by the function *C(10*^{78}*,0)*. Symbolically, it is convenient to use the form C(∞,0), although we risk confusion with the concept of mathematical infinity; the universe does not know what mathematical infinity is. This does suggest a natural way to convert mathematical calculus into physical calculus, and that it should be done at least as a useful exercise. We may surprise ourselves and gain greater insight into the concept of infinity.
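This excerpt does not write out the definition of *C(n,k)*; a common series generalization of sine and cosine (assumed here purely for illustration, not taken from the source) reduces to cos(x) for C(2,0) and sin(x) for C(2,1), and can be evaluated numerically:

```python
import math

# Sketch only: the excerpt does not spell out C(n,k), so this assumes
# the alternating-series generalization of sine/cosine,
#   C(n,k)(x) = sum_{j>=0} (-1)^j x^(n*j+k) / (n*j+k)!
# which reduces to cos(x) for C(2,0) and sin(x) for C(2,1).
def C(n, k, x, terms=40):
    term = x ** k / math.factorial(k)  # j = 0 term
    total = 0.0
    for j in range(terms):
        total += term
        p = n * j + k
        # build the next term incrementally to avoid huge factorials
        for i in range(1, n + 1):
            term *= x / (p + i)
        term = -term  # alternate the sign
    return total

# For the fp pair (n = 2) this recovers the ordinary cosine and sine.
print(C(2, 0, 1.2), math.cos(1.2))
print(C(2, 1, 0.7), math.sin(0.7))
```

Under this assumed definition, higher *n* simply stretches the gaps between the alternating powers in the series.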
In
practice, we will be able to model fp constructs ranging up to
perhaps several hundred fp pairs using the sine function alone,
losing only a little accuracy.

From
the mathematics of the generalized cyclic function we can make one
other key inference. The odd ordered functions are uniformly
divergent in the negative, suggesting sharp changes in behavior. This
implies that it is physically necessary for new fps to form two at
a time, as we have described.

It
appears reasonable to postulate that these pairs can be formed as a
result of the collapse or folding of a semi-bounded photon construct
that carries sufficient curvature. We can also postulate that any two
photons containing enough curvature can interact and form a 2-fp pair
while carrying off surplus curvature in the form of another photon.
We might speculate that this can happen in a low curvature
environment where linkage is not interfered with over some vast
distance, as well as in a high curvature environment, neither of which
applies on earth. One other such environment is implied as a
condition for the postulated fractal-like surface of the universe.

I
observe that if we assume that the universe had a beginning, then a
direct logical outcome is that the universe can be described as a
sphere expanding at the speed of light from the observer’s
perspective. I know of no other logical derivative of the
assumption of existence at this time.

The
explicit necessity of pair creation also tells us that we can focus
much of our work on the behavior of the fp pair with the assistance
of our old friends the sine and cosine functions. This is important,
since short range function convergence will become progressively more
difficult with computer simulation of larger and larger n-constructs.
If instead our protocol for investigation is designed around using
sine functions to simulate a large n structure made up of many pairs,
then it is merely a matter of substituting the nth order cyclic
function in stepwise fashion in order to observe the tightening up
of the n structure (it will get much smaller) and to arrive at a true
model (or close approximation).
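The substitution protocol just described can be sketched as a curvature routine whose cyclic order is a swappable parameter: simulate with the second order (cosine) form first, then raise the order step by step. The series used for *C(n,0)* below is an assumed generalization of cosine, since the source does not define it here.

```python
import math

# Assumed series generalization of cosine (not from the source):
#   C(n,0)(x) = sum_{j>=0} (-1)^j x^(n*j) / (n*j)!
def C(n, x, terms=50):
    term = 1.0  # j = 0 term: x^0 / 0!
    total = 0.0
    for j in range(terms):
        total += term
        p = n * j
        for i in range(1, n + 1):
            term *= x / (p + i)
        term = -term
    return total

def curvature(r, order=2):
    """Curvature contribution C(order,0)(r)/r^2; order=2 is the
    sine/cosine approximation, and higher orders can be swapped in
    stepwise as the protocol above suggests."""
    return C(order, r) / r ** 2

# order=2 reproduces the familiar cos(r)/r^2 of the fp pair model.
r = 3.0
print(curvature(r, order=2), math.cos(r) / r ** 2)
```

The design choice is simply that the order is an argument, so a simulation can be rerun unchanged at each step of the substitution.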

**Implied cosmology**

We
have directly linked the particle content of the universe to the
equation describing any fp. We have a number of inferences from this
approach.

**Cosmological content**is a direct result of the generalized cyclic equation describing the fundamental particle. The space-time continuum or manifold impressed by this content is as understood in general relativity, which is minimalist in design. The number of fps in the universe, the size of the fp, the apparent age of the universe and the speed of light itself are all directly linked through this formulation.

**Dirac’s large number hypothesis**is a natural result of this formulation. Our impressed**Tau**for the universe of all particles will be the apparent age of the universe, whatever that may mean. The association of**d***t*and**d***d*and the total number of fps in the universe within the founding equation impresses the observed scaling system that generates the large number hypothesis and is excellent evidence that we may be on to something. This means explicitly that the apparent size of the observed particle system is a function of the number of such particles within the universe and this number dictates the apparent time scale of the universe.

- It is reasonable that gravity
*G*can be precisely defined. The curvature*H(r)*derived from a single particle can be described by the following equation:

*H(r) = C(n,0)(r)/r²* *(43)*

The
equation
*(43)*
holds
for any fp and, by simple extension, for any fp construct. For
simplicity, we are assuming the Newtonian space metric in which the
geometric component is the inverse square law. Calculating H in
local space for a fp pair presents a special difficulty because the
switching of position creates two points of origin for any net
calculation at a remove from the pair. The distance term r is
measured as the number of **d***d’s* counted from the particle to the observer.

**Gravitational theory**

We
have noted that the applied curvature effect of a fp can be described
by the equation

*H(r) = C(n,0)(r)/r²* *(44)*

where
n is the number of fps in the universe and we assume a simple inverse
square law over distance.

We
have also noted that, locally, matter is dominated by fp pairs.
This means that the second order cyclic form will dominate, and we can
simplify our lives greatly in terms of calculating the larger scale
effects by working specifically with the second order form. This
cannot be accomplished at the particle assemblage level, where the
second order model will only provide a first approximation.

The
gravitational force of a large object of mass M can be approximated
by the simple expedient of calculating the net effect of the
contained fps using *H(r)*. At a significant distance this will vary linearly with *H(r)* as the implied geometric effects of M diminish. Again we can reasonably expect that the effects of 2-fp geometry will be dominant, as this is again local space. Formally, M varies as to the volume of M, which can be described in the case of a sphere as 4/3πR^{3}, in which R is the radius. Obviously M could be simply replaced by *M* defined as the number of fps, and R could be replaced by *R* defined as the number of fp wavelengths required to measure the radius of the sphere.
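As a toy check of the "net effect of the contained fps" idea (assuming, for simplicity, plain inverse-square contributions without the cyclic factor), summing 1/d² over a ball of point fps does approach N/R² once the observer distance R dwarfs the ball:

```python
# Toy check (assumption: pure inverse-square point contributions) that
# the summed effect of N fps packed in a unit ball tends to N/R^2
# when viewed from a distance R much larger than the ball.
points = [(x / 5.0, y / 5.0, z / 5.0)
          for x in range(-5, 6) for y in range(-5, 6) for z in range(-5, 6)
          if x * x + y * y + z * z <= 25]  # grid points in the unit ball
N = len(points)
R = 50.0  # observer on the x-axis, far from the ball
total = sum(1.0 / ((R - px) ** 2 + py ** 2 + pz ** 2)
            for (px, py, pz) in points)
print(total, N / R ** 2)  # nearly equal at large R
```

This is the sense in which M can be "replaced by the number of fps" in the far field.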
We
will first start our investigation by demonstrating the resultant
calculation of gravitational force in Newtonian physics. Therein
gravity varies as

*1/r*^{2} by the inverse square law. Thus for any particular interval *{a, b}*, the total applied effect varies as the integral of *1/r*^{2} over that interval. Since the indefinite integral of *1/r*^{2} is *−1/r*, we determine over the interval *{a, b}* the resultant value of *(−1/b + 1/a)*, or *(b − a)/ab*.
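The Newtonian result just stated is easy to confirm numerically with a simple midpoint rule (a generic check, not a method from the source):

```python
# Numerical confirmation that the integral of 1/r^2 over {a, b}
# equals (b - a)/(a*b). Midpoint rule, standard library only.
def integrate(f, a, b, steps=100000):
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

a, b = 1.0, 5.0
numeric = integrate(lambda r: 1.0 / r ** 2, a, b)
exact = (b - a) / (a * b)
print(numeric, exact)  # agree to several decimal places
```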
If
we now set *a = r* and let *b* run out to physical infinity,
we have the resulting value of approximately *1/r*. In Newtonian physics the integrated effect of gravity will thus vary as *1/r* over the interval *{r, ∞}*.
We
now use our equation of the second order cyclic function, which
coincides with the dominant fp pair structure of local space. Since
this function has a cyclic component, we will calculate our integral
over the interval *{a, b}*, with the endpoints taken at whole multiples of *2π* so that they fall on complete cycles.
You will observe that

*1/r*^{2} is still a dominant part of this equation.
The
indefinite integral of

*C(2,0)(r)/r*^{2}, or *cos(r)/r*^{2}, is as follows (by integration by parts):

*∫ cos(r)/r² dr = −cos(r)/r − ∫ sin(r)/r dr* *(45)*

This
is convergent. By the by, we do not know what it is properly
convergent to. This is one of those nasty problems in mathematics.
It goes without saying that the higher nth order versions will be
just as aggravating and uncooperative.

Calculating
this integral over the interval *{a, b}*, with *a* and *b* at whole multiples of *2π*,
we have the following result for the first term:

*[−cos(r)/r] evaluated from a to b = cos(a)/a − cos(b)/b = 1/a − 1/b = (b − a)/ab* *(46)*

This
component is exactly the same as the Newtonian solution just shown.
This happens to be excellent news since the most influential
component locally is immediately producing the Newtonian metric.

What
does the rest of the equation contribute? Here we have the
aforementioned problem. It has been proven that this component is
convergent. It has never been shown what it is convergent to. It is
a very small number approaching zero but not necessarily zero.
Certainly, it is close enough to zero in local space to not matter
and to be effectively undetectable above the noise.
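The split of the integral into a boundary (Newtonian-looking) term and a sine-integral remainder can be verified numerically over a full-cycle interval; this midpoint-rule check is a generic illustration, not a calculation from the source:

```python
import math

# Check the integration-by-parts split used above: over {a, b} the
# integral of cos(r)/r^2 equals the boundary term [-cos(r)/r] plus
# the remaining sine-integral piece, the integral of -sin(r)/r.
def integrate(f, a, b, steps=200000):
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

a, b = 2 * math.pi, 10 * math.pi  # endpoints on complete cycles
lhs = integrate(lambda r: math.cos(r) / r ** 2, a, b)
boundary = (-math.cos(b) / b) - (-math.cos(a) / a)  # = 1/a - 1/b here
tail = integrate(lambda r: -math.sin(r) / r, a, b)
print(lhs, boundary + tail)
```

Because the endpoints sit on complete cycles, the boundary term here reduces exactly to the Newtonian form 1/a − 1/b.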

And
as

*r* increases, it is reasonable that if the convergent value of the second, non-Newtonian component is non-zero, then the resultant effect of this component will increase. More explicitly, the first, Newtonian term is converging on zero much faster than the non-Newtonian second term. This means that for a sufficiently large *r* the effect of the second component will dominate the effect of the inverse square component.
Since
we have already eliminated mathematical infinity as an assumption, we
can state that both the Newtonian component and the non-Newtonian
component will independently converge to a small positive number
identical with the value for physical infinity,

*∞ = 10*^{-78}.
This
allows us to postulate that the net effect of gravity is possibly
twice as strong as predicted by classical Newtonian physics in the
interstellar and intergalactic void. This also means that a
significant part of the missing mass problem is a direct artifact of
the erroneous application of mathematical infinity. Applying this
new formulation may account for all the missing mass even before we
factor in any possible effect of the higher ordered cyclic function
formulation that more appropriately reflects the particle count in
the universe.

Returning
to the general form *C(n,0)(r)/r*^{2},
the indefinite integral is (again by integration by parts):

*∫ C(n,0)(r)/r² dr = −C(n,0)(r)/r + ∫ C(n,0)′(r)/r dr* *(47)*

This
equation will be important in mapping curvature in proximity to
objects such as particles and atoms with a countable number of fp
pairs. It will not, however, give us the same easy ride that the fp
pair gave us by simply resolving into a Newtonian component and an
extra component. Here we still have two components with the same
cosmological result, and the fact that matter is made up of 2-fp pairs
imposes the dominant Newtonian behavior locally.

Again
for the first term over the interval

*{a, n}*, where *n* is unrelated to the order of the cyclic function, we have the following after setting *C₀(r) = C(n, 0)(r)*:

*[−C₀(r)/r] evaluated from a to n = C₀(a)/a − C₀(n)/n* *(47)*

which is hardly presenting the simple resolution seen for

*n = 2*. The second term gives us:

*∫ C₀′(r)/r dr taken over {a, n}* *(48)*

Finally
we can readily combine the two terms, giving us an equation describing
the curvature in the vicinity of a particle with many fp pairs:

*∫ C₀(r)/r² dr over {a, n} = C₀(a)/a − C₀(n)/n + ∫ C₀′(r)/r dr over {a, n}* *(49)*

**Net curvature for the fp pair**

The
fp pair traces out a tetrahedron as it switches back and forth from
edge to edge. The center of influence is the halfway point of this
switching path and can be used as a point of origin for calculation
purposes. Any two apex-to-origin lines can be used as
axes for setting up the curvature equations for any point in space
surrounding the fp pair. A more useful axis is the unique line
between the midpoints of the edges formed by the successive cycles
of the fp pair. The other two edge pairs are excluded because they
do not represent the appearance of both fps on the same edge. This
can be naturally called the polar axis.
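The geometry invoked here is easy to verify directly: for a regular tetrahedron with unit edge, each apex sits √(3/8) from the centroid, and the line joining the midpoints of a pair of opposite edges (the polar axis) passes through that centroid. A small check, using standard tetrahedron coordinates:

```python
import math

# Geometric check: unit-edge regular tetrahedron built from the
# alternating corners of a cube, centered on the origin.
s = 1 / (2 * math.sqrt(2))  # scale factor giving unit edge length
apexes = [(s, s, s), (s, -s, -s), (-s, s, -s), (-s, -s, s)]

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

edge = dist(apexes[0], apexes[1])           # should be 1
radius = dist(apexes[0], (0.0, 0.0, 0.0))   # centroid-to-apex distance
mid01 = tuple((a + b) / 2 for a, b in zip(apexes[0], apexes[1]))
mid23 = tuple((a + b) / 2 for a, b in zip(apexes[2], apexes[3]))
axis_mid = tuple((a + b) / 2 for a, b in zip(mid01, mid23))  # origin
print(edge, radius, math.sqrt(3 / 8), axis_mid)
```

This confirms the s = √(3/8) figure used below for the apex-to-origin distance.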

This
formulation is not overly convenient for mapping the important
edgewise behavior, so a second formulation is likely justified using
an apex as a point of origin. The more important difficulty is
accommodating the switching back and forth of the fp pair which is
better facilitated by the use of the polar axis.

In
any event we can construct a vector equation for the net curvature at
point

*P(r, λ, δ)*, consisting of the individual contributions from the four apexes, using *s = √(3/8)* as the distance of any apex from the origin and *λ* and *δ* as the angles based on the polar axis and a line to an apex rotating on the polar axis.

**4**

**Cosmological Red Shift.** Another immediate artifact of using the higher order cyclic function formulation is the fact that the **apparent wavelength** *w(x)* **is declining over great distances**, since *w(x)* is converging to a constant as x becomes very large. This means that distant objects are really closer than would be implied by the natural assumption of uniformity. This implies that the observed Galactic red shift is primarily a result of this effect rather than of the continuous creation of new fps. Of course both effects could be contributing, except that new fp production is declining inversely to the total volume of the universe. Our model does allow new fp pair production, which would lead to an apparent red shift due to the general increase in particle content. A simple interpretation implies a set age for the universe linked to the number of fps. The real possibility of production and consumption of fp pairs being in general balance throughout the universe will interfere with any nice speculations we may wish to make regarding the age of the universe.
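The bookkeeping behind any red shift claim rests on the standard photon relation E = hc/λ: a lengthening wavelength is the same thing as a declining measured energy. A minimal numerical illustration (standard constants; the example wavelengths are arbitrary):

```python
# Standard photon relation E = h*c/lambda. As the apparent wavelength
# lengthens in transit, the energy measured by the observer drops.
h = 6.62607015e-34  # Planck constant, J*s (CODATA exact value)
c = 2.99792458e8    # speed of light, m/s

def photon_energy(wavelength_m):
    return h * c / wavelength_m

emitted = photon_energy(500e-9)     # photon at emission (500 nm)
redshifted = photon_energy(600e-9)  # same photon, stretched to 600 nm
print(emitted, redshifted, redshifted < emitted)
```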

More
precisely, for any photon we know that

*E = hc/λ*. For any photon assemblage we also now realize from our knowledge of cyclic functions that the impressed **d***d* component of the photon is shrinking because *w(x)* is converging to a constant as the photon gets further from its point of origin.

Since

**E** *=* **hc/λ** *=* **h d***d/***d***t*, and since **d***t* (the observed time structure of the photon will not change) is held immutable in our measuring regime, this effect can only be observed as a lengthening of the wavelength **λ**. *This is the most likely origin of the observed universal red shift.*

**5**

**Schwarzschild Event Horizon.** The much discussed interpretation of the Schwarzschild solution is a natural outcome of mathematical infinity and is meaningless except to predict tight packing of particle content. A more obvious interpretation based on our understanding of fp pairs is that two pairs in close proximity may be able to create a new fp pair while releasing the other pair in the form of partially bounded curvatures. This carries off half the gravity associated with the two pairs.

While
this is taking place, the partially bounded curvatures may be
expected to recombine into various photonic forms and migrate out of
the star, carrying off the associated curvature content. We can
expect a resultant spectrum in which a huge amount of energy is being
discharged right across the spectrum. In this form, the bulk of the
energy associated with a star will be carried outside the gravity
well created by the containing Galaxy. A portion of this energy may
also reform into fp pairs in the low curvature environment of space.
Most of this can be expected to also be outside the gravity well of the
galaxy.

The
terminal content of gravity wells can thus be an analog to a packed
3D fractal-like set that is subject to analysis as such.

**Conjecture:** Quasars are event horizon phenomena and are likely far closer than presently assumed. Some may even be in our Galaxy. Massive jets of photonic energy or partial curvature carry off content and gravity, and these jets are seen as some of it decays back into visible matter.

**Summary**

We
have constructed a concept of the universe using only the assumption
of existence and the insight derived from the mathematics of the
generalized cyclic function, without reference yet to the extensive
body of empirical information. A next step is to apply simulation
methods to the known range of regular polyhedral objects in which
each vertex is occupied by a fp pair tracing a tetrahedral path and
calculate their mutual effects on each other using the new metric.
With the results of that work it will be possible to produce
comparable calculations over the full range of possible objects and
determine if there is any apparent similarity to the empirical data.
We should face no particular difficulty in simulating polyhedral
objects with even thousands of vertices using the algebra and
achieving extreme precision in calculation.
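The polyhedral simulation proposed here can be sketched in a few lines. The sketch below makes two simplifying assumptions not in the source: the polyhedron is a regular octahedron, and each occupied vertex is treated as a static point source under the fp pair metric cos(r)/r², rather than tracing its tetrahedral switching path.

```python
import math

# Sketch of the proposed polyhedral simulation (assumptions: regular
# octahedron; each vertex treated as a static point source with the
# pair metric H(r) = cos(r)/r^2, ignoring the switching path).
vertices = [(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0), (0.0, 1.0, 0.0),
            (0.0, -1.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, -1.0)]

def H(r):
    return math.cos(r) / r ** 2

def net_curvature(point):
    """Net curvature at an external point: sum of H over the
    distances to every occupied vertex."""
    return sum(H(math.dist(point, v)) for v in vertices)

# By the octahedron's symmetry, equivalent external points must see
# identical net curvature.
print(net_curvature((10.0, 0.0, 0.0)))
```

Scaling this up to thousands of vertices is only a matter of supplying a larger vertex list, which is why the calculation presents no particular difficulty.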
