This is a genuinely interesting insight that takes us toward a deeper understanding of the architecture of Space Time consciousness itself.
My Cloud Cosmology rigorously handles the creation of the 3D particle universe, which completes the primary objective of integrating particle physics with the General Theory itself. It also identifies the role of topographical decay in producing all such particles while shedding an excess construct that, I suggest, resolves best into 2D constructs readily modeled as a ribbon or a Mobius Strip. Actually simulating this is a future challenge. Yet presuming that all decay products naturally form such strips is terribly useful, as it allows number itself to be recorded.
The whole texture of Space and, by inference, consciousness can be produced from Mobius Strips. Recall that such a strip can record any number through additional twists. Now extend this to a pattern of numbers that naturally extends through time.
Long-Standing Problem of 'Golden Ratio' and Other Irrational Numbers Solved with 'Magical Simplicity'
The golden ratio is one of the most famous irrational numbers; it goes on forever and can't be expressed accurately without infinite space.
Most people rarely deal with irrational numbers—it would be, well, irrational, as they run on forever, and representing them accurately requires an infinite amount of space. But irrational constants such as π and √2—numbers that cannot be reduced to a simple fraction—frequently crop up in science and engineering. These unwieldy numbers have plagued mathematicians since the ancient Greeks; indeed, legend has it that Hippasus was drowned for suggesting irrationals existed. Now, though, a nearly 80-year-old quandary about how well they can be approximated has been solved.
Many people conceptualize irrational numbers by rounding them to fractions or decimals: estimating π as 3.14, which is equivalent to 157/50, leads to widespread celebration of Pi Day on March 14th. Yet a different approximation, 22/7, is easier to wrangle and closer to π. This prompts the question: Is there a limit to how simple and accurate these approximations can ever get? And can we choose a fraction in any form we want?
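Those two approximations are easy to compare directly. A quick sketch using Python's standard fractions module confirms that 22/7, despite its smaller denominator, lands closer to π than 157/50:

```python
from fractions import Fraction
from math import pi

# Compare the two approximations of pi mentioned above: 3.14 = 157/50
# and the coarser-looking but more accurate 22/7.
for frac in (Fraction(157, 50), Fraction(22, 7)):
    print(f"{frac} = {float(frac):.7f}, error = {abs(pi - frac):.7f}")
```

The error of 22/7 is about 0.0013, roughly 20 percent smaller than the error of 157/50.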
In 1941 physicist Richard Duffin and mathematician Albert Schaeffer proposed a simple rule to answer these questions. Consider a quest to approximate various irrational numbers. First, decide on how close the approximation should be for fractions of a particular denominator. (Remember, the “numerator” refers to the top of a fraction and the “denominator” the bottom. Here, all the fractions are fully simplified—so, for example, 2/4 does not count as having the denominator 4 because it simplifies to 1/2.) You might decide that simplified fractions of the form n/2 can approximate any irrational number whose true value falls within 1/10 of them—giving the approximation an “error” of 1/10. Fractions that look like n/10 are closer together on the number line than those with denominator 2, so you might limit the error in that case to only 1/100—those fractions can approximate anything within 1/100th of them.
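That setup can be sketched in a few lines of code. The helper below (its name and search window are my own) lists the fully simplified fractions with a given denominator that fall within the allowed error of a target number:

```python
from math import gcd, pi

def good_approximations(x, q, err):
    """Reduced fractions n/q (gcd(n, q) == 1) lying within err of x."""
    center = round(x * q)
    return [n for n in range(center - 2, center + 3)
            if gcd(n, q) == 1 and abs(x - n / q) <= err]

# pi is within 1/50 of 22/7 and within 1/100 of 157/50:
print(good_approximations(pi, 7, 1 / 50))    # [22]
print(good_approximations(pi, 50, 1 / 100))  # [157]
```

Note that the coprimality check enforces the "fully simplified" rule from the paragraph above: 314/100 would never be counted for denominator 100, because it reduces to 157/50.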
Usually, larger denominators are associated with smaller errors. If this is true, and there are infinitely many denominators that one can use to approximate a number to within the corresponding error, then by increasing the denominator the approximation can be made better and better. Duffin and Schaeffer’s rule measures when this can be done based on the size of the errors.
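The rule itself can be made concrete. Writing ψ(q) for the error allowed at denominator q and φ for Euler's totient function (the count of numerators coprime to q), one common normalization of the Duffin-Schaeffer criterion asks whether the sum of φ(q)·ψ(q) over all denominators diverges. The numerical sketch below (the sample error functions are my own choices) contrasts a divergent case with a convergent one:

```python
from math import gcd

def euler_phi(q):
    """phi(q): the number of reduced fractions n/q with 0 < n <= q."""
    return sum(1 for n in range(1, q + 1) if gcd(n, q) == 1)

# In one common normalization, almost every number is approximable
# infinitely often exactly when sum_q phi(q) * psi(q) diverges.
# Partial sums up to N = 10, 100, 1000 for two sample error functions:
partials = {}
for name, psi in [("psi(q) = 1/q^2", lambda q: 1 / q**2),
                  ("psi(q) = 1/q^2.5", lambda q: 1 / q**2.5)]:
    partials[name] = [sum(euler_phi(q) * psi(q) for q in range(1, N + 1))
                      for N in (10, 100, 1000)]
    print(name, [round(s, 3) for s in partials[name]])
```

The first series keeps growing as N increases (errors "big enough in aggregate"), while the second flattens out; the conjecture says this is exactly the line between almost every number being approximable infinitely often and almost none.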
If the chosen errors are small enough in aggregate, a randomly picked irrational number x will have only a limited number of good approximations: it might fall into the gaps between approximations with particular denominators. But if the errors are big enough, there will be infinitely many denominators that create a good approximating fraction. In this case, if the errors also shrink as the denominators get bigger, then you can choose an approximation that is as precise as you want.
The upshot is that either you can approximate almost every number arbitrarily well, or almost none of them. “There’s a striking dichotomy,” says Dimitris Koukoulopoulos, a mathematician at the University of Montreal. Moreover, you can choose errors however you want, and as long as they are large enough in aggregate most numbers can be approximated infinitely many ways. This means that, by choosing some errors as zero, you can limit the approximations to specific types of fractions—for example, those with denominators that are powers of 10 only.
Although it seems logical that small errors make it harder to approximate numbers, Duffin and Schaeffer were unable to prove their conjecture—and neither was anybody else. The proof remained “a landmark open problem” in number theory, says Christoph Aistleitner, a mathematician at Graz University of Technology in Austria who has studied the problem. That is, until this summer, when Koukoulopoulos and his co-author James Maynard announced their solution in a paper posted to the preprint server arXiv.org.
The Duffin-Schaeffer conjecture “has this magical simplicity in an area of maths that’s normally exceptionally difficult and complicated,” says Maynard, a professor at the University of Oxford. He stumbled into the problem by accident—he is a number theorist, but not in the same area as most Duffin-Schaeffer experts. (He normally studies prime numbers—those that are divisible by only themselves and 1.) A University of York professor suggested Maynard tackle the Duffin-Schaeffer conjecture after he gave a talk there. “I think he had an intuition that it might be beneficial to get someone slightly outside of that immediate field,” Maynard says. That intuition turned out to be correct, although it would not bear fruit for several years. Long after that initial conversation, Maynard suggested a collaboration to Koukoulopoulos on a suspicion that his colleague had relevant expertise.
Maynard and Koukoulopoulos knew that previous work in the field had reduced the problem to one about the prime factors of the denominators—the prime numbers that, when multiplied together, yield the denominator. Maynard suggested thinking about the problem as shading in numbers: “Imagine, on the number line, coloring in all the numbers close to fractions with denominator 100.” The Duffin-Schaeffer conjecture says if the errors are large enough and one does this for every possible denominator, almost every number will be colored in infinitely many times.
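This coloring picture can be made quantitative. For a denominator q and error ψ(q), the colored set is a union of intervals of radius ψ(q) around the reduced fractions n/q, so its total length inside [0, 1] is at most 2·φ(q)·ψ(q). A small sketch (the function name is my own):

```python
from math import gcd

def colored_measure(q, err):
    """Total length of [0, 1] lying within err of some reduced fraction n/q."""
    intervals = sorted((n / q - err, n / q + err)
                       for n in range(q + 1) if gcd(n, q) == 1)
    total, reach = 0.0, 0.0
    for lo, hi in intervals:
        lo, hi = max(lo, 0.0, reach), min(hi, 1.0)  # clip and merge overlaps
        if hi > lo:
            total += hi - lo
            reach = hi
    return total

# phi(100) = 40 reduced fractions, radius 1/10^4 each: 2 * 40 * 1e-4 = 0.008
print(round(colored_measure(100, 1e-4), 6))
</```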
For any particular denominator, only part of the number line will be colored in. If mathematicians could show that for each denominator, sufficiently different areas were colored, they would ensure almost every number was colored. If they could also prove those sections overlapped, they could conclude that this happened infinitely many times. One way of capturing this idea of different-but-overlapping areas is to prove that the regions colored by different denominators had nothing to do with one another—they were independent.
But this is not actually true, especially if two denominators share many prime factors. For example, the possible denominators 10 and 100 share factors 2 and 5—and the numbers that can be approximated by fractions of the form n/10 exhibit frustrating overlaps with those that can be approximated by fractions n/100.
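That dependence can be seen numerically. The Monte Carlo sketch below (sampling scheme and error choices are my own) estimates how often a random number is close to a reduced fraction n/10, close to a reduced n/100, or both; under independence the joint frequency would be the product of the two individual frequencies, but it comes out noticeably larger:

```python
from math import gcd
import random

def near_reduced(x, q, err):
    """True if some reduced fraction n/q lies within err of x."""
    center = round(x * q)
    return any(gcd(n, q) == 1 and abs(x - n / q) <= err
               for n in range(center - 1, center + 2))

random.seed(0)
xs = [random.random() for _ in range(100_000)]
p10 = sum(near_reduced(x, 10, 0.01) for x in xs) / len(xs)
p100 = sum(near_reduced(x, 100, 0.001) for x in xs) / len(xs)
both = sum(near_reduced(x, 10, 0.01) and near_reduced(x, 100, 0.001)
           for x in xs) / len(xs)
# 'both' exceeds p10 * p100: the two events are positively correlated
print(p10, p100, both, p10 * p100)
```

The excess overlap comes precisely from the shared prime factors 2 and 5: reduced fractions with denominator 100 cluster near the reduced fractions with denominator 10.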
Graphing the problem
Maynard and Koukoulopoulos solved this conundrum by reframing the problem in terms of networks that mathematicians call graphs—a bunch of dots, with some connected by lines (called edges). The dots in their graphs represented possible denominators that the researchers wanted to use for the approximating fraction, and two dots were connected by an edge if they had many prime factors in common. The graphs had a lot of edges precisely in cases where the allowed denominators had unwanted dependencies.
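The actual objects in the proof are carefully weighted "GCD graphs," which are far more delicate than anything shown here, but the basic construction can be illustrated with a toy rule (the threshold and the list of denominators are my own choices): connect two denominators whenever their greatest common divisor is large.

```python
from math import gcd

def shares_many_primes(a, b, threshold=4):
    """Toy edge rule: connect denominators whose gcd exceeds a threshold."""
    return gcd(a, b) >= threshold

denominators = [10, 100, 21, 35, 11]
edges = [(a, b) for i, a in enumerate(denominators)
         for b in denominators[i + 1:] if shares_many_primes(a, b)]
# 10 and 100 share the primes 2 and 5; 21 and 35 share 7; 11 joins nothing
print(edges)
```

A prime denominator such as 11 stays isolated, while denominators built from the same primes cluster together—the dense, edge-heavy regions that the two mathematicians could analyze separately.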
Using graphs allowed the two mathematicians to visualize the problem in a new way. “One of the biggest insights you need is to forget all the unimportant parts of the problem and to just home in on the one or two factors that make [it] very special,” says Maynard. Using graphs, he says, “not only lets you prove the result, but it’s really telling you something structural about what’s going on in the problem.” Maynard and Koukoulopoulos deduced that graphs with many edges corresponded to a particular, highly structured mathematical situation that they could analyze separately.
( This is interesting. Recall that particle decay produces two-dimensional constructs with edges on their way to becoming photons in our empirical universe. - arclein )
The duo’s solution came as a surprise to many in the field. “The general feeling was that this was not close to being solved,” says Aistleitner. “The technique of using [graphs] is something that maybe in the future will be regarded as just as important [as]—maybe more important than—the actual Duffin-Schaeffer conjecture,” says Jeffrey Vaaler, a retired professor at the University of Texas, Austin, who proved a special case of the conjecture in 1978.
It may take other experts several months to understand the full details. “The proof now is a long and complicated proof,” says Aistleitner. “It’s not sufficient just to have one striking, brilliant idea. There are many, many parts that have to be controlled.” At 44 pages of dense, technical mathematics, even leading mathematical minds need time to wrap their heads around the paper. The community, however, seems optimistic. Says Vaaler: “It’s a beautiful paper. I think it’s correct.”