For Official use only. Please do not write in this space.
-------------------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
nonsqtr (04-08-2021)
Exactly. So, play this out on a random surface.
In other words, you're dealing with a system where the dominant attractor changes shape and position in real time. So, how do you tell whether a bit is being influenced by one or the other and by how much?
This is a good example because it's fundamentally non-Gaussian.
Last edited by nonsqtr; 04-08-2021 at 12:12 PM.
Yeah. I think you've hit on something important here. The idea of "orthogonality". Which is intuitive in Euclid-land, but gets a little weird in probability. As ever, it has to do with inner products and angle-preserving transformations.
Seems this is worthy of some study. I'm not sure anyone's really looked at selection from inhomogeneous spaces. The spaces are always assumed to be uniform in some way.
If you read about the Hénon attractor, its solution set includes a Cantor dust. THAT is interesting. Yes?
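As a quick illustration, here's a minimal sketch that iterates the Hénon map with the classic chaotic parameters (a = 1.4, b = 0.3); the starting point and transient length are just illustrative choices. The orbit settles onto the bounded fractal attractor whose cross-sections carry that Cantor-dust structure:

```python
# Iterate the Hénon map  x' = 1 - a*x^2 + y,  y' = b*x
# with the standard chaotic parameters a=1.4, b=0.3.

def henon_orbit(n, a=1.4, b=0.3, x0=0.0, y0=0.0, transient=100):
    """Return n points on the Hénon attractor after discarding a transient."""
    x, y = x0, y0
    pts = []
    for i in range(transient + n):
        x, y = 1.0 - a * x * x + y, b * x
        if i >= transient:
            pts.append((x, y))
    return pts

pts = henon_orbit(5000)
# The orbit stays on a bounded, thin, folded set; zooming in on a
# cross-section reveals the Cantor-like layering.
```

Scatter-plotting `pts` shows the familiar boomerang-shaped attractor.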
A combinatorial solution that has no otherwise obvious reason for existing.
Last edited by nonsqtr; 04-08-2021 at 12:25 PM.
Oh yes, I fell in love with chaos theory, fractals and attractors many years ago. My concentrated admiration is directed at Gaston Julia, the Frenchman who invented Julia Sets. What made him REALLY smart was that he did all his work before the invention of the computer, so he couldn't show people the objects created; it was all in his head.
Now this was really clever stuff. But the person who came next was also smart - Benoit Mandelbrot.
HE realised you could grade how open, closed or broken up a specific Julia set was - one piece, many islands, thin, fat. If you represent this by a single dot of a specific colour, and plot all the points according to the x and y values, what you get is THIS
Most people have seen this, but they have no idea how clever it is. Every dot in this image has a corresponding Julia Set based on the x and y coordinates. It is, in fact, an Index of Julia Sets. And Benoit Mandelbrot was the first person to print out a Julia Set, and to print out the Mandelbrot set. It's been described as "the most complex mathematical object in the Universe".
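That grading can be made precise: the Julia set of z² + c is connected (one piece) exactly when the orbit of the critical point 0 stays bounded, and that boundedness test is the definition of membership in the Mandelbrot set. A minimal sketch of the test (iteration cap and escape radius are conventional choices):

```python
def in_mandelbrot(c, max_iter=200, bound=2.0):
    """True if the orbit of 0 under z -> z**2 + c stays bounded, i.e. the
    Julia set of z**2 + c is connected (one piece). False means the orbit
    escapes, and the corresponding Julia set is a Cantor dust of islands."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > bound:
            return False
    return True

# c = 0:  orbit stays at 0 forever -> connected (the Julia set is a circle).
# c = -1: orbit cycles 0 -> -1 -> 0 -> ... -> connected.
# c = 1:  orbit runs 0 -> 1 -> 2 -> 5 -> 26 -> ... -> escapes, dust.
```

Colouring each (x, y) pixel by how fast `in_mandelbrot(x + y*1j)` fails is what produces the familiar picture.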
Last edited by UKSmartypants; 04-08-2021 at 03:24 PM.
nonsqtr (04-08-2021)
Wow, that's pretty impressive. All in his head? Ha ha - seems more space-filling than a Riemann sphere, as it were.
K, so the internal and external information metrics are different. What can we do with that?
Even though we can't directly measure the event space, I'll bet we can derive information about its topology by studying the outcomes. I'd be surprised if this weren't true somehow. Topology meaning, and including, things like Betti numbers.
I mean, reaching into a nice flat circular space and pulling out an outcome is easy to conceive, but what happens if you reach into an oddball topology with singularities or discontinuities?
The way that's worked so far, is to model the event space. That's what Gibbs tries to do. But because everything is in flat-land, the idea of modeling the event space is kinda taken for granted. It's easy when you're flipping a coin, and only "slightly" harder with molecules flying around in a gas, and only "slightly" harder with the gravitational behavior of celestial objects - but what about the "information" geometry?
We need another level of indirection. I mean, think about the idea of an "error" in a bit. That's an interesting concept, isn't it? Is it possible to have an "error" in a spin state? No, not really, it's just a probability in the first place. It's only really definable as a state "with some level of confidence". Hence quantum error correction and all that.
The topology of the distribution, as distinct from the topology of the event space.
I feel like we're making progress. Not sure how yet, but I sense a light bulb moment on the horizon.
Last edited by nonsqtr; 04-08-2021 at 03:52 PM.
Okay. I think I found a framework that'll work. Here's how it goes:
The "information" in a distribution is encoded in the form of its sigma-algebra. (I'll use s-algebra for short). The s-algebra tells us the number of available states and how those states are defined.
The light bulb moment comes this way: from stochastic processes and the theory of filtration.
There is one scenario that has been especially well studied: the time series, which represents "ever increasing information". There the filtration reflects the fact that the s-algebra always grows with each new time step dt.
But, when we think outside of the box, we realize the s-algebra doesn't always have to grow. It can shrink too. However, interesting things happen when it grows.
Growth of the sigma-algebra is associated with a behavior called right-continuity, which basically means you get a smooth family of growing sigma-algebras that ultimately form a surface. It is this surface we're interested in.
This property of right-continuity does some fascinating stuff in the context of growing s-algebras. For instance, it allows an infinitesimal peek into the future at t+dt, which is fully one half of what I need to prove. If there is left-continuity I have the other half. Unfortunately it's not that easy. (It never is lol).
You can prove left-continuity in the case of a similarly shrinking s-algebra (which is somewhat analogous to running time backwards), but the mixed case is very complicated. If you can guarantee neither shrinkage nor growth, then the continuity assumptions fail and you have to consider the size of the s-algebra itself as a random variable. Which kinda means you can only see one direction at a time. (However, if you had two such manifolds and they were alternating... it would be about the same thing as reversing every other sequence in the earlier bitwise example, yes?)
Now - filtrations have a "mesh size", almost like the one induced thermodynamically in an Ising spin glass model. The coarsest mesh size is when you have no information, and the finest mesh size is when you have all the information from the beginning of time. And, a sigma-algebra can be dual'd by complement. So if the mesh size goes from fine to coarse, you have what amounts to a stochastic optimization engine over the information space - that is to say the "space of information", which relates to the number of available states.
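On a finite sample space this picture is easy to make concrete: a sigma-algebra is generated by a partition into atoms, each refinement step (observing one more bit) splits the atoms, and one natural stand-in for "mesh size" is the size of the largest atom. This is a toy sketch under those assumptions, not standard filtration machinery; the three-bit space and the `mesh` definition are mine:

```python
from itertools import product

def refine(partition, bit):
    """Split every block of a partition according to one newly observed bit.
    Each refinement grows the generated sigma-algebra, like one step of a
    filtration; merging blocks back together would shrink it instead."""
    out = []
    for block in partition:
        yes = frozenset(w for w in block if w[bit] == 1)
        no = frozenset(w for w in block if w[bit] == 0)
        out.extend(b for b in (yes, no) if b)
    return out

def mesh(partition):
    """'Mesh size' here = outcomes in the largest atom: coarsest (no
    information) gives |Omega|, finest (full information) gives 1."""
    return max(len(b) for b in partition)

omega = frozenset(product((0, 1), repeat=3))   # 3-bit sample space
trivial = [omega]                              # coarsest sigma-algebra
f1 = refine(trivial, 0)
f2 = refine(f1, 1)
f3 = refine(f2, 2)
# mesh shrinks 8 -> 4 -> 2 -> 1 as information accumulates.
```

Running the refinements in reverse (merging atoms) is the shrinking s-algebra: the mesh coarsens as information is forgotten.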
Topology? Hell yes! The optimization surface unquestionably has a topology, just like the energy surface in an Ising model or a Hopfield neural network model.
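For the Hopfield comparison, here's a minimal sketch of that energy surface: Hebbian storage of one pattern, the standard Ising-style energy E = -½ sᵀWs, and asynchronous updates, each of which can only lower the energy. The pattern and network size are arbitrary illustrative choices:

```python
import random

def energy(W, s):
    """Ising/Hopfield energy  E = -1/2 * s^T W s."""
    n = len(s)
    return -0.5 * sum(W[i][j] * s[i] * s[j] for i in range(n) for j in range(n))

def hopfield_recall(W, s, sweeps=5):
    """Asynchronous sign updates; each accepted flip descends the energy surface."""
    n = len(s)
    s = list(s)
    for _ in range(sweeps):
        for i in random.sample(range(n), n):   # random update order
            h = sum(W[i][j] * s[j] for j in range(n))
            if h != 0:
                s[i] = 1 if h > 0 else -1
    return s

# Hebbian storage of a single pattern p:  W = p p^T with zero diagonal.
p = [1, -1, 1, 1, -1, 1, -1, -1]
n = len(p)
W = [[(p[i] * p[j] if i != j else 0) for j in range(n)] for i in range(n)]

corrupted = list(p)
corrupted[0] = -corrupted[0]          # flip one bit
recalled = hopfield_recall(W, corrupted)
```

The stored pattern sits at a minimum of the energy surface, so the corrupted state rolls back down into it.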
I think we're there, because the sigma-algebra is a very flexible device: it can be dimensionalized and orthogonalized pretty much as needed, and its topology can be constructed in such a way that it satisfies the essential symmetries.
Yeah. Hell yes. This whole scenario maps directly to "memory that has a time course".
Memory that has an onset, lasts a while, and then decays.
The decay is the shrinkage of the s-algebra.
Home run.
Now I can get back to music.
It'll take me some years to prove this works, I'm not very good with math. But I'll betcha I could build a working simulation faster than I could prove the math. Conceptually it's not all that hard, you just have a Hilbert space of filters sweeping in opposite directions.
The sigma-algebra is quantifiable, it has a topology and a geometry, and it maps combinatorially directly into the information space. It has everything we need.
This idea of mesh size, though, is stellar. It lets you determine pairwise correlations practically in real time - AND, it immediately suggests a method for achieving the scale invariance needed for self-similarity.
At this point I'd probably hire a subject matter expert to take the ball and run with it (if I could find one ha ha). This is great stuff, it's worth a career if anyone wants one.
This is great though. Your nonlinearities measurable with Volterra kernels map to changes in the underlying s-set. You don't have to "control the distribution", it happens by itself! In the linear time series you have basically two distributions of interest, Gaussian for value and Poisson for time. Both are based on combinatoric assumptions about the s-set, mostly related to equivalence classes.
The best part of it is, it works in discrete-land (quantized, where every s-subset is open with the discrete s-set topology) and in continuous-land (fields, where the open subsets are neighborhoods with measurable volume).
People have studied this already in the linear case - Fisher-Rao is unique in any Markovian situation, etc. - and there is an interesting literature on "hierarchical clustering" that might be germane, having to do with matrix methods in the multivariate cases (proving, for instance, that the Rao distance is just the geodesic of the Fisher-Rao metric on the manifold). The nonlinear approach is badass stuff though. There's a pretty decent literature on the geometries induced by Fisher-Rao on various distribution spaces, but I only found some very recent stuff from 2018 and 2019 on deformations in the nonlinear case. Looks like people are thinking about it though. I'll betcha ten bucks there's a simple convolutional solution or something.
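For a concrete feel of that geometry, here's a sketch of the Rao (Fisher-Rao geodesic) distance on the univariate Gaussian family, where the Fisher metric ds² = (dμ² + 2dσ²)/σ² makes the parameter space a hyperbolic plane with the closed-form distance below. This is the one case I'm confident of in closed form; multivariate and nonlinear deformations are much harder, as the post says:

```python
import math

def rao_distance_gaussian(mu1, s1, mu2, s2):
    """Fisher-Rao geodesic distance between N(mu1, s1^2) and N(mu2, s2^2).
    The univariate Gaussian manifold with the Fisher metric is hyperbolic;
    this is the standard closed form for its geodesic distance."""
    arg = 1.0 + ((mu1 - mu2) ** 2 + 2.0 * (s1 - s2) ** 2) / (4.0 * s1 * s2)
    return math.sqrt(2.0) * math.acosh(arg)

# Sanity checks: the distance from a distribution to itself is 0, and for a
# fixed mean the distance reduces to sqrt(2) * |ln(s2/s1)| - scale changes
# get geometrically "cheaper" at large sigma, a signature of the curvature.
```

The same construction on other exponential families gives different curvatures, which is what the induced-geometry literature catalogues.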
Did time flow in two directions from the big bang, making two futures?
Why time only flows forwards is one of the great mysteries of physics.
A new idea suggests that it actually went in two ways from the big bang – and, even more radically, that time emerges not from entropy, but from the growth of structure
PHYSICS 3 March 2021
By Julian Barbour
TIME moves forward. This is so obvious that we take it for granted, and the rule seems to apply everywhere we look. Observable phenomena only ever unfold in one temporal direction. We get older, not younger. We remember the past, not the future. Stars clump in galaxies rather than dispersing, and radioactive nuclei decay rather than assemble.
The big question is, where does this forward-facing arrow of time come from? The most popular explanation relates to entropy. In this picture, the flow of time is essentially a manifestation of the universe’s inescapable inclination towards disorder.
I have a different idea, or rather two. The first is that time goes both ways – that the big bang isn’t an origin for time, but a midpoint from which two parts of one universe play out, running in opposite directions. We can never see the one unfolding in the other temporal direction, yet it is there, I suggest, as a consequence of a fundamental law of nature.
My second idea is even more radical. It could transform our understanding of the very nature of time. The consequences might even reach beyond the realm of classical physics, the world we can easily see, and offer fresh clues to the quantum nature of gravity – the elusive theory that marries general relativity with quantum mechanics.
Physicists’ current ideas about time owe much to Albert Einstein. His general theory of relativity merged the three dimensions of space with one of time into space-time, the all-encompassing backdrop against which events play out. In principle, if not always in practice, we can move in space as we wish. Not so in time. Time insists on a direction of travel: we have no choice but to be swept along from past to future.
This flow of time isn’t dictated by the fundamental laws of nature. All but one of these are time-symmetric: they work equally well towards the past or future. Take the collision of two billiard balls, governed by one of these laws: a film of what happens doesn’t look odd when played backwards or forwards. The one time-asymmetric law we know of is one that dictates the decay of certain elementary particles, an oddity that prevented the complete mutual annihilation of matter and antimatter in the early universe and ensured that we, being made of matter, are here today. But there is no way it can explain the onward flow of time.
To explain this, physicists have instead turned to a law that isn’t considered fundamental, but which emerges from more basic laws. The second law of thermodynamics says that in a closed system, overall disorder, characterised by a statistically defined quantity called entropy, always increases. It does so because there are many more possible states of disorder than of order. Thus, a small ice cube in the corner of a large box will melt and become liquid water, spreading the molecules out and increasing disorder. Entropy has increased. Note the “statistically defined” bit: the laws of physics don’t rule out this process being reversed, but just say that event is statistically hugely unlikely.
In this picture, the direction of time is created by the increase in disorder. If snapshots showing the position of the molecules in that box were shuffled out of order, my 4-year-old granddaughter could put them back in order. For many scientists, this is enough: entropy puts direction into time.
For my part, I don’t doubt the robustness of thermodynamics. Einstein described it as “the only physical theory of universal content which I am convinced that, within the framework of applicability, its basic concepts will never be overthrown”. I wouldn’t be so bold as to disagree.
But Einstein’s caveat is important, and leads to a question: does the “framework of applicability” of thermodynamics include our universe? It doesn’t appear to be a closed system. It might be infinite in size and is certainly expanding, possibly without impediment. If so, it isn’t in a box. But a box, physical or conceptual, is crucial for the interpretation of entropy.
That alone is reason to question the application of thermodynamics more or less unchanged to cosmology. But there is another reason. With the entropic arrow of time, physicists assume that the universe began at the big bang with very low entropy, a special state of extraordinarily high order. That is arbitrarily imposed. One of the most profound aspects of existence is attributed to a special condition put in by hand. This has been called the past hypothesis and in my view it isn’t a resolution to the issue of time, but an admission of defeat.
In fact, an alternative to the past hypothesis may have been staring us in the face for more than two centuries. In 1772, mathematician Joseph-Louis Lagrange proved something about the behaviour of a system of three particles that interact according to Isaac Newton’s law of gravitation. This says that every particle attracts every other with a force proportional to their masses and inversely proportional to the square of the distances between them.
Past forwards
Lagrange’s result, which extends to any number of particles, showed that if a system’s total energy (potential plus kinetic) is either zero or positive then its size, essentially its diameter, passes through a unique minimum at just one point on the timeline of its evolution. This process runs just as well backwards as forwards, Newton’s gravity being time-symmetric. And with one fascinating exception to which I will return, the size of the system grows to infinity both to the past and future.
Interestingly, the uniformity with which the particles are distributed is greatest around the point of minimum size. It has long been known that a uniform distribution of particles is gravitationally unstable and breaks up into clusters. What nobody seems to have realised, however, is that when you run the evolution of the particles’ motion backwards from the clustered state to the minimum, most uniform state and then take it beyond this point, it goes on to become clustered again.
In a paper I published in 2014, together with Tim Koslowski at the National Autonomous University of Mexico and Flavio Mercati at the University of Naples, Italy, we showed that this is the case in a simple proxy of the universe. A computer simulation of a thousand particles interacting under Newtonian gravity showed that pretty much every configuration of particles would evolve into this minimum state and then expand outwards, becoming gradually more structured in both directions. I call the minimal state the Janus point, after the Roman god who looks simultaneously in opposite directions of time.
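A two-particle toy captures the qualitative behaviour described above. The sketch below (masses, G, initial conditions, and step size are all illustrative choices of mine, not the published thousand-particle setup) integrates two bodies on a positive-energy Newtonian orbit with leapfrog: the separation falls to a single interior minimum and then grows without bound, the same on either side of the minimum:

```python
# Leapfrog integration of two equal masses under Newtonian gravity (G = 1).
# Total energy is positive, so the separation passes through one minimum -
# a "Janus point" for this two-particle toy universe - and grows on both sides.

def simulate(steps=4000, dt=0.005):
    m = 1.0
    r1, r2 = [-2.0, 0.0], [2.0, 0.0]
    v1, v2 = [0.5, 0.3], [-0.5, -0.3]      # approaching, with angular momentum

    def accel():
        dx, dy = r2[0] - r1[0], r2[1] - r1[1]
        d3 = (dx * dx + dy * dy) ** 1.5
        ax, ay = m * dx / d3, m * dy / d3   # acceleration on body 1
        return (ax, ay), (-ax, -ay)

    seps = []
    a1, a2 = accel()
    for _ in range(steps):
        for v, a in ((v1, a1), (v2, a2)):   # half kick
            v[0] += 0.5 * dt * a[0]; v[1] += 0.5 * dt * a[1]
        for r, v in ((r1, v1), (r2, v2)):   # drift
            r[0] += dt * v[0]; r[1] += dt * v[1]
        a1, a2 = accel()
        for v, a in ((v1, a1), (v2, a2)):   # half kick
            v[0] += 0.5 * dt * a[0]; v[1] += 0.5 * dt * a[1]
        seps.append(((r2[0] - r1[0]) ** 2 + (r2[1] - r1[1]) ** 2) ** 0.5)
    return seps

seps = simulate()
# seps falls to a single interior minimum, then grows again.
```

With two bodies there is no clustering to speak of; the thousand-particle version is what shows structure growing away from the Janus point in both directions.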
What would this mean for us? If we lived in the model universe I have just described, we must be on one side or the other of the Janus point. We find Newton’s time-symmetric law governs what happens around us, but also a pervasive arrow of time that defines our future. In our past direction, we can just make out fog – what we call the big bang – and nothing beyond it. Not realising the fog is a Janus point, we invoke a past hypothesis to explain the inexplicable. But Newton’s laws say the special point must be there, so there is no need to invoke the past hypothesis. Instead, we can mathematically define a quantity that reflects the evolution of our system of particles into something that looks like structure. Let’s call it “complexity”.
Complexity is calculated using all the masses of the particles and all the ratios of the distances between any two of them. It has nothing to do with the statistical likelihood of possible states and differs from entropy in that its growth reflects an increase in structure, or variety, rather than disorder. I argue that it should take the place of entropy as the basis of time’s arrow.
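One formulation used in Barbour's work (my reading of it, so treat the exact normalisation as an assumption) is the ratio of the root-mean-square length to the mean harmonic length over all mass-weighted pairs. It is dimensionless and scale-invariant, built purely from mass ratios and distance ratios, and it rises when particles clump:

```python
import math

def complexity(masses, positions):
    """Scale-invariant shape complexity: l_rms / l_mhl, where
    l_rms   = sqrt(sum_{i<j} m_i m_j r_ij^2) / M   (root-mean-square length)
    1/l_mhl = sum_{i<j} m_i m_j / r_ij / M^2       (mean harmonic length,
    proportional to minus the Newtonian potential energy).
    Clustering shrinks l_mhl and drives the ratio up; uniformity keeps it low."""
    M = sum(masses)
    rms, inv = 0.0, 0.0
    n = len(masses)
    for i in range(n):
        for j in range(i + 1, n):
            r = math.dist(positions[i], positions[j])
            rms += masses[i] * masses[j] * r * r
            inv += masses[i] * masses[j] / r
    return (math.sqrt(rms) / M) / (M * M / inv)

ones = [1.0, 1.0, 1.0]
tri = [(0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)]    # equilateral: uniform
clustered = [(0.0, 0.0), (0.01, 0.0), (10.0, 0.0)]     # tight pair + outlier
c_uniform = complexity(ones, tri)
c_clumped = complexity(ones, clustered)
```

Because both lengths scale linearly with the positions, rescaling the whole configuration leaves the complexity unchanged, which is exactly the shape-only, statistics-free quantity described above.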
In my recent book The Janus Point, I take things further. I propose that, ultimately, our model suggests that the history of the universe isn’t a story of order steadily degrading into disorder, but rather one of the growth of structure or complexity, as we define it.
“Complexity doesn’t just give time its direction – it literally is time”
The suggestion for this comes in the first place from Newton’s theory of gravity. It isn’t yet clear it can be extended to a general relativistic description of gravity. But in many cases, Newtonian gravity predicts behaviour almost identical to relativity, so there is a hint to look for a similar effect in Einstein’s theory.
This brings me to the fascinating exception to Lagrange’s result I mentioned earlier. In everything discussed so far, the minimum size of the “universe”, at the Janus point, isn’t zero but finite. But general relativity at the big bang leads to a zero size of the universe, known as a singularity, where the equations break down.
It has been known since a remarkable paper by Frenchman Jean Chazy in 1918 that singular events called total collisions can also occur in Newton’s theory. In them, all the particles come together and collide simultaneously at their common centre of mass. At this point, Newton’s equations break down; they can’t be employed to continue any solution past a total collision. Instead of two-sided solutions, we have one-sided solutions.
If we take this exception seriously, we cannot say time has two opposite directions but, significantly, it doesn’t rule out complexity giving time a direction.
The equations for Newton’s gravity are still time symmetrical, so the solutions that terminate at a total collision can run the other way. They become Newtonian “big bangs” in which all the particles suddenly fly apart from each other. Right at the start, the particles are arranged in a remarkably uniform way, but they soon begin to look like the motions on either side of the Janus point we saw in our calculations.
As they emerge from zero size, their configuration, characterised by the complexity, satisfies a very special condition. There are plenty of configurations, or shapes, that satisfy the condition but just one has the absolutely smallest possible value of the complexity. It is more uniform than any other shape the universe could have.
This is where a radical twist in the tale was all but forced on me, during the final stages of writing my book. The fact that the universe had an extremely uniform shape immediately after the big bang set me thinking. Could the special shape I’ve identified, which I call Alpha, serve as a guide to a new theory of time – and also point the way to arguably the biggest prize in physics, a quantum theory of gravity?
Quantum theory describes the often counter-intuitive behaviour of subatomic particles. For all its successes, it has always relied on an essentially classical conception of a time that exists independently of and outside the system. But surely any attempt to create a quantum theory of the universe, and with it gravity, should start without the notion of a pre-existing external time. Time has to originate somewhere, and where else but the quantum realm.
My ideas about complexity can help. What I’m proposing might be called Newtonian quantum gravity because it unifies aspects of Newton’s theory of gravity, above all this value of complexity, and the two key novel features of quantum mechanics: probabilities for the state a system finds itself in, and an entity known as a wave function that determines how these probabilities evolve.
The idea is that a wave function of the universe determines the probabilities of all the possible shapes it can have. This is relatively conventional. What I’m suggesting, however, is how that happens: I put the birth of time at Alpha, this uniquely uniform configuration of particles, and make complexity time itself.
Heaps of time
I said my granddaughter could sort the shuffled snapshots into the correct order. Now suppose I give her snapshots of all possible shapes of the universe to sort into heaps, one for each value of their complexity. In the first heap there will be just that one most uniform shape: Alpha. After that, there will be infinitely many for each value of complexity. The wave function determines relative probabilities for each of the shapes within each heap.
This is what standard quantum mechanics does for the probabilities of a system’s possible states at different external times. My proposal includes something similar but with invisible, external time replaced by complexity, which is visible in the sense that it is directly determined by the shape of the universe. Hence, complexity doesn’t just give time its direction – it literally is time.
The picture I have sketched matches the known history of the universe, but is only a start. The good news for next steps is that there is, at least in principle, an observational test.
Scrutiny of the first light in the universe, known as the cosmic microwave background (CMB), indicates that very soon after the big bang the distribution of matter in the universe was extremely uniform, while also revealing tiny fluctuations of a very specific structure. Inflation, a theory that suggests the universe underwent a huge expansion in its first split second, can explain the form of those fluctuations rather well. But it doesn’t tell us how inflation began and key parameters must be fitted to match observations.
According to my idea, the universe must begin as uniform as it possibly can and then develop small nonuniformities. This might sound like an arbitrary assumption, but it is a direct consequence of the simplest quantum law one can propose for the universe, which forces the wave function to evolve from a necessarily unique condition at its most uniform shape. It is possible we could use first principles to directly predict the form of the fluctuations, which we could at some point verify or rule out by further scrutinising CMB.
This idea could go either way. I am hopeful, and not only because Newtonian complexity has a counterpart in Einstein’s theory. I also find encouragement in the thoughts of Niels Bohr, a founder of quantum mechanics, who said any new quantum idea needs to be crazy. The idea that complexity is time is certainly that – and it could be transformative. If time really is complexity, and it is a big if, it will kill two birds with one stone: provide a new starting point from which to formulate a quantum theory of gravity and show, on the basis of simple first principles, how time gets its direction.
Read more: Did time flow in two directions from the big bang, making two futures? | New Scientist
nonsqtr (04-11-2021)