Reciprocal Cosmology

Alan G. Carter

alan@melloworld.com

22nd May 1999

 

Introduction

There are many deep philosophical difficulties at the core of our modern understanding of physics. These start at the largest scales, with the nature of the Big Bang, the fate of the universe and the origin of cosmological structure. Within the cosmos, we do not know why General Relativity works - what are gravity and inertia? At our own scale, we have noticed that the universe contains strange complexities that give it a fractal geometry, found in dripping taps, heart muscle contractions, mountain ranges, share price movements and ferns, but we have no idea why. Then at the smallest scales, quantum mechanics seems to be philosophically beyond human comprehension - indeed several eminent theorists have proposed that it is wrong even to ask for an interpretation: the mathematical processes that get the right answers (although we do not know why they do) should be thought of as all there is, and we should not worry about reality.

Remarkably, obtaining a self-consistent picture in which all these mysteries and more are rationally understandable is not too difficult if one is guided by a prior result (2: The Ghost Not). What this does is define a class of logical error that will have been made time and time again by prior researchers, because it is a paradigmatic trap. It can be used to locate the errors and correct them, with the result that the products of generations of labour can be fitted together, spontaneously repaying all of the efforts of ages past.

This paper describes a universe structured differently from the current understanding, but which is allowable by prior knowledge, and within which the prior results of Newton, Einstein and the quantum mechanical school are still true (as they must be, since CD players and GPS receivers work). It does this in four sections discussing the cosmological, classical (relativistic), quantum mechanical and chaotic (fractal) regimes. The physical universe it leaves us with enables us to make statements about the nature of consciousness, in terms of both the old and new paradigms, which are addressed in 4: Consciousness and 5: Hypertime. It is not yet verified that this model is allowable, but if it is, it does not suffer the internal contradictions and inherent imponderables of the usual model.

This is an initial, qualitative release of this paper. It is necessary to provide this release since the ideas contained here have a bearing on the other six results of the Reciprocality Project. There will be a second edition, which will define a computational mechanics and fit quantitative experimental values to the relationships proposed in this edition.

Cosmological Issues

First we shall recap what is currently understood of cosmology in the broadest terms, and then we will be able to apply the Ghost Not and seek improvements.

We have a variety of ways of estimating how far away stars are. All of the methods have significant margins of error, but by taking several methods together we seem to have a reasonable idea of the distances of a great many stars. When we observe these stars, we see that starlight is composed of different colours, and that these colours are the ones we would expect from heating the specific kinds of atoms that the stars are made of. What is odd is that the characteristic colours are a bit off - they are too red. What is more, the further a star is from us, the redder its light seems to be.

The conventional explanation for this is that the universe - space itself - is actually expanding. The light starts out the correct colour, but over the years as it journeys towards us, the space that it is travelling through is stretching, and the light (which is in the space) stretches with it, in the same way that a line drawn on a balloon expands with the balloon when it is inflated. From this we get the idea that at earlier and earlier times, there was less and less space in the universe, and in its first moments it appeared as a tiny point - the Big Bang. It is important to realise that the Big Bang idea does not suggest that space was here first, and all the matter and energy exploded into it. The idea is that once there was only one cubic centimetre of space (and before that, even less) in the whole universe, with everything that is in the universe today packed into it. It was very hot. Although the early universe was tiny, you could travel forever in it because the ends are joined together and you could just go round and round (so long as you were small enough to fit at all).

You may have seen this idea demonstrated in TV animations that represent the universe as an expanding ball. The idea is to use the two dimensions of the ball's surface to represent our three dimensional space. Then the extra third dimension of the room the ball is in can be used to represent the "hyperspace" the ball is expanding in, with time. Here I'll just sketch the same idea using a circle instead of a ball. Then the edge of the circle can be a (very boring) one dimensional "space", and the paper can be "hyperspace". We can have several circles to represent time. Then the redshift idea can be drawn like this:

As the light travels, space gets bigger and bigger, the length of the light pulse becomes bigger as well, and the wavelength or colour of the light lengthens towards the red.
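
As a worked illustration of the conventional picture (before we start questioning it), the stretching can be written as a ratio of scale factors: light emitted with wavelength w when space had scale factor a_then arrives with wavelength w * (a_now / a_then), and astronomers quote the redshift z = (observed / emitted) - 1. A minimal sketch in Python, with purely illustrative numbers:

    # Conventional expanding-space redshift: the wavelength scales with space itself.
    # The scale factors below are illustrative, not measurements.

    def redshift(scale_emit, scale_obs):
        """Redshift z for light emitted at scale factor scale_emit, observed at scale_obs."""
        return scale_obs / scale_emit - 1.0

    # Light emitted when the universe was half its present scale:
    print(redshift(scale_emit=0.5, scale_obs=1.0))   # 1.0 - the wavelength has doubled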

In the future, the universe might draw itself together again in a Big Crunch if there is enough mass, or it might (for some reason) reach an exactly balanced point by slowing its expansion more and more but never quite stopping, or it might just carry on getting bigger and bigger forever. Current experimental results suggest that getting bigger forever is the most likely. We know nothing about hyperspace, nor do we have any suggestions as to what might lie behind these ideas, beyond the proposal that the Big Bang itself might be understandable as a very unlikely (but that doesn't matter to us - we are here because it happened) energy fluctuation due to quantum mechanical uncertainty in something infinitely small.

Now we can do physics by cheating. Instead of trying to surpass the insights of past geniuses, we can use the Ghost Not to guide us to blind spots - ideas that the past workers would have found it difficult to see. We can start with the idea of the uninvolved observer - a person who sits there seeing everything but not themselves being involved. It's an attitude the Ghost Not slides very deeply into people's thinking, although they will deny it and construct an alternative rationalisation. In this case, why is the observer not involved in the expansion? Since the space and the travelling light are, why aren't the observer's eyeball and measuring stick scaling too? How can the observer see the redshift at all? Having seen the redshift as uninvolved observers, Hubble and his successors then constructed a rationalisation: space is expanding at cosmological scales, but not at local ones. It is suggested that the gravitation of our galaxy - and the rest of our cluster - is causing this. The local concentration of mass is pulling space together, even though at larger scales space is expanding. We are living in an atypical region of space. While this answers the previous question, it too has the hallmarks of a Ghost Not fudge. If we are living in an atypical region of space, what would happen to us if we went off and founded a space colony in the middle of a Great Void? Would we not then scale with space, and fail to see redshift? Sadly we can't do the experiment as yet, but let us continue the thought experiment. One day our descendants are visited by another ship from Earth. Would they find 10 metre tall people (relative to their measuring sticks)? Would the atoms of colony food be too big to work with their body chemistry? How come we never see hydrogen atoms as big as houses drifting in from intergalactic space?

It's a fudge. The Ghost Not is telling us to invest effort here like a prospector uses geological data to find oil. Is there any sane geometry that can get us a redshift, while having the observer as much involved in any scaling as anything else in the universe? Yes! And the trick of it also comes from the Ghost Not.

Dyadic consciousness has to slide perceived phenomena out of actual space and onto its own image space before perception can be acknowledged. So the thing it is least likely to be able to look at is the real space itself. Do real space or hyperspace have anything to offer us that might have been overlooked? They both do! In order for our eyeballs to be able to see colour shifts in starlight, there must be at least two real length scales involved. One controls the length of eyeballs, and the other controls the length of starlight waves. Then the length of starlight waves will be able to change relative to the length of eyeballs, which is certainly happening because we see redshift. And we have two length metrics right before our eyes in the existing theory - space and hyperspace. So let's take another idea from the Ghost Not - that there may be things happening that we cannot see - and assume that while we travel in space only and are not aware of any scaling of space in hyperspace (we scale with space), the photons that make us up actually travel in hyperspace, and do not scale with space. To get an idea of such a situation, consider this picture of a river:

There are two coracles in the river. Both are drifting downstream at high speed, but cannot know this just from looking at each other because they are drifting together. If this is a tidal river, that later flows more slowly, the coracles will notice no change if they look at each other. They must look at the bank to see a change in their drift speed, and if they cannot see the bank but only each other, one drift speed is the same as another. They can only see motion relative to each other, and since coracle A has a sail and is slowly drifting across the river towards coracle B, they will be able to see this.

Now we need to stretch the analogy a little. We'll stop the river flowing, fit engines to both coracles, and use deaf observers that can't hear the engines. Furthermore, we'll equip one of the coracles with a cadre of highly trained water-rats that carry messages between coracles. Here is the diagram:

Now we have A and B travelling in a reference frame all their own, unaware of anything happening in the river context, watching rats swim across the river far more slowly than they were trained to! From here we can get a logically consistent redshift very easily indeed. In the following diagram, I've drawn the multiple circles that represent successive times inside one another.

This is a point that we should pause to emphasise. We do not need both a "hyperspace" and a separate "time" for the scaling to happen in: one number tells us our moment, and hence our position in "hyperspace". A powerful existing concept appears to emerge as a natural consequence of doing this. The principle of least time (a close relative of the principle of least action) states that a photon will take the path that enables it to get where it is going in the shortest time - not necessarily the shortest distance. This is why light bends when passing from air to water. Generally, such least time paths are called geodesics - straight lines in warped spacetime. By making hyperspace and time identical like this, we seem to have a natural geodesic in lines drawn directly on the diagram.
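
The least time principle itself is standard and easy to check numerically, quite independently of this model: scanning for the crossing point that minimises the travel time from a point in air to a point in water recovers Snell's law for the bend. A small sketch, with arbitrary geometry and refractive indices:

    import math

    # Principle of least time: a photon going from A (in air) to B (in water)
    # crosses the surface (y = 0) at the point that minimises total travel time.
    n_air, n_water = 1.0, 1.33          # refractive indices (speed = c / n)
    ax, ay = 0.0, 1.0                   # A: one unit above the surface
    bx, by = 1.0, -1.0                  # B: one unit below the surface

    def travel_time(x):
        """Time for the path A -> (x, 0) -> B, in units where c = 1."""
        return n_air * math.hypot(x - ax, ay) + n_water * math.hypot(bx - x, by)

    # Crude minimisation by scanning the crossing point.
    x_best = min((i / 10000.0 for i in range(10001)), key=travel_time)

    # The least-time path obeys Snell's law: n1 * sin(theta1) = n2 * sin(theta2).
    sin1 = (x_best - ax) / math.hypot(x_best - ax, ay)
    sin2 = (bx - x_best) / math.hypot(bx - x_best, by)
    print(n_air * sin1, n_water * sin2)   # the two sides agree to scanning precision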

There are just a few "moments" indicated on the next diagram of course - each "moment" is a slightly smaller circle. The model assumes that space is contracting - not expanding. I have then drawn the continuous moment by moment path of a single photon in green, making the rule that the same proportion of the circle (measured in degrees) must be covered in each time period, so that observers in the space would see no speed change for light.

In each time period, observers who are not aware of the infall because they are aggregates of infalling photons (assuming we can take e=mc**2 for real) see the tracked photon move the same distance. This means that in each time period, since the distance travelled on the paper to achieve the same angular travel around the edge of the circle is less, more of the photon's journey must be spent travelling inwards. This is not visible to the observers, but it does make the tail of a light pulse arrive later than it should - it redshifts it. And the longer the light travels, the greater this proportion becomes, which can be seen on the diagram as an increasing angle of attack as the photon passes through successive moments.
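
Here is a minimal numerical sketch of that construction as I read it: each moment the photon covers the same angle of the circle (so observers see no speed change) while moving a fixed distance on the paper, and whatever is left over after the visible lateral part goes into the invisible inward part. The step length, angular advance and starting radius are arbitrary; the point is only that the inward fraction, and with it the angle of attack, grows as the journey proceeds:

    import math

    # Concentric-circle construction: fixed on-paper step per moment, fixed angular
    # advance per moment, remainder of the step spent falling inwards. Numbers are arbitrary.
    step = 1.0        # on-paper distance moved per moment
    dtheta = 0.008    # angular advance per moment (radians)
    r = 100.0         # starting radius of "space"

    for moment in range(60):
        lateral = r * dtheta                        # visible travel around the circle
        inward = math.sqrt(step**2 - lateral**2)    # hidden travel towards the centre
        if moment % 15 == 0:
            attack = math.degrees(math.atan2(inward, lateral))
            print(f"moment {moment:>2}: inward fraction {inward/step:.2f}, angle of attack {attack:.0f} deg")
        r -= inward                                 # space has shrunk by this much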

It's like the perspective problem that one has looking back down a road:

The diagram represents the ancient philosopher Hu-Bul and a student, considering the reed shift, discovered by Hu-Bul. Hu-Bul is explaining that as they look back down the road, they are looking at places where they have been in the past. As they look backwards, they see that the reeds are getting nearer and nearer together, proving that the road that they are on is actually widening.

Hu-Bul might sound like a fool to us, but what is really going on is much more complicated than it seems. The key concept of perspective was not mastered at all by artists until the 15th century, so it must be harder to understand than we think.

In fact, the apparent reed shift comes from a very subtle comparison between two quite different metrics. When the pair were standing where the furthest reeds are, the width of the road subtended about 1/6th of the entire sensorium. Now, the same width subtends less than 1/60th.

The problem is, from wherever Hu-Bul is standing, an arc of say 30 degrees is still an arc of 30 degrees in his sensorium, no matter what distance it accounts for at differing radii. Hence the reed shift is an epistemological artifact of comparing angles with lengths.
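
The comparison can be made concrete with one line of trigonometry: a road of constant width w subtends an angle of 2 * atan(w / (2 * d)) at viewing distance d, so the same width takes up a smaller and smaller share of the sensorium as d grows. A small sketch, with numbers chosen only to echo the 1/6th and 1/60th figures above:

    import math

    # A road of constant width w subtends 2 * atan(w / (2 * d)) at distance d.
    # Illustrative numbers only.
    w = 10.0                                          # road width, arbitrary units

    def subtends(d):
        """Angle (degrees) the road's width takes up when viewed from distance d."""
        return 2 * math.degrees(math.atan(w / (2 * d)))

    d_near = w / (2 * math.tan(math.radians(30)))     # where the road fills 60 degrees (1/6 of 360)
    d_far = w / (2 * math.tan(math.radians(3)))       # where it fills only 6 degrees (1/60 of 360)

    print(subtends(d_near), subtends(d_far))          # 60.0 and 6.0 degrees
    print(d_far / d_near)                             # about 11x further along, road unchanged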

And at least Hu-Bul can see the fields on either side of the road. By the time his lineal descendant Hubble tries to tackle a similar problem with the redshift, he will be working with a context which is by definition inaccessible. The consequences of this further epistemological constraint on an epistemological problem will make things even harder to understand. Add to this the fact that Hubble's thinking will be distorted by a profoundly deep logical error (the Ghost Not) which erects an artificial barrier around the victim and causes him to believe that there are such things as "observers" (passengers in the universe who have no involvement and so are not to blame), and he has no chance of getting it right.

So in this picture we have a universe which is a featureless blur of indeterminate size and age in the past, and which even as we speak is already heading towards a Big Crunch! The latest redshift data, which have been interpreted as indicating certain openness, in fact indicate a quicker Crunch than we'd have estimated before. And although the redshift is all we have made greater sense of so far, further on we will be able to do much, much more.

Classical Issues

Now we can use the infall concept to get a description of gravitational mass - matter with weight.

We first take e = mc**2 at face value, and imagine massive objects to be composed of photons that are somehow bound together - specifically, they are oscillating past one another, like batsmen in cricket making runs. Since there is no preferred direction for acceleration (or preferred axis for rotation) in real space, we must assume that the direction of the oscillation is normal to usual space - in the time direction. This means we can still use two dimensional concentric circles to represent the evolution of a one dimensional space over time. Here is a diagram of one of a pair of photons that are oscillating to make a particle at rest in space - but still moving in time (I've had to give it a little bit of sideways motion so you can see the zigzag, but really it's supposed to be doing a "forward 3 steps, back 1" kind of path, straight down):

Now let's assume this photon has a set speed that it must move at. This has scaling implications - the moments must get further apart on the paper as space shrinks, so that the speed the photons can no longer use travelling acrosswise (because the one dimensional space has shrunk) is used up travelling inwards more quickly - but we'll leave that for now. This photon is zigzagging, so it can't travel as quickly in the radial time direction as its unbound neighbour. Bound photons - particles - travel through time more slowly than unbound ones. Now consider how a photon knows what its speed through time must be. We are assuming there is nothing but photons in our universe, so it must somehow refer to the other photons in the space. We take acceleration or deceleration in time to be with respect to the other photons, and we give nearer photons a bigger say. This means an object (which must be a line of particles in a one dimensional space) will start to develop a chevron formation, with the slowest moving photons in the middle. Just being around slow moving photons puts unbound photons into a slow reference frame:

 

Why don't gravitational wells get steeper as time passes? As the effects of moving through time more slowly cause a greater lag between the edges and centre of an object, so the edges get further ahead in time, and so into areas where the "moments" are further and further apart on the paper. This maintains a specific shape of the chevron, stretching the front to match the stretched back. This does not make the chevron skinnier, since laterally, space is shrinking. If this seems overly contrived, a similar effect can be described by imagining you are watching the goings on through a TV camera, and as the infalling happens, the camera zooms in on the ever shrinking circle to keep the photons filling the screen. For each mass and density there is a balance point. [NOTE: This is exceedingly qualitative right now. It will be great fun thrashing out the details, but it has not yet been done. For a second approach to these issues, see the section below on "Mach's Principle and Planck's Constant".] This gets us directly to timeshells around massive objects - the source of gravity in General Relativity, and so gravitational mass. The more photons that are in there voting for a slow reference frame, the more gravity is produced.
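
One way of putting a rough number on "travels through time more slowly", under the reading above that every photon moves at the same fixed speed on the paper: if the zigzag makes an angle theta with the radial (time) direction, only cos(theta) of that speed is left for progress through time. A tiny sketch, with purely illustrative angles:

    import math

    # Zigzag reading: fixed total speed c, split between the sideways zigzag and the
    # radial (time) direction. A free photon has theta = 0. Angles are illustrative.
    c = 1.0
    for theta_deg in (0, 30, 60, 85):
        theta = math.radians(theta_deg)
        print(f"zigzag angle {theta_deg:>2} deg: speed through time {c * math.cos(theta):.3f} c")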

Next we can consider inertial mass, and this requires a slight change in what we show on the diagrams. The idea of oscillation implies an average separation of the photons, so we'll just represent that average length to indicate the dynamic situation that produces the particle:

To be oscillating radially like this means to be at rest. To be oscillating at an angle to the radius means to be zigzagging with a sideways component:

To set something in motion means to give its zigzag a tilt. But whenever we do this, we set up a disagreement between the inner photon (at the time) and the outer one about how much lateral movement is entailed in the tilting (because space has shrunk more for the inner one). With the disagreement in play, they both perform an oscillation, and end up in a different position to where they started, with the tilt still in place! This is momentum. The tilt angle is directional in the one dimensional space, and is like a dial that sets the velocity. The wider the tilt angle, the harder it will be to induce further disagreements about distance between the oscillators, since they are closer together in the infall. In the limit, the best that can be done is to reduce their zigzag angle to zero (a flat tilt angle), whereupon they simulate a pair of unbound photons. This is inertia, momentum and the lightspeed limit. Note however that unlike a pair of unbound photons, these photons are still trying to zigzag, albeit like a pair of dodgems shunting each other in a straight line. This shunting motion takes all of the non-lateral speed, and so prevents the pair infalling at all. Get the zigzag angle flat and you have a FitzGerald-Lorentz infinite braking effect (gravitational mass), stopped in time.
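
Under the same reading, the tilt reproduces the familiar relativistic book-keeping: if the sideways (spatial) component of the fixed speed c is v, the component left over for the time direction is sqrt(1 - v**2/c**2) of a free photon's - the standard time-dilation factor - and at v = c nothing is left, which is the "stopped in time" limit just described. A worked sketch with illustrative velocities:

    import math

    # Tilt reading: total speed c shared between space (v) and time (the rest).
    # time_rate is the fraction of a free photon's progress through time; gamma is
    # the usual Lorentz factor. Velocities are illustrative.
    c = 1.0
    for v in (0.0, 0.5, 0.9, 0.99, 1.0):
        time_rate = math.sqrt(1.0 - (v / c) ** 2)
        gamma = float("inf") if time_rate == 0 else 1.0 / time_rate
        print(f"v = {v} c: time rate {time_rate:.3f}, gamma = {gamma:.2f}")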

This appears to provide us with the major features of General Relativity with the exception of the lightspeed propagation of gravity. In this model gravity does not propagate. It results from a simultaneous series of individual deductions by adjacent photons as to their cumulative lag, and has its effect instantaneously. On this question, it is this model that fits the experimental evidence.

[NOTE: I think we can get the oscillatory behaviour itself out of the undue influence each oscillating partner has in time-slowing the other.]

Quantum Mechanical Issues

Another Reality?

It has often been said that quantum mechanical phenomena simply cannot be understood in terms of the reality our minds can directly comprehend. Why do we believe this instead of making further attempts to see something we could understand, but haven't yet? Perhaps because the idea has become established, and it makes things easier to just give up. Before discussing how the Ghost Not can assist us with the philosophical problems of quantum mechanics, we can pause to behold an apparition of dyadic consciousness as vivid as anything painted by Van Gogh, described in detail by John Bell. The tragic case of Niels Bohr:

"Rather than being disturbed by the ambiguity in principle, by the shiftiness of the division between 'quantum system' and 'classical apparatus', he seemed to take satisfaction in it. He seemed to revel in the contradictions , for example between 'wave' and 'particle', that seem to appear in any attempt to go beyond the pragmatic level. Not to resolve these contradictions and ambiguities, but rather to reconcile us to them, he put forward a philosophy which he called 'complementarity'. He thought that 'complementarity' was important not only for physics but for the whole of human knowledge. The justly immense prestige of Bohr has led to the mention of complementarity in most text books of quantum theory. But usually only in a few lines. One is tempted to suspect that the authors do not understand the Bohr philosophy sufficiently to find it helpful. Einstein himself had great difficulty in reaching a sharp formulation of Bohr's meaning. What hope then for the rest of us? This is very little I can say about 'complementarity'. But I wish to say one thing. It seems to me that Bohr used this word with the reverse of its usual meaning. Consider for example the elephant. From the front she is head, trunk and two legs. From the sides she is otherwise, and from top and bottom different again. These different view are complementary in the usual sense of the word. They supplement one another, and are consistent with one another, and they are all entailed by the unifying concept 'elephant'. It is my impression that to suppose Bohr used the word 'complementary' in this ordinary way would have been regarded by him as missing his point and trivialising his thought. He seems to insist rather that we must use in our analysis elements which contradict one another, which do not add up to, or derive from, a whole. By 'complementarity' he meant, it seems to me, the reverse: contradictoriness. Bohr seemed to like aphorisms such as: 'the opposite of a deep truth is also a deep truth': 'truth and clarity are complementary'. Perhaps he took a subtle satisfaction in the use of a familiar word with the reverse of its familiar meaning."

Little Known Fact: When he was a student, Niels Bohr's tutor was Søren Kierkegaard, the creator of existentialism, who personally bit him on the neck and turned his mind so far inside out it nearly went all the way around.

A Rational Model

The big problems with quantum mechanics are that it does not appear to be deterministic - it is inherently statistical - and that it seems to play hide and seek with experimenters, such that what the experimenters choose to look at determines what it does. This has led some workers to believe that although they are not even coupled to space itself (inasmuch as they can see the redshift), they nonetheless create reality with their gaze (inasmuch as they are required to 'collapse the wave function'). I was once advised that an aphid cannot collapse the wave function, whereas all humans can, but I suspect my informant had no theoretical basis for these statements and was in fact subconsciously making them up to fill in the blanks.

To begin to sort out what is happening, we can just use a diagram from the previous section. In Feynman's quantum electrodynamics, a particle and its antiparticle can mutually annihilate, and can be seen as each other, travelling backwards in time. (As an aside, for a time oscillation, 'travelling backwards in time' would be a phase difference, and could be modelled,

which is very suggestive of a Feynman diagram. Taken as occurring in this context, Feynman's diagrams can be seen as little pictures that are to be taken literally, and not as elegant book-keeping abstractions.)

Feynman's is not the only interpretation of the quantum mechanical mathematics to admit backwards in time information transfer. Bohm's pilot wave model proposed a pilot wave that "senses" the possible paths, and a particle that chooses a specific one. This means that somehow, the pilot wave must be able to send influence of some sort back to the particle to guide it. Cramer's transactional model shares the same theme: a transaction is "offered" and then "accepted". There is a backwards in time negotiation. Indeed, the redundant part of these formulations seems to be the forwards in time component. All the interpretations that admit of an evolving reality (as opposed to, for instance, multiple universes or perpetual mixed states) need a backwards in time information flow; the only reason they need a forwards component at all is that this is what we see.

Now add to this observation the logically consistent description of the redshift that has the universe collapsing to a point, the "block universe" that is a logical consequence of the relativity of simultaneity in Relativity, and the self-organisation that we can see at cosmological scales - the Great Voids and Attractors - and one has a very suggestive picture. The problems can be resolved if we assume that for some reason, consciousness is perceiving time flowing in the other direction to the causal sequence being experienced by the mass-energy in the universe. It is exploding from a highly organised point into a bloated and undifferentiated blur, which is of indeterminate size since there are no two points distinguishable from one another to measure their separation, no measuring sticks, and no-one to do the measuring. We are seeing an effective construction, because we are seeing this backwards. Each quantum event in fact loses information on the exploding, or creative, mass-energy arrow, and adds information on ours. That is why it appears inherently statistical - we always find out what happened last. It cannot be causal as we usually think of causality, because the actual causes of events are in the future.

Observer Effects

This view solves two deep problems. In QM, we have to work in two stages. The first sets up a wave function, which specifies the probability of finding a given particle at any position. The second stage involves "collapsing the wave function" - choosing one specific position for the particle. The trouble is, we have no idea what actually performs the task of "collapsing the wave function" in nature. This is where people start talking about their gaze fixing reality, and cats that are dead and alive at the same time. In this model, the collapse of the wave function corresponds to the continuous interpenetration of the two arrows. One might say that as our awareness rides along on our wave, it continuously interacts with, or learns of, the state of the particles coming at us from the future. The model also explains why QM seems to play hide and seek. It is not that the mysterious entities we are studying actually change their natures if we open the box containing the results after the interaction, but before the readout (as in the delayed choice experiments). In this model, what happens is that when we set ourselves up to get a certain result, it sets up a channel for an appropriate behaviour coming the other way.

Quark Confinement and Conjugate Values

The idea of composing particles out of collections of entities (photons?) that are oscillating with respect to one another in time offers a conceptual basis for understanding the bizarre phenomena of quark confinement and conjugate values. Quark confinement means that although we have evidence that at very small scales there are objects called "quarks" in play, we are not allowed to see solitary quarks - they always have to be bound to others. If we design experiments which should cause the quarks to be dragged apart, the force holding them together gets stronger the further apart they are - like an elastic band. Eventually we have to use so much energy to pull them further apart that we invest enough mass/energy to create two new quarks, each closely bound to the ones we tried to separate. This business of force getting stronger the further apart two things are is another quantum mechanical madness that has entered a physics culture which has come to despair of ever finding reason behind what it studies, and has stopped trying. With a time oscillation involved, things can make sense again. We have a single entity, with an infall as well as an oscillation in time, and a lateral movement in space. Because it is zig-zagging in time, at any moment it can appear to be in more than one place in space. Nevertheless, it is in fact only ever one entity, and moving it "away" from itself simply means forcing it to leap about a lot more in its zig-zag. There are no invisible elastic bands - our sequential perception of the universe simply doesn't allow us to see what we are doing to the entity's real trajectory.

Conjugate values are the pairs of values that can be obtained at the expense of one another. For instance, by determining the position of a quantum mechanical particle with increasing accuracy, I must inevitably decrease the accuracy with which it is possible to know the particle's momentum. This is the opposite situation to what happens at macroscopic scales. If I calculate the trajectory of a cannonball, then the more accurately I know the position of the cannonball at any time, the more accurately I will also know its momentum. It is another situation where we have something that seems to defy reason, and involves an undefined "wall" between the quantum and classical regimes. In this model we can see that the oscillation length of the components of a particle specifies an interval. If we specify a time interval for the accuracy of our measurement that is greater than the oscillation interval, we can reasonably say that the particle can be considered "in" our measurement interval. The narrower we make our measurement interval (while always keeping it greater than the oscillation interval), the more accurately we can know where the particle "is" during that interval. But as soon as our measurement interval becomes narrower than the oscillation interval of the particle we are observing, we have a problem. The oscillation has now become more important than the infall for describing when the particle is, so the number of places where it is at the same moment starts to go up! And the narrower we make the interval, the less important the infall becomes for bounding the possible whenabouts, so the positional ambiguity becomes greater as the zigzagging particle's occupation of multiple places at the same time becomes more apparent.
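
A toy calculation can illustrate the claim, under the zigzag-in-time reading above (all the parameters are invented for the purpose). Let the worldline be parameterised by s, with the space coordinate x(s) drifting steadily while the time coordinate t(s) oscillates so that it sometimes runs backwards. Asking which x positions fall inside a measurement window in t then shows the effect just described: a window wider than the oscillation sees one contiguous stretch of positions whose spread is set by the window itself, while a window narrower than the oscillation sees positions whose spread is set by the zigzag, so the ambiguity relative to the window shoots up:

    import math

    # Toy worldline: x drifts steadily with s, t oscillates ("forward, back, forward").
    # Amplitude, period, drift and window sizes are all illustrative.
    A, P, v = 0.3, 1.0, 1.0

    def t(s): return s + A * math.sin(2 * math.pi * s / P)
    def x(s): return v * s

    def positions_in_window(t0, W, ds=0.0005, span=5.0):
        """All x values whose time coordinate lies within W/2 of t0."""
        samples = (t0 - span + k * ds for k in range(int(2 * span / ds)))
        return [x(s) for s in samples if abs(t(s) - t0) <= W / 2]

    for W in (2.0, 0.05):                        # wide window, then narrow window
        xs = positions_in_window(t0=50.5, W=W)   # 50.5 sits on a fold of t(s)
        spread = max(xs) - min(xs)
        print(f"window {W}: position spread {spread:.2f}, spread / window = {spread / W:.1f}")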

Double Slit Experiment

The double slit experiment shows the inherent weirdness of QM (in the conventional paradigm).

The experiment compares the behaviour of bullets, water waves and electrons (or other particles in the quantum regime). Bullets arrive as lumps. If one slit is open in a wall that bullets are being fired at, we get a region on the other side of the wall where the bullets hit. If we open another slit, we get two such regions. Water waves arrive as, well... waves. They don't come in discrete lumps like bullets, and when we have one slit, we get waves spreading out from it, but when we have two slits, we get an interesting pattern as the waves from the two slits interfere constructively and destructively. Electrons are a bizarre mixture of both behaviours. We know that electrons are lumps, since they always arrive in lumps that are the same size. But where the lumps turn up is purely statistical, and the probability of them turning up at any point is like the intensity of the water wave at that point. The wavelike behaviour even extends to changing depending on whether one or two slits are available! Electrons seem to behave in some ways like waves, and in other ways like particles. The mathematics that predicts where they will land is very exact, but we don't have the faintest idea what it describes! There's a kind of half-interpretation, which says that the mathematics has the electron somehow "sensing" all the possible paths it could take, and adding up the contributions of the paths to determine the probability. It's worth pointing out that while this is certainly a bizarre way for lumps to behave, it's just describing what regular waves do - with a slight twist that we'll get to later.
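
For reference, the arithmetic that this "sensing" boils down to in the two slit case is just the ordinary wave arithmetic: each open slit contributes a wave at the screen, the contributions are added, and the chance of a lump arriving goes as the squared size of the total. A minimal sketch with made-up geometry and wavelength:

    import cmath, math

    # Two-slit arithmetic: add the wave contribution from each open slit, then take
    # the squared magnitude. Geometry and wavelength are illustrative.
    wavelength = 1.0
    k = 2 * math.pi / wavelength
    slit_sep, screen_dist = 5.0, 100.0
    slits = (-slit_sep / 2, +slit_sep / 2)

    def intensity(y, open_slits):
        """Relative probability of a lump arriving at screen position y."""
        psi = sum(cmath.exp(1j * k * math.hypot(screen_dist, y - s)) for s in open_slits)
        return abs(psi) ** 2

    for y in (0.0, 5.0, 10.0):
        print(f"y = {y:>4}: one slit {intensity(y, slits[:1]):.2f}, both slits {intensity(y, slits):.2f}")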

We can now consider a natural way to see things in this model with two arrows of time, and use it to understand what is happening. On our arrow we see structure growing, and on the other arrow it is falling apart. The universe we see about us is filled with structure at many levels of abstraction, which hackers call the "deep structure". The isomorphisms between patterns found in the deep structure occur at differing levels of abstraction as well as at differing physical scales. We can see this as a multi-layered universe constructing itself all in one go. Alternatively, it is equally valid to trace the growth of a bit of simple structure on our arrow, back down the other arrow (see the discussion of chaotic issues, below), then back up on ours, this time at another layer of scale and/or abstraction, and so on, round and round, until we are tracing one cycle of the biggest level of abstraction (which is the whole universe). Then (I suspect) one could "implement" that pattern and go back to the simplest structure!

This makes the point that any allowable way to "tell the story" is a correct way. There are many "correct" ways - a point that Feynman made many times - in fact he commented that he was much happier if, given an idea, he could get the same effect out of several different physical approaches.

So what of our wavicle electron? In this model there's no need to worry. The wavicle is doing something that is quite reasonable - natural - in the new model. An electron is the smallest way of representing an electron's structure. This means that electrons have the same structure (perhaps nearly the same structure) on both arrows - they can't "sneak" along one arrow in a disguised form. So it goes up our arrow, down the other arrow, up our arrow, down the other arrow, on and on. As it does so, it travels all of the paths it could possibly take, which is how a lump gets to behave like a wave. Note that this is another kind of backwards-in-time behaviour, which occurs in addition to the way any particle can be considered as its anti-particle moving backwards in time. It has to be so, because otherwise an infinite charge would build up at the detector every time one electron moved! When looping around the causal cycle, the electron must always carry a negative charge backwards as well as forwards in time. If it is accompanied in its travels by a positron, the positron's positive sign comes from something else (conceptually, perhaps a phase difference in its component photons, as suggested above).

From our point of view what we see is bizarre. We emit an electron from our gun and a vast number of electrons depart, although only one electron's worth of charge is missing. Half those departing electrons are actually arriving in backwards time, but they all look the same to us. The electrons go every which way towards the wall, pass through, and gather together at points on the other side where exactly symmetrical goings-on are occurring, with a positron being emitted from the detector. At the end, one electron has moved from gun to target. Or one positron has moved from target to gun in backwards time.

Now for the twist in the tale mentioned above. The way the probabilities are controlled by the waves doesn't work exactly like water waves. With water waves you just add the contributions from both waves to get the result of their interference. With electron waves you add the contributions and then square the total to get the probability of a lump arriving. The wave that gets squared like this is called a "probability amplitude". As far as I know, no-one's proposed a reason why you have to square like this. Here, it's not a problem. There are two arrows of time, two causalities, and at the smallest scales a structure can't be explicit on one arrow and hidden in noise on the other - it must be a symmetrical flow. So we have two causalities that must both be satisfied by the symmetrical evolution of symmetrical structures. This is the electron going one way, and the positron going backwards in time. The looping behaviour of the electron is matched, loop for loop, by a positron going in the other direction in time. Every place where the electron's paths add up will be a "happy" place for it (which contributes regular, un-squared probability), and it must be matched by an exactly symmetrical position for the positron's emission. Multiply the positron's probability by the electron's probability to get the joint probability, and you've squared the value that comes from the regular wave distribution!
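
To restate that closing step as a toy calculation (this is just the argument above in miniature, not standard bookkeeping): give each landing point an un-squared "happiness" value from the electron's summed paths, let the backwards-travelling positron's matched flow contribute the same value, and the joint likelihood of the complete transaction is their product - exactly the square the usual rule demands:

    # Toy restatement of the argument above. 'amplitude' stands for the un-squared
    # value from the ordinary wave sum at some landing point; numbers are arbitrary.

    def joint_probability(amplitude):
        electron_weight = amplitude      # the forwards-in-time flow
        positron_weight = amplitude      # the exactly symmetrical backwards flow
        return electron_weight * positron_weight

    for a in (0.1, 0.5, 1.0):
        print(f"{a} -> {joint_probability(a):.2f}")   # 0.01, 0.25, 1.00 - the amplitude squared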

Why do we have probabilities in the problem at all? Think again about the two arrows of time. On our arrow, a researcher arranges various components, carefully adjusts the amount of energy fed into them, and out pops an electron from the emitter. Nothing statistical about that! (Of course, there are problems with noise in the apparatus, but that's usually overlooked. In this model, the noise might be better thought of as a flow of highly encrypted data from the future, as described in the discussion of chaotic issues.) For the positron, the flows of energy are no less rational - just not in a way that is accessible to the researcher. Heat flows from the cold laboratory air into the warm photographic plate, focuses at a point, and pops off a positron which flies off towards the researcher's gun. At the time of firing, the researcher cannot know which heat, from where, will be focussing at the other end of the experiment in ten minutes' time. So it can only be considered as a statement of probabilities given what is known - the location of the gun with respect to the slits.

Resonance, Tunneling and Particle Creation

How does this wavicle trick of looping around every possible path actually work? Why should the electron loop through all paths, and not just a few of them? Perhaps we could invent a "rule" that says that the electron must "fill" all the possible spaces. We have rules of that kind in conventional QM, and although they seem to be true, they don't tell us much. We've seen before in this thinking a Ghost Not based statement that doesn't actually say anything - perhaps this is another region of Ghost Not distortion.

It would make life much easier if the way electrons manage to fill all possible paths is in exactly the same way as water waves do it - by waving something that is actually there. That isn't an option, because there is nothing but empty space between the gun and the photographic plate...

The Ghost Not causes people to transcribe from real space, which has the properties of real space, to a null space on their internal whiteboards. When we've found the Ghost Not before, it's helped to turn everything inside out. Assume that space is quantized like everything else. How do we know that the particles we are made of, the photons we exchange between ourselves to see each other and other objects, and everything that we call "existing", isn't defined by holes in an otherwise full medium? Why can't we be like the missing tile in the sliding tile puzzles, which seems to move long distances while all the time it's really many different tiles moving short distances? Or like a hole current in a semiconductor? Sure, we'll need to find dynamics that stop holes merging unless specific conditions are met, but that shouldn't be any harder than doing it the other way, and it seems we can do anything with solitons. Now we can make the electron "sense" all possible paths in the same way that a water wave does it - by waving space tiles.

Is there anything else space tiles can do for us, to justify their additional complexity in the theory? We are allowed to multiply hypotheses, so long as when we do it we get a lot of other stuff for free, making the theory more economical overall. Think first about the bizarre effect of quantum tunnelling. Here, a particle disappears at one point, and re-appears at another. It does not seem to traverse the intervening space, and it seems that it makes the transition in a constant time interval, regardless of the distance travelled. There was a demonstration a couple of years ago where some people hooked up some tunnelling diodes and played some Mozart along them. The speed gain from the tunnelling was supposed to have made up for the time spent languishing in electronics between diodes, leading to a nett speed that was faster than light. With space tiles, this kind of behaviour again isn't so weird. Usually holes are propagated by adjacent tiles swapping positions so that the hole goes hop, hop, hop. With the right conditions however, a hole can be moved from one place to another without travelling through the intervening space, by shunting a whole line of tiles along by one position. Qualitatively at least, space tiles have something going for them here.
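
A toy of the tile picture in the two preceding paragraphs (purely illustrative - no claim that this is the real dynamics): a one dimensional row of tiles with a single hole. Ordinary propagation swaps the hole with a neighbour one step at a time, while the "tunnelling" move shunts a whole run of tiles along by one place, relocating the hole without it ever visiting the positions in between:

    def show(tiles):
        print(" ".join("." if t is None else str(t) for t in tiles))

    def hop(tiles):
        """Move the hole one place to the right by swapping it with its neighbour."""
        i = tiles.index(None)
        tiles[i], tiles[i + 1] = tiles[i + 1], tiles[i]

    def shunt(tiles, target):
        """Relocate the hole rightwards to 'target' by sliding the whole run of tiles one place."""
        i = tiles.index(None)
        tiles[i:target + 1] = tiles[i + 1:target + 1] + [None]

    tiles = [None, 1, 2, 3, 4, 5, 6, 7]
    show(tiles)               # . 1 2 3 4 5 6 7
    hop(tiles); hop(tiles)
    show(tiles)               # 1 2 . 3 4 5 6 7   - the hole hopped two places
    shunt(tiles, 7)
    show(tiles)               # 1 2 3 4 5 6 7 .   - the hole "tunnelled" to the far end in one move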

Next, there are the virtual particles. These are particle/anti-particle pairs that pop up and disappear all the time, all through "empty space". In conventional thinking, they are a consequence of the inherent uncertainty of the energy level in the space. With the model of the conjugate values and the idea of space tiles, we can see virtual particles as "gaps" between the space tiles that do merge and break apart again.

It might be interesting to consider the arrangements of space tiles required to get the observed behaviour of things like photons, electrons, and positrons. In particular it would be nice to have an arrangement where a figure-and-ground effect produces a half-cycle phase difference between the electron structure and the positron structure. That way, a particle really becomes its own anti-particle travelling backwards in time. Such a model might be able to explain the CP symmetry violation whereby it is slightly more likely that a matter particle rather than an anti-matter particle will be formed in situations that should produce both with equal probability. The slight energy difference that underlies this might be related to the changing scale of the metric which produces the red shift, and lead to another way to measure the scaling rate.

Experiential Diodes

This view does raise some problems. Why do we perceive time as we do, and why do thermodynamic phenomena appear to occur in forwards time at all? First, our subjective experience. In this model, we are growing information structures on our arrow of time. Imagine that yesterday I learned that roses are red, and today I learned that violets are blue. Today I have memory of a state when my knowledge was lesser, and so can perceive what we call the flow of time. The situation yesterday is not symmetrical. Yesterday I had no knowledge of what I had lost on the other arrow of time - because that is the knowledge I had lost! Subjective experience is in this way a kind of experiential diode within the overall information system of the universe. It is only this effect, which should be obvious to the most primitive of minds provided there is no Ghost Not concealing it, that causes us to insist that the universe has a preferred arrow of time at all.

Chaotic Issues

Now we must consider why the "thermodynamic" causality that we see appears to exist. Most of physics is indeed fully reversible, and it is only the tendency of aggregates of matter to behave in a characteristic "thermodynamic" manner that we must understand. For example, if causality is something that has to be understood backwards, why do people only uneat bananas when there is a handy empty skin lying on the floor, and why do atmospheric vibrations and vibrations in the floor spontaneously combine at just the right moment to cause the skin to leap up off the floor and wrap itself exactly around the banana?

The universe described in this model has a very significant difference compared to the conventional one. Both start in a Big Bang, both explode, and both have a sensible regime of cause and effect in them. But the new one starts with the stuff in the Big Bang in a highly ordered state, whereas the old one has a completely homogeneous Big Bang. When the new Bang happens, the stuff that comes out is arranged into an ordered state which then decays. This allows the ordering to have properties that are not immediately obvious unless one knows what one is looking for.

Firstly, the ordering seeds structure which, as it tries to homogenise itself, is sabotaged by structural information coming from adjacent decaying structure. Imagine two rock stars in adjacent hotel rooms, each bent on destroying all the televisions in their rooms. Both hurl TVs out of their window with manic zeal, but as they do this, both have TVs flung into their rooms because of the activities of the rock star next door. The initial state can be arranged to allow more or less orbital reconstruction of decaying structure, in the form of recurrences as time passes. This provides us with a generally apparent fractal geometry of nature, which requires no additional deep principles to enable it. The cause is an arrangement of the end state that is not required solely by fundamental constants and laws, but is the case nonetheless.

Secondly, the ordering has inherent properties of its own. In other words, it can have function. There is a class of machines called reversible computers, studied in particular by Charles Bennett. Reversible computers are theoretical machines that do not suffer mechanical losses (perhaps they have superfluid bearings), and can drift backwards and forwards between input and result without expending energy. They show that the energetic cost of computation is zero - the problem is that the computer has to be set up to latch when it actually drifts to the end of the computation, and then we have to use energy to unlatch it to do another one. If you only use a Bennett machine once, computation is free. Now to achieve complete losslessness, it is necessary for the whole computer to be able to run backwards as easily as it can forwards. It must be able to store its intermediate results in internal registers, so it can read them out and reconstruct the inputs to drift backwards. To do this, the designer must simply arrange the computer's components so that they have this property when they interact. If the end state is so arranged, it would have the effect of making thermal noise not noisy at all, but a non-lossy encoding of order that looks random because the encoding is very concealed. This is perfectly allowable by the thermodynamics that we know are true, since the general rules for energy exchange do not apply to a perfectly closed system, as is often noted. The floor vibrations don't combine like that to make the skin leap off the floor by accident - the end state is carefully arranged to ensure that all interactions are non-lossy.
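
The logical half of this is easy to demonstrate (the frictionless hardware, of course, is not): a gate such as the Toffoli gate (controlled-controlled-NOT) is its own inverse, so a circuit built from such gates can be driven backwards from its outputs to recover its inputs exactly, with nothing erased along the way. A minimal sketch:

    from itertools import product

    def toffoli(a, b, c):
        """Reversible gate: flip c if and only if a and b are both 1; a and b pass through."""
        return a, b, c ^ (a & b)

    for bits in product((0, 1), repeat=3):
        forward = toffoli(*bits)          # run the gate forwards
        backward = toffoli(*forward)      # run it again: this exactly undoes it
        assert backward == bits           # the inputs are recovered, nothing was erased
        print(bits, "->", forward, "->", backward)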

It almost seems superfluous to observe that the orbital reconstructions of early structures in later eras provide a reason why the geometry of nature is fractal.

Mach's Principle and Planck's Constant

Mach's Principle is the idea that all motion in the universe is relative to the sum total of all the stuff in it - it is a way of getting some kind of reference frame to think about things in, where there does not seem to be any explicit reference frame available. The big problem with Mach's Principle is that while it is easy to state, and easy to observe, it is very, very hard to justify or explain. It doesn't explain anything - it just puts a label on a very big mystery. Of course, in M0 cultures, "knowing" the label to stick on a thing and "knowing" what it is are always confused, so many people do not realise that no-one actually knows what physical mechanisms are designated by the term "Mach's Principle".

In this model, the answer is easy. We are seeing the evolution of the universe in reverse. The Big Bang is in the future. All particles are on trajectories that are exploding out of a single point, and so the motion of all particles can be related with respect to that future point. In our epoch, we can see that all motion is with respect to the sum of all the stuff in the universe, but there is no faster than light communication between particles required, since the motion is co-related by a common cause in our future.

This model describes a universe which in our epoch (and on our arrow of time) appears to be expanding because of a perspective effect, while it is in reality shrinking. It also describes a universe where spontaneous appearance of structure is occurring from an initial featureless state. We will not find any seeds of cosmological anisotropy in the Big Bang - there are none. This explains how we can seem to see a featureless point in our extreme past, but there is another characteristic of the observed Big Bang - it also seems to have been highly energetic. How can we reconcile a shrinking universe with a very high energy density in the past? One solution to this puzzle might be to return to the two length scales discussed at the beginning of the paper. One length scale measures physicists' eyeballs and measuring sticks, while the other measures starlight waves. It is the variation between these two scales that causes the perceived red shift as the universe shrinks.

Planck's Constant tells us how much energy a particle of a given frequency or wavelength carries. It relates the energy of particles (energy which we ultimately measure as the ability to accelerate massive particles) to their wavelength - in the case of unbound photons, their colour. While the following idea by no means settles the question, it is tempting to ask if, over cosmological time, Planck's Constant might be changing. In the early universe, the variation between the two length scales might have been sufficient that one could do a very great deal of work with very little energy, with the amount of energy required to accomplish the same effect on massive particles going up year on year. This might fit nicely with the idea that energy that has been out of play - travelling through space - since earlier epochs is redder, and hence can do less work when it reaches its destination than it could have at earlier times.
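
For reference, the present-day relation is E = h * c / wavelength for a photon, so redder light does indeed carry less energy per photon; whether h itself drifts over cosmological time is, as the text says, left as an open question here. A worked example with today's constants and illustrative wavelengths:

    # Photon energy E = h * c / wavelength: redder (longer-wavelength) light carries
    # less energy per photon. Constants are present-day values; wavelengths illustrative.
    h = 6.626e-34        # Planck's constant, J s
    c = 2.998e8          # speed of light, m/s
    eV = 1.602e-19       # joules per electron-volt

    for name, wavelength in (("green, 500 nm", 500e-9), ("red, 700 nm", 700e-9)):
        energy = h * c / wavelength
        print(f"{name}: {energy:.2e} J = {energy / eV:.2f} eV")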