Wow what’s happening here???:
^^^^ Max run length falls short of system size for width 503, which indicates monoliths. Widths 502 and 504 have complete or near-complete annihilations (a run of 500 cells) after ~500 steps:
Remember that the max run lengths for each time step are averages over 10 runs or so, so this periodic effect is a property of the average maximum run length for width 503.
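For reference, here's a quick Python sketch of the run-length computation on a periodic configuration (the actual analysis is in Mathematica; the function name and details here are illustrative, not the notebook's code). The one subtlety is that a run wrapping around the periodic boundary should count as a single run:

```python
import itertools

def run_lengths(config):
    """Run lengths of a binary configuration with periodic boundaries.
    A run that wraps around the boundary is counted as one run."""
    n = len(config)
    if len(set(config)) == 1:
        return [n]  # a single run covering the whole ring
    # rotate so the configuration starts exactly at a run boundary
    start = next(i for i in range(n) if config[i] != config[i - 1])
    rotated = config[start:] + config[:start]
    return [len(list(g)) for _, g in itertools.groupby(rotated)]

config = [1, 1, 0, 0, 0, 1, 0, 1, 1]  # the 1-runs at the two ends join across the boundary
print(max(run_lengths(config)))  # → 4
```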
Lots of monoliths here.
But what is causing periodicity in the average max run length?
Can the trajectory of a particle in rule 146 be controlled?
Simplest case: 1 particle in the system, try to make it stay at the same position for all time.
Is there an initial condition that would achieve that? Perhaps a perfectly symmetric one?
In an evolution where the particle stays at one position for all time, does the particle size distribution change vs. an evolution where the particle meanders? i.e., does the distribution of even run lengths change?
Is there some trade-off between the size of fluctuations in particle size and the deviation from a random walk? I mean, if you try to control the particle trajectory, does it require messing with the size of the particle much? I’m thinking of a Heisenberg uncertainty kind of thing, where localizing the particle in space means the momentum becomes uncertain. Not sure what the right analogy is between momentum and particle size … or if there’s any analogy there at all. Maybe particle size is literally like the mass?
Hmmm, I think it’s impossible to have a perfectly symmetric initial condition that has an odd run on the boundary, such that no particle appears there.
So I have to put the single particle on the boundary intentionally, and then RotateRight …
Right that works, but I happened to choose a width close to 2^n so there was a big annihilation:
Choose a different width:
Generalize to any width …
Symmetric random initial with a length of about n:
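For concreteness, here's a Python sketch of one way to build a mirror-symmetric random initial condition (illustrative only; the notebook code is Mathematica, and the function name is mine):

```python
import random

def symmetric_initial(width, seed=None):
    """Random initial condition that is mirror-symmetric about its centre.
    For odd width the middle cell is free; the halves are reflections."""
    rng = random.Random(seed)
    half = [rng.randint(0, 1) for _ in range(width // 2)]
    middle = [rng.randint(0, 1)] if width % 2 else []
    return half + middle + half[::-1]

ic = symmetric_initial(11, seed=0)
print(ic, ic == ic[::-1])  # the condition equals its own reflection
```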
From my thesis work many moons ago …
I made this really cool map of the number of iterations it took for mean-field approximations of the density of ECAM rules to converge …
ECAMProbs returns the probability vector for an ECAM, which is a modification of that for an ECA, where the centre cell in each 3-cell neighborhood has probability p[t-δ].
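Here's a Python sketch of my reading of that mean-field iteration (the original `ECAMProbs` is Mathematica; all names and the exact memory scheme here are my own reconstruction from the description above, so treat it as illustrative). The side cells use the current density p, while the centre cell uses the delayed density p[t-δ]:

```python
def rule_table(rule):
    """Map each 3-cell neighborhood (a, b, c) to the ECA rule's output bit."""
    return {(a, b, c): (rule >> (a * 4 + b * 2 + c)) & 1
            for a in (0, 1) for b in (0, 1) for c in (0, 1)}

def ecam_mean_field_step(rule, p, q):
    """Mean-field density update: side cells have the current density p,
    the centre cell has the delayed density q = p[t - delta]."""
    table = rule_table(rule)
    def pr(x, d):  # probability a cell of density d is in state x
        return d if x == 1 else 1 - d
    return sum(table[(a, b, c)] * pr(a, p) * pr(b, q) * pr(c, p)
               for a in (0, 1) for b in (0, 1) for c in (0, 1))

def iterations_to_converge(rule, p1, p2, delta=1, tol=1e-9, max_iter=10_000):
    """Iterate the mean-field map from the two initial densities (p1, p2)
    until the density stops changing; return the iteration count.
    This is the quantity the convergence maps plot."""
    history = [p1, p2]
    for t in range(max_iter):
        new = ecam_mean_field_step(rule, history[-1], history[-1 - delta])
        if abs(new - history[-1]) < tol:
            return t
        history.append(new)
    return max_iter

print(iterations_to_converge(30, 0.3, 0.7))
```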
Number of iterations required for density convergence of rule 30 as a function of the initial densities p1 and p2 on the x and y axis, respectively. p1 and p2 are the densities of the two rows comprising the initial condition for the ECAM-1 rule.
It’s a pretty interesting map!
original notebook from April 2006:
MapECAMDensity[rl_,T_,n_,initdensity_] returns the density after T steps of evolution of rule rl ECAM-T (memory T), with n spatial sites.
Evolution of rule 18 ECAM-20 (i.e. memory T=20) for 20 time steps:
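A Python sketch of what I mean by an ECAM-T evolution, mirroring the probability-vector description above: the left and right inputs come from the current row, and the centre input from the row T steps back. This is my reading of the definition, not a translation of the notebook's Mathematica, so the early-time fallback and boundary handling may differ from the original:

```python
def eca_output(rule, a, b, c):
    """Output bit of an elementary CA rule for neighborhood (a, b, c)."""
    return (rule >> (a * 4 + b * 2 + c)) & 1

def ecam_evolve(rule, memory, init_rows, steps):
    """ECAM evolution sketch: left/right inputs from the current row, the
    centre input from the row `memory` steps back (falling back to the
    oldest row while the history is still short). Periodic boundaries."""
    rows = [list(r) for r in init_rows]
    n = len(rows[0])
    for _ in range(steps):
        cur = rows[-1]
        past = rows[-memory] if len(rows) >= memory else rows[0]
        rows.append([eca_output(rule, cur[(i - 1) % n], past[i], cur[(i + 1) % n])
                     for i in range(n)])
    return rows

# rule 18 ECAM-20 from a single seed, run for 20 steps
rows = ecam_evolve(18, 20, [[0] * 10 + [1] + [0] * 10], 20)
print(sum(rows[-1]) / len(rows[-1]))  # density after 20 steps
```

With memory = 1 this reduces to a plain ECA evolution, which is a handy sanity check.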
Varying the initial density:
Density after T time steps as a function of the density of the initial condition:
Same for rule 232 ECAM-20:
… and for rule 146 ECAM-20:
Take an average of 10 runs for each point (and show the point plus and minus one standard deviation from the mean above and below the average curve):
Single-particle correlations from just one evolution, for width 251:
Now show the histogram as a function of correlation distance:
The boundaries above and below in the plot define an envelope for the correlations at this system width:
How does the shape of the envelope change as the system width is increased?
These plots totally look like jellyfish.
It’s time to look at the autocorrelations in the rule 18 single-particle movement as a function of system size, for a larger range of system sizes (several orders of magnitude), with some statistics for each width.
Here’s the simulation code …
Now grab the resulting data files:
Each index here is a different width:
I chose widths logarithmically spaced spanning 3 orders of magnitude, in the range 100 to 10,000:
These widths will be evenly spaced on a log plot.
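Something like this Python snippet generates the widths (the exact number of widths per decade in my batch runs is a guess here; the rounding-then-dedup step is just to keep the widths as distinct integers):

```python
import numpy as np

# widths logarithmically spaced over 3 orders of magnitude, 100 to 10,000
# (21 samples here is illustrative, not necessarily the count I actually used)
widths = np.unique(np.round(np.geomspace(100, 10_000, 21)).astype(int))
print(widths)
```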
Take a look at a single run at width 100:
Here’s the particle position vs. time, run for 50k steps:
The particle position went periodic. Not surprising for such a small system width. Here’s the correlation function, which also has a strong periodicity:
Let’s look at all of the runs for width 100:
(Note that my particleposition code treats periodic boundary crossings as reflections.)
And all the correlations at this width:
The correlations have a pretty regular structure, since all the evolutions seem to have gone periodic within about 10k steps. Given this regular structure, it seems worthwhile to take the average autocorrelation at each distance:
Do this for all the other system widths:
The correlation functions at each width have a pretty similar structure. It isn’t clear if I’m resolving any real differences in the correlations here at all, actually. It could be that the particle really isn’t much affected by the system size.
Vitaliy recommended that I look at the power spectrum, which is related to these correlation functions via the Wiener-Khinchin theorem. That’s the next thing to do I think.
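For when I get to it: by the Wiener-Khinchin theorem the power spectrum is the Fourier transform of the autocorrelation, which in practice just means taking |FFT|² of the mean-subtracted position timeseries. A minimal Python sketch (the sinusoid at the end is only a sanity check, not rule 18 data):

```python
import numpy as np

def power_spectrum(x):
    """Periodogram of a timeseries: |FFT|^2 of the mean-subtracted signal.
    By the Wiener-Khinchin theorem this is (up to normalization) the
    Fourier transform of the autocorrelation function."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return np.abs(np.fft.rfft(x)) ** 2 / len(x)

# sanity check: a pure sinusoid concentrates its power at one frequency
t = np.arange(1024)
spec = power_spectrum(np.sin(2 * np.pi * 8 * t / len(t)))
print(np.argmax(spec))  # → 8, the sinusoid's frequency index
```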
I ran a few batch jobs with longer evolutions of rule 18 (100k steps, up from 10k), measuring much longer-range correlations than before (out to correlation distances of 50k and 100k).
My first task was to speed up the correlation distance function as much as possible.
Generate a random walk:
My original version of the correlation function, which I want to speed up:
Timing for a single correlation data point (lag 1):
I want to do this without using Partition, since I think it’s taking a lot of memory.
Try using RotateRight instead … already an order of magnitude faster:
A similar version which doesn’t wrap at the boundaries. I don’t want to wrap around the boundaries in *time*, because there’s no reason to correlate the beginning of the timeseries with the end. The normalization uses a few fewer points of the list, so the correlation comes out numerically slightly different:
… but it’s just as fast as the RotateRight method, and I think it’s more accurate for my purposes since it doesn’t do temporal wrapping.
So here’s my new correlation function:
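The notebook version is Mathematica, but the idea translates to a short Python sketch: at each lag, only the overlapping (non-wrapped) pairs enter the sum, and the normalization uses the same truncated segments. The function name and the random-walk test data are my own:

```python
import numpy as np

def autocorrelation(x, lag):
    """Autocorrelation at a single lag >= 1, without wrapping in time:
    only the len(x) - lag overlapping pairs enter the sum, and the
    normalization uses the same truncated segments."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    a, b = x[:-lag], x[lag:]
    return np.dot(a, b) / (np.sqrt(np.dot(a, a)) * np.sqrt(np.dot(b, b)))

# a random walk, as in the timing tests above
rng = np.random.default_rng(0)
walk = np.cumsum(rng.choice([-1, 1], size=100_000))
print(autocorrelation(walk, 1))  # near 1: a walk barely moves in one step
```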
Grab the particle correlation data for a rule 18 evolution of 100k timesteps (still using a width of 199 cells):
And grab the actual position vs. time for the particle:
Got about 50k correlation data points:
Here’s the beginning of the walk:
But it clearly went periodic:
And the correlations also went periodic:
So I did another evolution, this time using width of 999 cells:
Now the walk doesn’t go periodic, because the underlying evolution didn’t go periodic:
And behold the long-range correlations!
This is an interesting plot. Correlations fall gradually here over about 50k lag, then hover around a negative value of about -0.4 from 50k to 100k lag.
1. Get a statistical distribution of correlations at each lag, from multiple runs of the rule 18 particle
At the last ruleshack meetup, we discussed why the autocorrelation curve for the position of the rule 18 particle appears to be linear. And we realized that I’m probably just not looking at large enough correlation distances (lags).
The particle position vs time looks like this:
And the autocorrelations out to lag 100 look like this:
During the meetup, we looked at the particle position with the mean subtracted (since that’s what the autocorrelation function is based on):
We noticed that the typical time between zero crossings (when the curve crosses the x-axis in this plot, or equivalently the instants when the particle passes through its average position) is greater than the maximum lag of 100 time steps I was using. In fact, looking at the mean-subtracted position plot above, sometimes it’s more like 1000 time steps between crossings. That means I need to sample much larger lags in order to capture more features of the random walk.
To be more precise, I can compute the crossing intervals, and the distribution of interval sizes:
The average interval size:
The interval distribution:
It looks like the average crossing interval size is of the order of a few hundred time steps, with intervals of order 10 being most common, but with some rare intervals being on the order of 1000 time steps.
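The crossing-interval computation is simple enough to sketch in Python (the real data is the rule 18 particle track; the synthetic walk and the function name here are just for illustration):

```python
import numpy as np

def crossing_intervals(position):
    """Times between zero crossings of the mean-subtracted position."""
    x = np.asarray(position, dtype=float) - np.mean(position)
    signs = np.sign(x)
    crossings = np.where(np.diff(signs) != 0)[0]  # indices where the sign flips
    return np.diff(crossings)

# illustrate on a synthetic random walk
rng = np.random.default_rng(1)
walk = np.cumsum(rng.choice([-1, 1], size=50_000))
intervals = crossing_intervals(walk)
print(intervals.mean(), intervals.max())
```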
Call the average crossing interval τ. Then the products p(t)*p(t+r) will tend to be positive for lags r < τ. But for lags r > τ, there will be more products where p(t) and p(t+r) have different signs, and thus contribute negative terms to the sum over t. So I’d expect the autocorrelations to start becoming negative somewhere around r ≈ τ.
Here’s what the correlations look like out to lag 2000 (which I ran on a server). The curve is looking more interesting now, with the linearity going away at around lag 1000.