## Thursday, January 16, 2014

### Fractal Universe Part 2

From Math Come Fractals

What exactly is a fractal? An excellent online resource for fractals is the educators' guide at fractalfoundation.org.

A fractal is usually defined as any object or mathematical set that displays self-similarity on all scales. The object doesn't have to have exactly the same structure at all scales, but the same "type" of structure must appear. In the Mandelbrot set mentioned in the previous article, for example, the black figure reappears over and over, although in some cases it appears distorted. The mathematical set, or formula, behind a fractal is often very simple. The formula for the Mandelbrot set is shown left. You start by plugging in a constant value of C for each test of the equation you want to perform. Z starts out as zero. The equation gives you a new Z. You plug this back into the equation as the old Z and run it again, and so on. Each run through the equation is called an iteration. It's called a set as well as a formula because you are generating a collection of numbers.

We can rewrite the formula as Zₙ₊₁ = Zₙ² + C. If we start with C = 1, we get:

Z₁ = Z₀² + C = 0 + 1 = 1
Z₂ = Z₁² + C = 1 + 1 = 2
Z₃ = Z₂² + C = 4 + 1 = 5
Z₄ = Z₃² + C = 25 + 1 = 26
Z₅ = Z₄² + C = 676 + 1 = 677

If you graphed these results against n, you would get a steeply rising curve because the numbers grow explosively toward infinity, roughly squaring with every step. Most starting values of C race off to infinity like this, but not all.

If you start with C = -0.5, you get entirely different results.

Z₁ = Z₀² + C = 0 + (-0.5) = -0.5
Z₂ = Z₁² + C = 0.25 + (-0.5) = -0.25
Z₃ = Z₂² + C = 0.0625 + (-0.5) = -0.4375
Z₄ = Z₃² + C = 0.1914 + (-0.5) = -0.3086
Z₅ = Z₄² + C = 0.0952 + (-0.5) = -0.4048
Z₆ = Z₅² + C = 0.1638 + (-0.5) = -0.3362
Z₇ = Z₆² + C = 0.1130 + (-0.5) = -0.3870
Z₈ = Z₇² + C = 0.1498 + (-0.5) = -0.3502

When you graph these results, you get an oscillation of values which gets smaller and smaller. Eventually the results converge on a value of about -0.366.

If you start with C = -1, you get a value of Z that oscillates between two fixed points:

Z₁ = Z₀² + C = 0 + (-1) = -1
Z₂ = Z₁² + C = 1 + (-1) = 0
Z₃ = Z₂² + C = 0 + (-1) = -1
Z₄ = Z₃² + C = 1 + (-1) = 0
Z₅ = Z₄² + C = 0 + (-1) = -1

This oscillation will continue forever.
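The arithmetic above is easy to check by hand, but a few lines of code make it easy to experiment with other values of C. Here is a minimal sketch (the function name is my own, not from any particular tutorial):

```python
# Iterate z -> z*z + c starting from z = 0, as described above,
# and return the sequence Z1, Z2, ... so we can watch its behaviour.
def iterate(c, steps):
    z, seq = 0.0, []
    for _ in range(steps):
        z = z * z + c
        seq.append(z)
    return seq

print(iterate(1, 5))     # 1, 2, 5, 26, 677: races off to infinity
print(iterate(-0.5, 8))  # oscillates, settling toward about -0.366
print(iterate(-1, 5))    # flips between -1 and 0 forever
```

Running it for C = -0.5 long enough shows the limit is (1 - √3)/2 ≈ -0.366, the fixed point of z = z² - 0.5.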

The Mandelbrot set is made up of all the values of C for which Z stays finite, so most starting values, such as C = 1, are thrown out because Z in those cases goes to infinity.

So far we've looked at simple starting values for C: 1, -0.5 and -1. To graph the Mandelbrot set and see the beautiful image, we need to plot results along two axes on a plane. The plane we use is not your basic X-Y Cartesian plane. Instead we use something called the complex plane, where the real axis, X, contains all the ordinary numbers we've dealt with so far. The Y axis, however, is made up of imaginary numbers. The most basic imaginary number is i, and it is equal to the square root of -1 (a number that doesn't exist among the real numbers).

In other words, instead of plotting pairs of ordinary numbers like we do on a Cartesian plane, we plot complex numbers, each of which has two components: a real number and an imaginary number. A real number is an ordinary number. It can be positive or negative, a fraction or a whole number. An imaginary number is a real number times the special number i. A complex number, therefore, has the form a + bi. An example of a complex number would be written as -0.75 + 0.1i. For the Mandelbrot set, both Z and C are complex numbers.

Values of C centred around the complex number -0.75 + 0.1i, for example, give you a portion of the Mandelbrot set called Sea Horse Valley (where the black blob's head and body connect) when plotted. The squiggly "hairs" look like sea horses if you use your imagination. To see several more zooms of this valley, Wikipedia offers a great set of images in striking detail here. (Wolfgang Beyer;Wikipedia)
This is how you go from a simple formula to the beautiful image, but you need to run the equation many millions of times to get a detailed two-dimensional image of a fractal.

The mathematics behind the Mandelbrot set is actually very old, but it was not until the computer age that its true beauty as a fractal was discovered. Benoit Mandelbrot, a mathematician, coined the word fractal in 1975, a few years before the first computer images of the set that now bears his name were made. Luckily he had access to IBM computers, and he could create his own fractal images on them, letting the computer do the tedious job of computing all the values.

The math of the Mandelbrot set is fairly straightforward, but the solutions show some unpredictability built into the system. You are squaring old Z every time you run through the equation, so you expect new Z to get bigger and bigger and approach infinity. For most starting values of C that's what happens, and there's no surprise. However, for some values of C, the new Z converges on a single value or it alternates between a fixed set of values, something you don't expect. These points correspond to the black shapes in the Mandelbrot fractal. Just outside the edges of each black shape, the values of C make the equation tend toward infinity. The colour is proportional to the speed at which the equation diverges toward infinity.

Watch this 5-minute video as it zooms in on the Mandelbrot set, one of the deepest zooms ever performed, 2.1 x 10²⁷⁵ times. It demands incredible computing power. The background music is called "Research Lab" by Dark Flow, a perfect choice. A magnification this deep would mean that the original figure in the animation would be far larger than the diameter of the universe!

If you have some programming skill, you can set up your own algorithm and program your PC or Mac to create your own Mandelbrot image and then zoom in on it. Andrew Williams at http://plus.maths.org offers an online tutorial to get you going.
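To give a taste of what such a program involves, here is a rough escape-time sketch of my own (not the code from the plus.maths.org tutorial): for each point c on a grid of the complex plane, count how many iterations it takes |Z| to exceed 2, and mark the points that never escape.

```python
MAX_ITER = 50  # iteration budget; points still bounded after this are treated as "in the set"

def escape_count(c, max_iter=MAX_ITER):
    """Iterate z -> z*z + c from 0; return the step at which |z| exceeds 2."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n          # escaped: c is outside the set
    return max_iter           # still bounded: treat c as inside

def ascii_mandelbrot(width=60, height=24):
    """Draw a crude text rendering of the region -2 < x < 0.8, -1.2 < y < 1.2."""
    rows = []
    for j in range(height):
        y = 1.2 - 2.4 * j / (height - 1)           # imaginary part of c
        row = ""
        for i in range(width):
            x = -2.0 + 2.8 * i / (width - 1)       # real part of c
            row += "#" if escape_count(complex(x, y)) == MAX_ITER else " "
        rows.append(row)
    return "\n".join(rows)

print(ascii_mandelbrot())
```

Swapping the "#" for a colour keyed to the escape count, and shrinking the grid spacing, gives the familiar full-colour renderings.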

Non-differentiability of Fractals

The Mandelbrot set is an example of a fractal curve. A fractal curve is any geometric pattern that is repeated at smaller and smaller scales to produce shapes and surfaces that cannot be represented by classical Euclidean geometry. It is a curve that bends and curls at every level of magnification.

The Koch snowflake is one of the earliest mathematical fractal curves to be discovered. It appeared in a 1904 research paper. The animation below right goes through seven iterations of the curve, from a triangle to a complex snowflake outline:

The Koch curve, like all fractal curves, has some interesting properties. For example, it has infinite length. Each iteration creates four times as many line segments as the one before, and the length of each segment is one-third the segment length in the previous iteration, so the total length is multiplied by 4/3 with each iteration. The length of the curve after n iterations is (4/3)ⁿ times the original triangle's perimeter, and this grows without bound as n goes to infinity.
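This growth is easy to tabulate. The sketch below (my own helper, assuming a starting triangle with a perimeter of 3 units) just applies the 4/3 rule:

```python
# Each Koch iteration multiplies the perimeter by 4/3, so after n
# iterations the perimeter is initial * (4/3)**n.
def koch_length(initial_perimeter, n):
    return initial_perimeter * (4 / 3) ** n

for n in (0, 1, 2, 10, 50):
    print(n, koch_length(3, n))   # the perimeter keeps growing with n
```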

If we look closely at the Koch snowflake we can get a clue about how this kind of mathematical curve is non-differentiable.

First, to see what a differentiable curve is, take any kind of non-fractal curve, like the one drawn in black in the graph below.

You can take many slope measurements along this black curved line by drawing many lines (in red) tangent to the curve at every point, and this way you will get a set of numbers that describes its slope. You are basically treating the curve as a head-to-tail series of very tiny straight lines. To measure each one, you turn it into the hypotenuse of a right triangle (orange), which gives you two values: rise over run. The more measurements you take, the better your approximation of the real slope. If you could take an infinite number of measurements like this, you would get a perfect reproduction of the curve.

To get this kind of perfect measurement without infinite work, you use a function derived from the original one, which in mathematics is called its derivative. The slope of the tangent line is equal to the derivative of the function at each point on the curve (three points on the curve are marked green as examples). The process of finding the derivative is called differentiation. Instead of calculating the slope at every single point by hand, you let the function do the heavy lifting for you: evaluating the derivative at each point gives you a perfectly accurate slope. Differentiation offers a way to describe not just curves but any smoothly changing phenomenon, like a chemical reaction, for example. This is the basis of calculus, which is widely used in physics to describe how systems change over time. Calculus is fundamental to physics because with it you can study how matter and energy interact, and how they change as they do so. Calculus offers a model, an approximation, of the real system, one that is accessible for you to analyze.
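The rise-over-run idea can be tried numerically. This sketch (the names are my own) estimates the slope of the smooth curve f(x) = x² at one point using smaller and smaller runs, and the estimates close in on the exact derivative, 2x:

```python
# Approximate the slope of f at x using a right triangle of width "run":
# slope ~ rise / run, which approaches the derivative as run shrinks.
def slope_estimate(f, x, run):
    rise = f(x + run) - f(x)
    return rise / run

f = lambda x: x * x
for run in (1.0, 0.1, 0.001):
    print(run, slope_estimate(f, 3.0, run))   # approaches 6, the derivative of x**2 at x = 3
```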

If we take our Koch snowflake and try to take the same kinds of slope measurements everywhere on it, we'll find that it's impossible, because you can never get a segment of curve short enough to act as a straight line. Every segment breaks down into infinitely many smaller segments. Remember, the pattern reappears over and over again at smaller and smaller scales, to infinity. This makes the Koch snowflake, and all fractal curves, non-differentiable at any point.

Fractal Dimensions

If you plot a measure of the Mandelbrot set, or any fractal set, against the scale of measurement on a log-log graph, you will get a straight line. The slope of the line gives you something called the fractal dimension. A fractal dimension is unique to fractals, and much different from what we ordinarily think of as dimension in three-dimensional space.

Any ordinary point has a dimension of zero. A line has a dimension of one (length). A square is 2-dimensional (2-D; length + height) and a cube is 3-D (length + height + depth). This way of describing dimension is called topological dimension. It works for Euclidean geometry and we are all fairly familiar with it, but it doesn't work for fractals.

An example that illustrates this problem is the fractal Peano curve. We call it a curve even though there is no waviness to it. A curve, in fact, is defined in math as any continuous function built from unit intervals.

Three iterations are shown below. Antonio Miguel de Campos;Wikipedia
The shape at the far right becomes a perfectly solid 2-dimensional square after many iterations, but its topological dimension remains 1 because it is made up of 1-dimensional lines. The formal definition of a fractal is any figure where the fractal dimension (2 for the Peano curve) is higher than the topological dimension (1 for the Peano curve).

Fractal dimensions don't have to be whole integers, but they can be, as in the case of the Peano curve. The Koch snowflake has a fractal dimension that isn't a whole integer. Remember that the length of its curve after n iterations is (4/3)ⁿ times the original triangle's perimeter, where n goes to infinity. Unlike the Peano curve, the fractal dimension of the Koch snowflake is very difficult to visualize, so we can use another technique instead. Each iteration replaces every segment with 4 new segments, each scaled down by a factor of 3, so we can calculate its fractal dimension by taking log 4/log 3, which is 1.26186. This is greater than the dimension of a line but less than the dimension of a two-dimensional surface. This is something you never see in Euclidean geometry OR in the geometric description of space-time. The metric tensor that describes space-time has three whole integer (Euclidean) dimensions for space and one whole integer dimension for time. In situations describing space-time close to or at the speed of light, or where gravity bends space-time, the geometry becomes non-Euclidean, but dimensions in these cases are still treated as whole integers.
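This rule, called the self-similarity dimension, is easy to compute: a shape made of N copies of itself, each scaled down by a factor of s, has dimension log N / log s. A quick sketch (the function name is my own):

```python
import math

# Self-similarity dimension: log(number of copies) / log(scale factor).
def similarity_dimension(copies, scale):
    return math.log(copies) / math.log(scale)

print(similarity_dimension(4, 3))   # Koch curve: 4 copies at 1/3 scale -> ~1.26186
print(similarity_dimension(9, 3))   # Peano curve: 9 copies at 1/3 scale -> 2.0
```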

Fractal dimension gives you an idea of the complexity of the fractal curve. A fractal curve with a dimension very close to 1, such as 1.1, behaves much like an ordinary line does. A fractal curve with a dimension close to 2 fills space much like an ordinary 2-dimensional surface does. A fractal curve with a dimension close to 3 fills space almost like a volume does.

So far we have seen fractals that are built by using a type of iteration called generator iteration. With this kind of iteration, you simply substitute certain geometric shapes with other geometric shapes.

To build a Koch snowflake, for example, you substitute each straight line segment with a bent segment that has a triangular bump in its middle third. The swapping out can actually be done using any kind of function. You can repeatedly apply a set of geometric transformations such as rotation and/or reflection (a scheme called an iterated function system, or IFS), or you can take a mathematical formula and substitute one or more different mathematical formulae for the initial formula. Formula iteration produces some of the most complex fractals. Examples of this type of iteration are the Mandelbrot set and the strange attractor set we saw in the previous article. Meteorologists use this kind of iteration to construct weather models.

The Barnsley fern is an exceptionally pretty example of a fractal created through four transformations called affine transformations. The formula for one transformation in two dimensions is shown below. Transformations in mathematics are often shown as matrices (the square brackets above). These kinds of transformations preserve points, straight lines and planes, but not angles between lines or distances between points. Each of the leaves of the fern frond shown right is related to the others by an affine transformation. For example, the red leaf can be transformed into the dark blue leaf by a combination of reflection, rotation, scaling and translation.
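For the curious, the fern is often generated with the "chaos game": pick one of the four affine maps at random, weighted by probability, and apply it over and over to a single point. The sketch below uses Barnsley's published coefficient table; the function name and structure are my own:

```python
import random

# Barnsley's four affine maps (a, b, c, d, e, f, probability).
# Each map sends (x, y) to (a*x + b*y + e, c*x + d*y + f).
MAPS = [
    ( 0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),  # stem
    ( 0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),  # successively smaller leaflets
    ( 0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),  # largest left leaflet
    (-0.15,  0.28,  0.26, 0.24, 0.0, 0.44, 0.07),  # largest right leaflet
]

def fern_points(n, seed=1):
    """Play the chaos game for n steps and return the visited points."""
    rng = random.Random(seed)
    x, y, pts = 0.0, 0.0, []
    for _ in range(n):
        r, acc = rng.random(), 0.0
        for a, b, c, d, e, f, p in MAPS:
            acc += p
            if r <= acc:
                break
        x, y = a * x + b * y + e, c * x + d * y + f
        pts.append((x, y))
    return pts
```

Plotting the points of `fern_points(100000)` as dots traces out the fern; the whole figure fits in roughly -2.2 < x < 2.7 and 0 < y < 10.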

By playing around with coefficients (a, b, c, d, e and f above) in the transformation formula, you can make mutant fern varieties, such as the three shown below. The more iterations you run through, the more complex each fern diagram gets. As you can imagine, IFS models are especially useful for creating computer-generated imagery (CGI).

(images to the left:DSP-user;Wikipedia)

From Simple To Complex

So far, we have looked at how fractals are built. We can see how the geometry of a fractal doesn't lend itself to differentiation or to ordinary Euclidean geometry, and we can see how a simple starting point can become very complex through many iterations, but we haven't really looked into how fractals are related to chaos. Where does the chaotic nature of weather, for example, come into the formula iteration process just mentioned?

To answer this, we need to take a closer look at functions and numbers. Let's take a simple formula:

xnew = bx(1 - x)

For every value of x, you map it to bx(1 - x). You run it through bx(1 - x) to get your new x value. You take that value and run it through bx(1 - x) again, and so on. This is mathematically what you are doing when you do a formula iteration.

I'm using the example in Fractals Unleashed Tutorial Chapter 13. Let's take this formula now and make b = 1.5 and we'll start with x = 0.234. We get:

.234, .269, .295, .312, .322, .327, .330, .332, .333, .333, .333 . . .

After a while, the iteration gets stuck on 0.333.

Let's make b = 3.20 and we'll start with the same x, 0.234. We get:

.234, .574, .783, .544, .794, .524, .798, .516, .799, .513, .799, .513 . . .

Now the numbers start jumping back and forth between 0.799 and 0.513. These two different results are similar to what we found when we played with the Mandelbrot set. If we use b = 3.45, the results will settle into jumping between 4 numbers and if we use b = 3.54, we will find them jumping around 8 different numbers! As we increase b, the size of the number cycles goes up. What's going on?
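You can reproduce both runs with a few lines of code. This sketch rounds to three decimal places only for display, to match the lists above; the printed orbit starts one step after the seed 0.234:

```python
# Iterate x -> b*x*(1-x) from x0, keeping full precision internally
# and rounding each value to 3 places for display.
def logistic_orbit(b, x0, steps):
    x, orbit = x0, []
    for _ in range(steps):
        x = b * x * (1 - x)
        orbit.append(round(x, 3))
    return orbit

print(logistic_orbit(1.5, 0.234, 11))   # settles onto 0.333
print(logistic_orbit(3.2, 0.234, 12))   # ends up flipping between 0.513 and 0.799
```

Try b = 3.45 and b = 3.54 to see the 4-cycle and 8-cycle appear.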

This is a process called bifurcation in mathematics. These iterations can be plotted on a bifurcation map. An example of a bifurcation map is shown below for the equation xₙ₊₁ = rxₙ(1 - xₙ). This formula, known as the logistic map, is often used to approximate the evolution of animal populations over time.

The value r plays a similar role to b, above. Like the earlier function, as you increase the value of r, the results begin to jump between 2 numbers (A), then 4 (B), then 8 and so on.

What is especially fascinating here is that you can take a section of the map above and magnify it, and you will see that it's a fractal, as demonstrated below.

These bifurcation fractals are called Feigenbaum fractals, after Mitchell Feigenbaum, a pioneer in chaos theory. This fractal shows the close connection between chaos and fractals. As the value of r (or b in our earlier example) increases, the system moves toward chaos (all those jumping results). This is a very simple example of chaos: there is no obvious randomness, but there is unpredictability, because changing the input, r, unexpectedly gives rise to bifurcations at certain values. In any chaotic system, a tiny difference in input results in a huge range of diverging possible outcomes. If you take a vertical slice of a bifurcation in the map above, you will get a strange attractor (described in the previous article) for that specific value of r.

This mathematical system is an example of a nonlinear system, which means that the output is not directly proportional to the input. All chaotic systems are nonlinear systems, and most systems in nature are nonlinear. Scientists often approximate natural behaviours by using much easier linear equations to describe them, but there is a price for doing so: chaotic elements or even singularities (input values for which the results are meaningless) may be hiding somewhere in the linearization. This is sometimes where catastrophic building failures have their origin. After an earthquake, a few seemingly unimportant structural components in a building fail, for example, and this leads to an unexpected cascading catastrophic failure of the whole structure.

Bifurcation fractals are examples of discrete chaotic dynamical systems. Most systems in nature are continuous dynamical systems. Continuous dynamical systems with Euclidean geometry can never be chaotic. However, most continuous dynamical systems in nature have non-Euclidean geometry (fractal geometry being just one example), and these systems can often be chaotic. Below, Euclidean geometry is compared with elliptic and hyperbolic geometry, two common non-Euclidean geometries. Joshuabowman;en.wikipedia

The non-Euclidean, non-differentiable mathematics behind fractals, and their close relationship to chaos and other nonlinear systems, seem to be characteristics shared with continuously changing natural systems. Yet most of the mathematics of physics is historically grounded in Euclidean geometry and differentiable functions. This math offers relatively easy and workable models for nature, but perhaps it should come as no surprise that they don't quite work when we try to describe certain phenomena. If nature is fundamentally math, as Max Tegmark suggests, then there is increasing evidence that it is made of fractal math. This implies that the mathematics we need to describe nature should be nonlinear, non-differentiable and non-Euclidean. Most theory in physics, however, is written as differentiable functions mapped onto Euclidean space, an approximation that might be too rough to use for some models. We've seen the naturalness of fractal geometry all around us in nature, but what about the fundamental structure of spacetime itself? Maybe the traditional framework of mathematics is partly to blame for the scale problem of spacetime as well. In recent articles (Dark Matter and Dark Energy) we wondered what might lie beyond Einstein's geometry of spacetime. In the next article we are ready to ask what lies beyond the customary mathematics used in most physics. Is the math fundamentally wrong? We will explore the possibility of fractal space-time next, in Fractal Universe Part 3.