## Thursday, December 19, 2013

### Log-Polar coordinate system applied to Google maps

Inspired by XKCD strip #485: Depth.

A coordinate system is a way to assign numeric coordinates to every point of a plane (or space). The Cartesian coordinate system is definitely the most used one. The polar coordinate system is probably the second most popular: it identifies points by two numbers $$(r,\theta)$$, where $$r$$ is the distance from the point to the pole, and $$\theta$$ is the angle between the radius-vector of the point and some fixed direction, the polar axis.

The logarithmic-polar coordinate system (Wikipedia) is a modification of the latter which, instead of the distance $$r$$, uses its logarithm: $$\rho=\log r$$. Isn't that just a complication? Not at all. Arguably, log-polar coordinates are more natural than plain polar ones.

Cartesian coordinates arise naturally when you need to express translations (movements) of an object in space. To translate a point in some direction, one simply increments its coordinates by the corresponding amounts.

Similarly, the log-polar coordinate system is naturally related to the rotation and scale transformations:

• rotation around the pole by an angle $$\phi$$ corresponds to an increment of the angular coordinate: $$\theta_1 = \theta + \phi$$,
• uniform scaling by a factor $$k$$ is equivalent to an increment of the logarithmic length: $$\rho_1 = \rho + \log k$$.

But there is more. The mapping from log-polar coordinates to Cartesian ones is conformal. This can easily be seen using the complex plane: if some point A has Cartesian coordinates $$(x,y)$$ and log-polar coordinates $$(\rho, \theta)$$, then, according to Euler's formula:
$$x+iy = e^{\rho+i\theta} ,$$ $$\rho+i\theta = \log(x+iy),$$
where $$i$$ is the imaginary unit. Since the logarithm and the exponent are analytic functions, this mapping is conformal.
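This correspondence is easy to check numerically with Python's `cmath` module (a small sketch of the math, not part of the original scripts; the function names are mine):

```python
import cmath
import math

def to_log_polar(x, y):
    """Cartesian (x, y) -> log-polar (rho, theta) via the complex logarithm."""
    z = cmath.log(complex(x, y))
    return z.real, z.imag          # rho = log r, theta = polar angle

def to_cartesian(rho, theta):
    """Log-polar (rho, theta) -> Cartesian (x, y) via the complex exponent."""
    z = cmath.exp(complex(rho, theta))
    return z.real, z.imag

# Round trip: the two mappings are mutually inverse.
rho, theta = to_log_polar(3.0, 4.0)
assert abs(rho - math.log(5.0)) < 1e-12        # log of the distance r = 5
x, y = to_cartesian(rho, theta)
assert abs(x - 3.0) < 1e-12 and abs(y - 4.0) < 1e-12

# Uniform scaling by k is a pure shift of rho by log k; theta is unchanged.
rho2, theta2 = to_log_polar(2 * 3.0, 2 * 4.0)
assert abs(rho2 - rho - math.log(2)) < 1e-12
assert abs(theta2 - theta) < 1e-12
```

The last two assertions are exactly the scaling property from the list above, restated in code.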

Let's apply the log-polar to Cartesian coordinate transform to an image, mapping the log-distance $$\rho$$ to the vertical Cartesian axis and the angle $$\theta$$ to the horizontal one. Conformality of this mapping preserves angles between lines. Straight lines will become curves, but they will intersect at the same angles as in the original image. And the smaller a fragment of the image is, the less distorted it appears.

An interesting property of this mapping is that it shows both small- and large-scale features in the same picture, smoothly connected. If $$\rho=\log r$$ is mapped to the vertical coordinate axis, then small objects near the center and big objects far from the center will appear the same size, just one above the other. Thanks to the quick growth of the exponential function, it is possible to show both atomic nuclei and planets on the same chart (XKCD #485).

Enough words, see the images. They are clickable; the hyperlinks lead to the full-size gallery. Beware of traffic.

The above picture shows a whole hemisphere of the Earth, originally in orthogonal projection. The log-polar to Cartesian coordinate mapping makes the scale grow uniformly from bottom to top. You can see both single pedestrians at the bottom and whole continents (the yellow blob at the top is Africa) in the upper part.

## Software used

Python scripts used to construct these images are available in the Github repository:

under the permissive MIT license. To run these scripts, you'll need Python 3.3+ and Pillow. For the time being, the scripts are not well organized. Run `auto_glue.py --help` to see help on the options.

The script uses the Google Maps static API to download multiple images of the same place at different zoom levels. Each piece of the map is first converted from the Mercator to the orthogonal projection. Then the orthogonal projection is transformed from log-polar to Cartesian coordinates, producing one horizontal band. All bands are then glued together into a single image.
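As an illustration of the first step of this pipeline, here is the standard inverse Mercator formula that recovers latitude from the projected vertical coordinate (a sketch of the math only, not the repository code; the function names are mine):

```python
import math

def lat_to_mercator_y(lat):
    """Forward Mercator projection: y = ln(tan(pi/4 + lat/2)), lat in radians."""
    return math.log(math.tan(math.pi / 4.0 + lat / 2.0))

def mercator_y_to_lat(y):
    """Invert the Mercator projection: lat = 2*atan(exp(y)) - pi/2."""
    return 2.0 * math.atan(math.exp(y)) - math.pi / 2.0

# The equator maps to y = 0, and the two functions are mutually inverse.
assert abs(mercator_y_to_lat(0.0)) < 1e-12
for lat in (-1.2, -0.5, 0.0, 0.7, 1.4):
    assert abs(mercator_y_to_lat(lat_to_mercator_y(lat)) - lat) < 1e-12
```

With the latitude and longitude in hand, converting a tile to the orthogonal projection is ordinary spherical geometry.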

Source images are from Google Maps; I've left the watermarks and logos on them unchanged.

## Monday, November 4, 2013

### The Single Rotation rule: remarkably simple and rich reversible cellular automaton

In the previous post I presented the Reversible Cellular Automata simulator, along with some explanations about reversible computations and block cellular automata. This post is dedicated to a specific reversible cellular automaton I stumbled upon while playing with the Simulator: the “Single Rotation” rule. Here is what it looks like:

Your browser must support SVG and JavaScript to show the animation above. In case it does not, see the GIF animation:

This automaton is an example of a binary block cellular automaton with the Margolus neighborhood. It can be described by a single simple block transition rule:

Rotate clockwise by 90° every block with exactly 1 alive cell.
And that's all: other blocks remain unchanged. The numeric code for this rule is [0,2,8,3,1,5,6,7,4,9,10,11,12,13,14,15], which corresponds to the following substitution table:

Time-reversibility of this rule is obvious: the previous state can be obtained by reversing the direction of the rotation. Just like the famous “Game of Life”, this rule is very simple, but the automaton it defines has remarkably complex behavior, arguably richer than that of the “Critters” rule ([1], [2]), which is better covered on the net. I doubt that I am the first one to discover this rule (it is too simple), but I haven't seen it mentioned anywhere, so, who knows?
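A minimal simulator for this rule fits in a few lines of Python. This is my own sketch, not the Simulator's code: the field is a torus with even dimensions, and the cell-to-bit encoding (bit 0 = top-left, bit 1 = top-right, bit 2 = bottom-left, bit 3 = bottom-right) is an assumption chosen to match the rule code above.

```python
SINGLE_ROTATION = [0, 2, 8, 3, 1, 5, 6, 7, 4, 9, 10, 11, 12, 13, 14, 15]

def step(field, rule, phase):
    """One generation of a Margolus-neighborhood block CA on a torus.

    field is a list of rows of 0/1 cells; both dimensions must be even.
    phase is 0 or 1: blocks start at even or at odd coordinates.
    """
    h, w = len(field), len(field[0])
    new = [row[:] for row in field]
    for by in range(phase, h + phase, 2):
        for bx in range(phase, w + phase, 2):
            # The 4 cells of the block, with wraparound; bit order TL, TR, BL, BR.
            coords = [(bx % w, by % h), ((bx + 1) % w, by % h),
                      (bx % w, (by + 1) % h), ((bx + 1) % w, (by + 1) % h)]
            state = sum(field[y][x] << i for i, (x, y) in enumerate(coords))
            result = rule[state]
            for i, (x, y) in enumerate(coords):
                new[y][x] = (result >> i) & 1
    return new

# A single cell is an oscillator of period 4: it walks around a block corner.
field = [[0] * 4 for _ in range(4)]
field[0][0] = 1
f = field
for generation in range(4):
    f = step(f, SINGLE_ROTATION, generation % 2)
assert f == field
```

Alternating the `phase` argument on every generation is exactly the partition-shifting rule of block cellular automata, described in an earlier post.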

## Properties of the “Single Rotation” rule

### Conservation of population

The rule has a simple conservation law: evolution of the field never changes the total number of alive cells. This immediately follows from the transition function, since a rotation obviously never changes the number of alive cells in a block.

### No mirror symmetry

Unlike the irreversible “Game of Life” and the reversible rule “Critters”, the “Single Rotation” rule has no mirror symmetry: mirrored patterns will evolve completely differently from the original. The symmetries the “Single Rotation” universe is invariant under are:

• Rotation by 90°, and, consequently, rotations by 180° and 270°.
• Translation by 2n cells along the coordinate axes. Note that a translation by 1 cell changes the evolution of a pattern.

## Patterns

As a consequence of time-reversibility, no pattern can completely disappear (die out) in a reversible CA. Moreover, population invariance prohibits patterns that grow infinitely: guns, puffers and breeders. Reactions that consume spaceships are also impossible.

For these reasons, pattern types in the “Single Rotation” rule are limited to spaceships and oscillators (including still lifes as a special kind). Boring? Not at all, because their variety is extremely rich.

### Spaceships!

For me, spaceships have always been the most interesting kind of pattern. And the “Single Rotation” universe has a whole zoo of them, with different velocities, periods and directions. I've discovered 90 different spaceships naturally produced by a random reaction. For comparison: in the “Game of Life” there are 2 spaceships that appear naturally in random reactions, the Glider and the LWSS, with the latter being rare. In the “Critters” rule, there is one common light spaceship (“brace”, RLE: `$2bo$bo$bo$2bo`) and a few rare spaceships. See the collection of “Critters” spaceships in the Simulator.

### Lightest spaceships

Spaceships of the minimal possible “weight” (number of alive cells) play a special role in reversible CAs with conservation. In the “Single Rotation” rule, every pattern with 1, 2, or 3 cells is an oscillator; a spaceship must have at least 4 alive cells (this should be easily provable by enumeration). There are 6 different 4-cell spaceships, 1 orthogonal and 5 diagonal ones:

Spaceship “A” is the only 4-cell spaceship that moves in an orthogonal direction. It has period 12 and velocity c/6. I conjecture that it is also the fastest spaceship possible; however, this is only a hypothesis, I don't have a proof. Big spaceships can sometimes be unexpectedly fast: for example, the second fastest orthogonal spaceship known to me has 9 cells.

The spaceship “F”, resembling a vertical stroke, has an unexpectedly large period: 368 generations, which makes it one of the slowest spaceships.

The lightest spaceships have several special properties. One of them can be formulated as a theorem:

Theorem:
Collision of a lightest spaceship with a single cell always produces a single cell and another lightest spaceship.
See the demonstration below. Note that the interaction can move the single cell, and the type of the spaceship can change.

### Conway's Glider

I believe it is nothing more than a curious coincidence, but the iconic Conway's GOL Glider is also a spaceship in the “Single Rotation” universe. It has period 15, translating diagonally by 2 cells every period. It is a rather common spaceship: its natural abundance is 0.009, which means that roughly every 100th spaceship produced by a random reaction is a Conway's Glider.

An important note here: since the “Single Rotation” universe is not invariant under mirror flips and translations by an odd amount, it is important to align the pattern to the blocks correctly.

### Oscillators

In reversible cellular automata, every non-growing pattern that is not a spaceship is an oscillator. It is easy to prove that under a rule with population conservation, every pattern is either an oscillator, a spaceship, or will eventually evolve into a combination of the two.

The smallest oscillator is a single cell, which has period 4 and rotates clockwise or counter-clockwise depending on its initial position. There are 2 possible 2-cell oscillators and several 3-cell ones.

### Indestructible Still lifes (period-1 oscillators)

Still lifes are easy to construct in the “Single Rotation” rule: just make a pattern that has 2 or more alive cells in every block, regardless of the subdivision phase.

Interestingly, such patterns are indestructible: no interaction can ever change them. Since the rule is time-reversible, indestructibility implies inconstructibility: such patterns can never arise during an interaction.

Still lifes can be used to construct impenetrable walls, enclosing parts of the field. When a spaceship collides with such an indestructible object, it gets “smashed” into a cloud of chaotically moving cells orbiting the obstacle's indestructible core, but this cloud inevitably produces another spaceship. See the example:

In this experiment, the lightest orthogonal spaceship collides with a still life and “bounces” off it. It is not a coincidence: there is a general property that can be formulated as a theorem, similar to the previous one:

Theorem:
In the “Single Rotation” cellular automaton, collision of a lightest spaceship with a still life always produces another lightest spaceship and the same still life.
Note that the kind of the spaceship can change during the interaction (in the animation above it does not change, though). There is one difference from the previous theorem about the single-cell collision: the position of the single cell can change, but the position of the still life remains exactly the same.

There are several other theorems about the “Single Rotation” universe. I'll write about them later. Maybe.

## Code and Programs

### Animated SVG demo

The animation at the top of the post is an SVG image, animated by means of JavaScript. The sources are available in the same GitHub repository: dmishin/js-revca.

It does not simply play the animation; it actually simulates the automaton, and the initial pattern as well as other animation parameters can be specified via URL arguments. See some examples:

The SVG demo should work in most modern browsers, desktop and mobile.

## Reversible cellular automata

This is my first post on Reversible Cellular Automata; there will be several more.
TL;DR: play with the reversible cellular automata simulator. It is in HTML5, so you will probably need a decent browser. Come back to this page if you don't understand what is going on.

Contents:

### Reversible Computation

A computation y = f(x) is reversible if the input data x can always be uniquely restored from the output y. Wikipedia has a more detailed page on reversible computations.

For example, logical negation y = ¬x is reversible: if the output is 1 then the input is 0, and vice versa. On the contrary, the 2-ary AND (conjunction) is not, since the output 0 corresponds to three different inputs: (0,0), (0,1), (1,0). Most of the commonly used logical gates are not reversible either.

However, it is possible to construct gates that are reversible. It can easily be seen (by the pigeonhole principle) that a reversible logical gate (function) with n inputs must have exactly n outputs. Such a function can be seen as a transposition of $$2^n$$ elements. Famous examples are:

The latter is universal: it can be used to construct a circuit for an arbitrary logical function (provided some “garbage” outputs and “zero” inputs are added to achieve reversibility).
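The Toffoli gate (controlled-controlled-NOT) flips its third bit when the first two are 1. It is its own inverse, and with the third input fixed to 0 it computes AND, which is the core of the universality argument. A small illustration (my own, not from the original post):

```python
def toffoli(a, b, c):
    """Toffoli (CCNOT) gate: (a, b, c) -> (a, b, c XOR (a AND b))."""
    return a, b, c ^ (a & b)

# Reversibility: applying the gate twice restores the input, so the
# gate is a permutation of the 2^3 = 8 possible input triples.
inputs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
assert all(toffoli(*toffoli(a, b, c)) == (a, b, c) for a, b, c in inputs)
assert sorted(toffoli(a, b, c) for a, b, c in inputs) == inputs

# Universality in action: with a "zero" third input, the third output
# is simply a AND b, while the first two outputs carry the "garbage".
assert all(toffoli(a, b, 0)[2] == (a & b) for a, b, _ in inputs)
```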

Indeed, reversible computing is an interesting topic on its own. Curiously, it is not a pure mathematical toy: it is deeply connected with quantum computing and the theory of computation. But I am giving up here; ask Wikipedia for more information.

### Reversible Cellular Automata

What I want to write about are Reversible Cellular Automata, i.e. cellular automata allowing deterministic evaluation “back in time”: from the future to the past. The transition function in such automata must be reversible.

It is easy to prove that the iconic Conway's Life is not reversible. This directly follows from the existence of many different configurations that eventually die out: the empty field does not have a unique predecessor, so the cellular automaton is irreversible. The existence of configurations without any predecessors, Gardens of Eden, is a much less trivial fact.

Building a non-trivial reversible cellular automaton may appear to be a tricky task. In fact, there is a theorem stating that testing reversibility of a 2-dimensional CA is an undecidable problem: no algorithm can do it in general. But there is a class of CAs where reversibility can be achieved and tested in a simple and straightforward manner: block cellular automata.

### Block Cellular Automata

The idea behind the block cellular automata construction is:

1. The lattice of cells is partitioned into equal blocks.
2. Each block is transformed independently.

Obviously, these 2 rules alone are not enough to build any interesting automaton, since the blocks do not interact. Therefore a 3rd rule is needed:

3. After each step, the partitioning scheme of the lattice is changed.

### Margolus neighborhood

One of the simplest and most symmetric partitioning schemes is arguably the Margolus neighborhood. Wikipedia contributors say:

In the Margolus neighborhood, the lattice is divided to 2-cell blocks (or 2 × 2 squares in two dimensions, or 2 × 2 × 2 cubes in three dimensions, etc.) which are shifted by one cell (along each dimension) on alternate timesteps.

#### Properties of the Margolus Neighborhood

An important difference between block CAs (including those based on the Margolus neighborhood) and the more widely known totalistic CAs with Moore or von Neumann neighborhoods (like the Game of Life) is that the former generally have less symmetry.

First of all, Margolus neighborhood CAs are not invariant under translations by an odd amount. Take a pattern of alive cells, move it by 1 cell vertically, horizontally or both, and you will generally get a pattern with completely different behavior. See the example below and try it live in the simulator (opens in a new window):

The reason is simple: an odd translation changes the partitioning of the pattern into blocks, inevitably changing the pattern's behavior. Block cellular automata are only invariant under translations that don't change the borders of the blocks; in the Margolus neighborhood case these are translations by an even amount.

Analogously, Margolus neighborhood CAs are not invariant under translations in time by an odd number of steps, i.e. they are not completely stationary. Since in block CAs the partitioning scheme changes every generation, the same pattern will evolve differently depending on whether it was started from step 0 or from step 1. In other words, the state of a block CA universe has an additional parameter besides the state of the cells: the phase of the partitioning. In the Margolus neighborhood there are 2 phases, thus 1 additional bit of information is required.

In the general case, block CAs are invariant neither under rotations by 90° nor under flips. They are only invariant under these transforms if the block transition function is invariant under them. This means that the space of such CAs is less isotropic than the space of conventional totalistic CAs. There are asymmetric rules where, for example, orthogonal spaceships moving to the right are completely different from the ones moving to the left.

Try it. (in a new window).

It is easy to see that in the Margolus neighborhood, the whole CA is invariant under a geometrical transform iff its block transition function is invariant under this transform.

### Rules in the Margolus neighborhood

In a block CA with the Margolus neighborhood, each block consists of 4 cells, thus the transition function must have 4 inputs and 4 outputs. For binary cellular automata, this gives $$2^4 = 16$$ possible input and output states. A block transition function must map every possible input to some output. To be reversible, this mapping must be one-to-one (bijective), thus effectively being a transposition of 16 elements.

The state of a block can be encoded as one 4-bit binary number. Then any transition function can be written down as a list of 16 numbers in the range [0..15]. If the function is reversible, then every number occurs exactly once in this list. For example, [0,8,4,3,2,5,9,7,1,6,10,11,12,13,14,15] describes a reversible function.
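Checking reversibility of a rule given in this notation is a one-liner, and the inverse rule (for stepping back in time) is just the inverse permutation. A sketch (the identifiers are mine, not the Simulator's):

```python
def is_reversible(rule):
    """A 16-number rule is reversible iff it is a permutation of 0..15."""
    return sorted(rule) == list(range(16))

def inverse_rule(rule):
    """Inverse permutation: maps each output state back to its input state."""
    inv = [0] * len(rule)
    for state, image in enumerate(rule):
        inv[image] = state
    return inv

single_rotation = [0, 2, 8, 3, 1, 5, 6, 7, 4, 9, 10, 11, 12, 13, 14, 15]
example = [0, 8, 4, 3, 2, 5, 9, 7, 1, 6, 10, 11, 12, 13, 14, 15]
assert is_reversible(single_rotation) and is_reversible(example)

# Composing a rule with its inverse gives the identity mapping on states.
inv = inverse_rule(single_rotation)
assert all(inv[single_rotation[s]] == s for s in range(16))
```

For the “Single Rotation” rule the inverse is, as expected, the same rotation in the opposite direction.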

## The Simulator

I am proud to present a working simulator for reversible cellular automata with the Margolus neighborhood. It runs directly in the browser, without download or installation.

Running in the browser has some disadvantages though. Particularly, you don't have access to the local file system and can't save your work. This problem can, however, be alleviated thanks to the “Save to URL” feature (currently on the “Settings” pane).

## Requirements

The simulator is written in JavaScript and HTML5, therefore a decent browser supporting these technologies is an absolute requirement. I have tested the simulator in the latest versions of desktop Firefox, Chrome and Opera, and it works well. It does not work in IE9, but IE 10 should support it (never tested, though). It will also work in some mobile browsers, particularly on Android and Chrome, but the interface is better suited for a mouse than for touch screens.

### Basic usage

I hope you will find the interface of the Simulator mostly self-explanatory. As in most conventional CA simulators, you can edit cell patterns with the mouse and run the simulation, step-by-step or continuously. If you have seen Golly, then you know it all. The key feature you will not find in Golly is the “Reverse Play” button that allows you to evaluate the cellular universe back in time. Indeed, this is only possible for reversible cellular automata.

Basic features are:

• Editing cells using the mouse or touch interface (the latter is not convenient, but supported).
• Running the automaton in the forward and reverse direction.
• Changing the cellular automaton rule: either select one of the predefined rules, or enter your own in the transposition notation (see “Rules in the Margolus neighborhood” above).
• Selection:
• Clear selected or non-selected area
• Fill with random cells
• Analyze selected pattern (see advanced features)
• Copy selected pattern to the internal buffer, and then paste it to the field

Besides basic field editing and simulation, the simulator provides some features for more detailed analysis of rules and patterns. These features include:

• Pattern analyzer
• Spaceship catcher
• Rule analyzer
• Save state to the URL

#### Pattern analyzer

Want to know whether a pattern is an oscillator, a spaceship or something else; what its period and movement speed are? Just select it and click the “Analyze” button. The application will evaluate the pattern and extract some basic information about it, including:

• Pattern type: oscillator, spaceship (diagonal, orthogonal or slant) or neither.
• Pattern period, if it is an oscillator or spaceship.
• Pattern movement speed, in case it is a spaceship.
But even more importantly, it will try to find the canonical form of the pattern. A little explanation here: In the Game of Life, most commonly occurring patterns have relatively small periods. The glider have period of 4 generations, for blinker it is 2, and so on. In the contrary, for the reversible CAs big periods are not an exception. No, not just big: sometimes they are HUGE. One example:

Such large periods enormously complicate manual analysis of patterns. Given 2 visually different patterns, how do you decide whether they are different phases of the same pattern or just 2 different patterns? The canonical form gives a solution to this problem. The analyzer evaluates the given pattern over its period and chooses the one configuration that minimizes some criterion. (Precisely speaking, it searches for the most compact configuration, but that is not actually important.) What is important is that such a canonical form is determined uniquely (in most cases). Additionally, if the pattern was identified as a spaceship and the rule allows rotations, the canonical form is rotated so that the movement direction is to the right (orthogonal) or to the bottom-right (diagonal). Using the canonical form, determining the equivalence of two patterns is trivial: equivalent patterns have the same canonical form.

#### Spaceship catcher

This feature facilitates the search for naturally born spaceships by initializing the field with a random pattern in the center and collecting the escaping spaceships. When the spaceship catcher is enabled, all patterns touching the border of the field are removed from it and analyzed, and spaceships are added to the currently open library. Every 30000 steps (the default value, which can be changed on the “Settings” pane), the field is re-initialized: the current selection is filled randomly.

#### Rule analyzer

In the “Rule” pane, various information regarding the current rule is shown. Besides the (hopefully) obvious graphical representation of the transfer function, it includes the following information:

• Rule code, as 16 comma-separated integers. This code can be edited manually. See "Rules in the Margolus neighborhood" above.
• Reversibility of the rule: whether it is reversible or not.
• Rule invariants, such as rotation by 90° or 180° and mirror flips.
• Duality transformation of the rule. I'll cover this later. See help page for details.

### Some rules

These rules are taken from the MCell documentation page.

## Thursday, July 11, 2013

### Sums of powers of Fibonacci numbers

While solving Project Euler problem #402, I (among many others who solved or tried it) stumbled upon the problem of calculating the sum of the powers of the first Fibonacci numbers.

The Wikipedia article on Fibonacci numbers gives two formulas from this family:

The sum of the first powers (i.e. of the numbers themselves): $$\sum_{i=0}^n f_i = f_{n+2}-1$$
And the sum of the squares: $$\sum_{i=0}^n f_i^2 = f_n f_{n+1}$$

The simplicity of these expressions suggests that similar closed-form solutions exist for other powers too, and that is indeed so. Solving the above-mentioned problem, I wrote a Python 3 + Sympy program which can derive such expressions for arbitrarily large powers (well, there are limits of course: computer memory and your time, but powers around 30 can be resolved in reasonable time). It seems that these formulas are not widely represented on the Internet, so I decided to share them. Who knows, maybe I'll save somebody 5 minutes?

Here are the expressions for the sums of the powers from 1 to 9:
$$\sum_{i=0}^n f_i = f_{n+2}-1$$
$$\sum_{i=0}^n f_i^2 = \frac{f_{2n} + f_{2n+2} - \left(-1\right)^n}{5}$$
$$\sum_{i=0}^n f_i^3 = \frac{f_{3n+2}-6(-1)^n f_{n-1}+5}{10}$$
$$\sum_{i=0}^n f_i^4 = \frac{1}{25} \left(f_{4n+2}-4(-1)^n f_{2n+1}+6n+3\right)$$
$$\sum_{i=0}^n f_i^5 = \frac{1}{550}\left(4f_{5n+2}+2f_{5n+4}-55(-1)^n f_{3n+1}+220f_{n+2}-175\right)$$
$$\sum_{i=0}^n f_i^6 = \frac{1}{500}\left(f_{6n+2}+f_{6n+4}-8(-1)^n\left(f_{4n+1}+f_{4n+3}+5\right)+60f_{2n}+60f_{2n+2}\right)$$
$$\sum_{i=0}^n f_i^7 = \frac{1}{79750}\left(22f_{7n+2}+88f_{7n+4}-406(-1)^n\left(55f_{n-1}+f_{5n+1}+2f_{5n+3}\right) + \right.$$
$$\left.+6699f_{3n+2}+17375\right)$$
$$\sum_{i=0}^n f_i^8 = \frac{1}{1875}\left(f_{8n+4}-12(-1)^n\left(14f_{2n+1}+f_{6n+3}\right)+84f_{4n+2}+210n+105\right)$$
$$\sum_{i=0}^n f_i^9 = \frac{1}{7576250}\left(1276f_{9n+4}+319f_{9n+5}-18810(-1)^n f_{7n+3}-3762(-1)^n f_{7n+4}+\right.$$
$$\left. + 119016f_{5n+2}+39672f_{5n+3}-509124(-1)^n f_{3n+1}+1527372f_{n+2}-1173125\right)$$

Because there are algorithms that can calculate the n'th Fibonacci number in O(log n) steps, the above formulas can be used for efficient calculation of such sums for very large n, where direct summation is not feasible.
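Both ingredients are easy to combine: the fast-doubling method computes $$f_n$$ in O(log n) arithmetic operations, and substituting it into a closed form gives the sum without an O(n) loop. A sketch (not the repository code) that also checks three of the formulas above against direct summation:

```python
def fib_pair(n):
    """Return (F(n), F(n+1)) by the fast-doubling method, O(log n) steps."""
    if n == 0:
        return 0, 1
    a, b = fib_pair(n >> 1)            # a = F(k), b = F(k+1), k = n // 2
    c = a * (2 * b - a)                # F(2k)
    d = a * a + b * b                  # F(2k+1)
    return (d, c + d) if n & 1 else (c, d)

def fib(n):
    return fib_pair(n)[0]

def sum_fib_squares(n):
    """Closed form: sum of F(i)^2 for i = 0..n equals F(n) * F(n+1)."""
    a, b = fib_pair(n)
    return a * b

# Check the closed forms for powers 1, 2 and 4 against direct summation.
for n in range(30):
    fibs = [fib(i) for i in range(n + 1)]
    assert sum(fibs) == fib(n + 2) - 1
    assert sum(f * f for f in fibs) == sum_fib_squares(n)
    assert 25 * sum(f ** 4 for f in fibs) == \
        fib(4 * n + 2) - 4 * (-1) ** n * fib(2 * n + 1) + 6 * n + 3
```

Since Python integers are arbitrary-precision, the same code works unchanged for very large n.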

## Derivation

To derive these closed-form representations of $$\sum_{i=0}^n{f_i^p}$$, I used the following universal approach:

• Apply Binet's formula: $$f_n=\left(\phi^n-\psi^n\right)/\sqrt{5}$$, where $$\phi=-1/\psi=\left(1+\sqrt{5}\right)/2$$.
• Expand the power using the binomial formula; $$f_i^p$$ becomes a sum of several exponents.
• Apply the formula for the sum of a geometric progression: $$\sum_{i=0}^n a^i=(a^{n+1}-1)/(a-1)$$.
• Go back from the powers of $$\phi,\psi$$ to Fibonacci numbers, using the following equalities: $$\phi^n=f_{n+1}-f_n\psi$$, $$\psi^n=f_{n+1}-f_n\phi$$.

Although the intermediate expressions involve radicals, the radicals disappear after simplification, and the final expression has integer coefficients.
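The identities used in the first and the last step are easy to verify numerically (a quick sanity check, not the symbolic derivation itself):

```python
import math

sqrt5 = math.sqrt(5.0)
phi = (1.0 + sqrt5) / 2.0
psi = (1.0 - sqrt5) / 2.0              # psi = -1 / phi

def fib(n):
    """Plain iterative Fibonacci, enough for a numeric check."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in range(2, 15):
    # Binet's formula.
    assert abs((phi ** n - psi ** n) / sqrt5 - fib(n)) < 1e-9
    # The back-substitution identities that eliminate the radicals.
    assert abs(phi ** n - (fib(n + 1) - fib(n) * psi)) < 1e-9
    assert abs(psi ** n - (fib(n + 1) - fib(n) * phi)) < 1e-9
```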

## Source code

All scripts are in the Github repository: dmishin/fibsums.

The repository contains several programs; the main one is fibsums_general.py, the script for deriving a formula. For any positive power, it gives two closed-form representations: one expresses the sum via products and powers of $$f_n, f_{n+1}$$, and the other via $$f_{kn}$$. Usually, the second one is simpler.

Example of derivation of $$\sum_{i=0}^n f_i^4$$:

```
$ python fibsums_general.py 4
==================================
Sum F(i)^4 =
= (-4*(-1)**n*F(n)**2 - 4*(-1)**n*F(n + 1)**2 + 6*n - F(n)**4 + 4*F(n)**3*F(n + 1) + 4*F(n)*F(n + 1)**3 + F(n + 1)**4 + 3)/25 =
= (-4*(-1)**n*F(2*n + 1) + 6*n - F(4*n + 3) + F(4*n + 4) + 3)/25
```

This "raw" result can be simplified even more, using known equalities, giving $$(f_{4n+2}-4(-1)^nf_{2n+1}+6n+3)/25$$.

For small powers, computation takes a few seconds; for powers near 25, calculation requires a minute or two.

In conclusion, here are the raw results for powers 1 - 25, in a text file: fibsums-25.txt.

## Saturday, March 23, 2013

### Slit-scan photography

Take a look at the unusual photo above. It seems distorted: people have normal sizes, while cars are squashed into vertical streaks. However, it shows the reality as it is, but from an unusual angle: the time and space coordinates (strictly speaking, the horizontal coordinate) were interchanged in this image. See the video below:

The whole image (click image to zoom)

#### Interesting features of slit-scan photos

1. All people and cars are facing to the right, because they are facing pastwards. When a person walks, his or her nose comes first, and the back of the head comes last. Thus, in a slit-scan photo the nose is always on the right (right is the past, left is the future).
2. The faster an object moves, the shorter it seems. The horizontal length is the time the object spent crossing the slit, which is why fast cars are squashed into pancake-like objects.
3. The background turns into horizontal bands. Every still object is either invisible or shows up as a uniform color band.

### More photos

Here are a few more photos I made. Also, see the whole gallery.

Tugboat pulling a barge. It is interesting to see how waves on the water form a time pattern.

Ducks on the surface of the Moika river. The red background is a granite pavement.

Just a crossroad.

Waves on shallow water.

Raindrops. In slit-scan photos, raindrops produce hyperbolas rather than circles.
People moving toward the camera (the camera was pointing along Nevsky prospect). Step movement causes ripples in the silhouettes. Click to zoom.

A similar photo, more crowded:

One more crossroad, long exposure (very wide image).

### Horizontal slit

In the photos above, the slit was placed vertically. Placing it horizontally can produce interesting effects too. Here, the vertical coordinate of the picture corresponds to the horizontal coordinate of the original video. All horizontally moving objects become slanted strips; fast objects are almost vertical, still objects are horizontal.

Horizontal slit + camera along the road (image rotated according to the slit orientation):

Another sample. Some people are changing their speed, drawing curved trails:

### Software

#### Making slit-scan photos

My own tool that converts videos to slit-scan photos is on Github: slit-scan. It compiles only under Linux and requires FFMPEG and ImageMagick.

#### Making the demonstration video

A simple script, written in Python 2; the source is on a github gist.

## Saturday, March 16, 2013

### Convergence domains of the iterated parabolic approximation method

The above image is a fractal, derived from iteration of the successive parabolic interpolation method applied to searching extrema of sin(x). Wikipedia describes this method as follows:

Successive parabolic interpolation is a technique for finding the extremum (minimum or maximum) of a continuous unimodal function by successively fitting parabolas (polynomials of degree two) to the function at three unique points, and at each iteration replacing the "oldest" point with the extremum of the fitted parabola.

First step of the algorithm; x4 is the new point.

This method is guaranteed to converge if applied to a unimodal function. However, the same scheme usually converges well even when applied to a function with several extrema, such as sin(x). Of course, it is only able to find one extremum among all.
Which of them will be found is determined by the initial conditions. But how to determine which initial points converge to which extremum?

The parabolic interpolation method requires 3 initial points, thus the whole space of initial conditions is 3-dimensional. The sets of initial points converging to the same extremum are the basins of attraction of this algorithm, and they exhibit a nice fractal structure. The picture at the top shows these basins, each painted in a different color.

Since the parameter space is 3-dimensional, the whole structure is also a 3-dimensional body. To simplify it, I am using initial conditions where the 3rd point is located between the first two: [x + y, x - y, x], where x, y are the coordinates on the image plane. This is equivalent to intersecting the parameter space with a 2-dimensional plane.

### More pictures

Top picture, colorized differently. If an initial point converges to a maximum, it is colored white, otherwise it is black.

Enlarged fragment of the original picture.

Enlarged fragment.

Basins for the more complex function sin(x) + x²/100.

Convergence domains for sin(x) + x²/100. Enlarged fragment.

### Plotting the number of iterations

It is also possible to assign colors according to the number of steps. In the following image, colors start from black (low number of steps), then go from red to blue as the number of iterations rises, and then repeat. It can clearly be seen that when all 3 initial points are close to an extremum, the method converges fast, and when they are near the border between 2 extrema, convergence is much slower.

Number of iterations

### Basin and its properties

And finally, an image of a single basin (corresponding to the maximum at x = π/2). All initial points colored white will converge to the same maximum.

Single domain

An interesting property of this image: shifted horizontally by $$\frac{\pi k}{2}$$ where $$k \in \mathbb{Z}$$, it does not overlap itself. This follows from the periodicity of sin(x).
The following animation shows the shift by $$\pi/2$$.

Conclusion? In the general case, the correspondence between initial points and the solutions they converge to can be surprisingly complex.
Another (and more traditional) example of a fractal produced by a converging iterative procedure is the Newton fractal.
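The iteration that generates these basins can be sketched in a few lines. This is my own minimal implementation of the scheme described in the Wikipedia quote (drop the oldest point, add the vertex of the fitted parabola), not the code that rendered the pictures:

```python
import math

def parabolic_search(x1, x2, x3, f, steps=100, tol=1e-9):
    """Successive parabolic interpolation for an extremum of f."""
    pts = [x1, x2, x3]
    for _ in range(steps):
        a, b, c = pts
        fa, fb, fc = f(a), f(b), f(c)
        den = (c - b) * (fc - fa) - (c - a) * (fc - fb)
        if den == 0.0:                 # degenerate (collinear or merged) points
            break
        num = (c - b) ** 2 * (fc - fa) - (c - a) ** 2 * (fc - fb)
        x_new = c - 0.5 * num / den    # abscissa of the parabola's vertex
        if abs(x_new - c) < tol:
            return x_new
        pts = [b, c, x_new]            # replace the "oldest" point
    return pts[-1]

# Near the maximum of sin at pi/2, the iteration converges to it...
x = parabolic_search(1.0, 1.5, 2.0, math.sin)
assert abs(x - math.pi / 2) < 1e-6
# ...while different initial points find a different extremum.
x = parabolic_search(4.0, 4.5, 5.0, math.sin)
assert abs(x - 3 * math.pi / 2) < 1e-6
```

Coloring each triple of initial points by the extremum this function returns is exactly what produces the basin pictures above.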