The algorithm that generates these images is based on a very simple phenomenon, and it actually originated from a mistake.
While trying to write an algorithm that detects the edges of an image, I instead created one that performs the
following transformation:
After a few small adjustments, I obtained this kind of image:
After studying the behavior of the algorithm and understanding the phenomenon responsible for such results, I decided to write a new algorithm that explicitly performs what happened by mistake in the initial algorithm.
This new algorithm, ranimg, no longer takes an image as input: it starts from a small, randomly generated image.
The rest of the processing is just a series of iterations of the same operations:
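A minimal sketch of what one such iteration might look like, assuming it enlarges the image, applies a slight Gaussian blur, and multiplies the colors modulo 256 (the operations are inferred from the explanation below; the parameters and the choice of squaring each pixel are my own guesses, not the actual ranimg code):

import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def ranimg_sketch(width=80, height=45, iterations=4, seed=0):
    rng = np.random.default_rng(seed)
    # small random starting image (RGB)
    img = rng.integers(0, 256, size=(height, width, 3)).astype(float)
    for _ in range(iterations):
        img = zoom(img, (2, 2, 1), order=1)              # enlarge the image a little
        img = gaussian_filter(img, sigma=(1.5, 1.5, 0))  # slight blur of each channel
        img = (img * img) % 256                          # multiply the colors modulo 256
    return img.astype(np.uint8)

Squaring each pixel is only one possible reading of "multiplying the colors"; what matters for the curves is the reduction modulo 256 explained below.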
But what is really responsible for these pretty curves crossing the images?
These lines are the result of multiplying the colors modulo 256. If we performed only Gaussian blurs,
we would obtain a large, meaningless blurred image.
Multiplying the colors of a slightly blurred image (one that therefore contains small areas of similar
but faded colors) has three effects on it:
Starting from this algorithm that processes random images, we can imagine another algorithm that systematically
applies it to a sequence of relatively close images.
We would thus observe the fluid evolution of colored areas. We can generate such videos directly with ranimg (we
would see the curves moving and changing shape over time). The difficulty of this approach is that,
since ranimg is chaotic, modifying even a single pixel of the input completely transforms the output image.
I therefore initially chose another approach.
I generate a certain number of "pillar" images, with no link between them, and then I connect them with a certain
number of intermediate images that allow a smooth transition.
So that the difference between the "pillar" images and the intermediate images is as imperceptible as possible,
I work on images produced by ranimg that have undergone an additional blurring step.
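A minimal sketch of how the intermediate images could be produced, assuming a simple linear interpolation between consecutive pillar images and an extra blur on every frame (the interpolation scheme, the helper name, and the parameters are my assumptions, not the actual method):

import numpy as np
from scipy.ndimage import gaussian_filter

def connect_pillars(pillars, steps_between=16, extra_sigma=2.0):
    frames = []
    for a, b in zip(pillars, pillars[1:]):
        for t in np.linspace(0.0, 1.0, steps_between, endpoint=False):
            frame = (1.0 - t) * a + t * b                              # smooth transition
            frame = gaussian_filter(frame, sigma=(extra_sigma, extra_sigma, 0))
            frames.append(frame.astype(np.uint8))
    return frames

# pillars could be, for example, outputs of the sketch above:
# pillars = [ranimg_sketch(seed=s).astype(float) for s in range(4)]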
Here is the result:
So I first generate a random image of 80x45 pixels.
Then, images must be added one by one by modifying a few pixels, in such a way that applying
ranimg to them produces similar results.
It is therefore a matter of finding the right proportion of pixels to modify from one image to the next in order to obtain continuity. On an 80x45-pixel image, modifying a single pixel by adding 1 to it already makes successive images resemble each other, but this is not enough for continuity, so we go down into the floats.
After multiple tests resembling a binary search, it turned out that the right value for continuity was 2 × 10⁻¹².
Here is the video obtained with 300 frames, 10 fps. Each frame is the application of ranimg to one of the 300 base images. Each base image is derived from the previous one by choosing a pixel at random and adding 2 × 10⁻¹² to it.
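A sketch of this derivation of the base images, assuming the perturbation is the addition of 2 × 10⁻¹² to one randomly chosen pixel per step (the helper name and the way the pixel is picked are mine):

import numpy as np

def derive_base_images(first, n_frames=300, delta=2e-12, seed=0):
    rng = np.random.default_rng(seed)
    bases = [first.astype(float)]
    for _ in range(n_frames - 1):
        nxt = bases[-1].copy()
        y = rng.integers(0, nxt.shape[0])
        x = rng.integers(0, nxt.shape[1])
        nxt[y, x] += delta                 # tiny perturbation of a single pixel
        bases.append(nxt)
    return bases                           # each frame is then ranimg applied to one of these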
This first idea does not give very good results because of the precision required in the calculations:
it is not reasonable to manipulate numbers as small as 10⁻¹².
So we first compute a base image common to all frames, derive it slightly for each frame, and then
perform only a few final iterations on each derived copy.
The difference from the previous method is that here we derive an image that has already been partly processed,
which leaves fewer iterations to perform and therefore makes the process less chaotic.
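A sketch of this refined scheme, assuming a hypothetical ranimg_iterations(img, n) helper that runs n iterations of the core operations starting from img; the number of shared and final iterations, the size of the perturbation, and the fact that the derivation accumulates are all my guesses:

import numpy as np

def make_frames(seed_image, ranimg_iterations, shared_iters=6, final_iters=2,
                n_frames=300, delta=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    # the bulk of the work is done once, on a base image shared by all frames
    derived = ranimg_iterations(seed_image.astype(float), shared_iters)
    frames = []
    for _ in range(n_frames):
        y = rng.integers(0, derived.shape[0])
        x = rng.integers(0, derived.shape[1])
        derived[y, x] += delta                          # derive the base little by little
        frames.append(ranimg_iterations(derived.copy(), final_iters))
    return frames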
After a few small adjustments, here are two videos obtained thanks to the algorithm:
• 30 seconds, 320x180, 16 fps
• 15 seconds, 640x360, 16 fps
This is only an idea; it has not been implemented.
To obtain a video from an algorithm that produces images, we could ask the algorithm to generate 3-dimensional images (we would have to perform the Gaussian blurs in 3D and adapt the rest of the operations, but in theory this is perfectly feasible).
Then we take one of the three dimensions and declare it to be the dimension of time.
This idea lets us see the process of creating the video exactly like the process of creating the image.
The result could be a little different from what we do here, but it would probably be just as
interesting, or even more so.
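A purely speculative sketch of that idea (again, it has not been implemented), using a grayscale 3D block, a 3D Gaussian blur, the same modular multiplication, and the first axis read as time:

import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def ranimg_3d_sketch(shape=(16, 45, 80), iterations=2, seed=0):
    rng = np.random.default_rng(seed)
    vol = rng.integers(0, 256, size=shape).astype(float)
    for _ in range(iterations):
        vol = zoom(vol, 2, order=1)             # enlarge in all three dimensions
        vol = gaussian_filter(vol, sigma=1.5)   # Gaussian blur in 3D
        vol = (vol * vol) % 256                 # same multiplication modulo 256
    return vol.astype(np.uint8)

frames = ranimg_3d_sketch()   # frames[t] would be the image at "time" t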
Here we have just established a link between adding a dimension to the calculation of the images and
successively applying the basic algorithm to images derived little by little.
Conversely, we can therefore imagine computing our two-dimensional images using this principle.
We would start from a 1-dimensional image (a line of pixels), and apply to this line of pixels the processes that
we apply here to images in order to obtain our videos.
We would compute the lines of the image one by one starting from the first, and that would give us a
complete image.
If the analogy between the video creation done here and the addition of a dimension proves relevant,
then it will also be possible to compute images line by line instead of computing them globally,
as we have been doing from the beginning.
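A sketch of what this line-by-line construction could look like, with a 1D blur and the same modular multiplication applied to each row, and each new row derived slightly from the previous one (the structure and every parameter here are my own framing of the idea):

import numpy as np
from scipy.ndimage import gaussian_filter1d

def image_line_by_line(width=320, height=180, delta=1e-6, iterations=2, seed=0):
    rng = np.random.default_rng(seed)
    line = rng.random(width) * 256                     # 1-dimensional "image": a row of pixels
    rows = []
    for _ in range(height):
        y = line.copy()
        for _ in range(iterations):
            y = gaussian_filter1d(y, sigma=1.5)        # 1D blur
            y = (y * y) % 256                          # same modular multiplication
        rows.append(y.astype(np.uint8))
        line[rng.integers(0, width)] += delta          # derive the next line slightly
    return np.stack(rows)                              # rows stacked into a 2D image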
For a given final image, it is impossible to guess the initial value of each pixel. Indeed, the
algorithm extracts global colors from a random mass of pixels, so each final pixel
depends on all the other pixels in the image. At the same time, the value of each
initial pixel weighs heavily in the computation of the final image.
Tests have shown that changing the value of even 0.03% of the pixels has the
effect of completely modifying the final image.
This algorithm therefore also has chaotic properties.
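One way to check this kind of sensitivity, as a test sketch of my own rather than the original experiment (the 0.03% figure comes from the text above, and ranimg_from is a hypothetical helper that runs the full algorithm on a given starting image):

import numpy as np

def sensitivity_test(ranimg_from, base, fraction=0.0003, seed=0):
    rng = np.random.default_rng(seed)
    perturbed = base.copy()
    n = max(1, int(fraction * perturbed.shape[0] * perturbed.shape[1]))   # ~0.03% of pixels
    ys = rng.integers(0, perturbed.shape[0], size=n)
    xs = rng.integers(0, perturbed.shape[1], size=n)
    perturbed[ys, xs] += 1                                                # small perturbation
    a, b = ranimg_from(base), ranimg_from(perturbed)
    return np.mean(np.abs(a.astype(int) - b.astype(int)))                 # large value => chaotic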
All these properties are interesting for a cryptographic hash function.
But we could exploit the fact that this algorithm works on images to turn it into a hash function
of a new kind.
Indeed, when we download a file from the Internet and want to compare hashes, we have to compare a sequence
of about fifty alphanumeric characters; it would be easier, and more visual, to compare images representing
the fingerprints of these files.
To hash a file, we would transform it into an image (we find the rectangular shape that requires adding
the fewest pixels, and complete it by repeating the data periodically).
We then apply ranimg to this image, which gives us an image with simple shapes that are easy to remember.
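A rough sketch of the file-to-image step, where the choice of a roughly square rectangle is my guess at "the rectangular shape that adds the fewest pixels" (the helper names are hypothetical):

import math
import numpy as np

def file_to_image(path):
    data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
    n = data.size
    # guess: a roughly square rectangle whose area barely exceeds the file length
    w = math.isqrt(n - 1) + 1
    h = math.ceil(n / w)
    padded = np.resize(data, w * h)    # np.resize repeats the data periodically
    return padded.reshape(h, w)

# the visual fingerprint would then be something like ranimg applied to this image,
# e.g. ranimg_from(file_to_image("some_file")) with a hypothetical ranimg_from helper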
This is a simple idea, and probably a bad one, but it illustrates the interesting properties that this algorithm can have.