In that universe of infinite possibility, my imagination flies
Introduction
What if one could capture an image simply by blinking one’s eye? It is a thought experiment sometimes proposed to highlight the problem of missing a shot in the time it takes to lift the camera and click the shutter. But there is a more fundamental question such a proposal raises. What would we see if we could capture and externalise that visual perception?
We do not see like a camera, simply projecting a stream of images into consciousness. Evolution is parsimonious, and we have evolved a visual system that extracts only what is necessary for survival. The rest is blended with what we already know about the world and how it works. Researchers estimate that only about twenty per cent of what we perceive visually from moment to moment is actually passing through the eye. The rest is drawn from our memory banks and logic circuits in an infinitely sophisticated neural network. As a result, if we could print out an instant of human visual perception it would likely be a mix of areas of detail and areas of hazy imprecision; it may even be distorted like a homunculus. We don’t, if we are engaged in conversation with someone, remain aware of the fine detail in the wallpaper behind them. Yet, in a photograph with a reasonable depth of field, all that and more is made available. The sense of sight is, then, a synthetic process that creates a sufficient impression of the world from a mixture of incoming data, stored information and rules of processing.
Machines with ‘artificial intelligence’ (AI) operate in a not dissimilar way to the human brain, albeit on a much simpler scale. AIs learn what things look like by scanning archives of data. The larger the archive the richer the learning process. With the advent of the internet, that archive is now effectively infinite. When an AI generates an image, it is not piecing bits of other images together like a jigsaw, it is synthesising something new from what it has learned.
The Mexican photo-artist Raúl Cantú is fascinated by the imaginative possibilities of photography and technology. In his most recent work, he uses the artificial intelligence capabilities of machines as a creative tool. His images have an aesthetic somewhere between photography and the plastic arts, hovering in the liminal space between the objectivity of a camera and the subjectivity of perception. He is a remarkably inventive artist and I have learned a lot following the evolution of his work across a range of technologies and conceptual frameworks. It has led me to wonder if, through this form of image synthesis, we may well come closer to representing the nature of human visual perception than will ever be possible with a camera.
Alasdair Foster

Interview
You have made many explorations at the outer limits of photography: infra-red, specialist lenses, multiple exposures… What drew you to this experimental approach?
I use technology as a means through which to conceptualise work; work that explores optical unrealities, parallel worlds, and alternative scenarios. It is a contemplative process combining the eternal and the fleeting in a search for a magical expression of reality. The entwining light and shadow are nothing more than a manifestation of the vertigo of existence.
You also make work that is more purely digital involving fractals and various generative programs. What attracted you to these ways of making images?
In 2008, I was diagnosed with tinnitus. The endless noise caused insomnia and, to cope with this condition, I began to create fractals. I then used software to modify the results and combine several fractals into a single image. I was not seeking purity, but a form of interpretative alteration. Tinnitus had changed my world and I needed to understand change from another perspective; a perspective I had some control over.


© Raúl Cantú untitled images from the series ‘Beings of Light’ 2018
You have made many bodies of work created using a wide variety of techniques. Here I would like to focus on your images exploring what I might call ‘alternative realities’. So, as a place to begin, could you tell me about the portrait series, ‘Beings of Light’.
Each being is modelled in the dark during a long exposure and involves different sources of light and movement, spanning the dimensions of space and time. They arise from the fertile womb of the night, and are revealed in a vertigo of energised luminescence. They are the spectres that haunt the dream of life. Coming into existence in storms of colour and febrile brushstrokes.
In your series ‘Deep Dreams’ you bring together photographic and digital skills in a novel way.
This was the first series I made using algorithms, exploring dream worlds. For many years I had dedicated myself to landscape photography. But in 2020, with the Covid lockdown, I was unable to venture outside. Faced with those physical constraints, I began to spend more time exploring Artificial Intelligence tools and Generative Adversarial Networks. I wanted to visualise places I had sometimes dreamed of. I felt like a child floating above the catastrophe of a worn-out world, which in one way or another was the world we were all living in at that time.

The relationship between algorithms, artificial intelligence and creativity can be difficult to understand. Can you briefly explain the principles behind your way of working with these concepts?
Algorithms have existed since Babylonian times at least. But with the advent of computers they took on much more prominence, and it is that union of machines and algorithms that is changing the world. Today, the only non-algorithmic tasks are those related to creativity and human emotion. In my work, I seek to develop a relationship between the creative process and technology, and to do so in a way that remains consistent with our human essence. Algorithms are ‘blind’ until we find ways for an artificial intelligence to interact with the human mind, eye, emotion, and creativity.
This approach changes the way of conceiving of artistic production. The creative process becomes interactive, with the algorithm generating proposals that are often different from what is anticipated. This opens up the fascinating possibility of discovering new visual worlds. I have spent several years experimenting with different types of software through an approach that has developed organically.
What are Generative Adversarial Networks and how do they generate images?
Generative Adversarial Networks (also known as GANs) were invented by Ian Goodfellow. As I understand it, the process involves setting two neural networks to work in competition, with each compensating for the limitations of the other. The first – or generative – network is responsible for synthesising new images based on what it has learned from a given visual data set or archive. Because artificial intelligence is not good at making things up, these first attempts can be faulty. At this point, the second – or discriminative – network comes into action. The role of this network is to be demanding; to assess whether the new image looks as if it could legitimately be part of the original data set, or to reject it if it deviates too far from the way the AI has learned items in the data should appear. Because artificial intelligence is much better at recognising images than generating them, this second network is more accurate. The two processes run in tandem until the generator is producing images that the discriminator cannot tell apart from those in the original data set.
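[Editor’s note: the adversarial loop described above can be sketched in miniature. The toy example below – an editorial illustration, not part of Cantú’s workflow – trains a one-dimensional GAN in plain NumPy: a linear generator learns to imitate samples from a Gaussian, while a logistic-regression discriminator learns to tell real samples from generated ones. Real image-generating GANs use deep networks, but the alternating generator/discriminator updates follow the same pattern.]

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from a Gaussian the generator should learn to imitate.
def real_samples(n):
    return rng.normal(loc=4.0, scale=0.5, size=(n, 1))

# Generator: a single linear layer mapping random noise to a sample.
g_w, g_b = rng.normal(size=(1, 1)), np.zeros(1)

# Discriminator: logistic regression scoring "looks real" (1) vs "fake" (0).
d_w, d_b = rng.normal(size=(1, 1)), np.zeros(1)

def generate(n):
    z = rng.normal(size=(n, 1))
    return z @ g_w + g_b

def discriminate(x):
    return 1.0 / (1.0 + np.exp(-(x @ d_w + d_b)))

lr, batch = 0.05, 32
for step in range(2000):
    # --- Discriminator update: push real samples toward 1, fakes toward 0 ---
    x_real, x_fake = real_samples(batch), generate(batch)
    p_real, p_fake = discriminate(x_real), discriminate(x_fake)
    grad_w = -(x_real.T @ (1 - p_real) - x_fake.T @ p_fake) / batch
    grad_b = -((1 - p_real).mean() - p_fake.mean())
    d_w -= lr * grad_w
    d_b -= lr * grad_b
    # --- Generator update: adjust so fakes fool the discriminator ---
    z = rng.normal(size=(batch, 1))
    p_fake = discriminate(z @ g_w + g_b)
    upstream = -(1 - p_fake) * d_w[0, 0] / batch  # chain rule through D
    g_w -= lr * (z.T @ upstream)
    g_b -= lr * upstream.sum()
```

The two updates alternate exactly as described in the interview: the discriminator sharpens its judgement, then the generator uses that judgement as a training signal.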


© Raúl Cantú untitled images from the series ‘Algoritmia’ 2022
If I understand correctly, the AI ‘learns’ about various artefacts in a scene by scanning many examples from an archive or library. How does it use that data to create the new scene?
These are not collages or reassemblies, but mutations that the algorithm synthesises from what it has learned by analysing a vast amount of visual data online. I begin the process by which the AI generates new imagery by inputting a text description as a kind of ‘prompt’.
I find this text-to-image technology particularly interesting. The text can be short or long, and may be fine-tuned to help guide the process. It could include a description of the subject matter, the environment, the atmosphere, the style, the lighting, maybe an art-historical reference. The words can also be weighted to indicate more or less emphasis on a particular modifier. I may also include one or more of my own images as part of the prompt input.
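[Editor’s note: as a hedged illustration of what such a weighted prompt might look like – the exact syntax varies between systems; the `::` weights and the `--ar` aspect-ratio parameter below follow Midjourney’s conventions, and the scene itself is invented for this example:]

```
deserted village at twilight::2 lone lit cabin::1.5
impressionist brushwork, muted palette --ar 3:2
```

Here the numbers after `::` give more emphasis to the village than the cabin, while the trailing parameter fixes the proportions of the generated frame.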
Where do you find the neural nets that you employ?
I access them through online systems, most of which are in a beta-testing phase. You usually have to pay to use the system because the hardware necessary to process such enormous quantities of data mined from millions of web pages is very expensive to acquire and maintain.


© Raúl Cantú untitled images from the series ‘Algoritmia’ 2022
I am interested by certain stylistic aspects of the work that suggest a particular aesthetic vision. How much are these qualities introduced by the way the algorithm processes data and how much are they the result of the way you write the text that prompts the algorithm?
It is one of the variables you can experiment with. The algorithms evolve as they learn. When I started making images with text-to-image technology, I noticed that one of the systems that I use most frequently, Midjourney, yields results that are more pictorial or impressionistic. I like that style. Other systems, such as DALL-E, have a more photorealist quality. More recently, experimenting with new versions of the system, I have noticed that the images are now composed with fewer imperfections. But I still like that more impressionistic touch that the same algorithm generated earlier in its evolution.
On the other hand, the prompt used to initiate the process has a lot to do with the degree of realism or impressionism obtained. A prompt is not only a description of what is sought, you can include artistic styles, visual references, weightings, and modifiers with various parameters, coupled with reference images, which can all influence what is then generated and how it looks.
Is it the AI that is ultimately the creator or the artist?
Awareness and emotion come from the artist. AI is a tool, like the camera for the photographer. There is a great deal of discussion around this, just as there was in the art world with the advent of photography.


© Raúl Cantú untitled images from the series ‘Algoritmia’ 2022
What is it you seek to express in the ‘Algoritmia’ series?
The images that have resulted from this way of working are about loneliness – a subject that, in one way or another, has been present in many of the different series I have made. My images of natural environments and those created virtually have all been made in solitude. That has been the reality of recent years: physical confinement, isolation, being alone. It’s all there in the different themes that make up ‘Algoritmia’: deserts, seascapes, isolated cabins, deserted villages, people who wander around disconnected from each other…
I go alone. I imagine myself there and I try to make sense of everything I find in those places. The images often depict dark, mysterious, twilight zones in which I nonetheless manage to find a refuge: a light, a sign of warmth and security in the midst of chaos and uncertainty. In that universe of infinite possibility, my imagination flies. I integrate and recompose the micro- and macrocosm of inside and outside. I abandon the fear of losing myself, light envelops me, and I surrender to the void.


© Raúl Cantú untitled images from the series ‘Algoritmia’ 2022
You have said: “I’m not necessarily orderly, I need chaos”. What did you mean by this?
An excess of order bores me. I am a technical artist who experiments with virtuality, which needs chaos to function. And many times there are unexpected outcomes that are simply magnificent. I like to bring those unforeseen results into my imaginary worlds. I can become very obsessive in my work, which requires time, concentration, experimentation and an iron discipline. But in the course of this I learn new things because ultimately there is still the chaos running beneath. It is an adventure, a journey to unexplored horizons.
What have you learned about yourself and the world through making these AI-assisted images?
I have discovered that I can now reach places that were previously inaccessible to me. I love that, after fifteen years of exhaustive experimentation, I can reap the diverse fruits of research first sown in more primitive virtual systems that are, today, so much more sophisticated.
What follows now – and I think this is a challenge for everyone – is to reconnect the body through the senses, to marvel at the extent of a real landscape, to feel the wind on my skin, to observe the vastness of our world and translate those experiences into imaginary worlds, knowing that reality will continue to be an infinite source of inspiration.


Biographical Notes
Raúl Cantú was born in Mexico City in 1964. He studied computer systems at Instituto Tecnológico y de Estudios Superiores de Monterrey (ITESM). With a background in digital design, he had, from an early age, an interest in the potential of computers for image creation. In 2011, he began taking photographs, exploring many specialist techniques in the pursuit of personal creative expression. His work has featured in over one hundred group and solo exhibitions across Mexico and also in Canada, France, Italy, Spain, and the USA. His images have been widely published in books, international journals, and catalogues, including five volumes of ‘Lo Mejor de la Fotografía de Naturaleza en México’ and three volumes of ‘Riqueza Natural de México’. His work is held in a wide range of private collections. He lives and works in Saltillo, Coahuila, Mexico.
This interview is a Talking Pictures original.