Earlier this week the country went back into a second lockdown. I took a train back up to Leicester and went to get a haircut before my local barber closed.

I took photos before and after, and uploaded them to Artbreeder, a tool built on Generative Adversarial Networks that creates new images from the ones you give it. If you upload one picture with long hair and a beard, and one clean-shaven with short hair, it will come up with all the frames in between. I was inspired by Matt Round’s various experiments along these lines and wanted to try something out for myself.
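
As I understand it, the in-between frames come from interpolating in the model’s latent space: each photo is projected to a latent vector, and the intermediate faces are decoded from points along the line between the two. Here’s a minimal sketch of the idea; the generator is a dummy stand-in, since Artbreeder doesn’t expose its internals, and projecting a real photo into latent space is a problem of its own.

```python
import numpy as np

# Dummy stand-in for the generator network: a real StyleGAN-style model
# decodes a ~512-dim latent vector into a photorealistic face. This one
# just projects the latent into a small greyscale "image" so the sketch
# actually runs.
def generate_image(latent):
    img = np.outer(latent, latent)[:64, :64]
    return (img - img.min()) / (np.ptp(img) + 1e-8)

# Latent codes standing in for the "before" and "after" photos.
z_before = np.random.randn(512)
z_after = np.random.randn(512)

# Linear interpolation in latent space: every intermediate vector still
# decodes to a plausible face, somewhere between the two inputs.
frames = [generate_image((1 - t) * z_before + t * z_after)
          for t in np.linspace(0.0, 1.0, num=30)]

print(len(frames), frames[0].shape)  # 30 frames, each 64x64
```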

I recognise myself in these images, but the uncanny valley effect is strong. Everything here is slightly unnerving, and I made it worse by muxing in some eerie atmospheric sound. There are technical methods of detecting whether images like this are real, but the main giveaway is that the eyes are fixed dead centre.
Can a computer be trained to wink? 😉
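
Out of curiosity, here’s a rough sketch of checking that giveaway, using the Haar cascade eye detector bundled with OpenCV as a crude measure; the frame filenames are hypothetical.

```python
import cv2

# Crude eye detector: this Haar cascade ships with opencv-python.
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def eye_offsets(path):
    """Horizontal offset of each detected eye from the image centre,
    as a fraction of image width."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    h, w = img.shape
    eyes = eye_cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)
    return [round(((x + ew / 2) - w / 2) / w, 3) for (x, _, ew, _) in eyes]

# Hypothetical filenames for the generated frames: in GAN output of this
# vintage, the offsets barely move from frame to frame.
for frame in ["frame_000.png", "frame_015.png", "frame_029.png"]:
    print(frame, eye_offsets(frame))
```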

I had another go, taking off and putting on various layers of clothing.

The result is a blurry abstract mess. These models are trained to generate faces and landscapes, so it’s understandable that they don’t know what to make of a t-shirt or a jacket.

I’ve been trying to think of landscapes to feed through this model. If I can catch some ‘unnatural views’ of Leicester, that might be a follow-up project. It might even count as creative geography?

Neural networks

Interpolating motion between key frames is a well-established process in animation, and a combination of cheap neural networks and high frame rate cameras is pushing at the boundaries of what is possible. We are starting to learn that people have very subjective standards for what counts as a natural frame rate. Some scenes in Akira were lauded as cinematic marvels because they were animated ‘on ones’, with a new drawing for every one of the full 24 frames per second rather than the usual one drawing held for two or three frames. The motion in those scenes is already smooth enough, but if you use AI to smooth it even further, it ends up looking more like a low-budget US children’s cartoon.
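
For a feel of what the naive version of in-betweening looks like, here’s a sketch that simply cross-fades two key frames with OpenCV; the filenames are hypothetical. Real interpolators estimate motion, with optical flow or a learned equivalent, rather than blending pixels, which is why their output looks smooth instead of ghosted.

```python
import cv2

# Hypothetical key frames extracted from a 24fps sequence.
key_a = cv2.imread("keyframe_a.png")
key_b = cv2.imread("keyframe_b.png")

def inbetween(a, b, n):
    """Yield n naively blended frames between key frames a and b."""
    for i in range(1, n + 1):
        t = i / (n + 1)
        # Cross-fade: a real interpolator would warp pixels along
        # estimated motion vectors instead of averaging them.
        yield cv2.addWeighted(a, 1 - t, b, t, 0)

# Four in-betweens is roughly what lifting 12fps footage to 60fps needs.
for idx, frame in enumerate(inbetween(key_a, key_b, 4)):
    cv2.imwrite(f"tween_{idx}.png", frame)
```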

To me, stop-motion LEGO animation at 60fps looks amazing, but the second LEGO Movie deliberately used a low frame rate despite being mostly computer animated. Peter Jackson filmed the Hobbit series at 48fps, a technical feat which turned out to be surprisingly controversial, as the high frame rate was considered less cinematic.

In a similar development, the latest generation of graphics cards now has real-time AI upscaling built in. Nvidia calls this Deep Learning Super Sampling, while AMD’s equivalent is FidelityFX Super Resolution. These technologies render each frame at a low resolution and then use AI to fill in the remaining pixels. It’s marketed as a cool thing for games, although the real target is video streaming. Imagine holding a video call on a 4k television and actually getting to see the other participants in full 4k resolution; that’s the kind of superficial improvement that businesses would probably pay huge amounts of money for.
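
Conceptually the pipeline is: render small, reconstruct big. Here’s a toy sketch of that idea, with bicubic resizing standing in for the learned upscaler (DLSS proper also feeds on motion vectors and previous frames); the filenames are hypothetical.

```python
import cv2

# Hypothetical low-resolution render straight out of the game engine.
low_res = cv2.imread("rendered_540p.png")

# Bicubic resize as a stand-in for the neural network that invents the
# missing pixels: same shape of operation, far less sophistication.
upscaled = cv2.resize(low_res, (1920, 1080), interpolation=cv2.INTER_CUBIC)
cv2.imwrite("displayed_1080p.png", upscaled)
```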

Finally, AI already has an unsettling character, which raises questions about the morality of representing reality through the eyes of computers. It’s a technical achievement that you can watch a film from 1896 in 4k at 60fps, but it could also be considered a form of historical revisionism. And if recolouring old films is ethically contentious, what does that say about the current tendency to filter our whole visual culture through automated enhancements?