Constantly-Evolving Nature Scene Display
In this project we'll create a cool trinket you can leave on your desk that displays a constantly-changing nature scene created by a neural network. This device has no practical value other than entertainment, but that's the foundation for a great project!
The basis of this system is a generative adversarial network (GAN) that I trained on thousands of natural images, including deserts, mountains, sunsets, etc. GANs are a type of neural network designed to output realistic-looking images similar to the images on which they were trained. By slightly varying the input of a GAN over time you can coax it to produce an evolving natural landscape. Training GANs is out of the scope of this build; we'll just use a finished product that I've created to make something cool that you can leave on your desk.
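To build intuition for why a slowly drifting input produces a slowly evolving scene: the generator is a continuous function of its latent input, so a tiny nudge to the seed yields an image only slightly different from the last one. Here's a minimal sketch of that idea using a stand-in linear "generator" (pure NumPy, no actual GAN, all names and sizes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in "generator": a fixed linear map from a latent vector to a
# 64x64 "image". The real GAN generator is far more complex, but it is
# also a continuous function of its input, which is what makes the drift work.
weights = rng.standard_normal((64 * 64, 100))

def fake_generator(seed):
    return (weights @ seed).reshape(64, 64)

seed = rng.standard_normal(100)
frames = [fake_generator(seed)]
for _ in range(2):
    seed = seed + rng.standard_normal(100) * 0.02  # small random nudge
    frames.append(fake_generator(seed))

# Consecutive frames differ only slightly, so the output appears to evolve
```

The real project does the same thing, just with the trained generator network in place of the linear map.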
This "evolving" landscape is best seen, not described. Take a look at the GIF below for an example of what you'll see.
To display the images produced by the GAN we'll use a large LCD display from DFRobot. This screen integrates directly with a Raspberry Pi 4, which makes it super easy to get this project up and running. The Raspberry Pi 4 is a very capable board that can generate images with this GAN at a relatively high frame rate. With four cores and lots of RAM, the TensorFlow implementation of a GAN that I created is surprisingly speedy.
Step 1: Pi Setup
First we'll get your Pi set up with the software you need to run this project. I recommend installing a fresh copy of 32-bit Raspberry Pi OS Lite (no desktop environment) so we start from the same system state, but if your Pi already has other things installed, it will probably still work.
Start by installing TensorFlow in a Python 3 virtual environment. Next, install OpenCV via pip with pip install opencv-python. Start a Python shell by typing python in a terminal window, and at the prompt that appears try typing import cv2. Depending on your operating system installation, you may get error messages like library x not found. If this happens, you can install the missing library with sudo apt-get install x-dev (where x is the name of the missing library).
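If you'd rather check all the dependencies in one go instead of importing them one at a time, a small helper like this works (this is just a convenience script, not part of the project):

```python
import importlib.util

def missing_modules(names):
    """Return the subset of module names that cannot be imported."""
    return [name for name in names if importlib.util.find_spec(name) is None]

# An empty list printed here means everything this project needs is installed
print(missing_modules(["cv2", "tensorflow", "numpy"]))
```

Anything that shows up in the printed list still needs to be installed before moving on.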
Next you'll need a window manager to display the images generated by the GAN. Follow the instructions here up through "Minimum Environment for GUI Applications". Everything should now be set up!
Step 2: The Code
You can get all the code needed for this project in this git repository. Clone the repository to the home folder on your Raspberry Pi to get started. I stored the GAN model in Git Large File Storage (LFS), since the model exceeds GitHub's maximum file size. Install Git LFS on your Pi with these instructions, and run git lfs install. You can now navigate inside the project directory, and run git lfs pull to bring in the model.
The main file that runs everything is display1.py. The code is fairly straightforward - it initializes the GAN generator model, creates a random seed as a starting point, then repeatedly nudges the seed by a small increment and generates a new image from it, as fast as the CPU can compute them. You can modify how much the scene changes between frames by increasing or decreasing the coefficient in the line seed = seed+changes*0.02.
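For reference, the overall shape of that loop looks something like the sketch below. This is not the actual contents of display1.py - the latent size, model loading, and output scaling are assumptions - but it shows the structure: make a frame, show it, nudge the seed, repeat.

```python
import numpy as np

LATENT_DIM = 100   # assumed latent vector size; match the real model
STEP = 0.02        # the coefficient from seed = seed + changes*0.02

def next_seed(seed, rng, step=STEP):
    """Nudge the latent vector by a small random increment."""
    changes = rng.standard_normal(seed.shape)
    return seed + changes * step

def run(generator, num_frames=None):
    """Generate images from a drifting seed and display each one."""
    import cv2  # deferred so the seed logic is usable without OpenCV
    rng = np.random.default_rng()
    seed = rng.standard_normal((1, LATENT_DIM))
    shown = 0
    while num_frames is None or shown < num_frames:
        # generator maps a (1, LATENT_DIM) batch to a batch of images
        image = generator(seed)[0]
        # GAN outputs are often in [-1, 1]; rescale to displayable 8-bit
        frame = ((image + 1.0) * 127.5).clip(0, 255).astype("uint8")
        cv2.imshow("scene", frame)
        if cv2.waitKey(1) == 27:  # Esc quits
            break
        seed = next_seed(seed, rng)
        shown += 1
```

With a Keras model you could pass something like tf.keras.models.load_model("path/to/model").predict as the generator (the model path here is a placeholder).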
Step 3: Running the Code
Screw your Pi into the display, and connect the display to the Pi with the short ribbon cable as shown in the image above. When you power on your Pi, the display should show the Pi's boot up process and eventually automatically log in to a new terminal session.
Start by connecting to the Pi via SSH from another machine.
If you're not yet familiar with the Unix tool "screen", you should be! Screen lets you keep processes running interactively in the background, which happens to be perfect for running the X server needed to display the generated images. Install screen with sudo apt install screen, then type screen at the command line. A new terminal should appear; any process you leave running here will keep running after you disconnect. Type startx, and the X server should initialize.
Disconnect from this screen session by pressing Control-A then D. You should be returned to the shell prompt you started from. We're now ready to run the Python script! Still from the SSH prompt, type DISPLAY=:0 python display1.py. The DISPLAY=:0 part tells the program to draw the OpenCV window on the display attached to the Pi, not over SSH.
If everything went well, you should now see a natural-looking image on your Pi that changes every so often. The model of Raspberry Pi you use will affect how quickly new images are generated. Older Pis have one or two cores, so updates will likely take more than a second. On my Raspberry Pi 4, the script generates a new image roughly every half second.
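If you're curious how your own Pi compares, you could time the generation loop with a tiny helper like this (not part of the project, just a quick benchmark):

```python
import time

def seconds_per_frame(make_frame, trials=10):
    """Average wall-clock time per call of make_frame()."""
    start = time.perf_counter()
    for _ in range(trials):
        make_frame()
    return (time.perf_counter() - start) / trials
```

Wrapping the generator call, e.g. seconds_per_frame(lambda: generator(seed)), gives you the average seconds per frame; its reciprocal is your frame rate.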
Step 4: Going Forward
As you can probably imagine, you can train a GAN to produce practically anything! Instead of showing images of nature, you could make a constantly morphing face, image of outer space, or even fruit. I'll have another project coming soon that describes how I collected training data (i.e. images) and how I actually trained the GAN itself. In the meantime, you can learn more about GANs through resources like TensorFlow's tutorials.