Tuesday 10 January 2017

Hack This: Make a Photo Filter with Machine Learning

For a while, people really loved complaining about Instagram filters. They make photos look different, but always in the same way. They encourage lazy photography. A cheap stylistic substitute for thoughtful composition. Etc. It was another whole big hand-wringing session about authenticity. Do people still tag things with "#nofilter"?

But to call image analogizing "filtering" is to sell the associated framework short. In the words of its New York University-based creators, it's "processing images by example." "Rather than attempting to program individual filters by hand, we attempt to automatically learn filters from training data," the project's website explains.

The idea: Give the Image Analogies framework three images and it will teach itself how the first image was transformed into the second, and then apply that same transformation to the third image as a filter. The results are unpredictable and frequently very cool. Fortunately, you don't have to be a machine learning whiz to do it.

That said, there is some set-up, which is the hardest part of making your own image analogy. I'll walk you through it below. We'll be using a Python implementation of the original NYU method written by programmer Adam Wentz, whose other projects include Huge Wall of Porn, gif hell, Oldstagramme, and more fun stuff.

0.0) Resources

You don't need a GPU and a ton of memory to use Image Analogies, but it helps. Most of my experiments were done with Amazon EC2 instances, which come with most of the needed math/machine learning software preinstalled, and GPUs to run it on. (GPUs and their parallel processing abilities are key to the sorts of computations involved in machine learning.) Running on a remote machine also has the advantage of not completely inundating your own computer's processors, which can happen fast.

All that said, I'm not going to explain the whole process of getting started with EC2 instances and interfacing with a remote shell via the command line. Maybe in another Hack This edition. Let's just assume for the sake of this tutorial that everything is being computed locally on the machine in front of you. That will work.

0.1) Software

Image Analogies will work with either of the two big machine learning libraries, TensorFlow and Theano. If you have a GPU to work with, you'll probably want the latter; if not, you'll be using TensorFlow. A third machine learning library, Keras, is then needed to run on top of whichever one you choose. Yeah, I know: this is already getting to be pretty messy. But hold tight for a sec.

Going forward, I'm going to assume that we're working with TensorFlow. There's actually quite a bit more to getting going with GPU support, mostly having to do with the installation of CUDA, the software platform (yes, another one) that lets you use your Nvidia GPU for these kinds of computations in the first place.
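A quick sanity check here: Keras announces which backend it's using the moment you import it, and you can override the choice per run with an environment variable. A minimal sketch, assuming a standard Keras setup (the persistent setting otherwise lives in ~/.keras/keras.json):

    # force the TensorFlow backend for this one invocation
    KERAS_BACKEND=tensorflow python -c "import keras"
    # if all is well, this prints a line like "Using TensorFlow backend."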

As for getting going with TensorFlow, you can follow the official guide here. It's not too bad and can be accomplished using pip like any other Python package.
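For reference, the CPU-only install typically comes down to a single line, assuming pip points at the Python you plan to use:

    pip install tensorflow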

The Image Analogies framework itself can be installed using pip by the simple command: pip install neural-image-analogies. This installation should take care of all of the framework's dependencies, including Keras and TensorFlow. You might want to do this in a Python virtual environment, which will keep the Image Analogies installation from possibly breaking a dependency chain elsewhere on your system.
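A minimal sketch of that setup, with a made-up environment name:

    # create and activate an isolated environment
    virtualenv analogies-env
    source analogies-env/bin/activate

    # install Image Analogies and its dependencies inside it
    pip install neural-image-analogies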

1.0) Weights

With everything installed and theoretically working correctly, we can get to some actual machine learning. First, we need to download VGG16, a 16-layer convolutional neural network that's about the state of the art in image recognition. This is what Image Analogies will use to make sense of our two input images. You can download a reduced form of VGG16 here. Note that the weights file will need to be in the same directory from which you're running the Image Analogies script.
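In practice, that just means parking the downloaded weights wherever you'll be working. A sketch with hypothetical paths (the file is commonly named vgg16_weights.h5, but check what your download is actually called):

    # move the weights into the directory you'll run the script from
    mv ~/Downloads/vgg16_weights.h5 ~/analogies/
    cd ~/analogies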

2.0) Experiment

We're basically ready to go. All that's left is picking some images. Remember, we're taking two images, comparing them, and then applying the results of that comparison to the third image. You can be pretty clever with this.

[Example analogies from Wentz's Image Analogies GitHub page.]

Mostly, I've just been throwing images together and seeing what happens. You can wind up with some cool patterns, at least. There are also a million different options and parameters you can run this script with (see the Image Analogies Github), which let you do everything from isolating specific layers of the machine learning model to tweaking detail levels and image scales. You could burn through an afternoon pretty easily just taking the "throw stuff together" approach (as below).

Once Image Analogies is installed per the above instructions, it's launched with the following command: python make_image_analogy.py first-image second-image third-image filename-prefix-for-output. The filename prefix is what the script is going to stick onto the beginning of every filename that it saves to your computer. These images should be in whatever directory you're running the script from. If you wanted to save the output to a different directory, you'd just prepend that to the output filename prefix, like: /otherdirectory/outputprefix. By default, you'll get sample intermediate images saved to your computer as the algorithm chugs along. If you wind up getting sucky output images, it's easy enough just to cancel the script before it finishes.
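To make that concrete, here's what a run might look like with made-up filenames, where the first two images define the transformation and the third is the one it gets applied to:

    # brick.jpg -> brick-painted.jpg defines the "filter";
    # portrait.jpg is the target; output filenames will start with portrait-brick
    python make_image_analogy.py brick.jpg brick-painted.jpg portrait.jpg portrait-brick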

Running this on my fairly standard-issue MacBook, each run takes about a half-hour. Running on a GPU-optimized Amazon instance knocks that down to five or 10 minutes. If you're interested in taking Image Analogies to the cloud, I'd suggest looking at the Go Deeper Amazon system image, which comes with all of the Image Analogies dependencies prebuilt and ready to go. It also has some pretty user-friendly documentation to get you started if, say, you have no idea what I mean by "Amazon system image" or EC2 or even GPU computing. Go Deeper even offers a remote desktop, which is pretty handy if you're not used to interacting with a remote computer via ssh.

We're obviously just dipping a toe into something much, much bigger here. But, like Google's Deep Dream, it winds up being a good entry point into the bigger thing that is visual processing and machine learning, generally. And it will be a good stepping-off point for Hack This to go deeper too, at least sometimes.

Read more Hack This.


