{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# CNTK 302 Part B: Image super-resolution using CNNs and GANs" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Contributed by** [Borna Vukorepa](https://www.linkedin.com/in/borna-vukorepa-32a35283/) October 30, 2017\n", "\n", "## Introduction\n", "

This notebook downloads the image data and trains models that can create a higher resolution image from a lower resolution one. Users should complete tutorial CNTK 302A first to familiarize themselves with the super-resolution problem and the methods that address it. The goal of the single image super-resolution (SISR) problem is to upscale a given image by some factor while keeping as much image detail as possible and not making the image blurry.

\n", "

In order to train our models, we need to prepare the data. Different models might need different training sets.

\n", "

We will be using the Berkeley Segmentation Dataset (BSDS). It contains 300 images which we will download and then prepare for training the models we will use. Image dimensions are 481 x 321 and 321 x 481.

" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "# Import the relevant modules to be used later\n", "import urllib\n", "import re\n", "import os\n", "import numpy as np\n", "from PIL import Image\n", "import sys\n", "import cntk as C\n", "\n", "try:\n", " from urllib.request import urlretrieve, urlopen\n", "except ImportError: \n", " from urllib import urlretrieve, urlopen" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Select the notebook runtime environment devices / settings\n", "Set the device to cpu / gpu for the test environment. If you have both CPU and GPU on your machine, you can optionally switch the devices. By default, we choose the best available device." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "# Select the right target device when this notebook is being tested:\n", "if 'TEST_DEVICE' in os.environ:\n", " if os.environ['TEST_DEVICE'] == 'cpu':\n", " C.device.try_set_default_device(C.device.cpu())\n", " else:\n", " C.device.try_set_default_device(C.device.gpu(0))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

There are two run modes:\n", "

\n", "\n", "

Note: if the isFast flag is set to False, the notebook will take several days on a GPU enabled machine. You can try fewer iterations by setting NUM_MINIBATCHES to a smaller number, which comes at the expense of the quality of the generated images. You can also try reducing MINIBATCH_SIZE.

\n" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "isFast = True" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Data download\n", "The data is not present in a .zip, .gz or in a similar packet. It is located in a folder on this link. Therefore, we will use regular expression to find all image names and download them one by one into destination folder. However, if the data is already downloaded or run in test mode, the notebook uses cached data." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "# Determine the data path for testing\n", "# Check for an environment variable defined in CNTK's test infrastructure\n", "envvar = 'CNTK_EXTERNAL_TESTDATA_SOURCE_DIRECTORY'\n", "def is_test(): return envvar in os.environ\n", "\n", "if is_test():\n", " test_data_path_base = os.path.join(os.environ[envvar], \"Tutorials\", \"data\")\n", " test_data_dir = os.path.join(test_data_path_base, \"BerkeleySegmentationDataset\")\n", " test_data_dir = os.path.normpath(test_data_dir)\n", "\n", "# Default directory in a local folder where the tutorial is run\n", "data_dir = os.path.join(\"data\", \"BerkeleySegmentationDataset\")\n", "\n", "if not os.path.exists(data_dir):\n", " os.makedirs(data_dir)\n", " \n", "#folder with images to be evaluated\n", "example_folder = os.path.join(data_dir, \"example_images\")\n", "if not os.path.exists(example_folder):\n", " os.makedirs(example_folder)\n", "\n", "#folders with resulting images\n", "results_folder = os.path.join(data_dir, \"example_results\")\n", "if not os.path.exists(results_folder):\n", " os.makedirs(results_folder)" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "def download_data(images_dir, link):\n", " #Open the url\n", " images_html = urlopen(link).read().decode('utf-8')\n", " \n", " #looking for .jpg images whose names are numbers\n", " image_regex = \"[0-9]+.jpg\"\n", " \n", " #remove duplicates\n", " image_list = set(re.findall(image_regex, images_html))\n", " print(\"Starting download...\")\n", " \n", " num = 0\n", " \n", " for image in image_list:\n", " num = num + 1\n", " filename = os.path.join(images_dir, image)\n", " \n", " if num % 25 == 0:\n", " print(\"Downloading image %d of %d...\" % (num, len(image_list)))\n", " if not os.path.isfile(filename):\n", " urlretrieve(link + image, filename)\n", " else:\n", " print(\"File already exists\", filename)\n", " \n", " print(\"Images available at: \", images_dir)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

Now we only need to call this function with appropriate parameters. This might take a few minutes.

" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Starting download...\n", "Downloading image 25 of 300...\n", "Downloading image 50 of 300...\n", "Downloading image 75 of 300...\n", "Downloading image 100 of 300...\n", "Downloading image 125 of 300...\n", "Downloading image 150 of 300...\n", "Downloading image 175 of 300...\n", "Downloading image 200 of 300...\n", "Downloading image 225 of 300...\n", "Downloading image 250 of 300...\n", "Downloading image 275 of 300...\n", "Downloading image 300 of 300...\n", "Images available at: data/BerkeleySegmentationDataset/Images\n", "Model directory data/BerkeleySegmentationDataset/PretrainedModels\n", "Image directory data/BerkeleySegmentationDataset/Images\n" ] } ], "source": [ "#folder for raw images, before preprocess\n", "images_dir = os.path.join(data_dir, \"Images\")\n", "if not os.path.exists(images_dir):\n", " os.makedirs(images_dir)\n", " \n", "#Get the path for pre-trained models and example images\n", "if is_test():\n", " print(\"Using cached test data\")\n", " models_dir = os.path.join(test_data_dir, \"PretrainedModels\")\n", " images_dir = os.path.join(test_data_dir, \"Images\")\n", "else:\n", " models_dir = os.path.join(data_dir, \"PretrainedModels\")\n", " if not os.path.exists(models_dir):\n", " os.makedirs(models_dir)\n", " \n", " images_dir = os.path.join(data_dir, \"Images\")\n", " if not os.path.exists(images_dir):\n", " os.makedirs(images_dir)\n", " \n", " #link to BSDS dataset\n", " link = \"https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/segbench/BSDS300/html/images/plain/normal/color/\"\n", "\n", " download_data(images_dir, link)\n", " \n", "print(\"Model directory\", models_dir)\n", "print(\"Image directory\", images_dir)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Data preparation\n", "

This dataset contains only 300 images, which is not enough for super-resolution training. The idea is to split the images into 64 x 64 patches, which will augment the training data. The function prep_64 does exactly that.

\n", "

Once we select a 64 x 64 patch, we downscale it by a factor of 2 and then upscale it back by a factor of 2 using bicubic interpolation. This gives us a blurry version of the original patch. In some approaches in this tutorial, the idea will be to learn a model which turns blurry patches into clear ones. We will put blurry patches into one folder and original patches into another. They will serve as minibatch sources for training. We will also sample a few test images here.

\n", "

After processing each patch, we move 42 pixels down/right (so consecutive patches overlap) for as long as we can. This preprocessing can take a few minutes.

" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "#extract 64 x 64 patches from BSDS dataset\n", "def prep_64(images_dir, patch_h, patch_w, train64_lr, train64_hr, tests):\n", " if not os.path.exists(train64_lr):\n", " os.makedirs(train64_lr)\n", " \n", " if not os.path.exists(train64_hr):\n", " os.makedirs(train64_hr)\n", " \n", " if not os.path.exists(tests):\n", " os.makedirs(tests)\n", " \n", " k = 0\n", " num = 0\n", " \n", " print(\"Creating 64 x 64 training patches and tests from:\", images_dir)\n", " \n", " for entry in os.listdir(images_dir):\n", " filename = os.path.join(images_dir, entry)\n", " img = Image.open(filename)\n", " rect = np.array(img)\n", " \n", " num = num + 1\n", " \n", " if num % 25 == 0:\n", " print(\"Processing image %d of %d...\" % (num, len(os.listdir(images_dir))))\n", " \n", " if num % 50 == 0:\n", " img.save(os.path.join(tests, str(num) + \".png\"))\n", " continue\n", " \n", " x = 0\n", " y = 0\n", " \n", " while(y + patch_h <= img.width):\n", " x = 0\n", " while(x + patch_w <= img.height):\n", " patch = rect[x : x + patch_h, y : y + patch_w]\n", " img_hr = Image.fromarray(patch, 'RGB')\n", " \n", " img_lr = img_hr.resize((patch_w // 2, patch_h // 2), Image.ANTIALIAS)\n", " img_lr = img_lr.resize((patch_w, patch_h), Image.BICUBIC)\n", " \n", " out_hr = os.path.join(train64_hr, str(k) + \".png\")\n", " out_lr = os.path.join(train64_lr, str(k) + \".png\")\n", " \n", " k = k + 1\n", " \n", " img_hr.save(out_hr)\n", " img_lr.save(out_lr)\n", " \n", " x = x + 42\n", " y = y + 42\n", " print(\"Done!\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

We will download a pretrained CNTK VGG19 model and use it later in the training of the SRGAN model. That model operates on 224 x 224 images, so we also need to prepare training data for models which turn 112 x 112 images into 224 x 224 images.\n", "

We use similar reasoning for augmenting the training data here with prep_224. We select 224 x 224 patches and downscale them to 112 x 112 patches. The only difference is that now we also rotate the patches: with the patch size increased to 224 x 224, fewer patches fit into each image, so we need to augment the training set more. Like before, 224 x 224 patches go into one folder and 112 x 112 patches go into another.

" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "#extract 224 x 224 and 112 x 112 patches from BSDS dataset\n", "def prep_224(images_dir, patch_h, patch_w, train112, train224):\n", " if not os.path.exists(train112):\n", " os.makedirs(train112)\n", " \n", " if not os.path.exists(train224):\n", " os.makedirs(train224)\n", "\n", " k = 0\n", " num = 0\n", " \n", " print(\"Creating 224 x 224 and 112 x 112 training patches from:\", images_dir)\n", " \n", " for entry in os.listdir(images_dir):\n", " filename = os.path.join(images_dir, entry)\n", " img = Image.open(filename)\n", " rect = np.array(img)\n", " \n", " num = num + 1\n", " if num % 25 == 0:\n", " print(\"Processing image %d of %d...\" % (num, len(os.listdir(images_dir))))\n", " \n", " x = 0\n", " y = 0\n", " \n", " while(y + patch_h <= img.width):\n", " x = 0\n", " while(x + patch_w <= img.height):\n", " patch = rect[x : x + patch_h, y : y + patch_w]\n", " img_hr = Image.fromarray(patch, 'RGB')\n", " \n", " img_lr = img_hr.resize((patch_w // 2, patch_h // 2), Image.ANTIALIAS)\n", " \n", " for i in range(4):\n", " out_hr = os.path.join(train224, str(k) + \".png\")\n", " out_lr = os.path.join(train112, str(k) + \".png\")\n", " \n", " k = k + 1\n", " \n", " img_hr.save(out_hr)\n", " img_lr.save(out_lr)\n", " \n", " img_hr = img_hr.transpose(Image.ROTATE_90)\n", " img_lr = img_lr.transpose(Image.ROTATE_90)\n", " \n", " x = x + 64\n", " y = y + 64\n", " print(\"Done!\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

We take the originally downloaded images and run the prep functions with appropriate parameters, generating patches from roughly 300 Berkeley Segmentation Dataset images (the exact number depends on how many are set aside for the test set).

\n", "

In the folder train64_LR we will have around 20000 blurry 64 x 64 image patches that will be used for training. Their original counterparts will be located in the folder train64_HR.

\n", "

In the folder train224 we will have around 12000 original 224 x 224 patches. Their downscaled 112 x 112 counterparts will be located in train112. Images that can later be used for testing will be in tests.
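As a rough sanity check on those counts, the short sketch below estimates the number of patches per 481 x 321 image from the patch sizes and strides used in the prep functions above (prep_64 skips every 50th image as a test image, while prep_224 uses all images and saves four rotations of each patch).

```python
# Rough estimate of the number of training patches produced by the prep functions above.
def patches_per_image(img_w, img_h, patch, stride):
    cols = (img_w - patch) // stride + 1
    rows = (img_h - patch) // stride + 1
    return cols * rows

p64  = patches_per_image(481, 321, 64, 42)    # 10 * 7 = 70 patches per image
p224 = patches_per_image(481, 321, 224, 64)   # 5 * 2 = 10 patches per image

print(294 * p64)        # prep_64 skips 6 test images -> about 20500 pairs
print(300 * p224 * 4)   # prep_224 uses all images, 4 rotations -> 12000 pairs
```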

" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Creating 64 x 64 training patches and tests from: data/BerkeleySegmentationDataset/Images\n", "Processing image 25 of 300...\n", "Processing image 50 of 300...\n", "Processing image 75 of 300...\n", "Processing image 100 of 300...\n", "Processing image 125 of 300...\n", "Processing image 150 of 300...\n", "Processing image 175 of 300...\n", "Processing image 200 of 300...\n", "Processing image 225 of 300...\n", "Processing image 250 of 300...\n", "Processing image 275 of 300...\n", "Processing image 300 of 300...\n", "Done!\n", "Creating 224 x 224 and 112 x 112 training patches from: data/BerkeleySegmentationDataset/Images\n", "Processing image 25 of 300...\n", "Processing image 50 of 300...\n", "Processing image 75 of 300...\n", "Processing image 100 of 300...\n", "Processing image 125 of 300...\n", "Processing image 150 of 300...\n", "Processing image 175 of 300...\n", "Processing image 200 of 300...\n", "Processing image 225 of 300...\n", "Processing image 250 of 300...\n", "Processing image 275 of 300...\n", "Processing image 300 of 300...\n", "Done!\n" ] } ], "source": [ "#blurry 64x64 destination\n", "train64_lr = os.path.join(data_dir, \"train64_LR\")\n", "\n", "#original 64x64 destination\n", "train64_hr = os.path.join(data_dir, \"train64_HR\")\n", "\n", "#112x112 patches destination\n", "train112 = os.path.join(data_dir, \"train112\")\n", "\n", "#224x224 pathes destination\n", "train224 = os.path.join(data_dir, \"train224\")\n", "\n", "#tests destination\n", "tests = os.path.join(data_dir, \"tests\")\n", "\n", "#prep\n", "prep_64(images_dir, 64, 64, train64_lr, train64_hr, tests)\n", "prep_224(images_dir, 224, 224, train112, train224)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Data illustration\n", "

The figure below shows what the data looks like. The big image is one of the 300 Berkeley Segmentation Dataset images.

\n", "

The larger patches (highlighted in red) are an example of one pair of 224 x 224 and 112 x 112 patches. They can be found in the folders train224 and train112 respectively.

\n", "

The smaller patches (highlighted in blue) are an example of one pair of clear and blurry 64 x 64 patches. They can be found in the folders train64_HR and train64_LR respectively.

" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from IPython.display import Image\n", "Image(url = \"https://superresolution.blob.core.windows.net/superresolutionresources/train_data.png\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

Now we are ready to start the training.

" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Used models\n", "

In the remaining part of this tutorial, we show how to use Cognitive Toolkit (CNTK) to create several deep convolutional networks for solving the SISR problem.

\n", "

We will try out several different models and see what results we can get. The models we will train are:

\n", "\n", "

Each model is built around a key idea, and they will be discussed in more detail throughout this notebook. We will start with the VDSR model.

" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## VDSR super-resolution model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

The key idea behind the VDSR model is residual learning. Instead of predicting the high resolution image directly from the low resolution image, we first upscale the starting image using some cheap method, like bicubic interpolation. Then, we predict the so-called \"residual image\", that is, the difference between the high resolution image and the image we obtained with bicubic interpolation in the first step. Therefore, all the intermediate values in the model and in the predicted image will be small, which is more stable. The final result is obtained by adding the predicted residual to the cheaply upscaled image. You can see the related paper here.
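In symbols (a compact restatement of the idea above, with $b$ denoting bicubic upscaling and $f$ the trained network), the prediction is the sum of the cheap upscaling and the learned residual:

$$\hat{y} = b(x) + f\bigl(b(x)\bigr), \qquad \text{where } f\bigl(b(x)\bigr) \approx y - b(x)$$

for the low resolution input $x$ and the ground-truth high resolution image $y$.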

\n", "

Notice that the model acts on the interpolated low resolution image, which is already upscaled by a factor of 2, so the input and output image dimensions are the same. The training set must be prepared in a way which supports this idea.

\n", "

The convolutional network consists of alternating convolutional layers and rectified linear units (ReLUs). There are 19 convolutional layers in total. There is no ReLU layer after the last convolution.

" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "#Figure 1\n", "Image(url = \"https://superresolution.blob.core.windows.net/superresolutionresources/vdsr_architecture.png\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Figure 1 above was taken from the paper Accurate Image Super-Resolution Using Very Deep Convolutional Networks. It shows the architecture of VDSR model. We start from the interpolated low resolution image (ILR) and predict the residual image. Most values in it are zero or very small. Adding the residual image and ILR gives us our prediction of high resolution image." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Training configuration" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

We will use minibatches of size 64. Lower the minibatch size if you are getting a CUDA out-of-memory error. The model is trained on 64 x 64 input patches prepared from the Berkeley Segmentation Dataset (BSDS300) (folders train64_LR and train64_HR).

" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Data directory is data/BerkeleySegmentationDataset\n" ] } ], "source": [ "#training configuration\n", "MINIBATCH_SIZE = 16 if isFast else 64\n", "NUM_MINIBATCHES = 200 if isFast else 200000\n", "\n", "# Ensure the training and test data is generated and available for this tutorial.\n", "# We search in two locations for the prepared Berkeley Segmentation Dataset.\n", "data_found = False\n", "\n", "for data_dir in [os.path.join(\"data\", \"BerkeleySegmentationDataset\")]:\n", " train_hr_path = os.path.join(data_dir, \"train64_HR\")\n", " train_lr_path = os.path.join(data_dir, \"train64_LR\")\n", " if os.path.exists(train_hr_path) and os.path.exists(train_lr_path):\n", " data_found = True\n", " break\n", " \n", "if not data_found:\n", " raise ValueError(\"Please generate the data by completing the first part of this notebook.\")\n", " \n", "print(\"Data directory is {0}\".format(data_dir))\n", "\n", "#folders with training data (high and low resolution images) and paths to map files\n", "training_folder_HR = os.path.join(data_dir, \"train64_HR\")\n", "training_folder_LR = os.path.join(data_dir, \"train64_LR\")\n", "MAP_FILE_Y = os.path.join(data_dir, \"train64_HR\", \"map.txt\")\n", "MAP_FILE_X = os.path.join(data_dir, \"train64_LR\", \"map.txt\")\n", "\n", "#image dimensions\n", "NUM_CHANNELS = 3\n", "IMG_H, IMG_W = 64, 64\n", "IMAGE_DIMS = (NUM_CHANNELS, IMG_H, IMG_W)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Creating minibatch sources\n", "

The function below creates a map file from a folder with image patches (for both high and low resolution images). The map file will be used to create minibatches of training data. This function will be used for creating map files for all models in this tutorial.

" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "# create a map file from a flat folder\n", "import cntk.io.transforms as xforms\n", "\n", "def create_map_file_from_flatfolder(folder):\n", " file_endings = ['.jpg', '.JPG', '.jpeg', '.JPEG', '.png', '.PNG']\n", " map_file_name = os.path.join(folder, \"map.txt\")\n", " with open(map_file_name , 'w') as map_file:\n", " for entry in os.listdir(folder):\n", " filename = os.path.join(folder, entry)\n", " if os.path.isfile(filename) and os.path.splitext(filename)[1] in file_endings:\n", " tempName = '/'.join(filename.split('\\\\'))\n", " tempName = '/'.join(tempName.split('//'))\n", " tempName = '//'.join(tempName.split('/'))\n", " map_file.write(\"{0}\\t0\\n\".format(tempName))\n", " return map_file_name" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "

The function below creates a minibatch source from the map file mentioned above. Later, we will use it to create a minibatch source for both low resolution and high resolution images. This function will be used for creating minibatch sources for all models.

" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "# creates a minibatch source for training or testing\n", "def create_mb_source(map_file, width, height, num_classes = 10, randomize = True):\n", " transforms = [xforms.scale(width = width, height = height, channels = NUM_CHANNELS, interpolations = 'linear')]\n", " return C.io.MinibatchSource(C.io.ImageDeserializer(map_file, C.io.StreamDefs(\n", " features = C.io.StreamDef(field = 'image', transforms = transforms),\n", " labels = C.io.StreamDef(field = 'label', shape = num_classes))), randomize = randomize)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### VDSR architecture" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

For the VDSR model, we only use convolution layers and ReLU layers, always stacked one after another, except after the last convolution layer, which is not followed by a ReLU layer.

\n", "

We will have 19 convolution layers. Every neuron has a receptive field of size 3 x 3 and every layer contains 64 filters, with the exception of the last one, which has three filters that form the RGB representation of the result. Below is the function definition which builds this architecture.

" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [], "source": [ "def VDSR(h0):\n", " print('Generator input shape: ', h0.shape)\n", " \n", " with C.layers.default_options(init = C.he_normal(), activation = C.relu, bias = False):\n", " model = C.layers.Sequential([\n", " C.layers.For(range(18), lambda :\n", " C.layers.Convolution((3, 3), 64, pad = True)),\n", " C.layers.Convolution((3, 3), 3, activation = None, pad = True)\n", " ])\n", " \n", " return model(h0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Computation graph" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

Our model takes in blurry (interpolated low resolution) 64 x 64 images. They were obtained from their high resolution counterparts by downscaling them and then upscaling them back to the original size. The model evaluates such an image and predicts a residual image which is added to the low resolution input. The Euclidean distance between that result and the original high resolution image is the loss which we want to minimize. The output is also 64 x 64, but less blurry and closer to the original.
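Written out, the loss implemented in build_VDSR_graph below is a scaled mean squared error between the true and the predicted residual:

$$\mathcal{L} = \frac{H \cdot W}{2}\,\operatorname{mean}\Bigl(\bigl(\tfrac{y}{255} - \tfrac{x}{255} - G\bigr)^{2}\Bigr)$$

where $x$ is the interpolated low resolution patch, $y$ the high resolution patch (both with 0-255 pixel values) and $G$ the residual predicted by the network on the scaled input.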

\n", "

real_X stores low resolution images and real_Y stores original high resolution images.

\n", "

For optimization we use the Adam learner. The learning rate starts at 0.1 and gradually decreases by a factor of 10.

" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [], "source": [ "#computation graph\n", "def build_VDSR_graph(lr_image_shape, hr_image_shape, net):\n", " input_dynamic_axes = [C.Axis.default_batch_axis()]\n", " real_X = C.input(lr_image_shape, dynamic_axes = input_dynamic_axes, name = \"real_X\")\n", " real_Y = C.input(hr_image_shape, dynamic_axes = input_dynamic_axes, name = \"real_Y\")\n", "\n", " real_X_scaled = real_X/255\n", " real_Y_scaled = real_Y/255\n", "\n", " genG = net(real_X_scaled)\n", " \n", " #Note: this is where the residual error is calculated and backpropagated through Generator\n", " g_loss_G = IMG_H * IMG_W * C.reduce_mean(C.square(real_Y_scaled - real_X_scaled - genG)) / 2.0\n", " \n", " G_optim = C.adam(g_loss_G.parameters, lr = C.learning_rate_schedule(\n", " [(1, 0.1), (1, 0.01), (1, 0.001), (1, 0.0001)], C.UnitType.minibatch, 50000),\n", " momentum = C.momentum_schedule(0.9), gradient_clipping_threshold_per_sample = 1.0)\n", "\n", " G_G_trainer = C.Trainer(genG, (g_loss_G, None), G_optim)\n", "\n", " return (real_X, real_Y, genG, real_X_scaled, real_Y_scaled, G_optim, G_G_trainer)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Training\n", "

Now we are ready to train our model. Using the functions and training configuration above, we create the minibatch sources and the computation graph. Then we take minibatches one by one and gradually update our model. The number of iterations (minibatches) depends on whether isFast is True.

\n", "

Notice that the train function accepts the model architecture, the dimensions of the low and high resolution images, and the computation graph function. This enables us to use the same function for all models, except for the SRGAN model, where we also need to train the discriminator.

\n", "

The training with isFast set to True might take around 10 minutes.

" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [], "source": [ "#training\n", "def train(arch, lr_dims, hr_dims, build_graph):\n", " create_map_file_from_flatfolder(training_folder_LR)\n", " create_map_file_from_flatfolder(training_folder_HR)\n", " \n", " print(\"Starting training\")\n", "\n", " reader_train_X = create_mb_source(MAP_FILE_X, lr_dims[1], lr_dims[2])\n", " reader_train_Y = create_mb_source(MAP_FILE_Y, hr_dims[1], hr_dims[2])\n", " real_X, real_Y, genG, real_X_scaled, real_Y_scaled, G_optim, G_G_trainer = build_graph(lr_image_shape = lr_dims,\n", " hr_image_shape = hr_dims, net = arch)\n", " \n", " print_frequency_mbsize = 50\n", " \n", " pp_G = C.logging.ProgressPrinter(print_frequency_mbsize)\n", "\n", " input_map_X = {real_X: reader_train_X.streams.features}\n", " input_map_Y = {real_Y: reader_train_Y.streams.features}\n", " \n", " for train_step in range(NUM_MINIBATCHES):\n", " \n", " X_data = reader_train_X.next_minibatch(MINIBATCH_SIZE, input_map_X)\n", " batch_inputs_X = {real_X: X_data[real_X].data}\n", " \n", " Y_data = reader_train_Y.next_minibatch(MINIBATCH_SIZE, input_map_Y)\n", " batch_inputs_X_Y = {real_X : X_data[real_X].data, real_Y : Y_data[real_Y].data}\n", "\n", " G_G_trainer.train_minibatch(batch_inputs_X_Y)\n", " pp_G.update_with_trainer(G_G_trainer)\n", " G_trainer_loss = G_G_trainer.previous_minibatch_loss_average\n", " \n", " return (G_G_trainer.model, real_X, real_X_scaled, real_Y, real_Y_scaled)" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Starting training\n", "Generator input shape: (3, 64, 64)\n", " Minibatch[ 1- 50]: loss = 1583398986.511333 * 800;\n", " Minibatch[ 51- 100]: loss = 3.904708 * 800;\n", " Minibatch[ 101- 150]: loss = 3.996467 * 800;\n", " Minibatch[ 151- 200]: loss = 4.015535 * 800;\n" ] } ], "source": [ "VDSR_model, real_X, real_X_scaled, real_Y, real_Y_scaled = train(VDSR, IMAGE_DIMS, IMAGE_DIMS, build_VDSR_graph)" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "

Now this model can be used to super-resolve arbitrary images. Remember that the VDSR model operates on 64 x 64 images obtained by bicubic interpolation and returns a 64 x 64 image that is clearer than the starting one. How can it super-resolve images of arbitrary size? The idea is to first upscale the target image with **bicubic** interpolation, then go patch by patch, clear each patch up with our model, and assemble the resulting image.

Also remember that the loss function is based on scaled images (pixel values between 0 and 1), so the predicted residual has to be scaled back by 255 before it is added to the low resolution image. If some pixels become negative or exceed 255, we clamp them to 0 and 255 respectively before saving.
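A minimal sketch of that step for a single patch is shown below. It assumes a trained model such as VDSR_model produced by the training cell above and a patch already upscaled with bicubic interpolation; the full patch-by-patch evaluation of whole images is done in CNTK 302 Part A.

```python
def super_resolve_patch(model, lr_patch):
    # lr_patch: float32 numpy array of shape (3, 64, 64) with 0-255 pixel values,
    # obtained by bicubic interpolation of the low resolution input
    lr_patch = lr_patch.astype(np.float32)

    # the graph scales its input by 1/255 internally and predicts the residual
    residual = model.eval({model.arguments[0]: [lr_patch]})[0]

    # rescale the residual back to 0-255, add it to the blurry patch
    # and clamp the result to valid pixel values before saving
    result = lr_patch + 255.0 * residual
    return np.clip(result, 0, 255).astype(np.uint8)
```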

\n", "

Figure 2 is an example of the results we could get with this model. On the left is the effect of bicubic interpolation and on the right is the result of applying our model.

" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "#Figure 2\n", "Image(url = \"https://superresolution.blob.core.windows.net/superresolutionresources/vdsr_small_test.png\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## DRRN super-resolution model\n", "

DRRN stands for deep \"recursive\" residual network. Strictly speaking, the model is not recursive since there is no backward flow of data. It is very similar to the VDSR model described above because it also uses the concept of residual learning, meaning that we only predict the residual image, that is, the difference between the interpolated low resolution image and the high resolution image. As in VDSR, adding these two gives the final result, which will hopefully contain a lot of high-frequency details.\nYou can see the related paper here. Images showing the model architecture also come from that paper.

\n", "

DRRN differs from VDSR only in model architecture, so we only need to define one new function which builds that architecture. The DRRN architecture consists of several recursive blocks. Each recursive block consists of two convolutional layers, each followed by a batch-normalization layer and a ReLU layer.

\n", "

Figure 3 is taken from the mentioned paper and shows how to stack recursive blocks one after another (one, two and three blocks respectively).

" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [ { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "#Figure 3\n", "Image(url = \"https://superresolution.blob.core.windows.net/superresolutionresources/recblock_architecture.png\")" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "Figure 4, also from the same paper, shows DRRN model architecture with two recursive blocks." ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" } ], "source": [ "#Figure 4\n", "Image(url = \"https://superresolution.blob.core.windows.net/superresolutionresources/drnn_architecture.png\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### DRRN architecture and training\n", "

We will have 9 recursive blocks. There is no ReLU layer after the last convolution. Every neuron has a receptive field of size 3 x 3 and every layer contains 128 filters, with the exception of the last one, which has three filters that form the RGB representation of the result.
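As a recurrence (a compact restatement of the DRRN function below, where $B$ denotes one convolution + batch-normalization + ReLU unit and $h_1$ is the output of the initial convolution):

$$h^{(0)} = h_1, \qquad h^{(u)} = h_1 + B\bigl(B\bigl(h^{(u-1)}\bigr)\bigr), \quad u = 1, \dots, 9, \qquad h_{\text{out}} = \operatorname{Conv}\bigl(h^{(9)}\bigr)$$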

After that, we only need to call the train function with DRRN as the architecture argument, since everything else is the same as for VDSR, including the training data and the computation graph.

\n", "

The training with isFast set to True might take around 25 minutes.

" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [], "source": [ "#basic DRRN block\n", "def DRRN_basic_block(inp, num_filters):\n", " c1 = C.layers.Convolution((3, 3), num_filters, init = C.he_normal(), pad = True, bias = False)(inp)\n", " c1 = C.layers.BatchNormalization(map_rank = 1)(c1)\n", " return c1\n", "\n", "def DRRN(h0):\n", " print('Generator input shape: ', h0.shape)\n", " \n", " with C.layers.default_options(init = C.he_normal(), activation = C.relu, bias = False):\n", " h1 = C.layers.Convolution((3, 3), 128, pad = True)(h0)\n", " h2 = DRRN_basic_block(h1, 128)\n", " h3 = DRRN_basic_block(h2, 128)\n", " h4 = h1 + h3\n", " \n", " for _ in range(8):\n", " h2 = DRRN_basic_block(h4, 128)\n", " h3 = DRRN_basic_block(h2, 128)\n", " h4 = h1 + h3\n", " \n", " h_out = C.layers.Convolution((3, 3), 3, activation = None, pad = True)(h4)\n", "\n", " return h_out" ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Starting training\n", "Generator input shape: (3, 64, 64)\n", " Minibatch[ 1- 50]: loss = 2977.761893 * 800;\n", " Minibatch[ 51- 100]: loss = 166.646252 * 800;\n", " Minibatch[ 101- 150]: loss = 141.640209 * 800;\n", " Minibatch[ 151- 200]: loss = 166.618866 * 800;\n" ] } ], "source": [ "DRRN_model, real_X, real_X_scaled, real_Y, real_Y_scaled = train(DRRN, IMAGE_DIMS, IMAGE_DIMS, build_VDSR_graph)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

Now this model can be used to super-resolve arbitrary images the same way we described in the VDSR section. As we already mentioned, we will evaluate our models and create the filters at the end.

\n", "

Figure 5 is an example of the results we could get with this model. As before, on the left is the effect of bicubic interpolation and on the right is the result of applying our model. More iterations would give further improvement. The quality is lower than with VDSR (visible from the training losses) because we trained for only a very small number of iterations. DRRN and VDSR show similar performance when the models are trained for a larger number of iterations.

" ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "#Figure 5\n", "Image(url = \"https://superresolution.blob.core.windows.net/superresolutionresources/drnn_small_test.png\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## SRResNet super-resolution model\n", "

Unlike the previous two models, SRResNet does not use residual learning and there is no upscaling during preprocessing. The image is upscaled inside the model using a convolution transpose (also referred to as deconvolution) operation. The training set is now different because the input and output have different dimensions. Our model will take in 112 x 112 images and output 224 x 224 images. This is because the pretrained CNTK VGG19 model, which is needed later for SRGAN, operates strictly on 224 x 224 images.

\n", "

The base of the model architecture is the residual block. Each residual block has two convolutional layers, each followed by a batch normalization (BN) layer, with a parametric rectified linear unit (PReLU) after the first one. The convolutional layers have a 3 x 3 receptive field and each of them contains 64 filters. The image resolution is increased near the end of the model, which is less computationally expensive than increasing it at the beginning. You can see the related paper here.

\n", "

Figure 6 from the paper shows the model architecture in a more detailed way. k is the receptive field size, n is the number of filters in the layer and s is the stride.

" ] }, { "cell_type": "code", "execution_count": 25, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "execution_count": 25, "metadata": {}, "output_type": "execute_result" } ], "source": [ "#Figure 6\n", "Image(url = \"https://superresolution.blob.core.windows.net/superresolutionresources/srresnet_architecture.PNG\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Training configuration\n", "

Since the model operates on different image sizes and the input dimensions differ from the output dimensions (remember, upscaling is done inside the model and not as preprocessing), we have to change the training configuration. We follow the paper and use a minibatch size of 16. We use the training set prepared for this model (folders train224 and train112), so we need to change the training folder and map file paths.

" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Data directory is data/BerkeleySegmentationDataset\n" ] } ], "source": [ "#training configuration\n", "MINIBATCH_SIZE = 8 if isFast else 16\n", "NUM_MINIBATCHES = 200 if isFast else 1000000\n", "\n", "# Ensure the training and test data is generated and available for this tutorial.\n", "# We search in two locations for the prepared Berkeley Segmentation Dataset.\n", "data_found = False\n", "\n", "for data_dir in [os.path.join(\"data\", \"BerkeleySegmentationDataset\")]:\n", " train_hr_path = os.path.join(data_dir, \"train224\")\n", " train_lr_path = os.path.join(data_dir, \"train112\")\n", " if os.path.exists(train_hr_path) and os.path.exists(train_lr_path):\n", " data_found = True\n", " break\n", " \n", "if not data_found:\n", " raise ValueError(\"Please generate the data by completing the first part of this notebook.\")\n", " \n", "print(\"Data directory is {0}\".format(data_dir))\n", "\n", "#folders with training data (high and low resolution images) and paths to map files\n", "training_folder_HR = os.path.join(data_dir, \"train224\")\n", "training_folder_LR = os.path.join(data_dir, \"train112\")\n", "MAP_FILE_Y = os.path.join(data_dir, \"train224\", \"map.txt\")\n", "MAP_FILE_X = os.path.join(data_dir, \"train112\", \"map.txt\")\n", "\n", "#image dimensions\n", "NUM_CHANNELS = 3\n", "LR_H, LR_W, HR_H, HR_W = 112, 112, 224, 224\n", "LR_IMAGE_DIMS = (NUM_CHANNELS, LR_H, LR_W)\n", "HR_IMAGE_DIMS = (NUM_CHANNELS, HR_H, HR_W)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Function for constructing the architecture presented in the Figure 6 is defined below." ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [], "source": [ "#basic resnet block\n", "def resblock_basic(inp, num_filters):\n", " c1 = C.layers.Convolution((3, 3), num_filters, init = C.he_normal(), pad = True, bias = False)(inp)\n", " c1 = C.layers.BatchNormalization(map_rank = 1)(c1)\n", " c1 = C.param_relu(C.Parameter(c1.shape, init = C.he_normal()), c1)\n", " \n", " c2 = C.layers.Convolution((3, 3), num_filters, init = C.he_normal(), pad = True, bias = False)(c1)\n", " c2 = C.layers.BatchNormalization(map_rank = 1)(c2)\n", " return inp + c2\n", "\n", "def resblock_basic_stack(inp, num_stack_layers, num_filters):\n", " assert (num_stack_layers >= 0)\n", " l = inp\n", " for _ in range(num_stack_layers):\n", " l = resblock_basic(l, num_filters)\n", " return l\n", "\n", "#SRResNet architecture\n", "def SRResNet(h0):\n", " print('Generator inp shape: ', h0.shape)\n", " with C.layers.default_options(init = C.he_normal(), bias = False):\n", " \n", " h1 = C.layers.Convolution((9, 9), 64, pad = True)(h0)\n", " h1 = C.param_relu(C.Parameter(h1.shape, init = C.he_normal()), h1)\n", " \n", " h2 = resblock_basic_stack(h1, 16, 64)\n", " \n", " h3 = C.layers.Convolution((3, 3), 64, activation = None, pad = True)(h2)\n", " h3 = C.layers.BatchNormalization(map_rank = 1)(h3)\n", " \n", " h4 = h1 + h3\n", " ##here\n", " \n", " h5 = C.layers.ConvolutionTranspose2D((3, 3), 64, pad = True, strides = (2, 2), output_shape = (224, 224))(h4)\n", " h5 = C.param_relu(C.Parameter(h5.shape, init = C.he_normal()), h5)\n", " \n", " h6 = C.layers.Convolution((3, 3), 3, pad = True)(h5)\n", "\n", " return h6" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

We follow the paper, so the computation graph is a bit different now than in the previous models. The difference is in the loss function and the learning rate schedule. The loss function is the MSE and the learning rate is 0.0001 for most of the training. We set it a bit higher in the beginning, which can help the model produce somewhat reasonable results early on.
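With pixel values scaled to $[0, 1]$ as before, the loss implemented in build_SRResNet_graph below is plain MSE between the generated image and the high resolution target:

$$\mathcal{L} = \operatorname{mean}\Bigl(\bigl(\tfrac{y}{255} - G\bigl(\tfrac{x}{255}\bigr)\bigr)^{2}\Bigr)$$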

\n", "

The training with isFast set to True might take around 25 minutes.

" ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [], "source": [ "def build_SRResNet_graph(lr_image_shape, hr_image_shape, net):\n", " inp_dynamic_axes = [C.Axis.default_batch_axis()]\n", " real_X = C.input(lr_image_shape, dynamic_axes=inp_dynamic_axes, name=\"real_X\")\n", " real_Y = C.input(hr_image_shape, dynamic_axes=inp_dynamic_axes, name=\"real_Y\")\n", "\n", " real_X_scaled = real_X/255\n", " real_Y_scaled = real_Y/255\n", "\n", " genG = net(real_X_scaled)\n", " \n", " G_loss = C.reduce_mean(C.square(real_Y_scaled - genG))\n", " \n", " G_optim = C.adam(G_loss.parameters,\n", " lr = C.learning_rate_schedule([(1, 0.01), (1, 0.001), (98, 0.0001)], C.UnitType.minibatch, 10000),\n", " momentum = C.momentum_schedule(0.9), gradient_clipping_threshold_per_sample = 1.0)\n", "\n", " G_G_trainer = C.Trainer(genG, (G_loss, None), G_optim)\n", " \n", " return (real_X, real_Y, genG, real_X_scaled, real_Y_scaled, G_optim, G_G_trainer)" ] }, { "cell_type": "code", "execution_count": 29, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Starting training\n", "Generator inp shape: (3, 112, 112)\n", " Minibatch[ 1- 50]: loss = 0.071668 * 400;\n", " Minibatch[ 51- 100]: loss = 0.010785 * 400;\n", " Minibatch[ 101- 150]: loss = 0.008225 * 400;\n", " Minibatch[ 151- 200]: loss = 0.006657 * 400;\n" ] } ], "source": [ "SRResNet_model, real_X, real_X_scaled, real_Y, real_Y_scaled = train(SRResNet, LR_IMAGE_DIMS,\n", " HR_IMAGE_DIMS, build_SRResNet_graph)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Figure 7 shows the results we can get with only 1000 iterations of this model. Images on the left are the original 112 x 112 images and those on the right are the outputs of our model. As we can see, the quality is far from acceptable. This model requires around 106 iterations for the quality to become solid. In the end, we will show what results we can obtain with a sufficient number of iterations." ] }, { "cell_type": "code", "execution_count": 30, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "execution_count": 30, "metadata": {}, "output_type": "execute_result" } ], "source": [ "#Figure 7\n", "Image(url = \"https://superresolution.blob.core.windows.net/superresolutionresources/resnet_small_test.png\",\n", " width = 390, height = 390)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The SRResNet model created here is a starting point for the last model we will try out: the SRGAN." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## SRGAN super-resolution model\n", "

A GAN (generative adversarial network) consists of two neural networks which compete with each other. GANs were first introduced by Ian Goodfellow in the paper Generative Adversarial Nets.

\n", "

Simply put, our GAN, like any other, will consist of two parts:\n", "

    \n", "
  • Generator network: Our generator network will take in 112 x 112 images and output their super-resolved 224 x 224 versions. The architecture of our generator network will be the same as for SRResNet above.
\n", "
  • Discriminator network: Our discriminator network will take in both real 224 x 224 high resolution images and those produced by the generator network, so it can learn to distinguish one from the other. For every 224 x 224 image it takes, the discriminator outputs a real number between 0 and 1 which estimates the probability of the input image being real (and not produced by the generator). The architecture of the discriminator is essentially a sequence of several convolutional layers, each followed by a batch normalization layer and a leaky ReLU layer with α = 0.2.\n", "Figure 8 shows the exact architecture. It was taken from the SRGAN paper. k is the receptive field size, n is the number of filters in the layer and s is the stride.
\n", "

" ] }, { "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [ { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "execution_count": 31, "metadata": {}, "output_type": "execute_result" } ], "source": [ "#Figure 8\n", "Image(url = \"https://superresolution.blob.core.windows.net/superresolutionresources/discr_architecture.PNG\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

In every iteration, the generator tries to produce images which make its loss value as small as possible (as in any other model). The idea here is to include an adversarial loss term in the loss function, which takes the discriminator's opinion into account.

\n", "

Similarly, in every iteration, the discriminator tries to become better at distinguishing generated images from the real ones.

\n", "

The key idea is to initialize the generator network weights with the weights of the previously trained SRResNet model and then start the training. Since GANs can be unstable during training (especially in a problem as delicate as super-resolution), this is a good idea because we then only need to refine the generator a bit, so massive parameter changes are not needed and the gradients will hopefully not become very large. It therefore makes sense to say that SRGAN is a refinement of SRResNet.\n", "

Now we can define the function for constructing the discriminator based on the architecture shown above.

" ] }, { "cell_type": "code", "execution_count": 32, "metadata": {}, "outputs": [], "source": [ "#basic discriminator block\n", "def conv_bn_lrelu(inp, filter_size, num_filters, strides = (1, 1), init = C.he_normal()):\n", " r = C.layers.Convolution(filter_size, num_filters, init = init, pad = True, strides = strides, bias = False)(inp)\n", " r = C.layers.BatchNormalization(map_rank = 1)(r)\n", " return C.param_relu(C.constant((np.ones(r.shape) * 0.2).astype(np.float32)), r)\n", "\n", "#discriminator architecture\n", "def discriminator(h0):\n", " print('Discriminator input shape: ', h0.shape)\n", " with C.layers.default_options(init = C.he_normal(), bias = False):\n", " h1 = C.layers.Convolution((3, 3), 64, pad = True)(h0)\n", " h1 = C.param_relu(C.constant((np.ones(h1.shape) * 0.2).astype(np.float32)), h1)\n", " \n", " h2 = conv_bn_lrelu(h1, (3, 3), 64, strides = (2, 2))\n", " \n", " h3 = conv_bn_lrelu(h2, (3, 3), 128)\n", " h4 = conv_bn_lrelu(h3, (3, 3), 128, strides = (2, 2))\n", " \n", " h5 = conv_bn_lrelu(h4, (3, 3), 256)\n", " h6 = conv_bn_lrelu(h5, (3, 3), 256, strides = (2, 2))\n", " \n", " h7 = conv_bn_lrelu(h6, (3, 3), 512)\n", " h8 = conv_bn_lrelu(h7, (3, 3), 512, strides = (2, 2))\n", " \n", " h9 = C.layers.Dense(1024)(h8)\n", " h10 = C.param_relu(C.constant(0.2, h9.shape), h9)\n", " \n", " h11 = C.layers.Dense(1, activation = C.sigmoid)(h10)\n", " return h11" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Training configuration\n", "

The training configuration is identical to the one for SRResNet, but we will use a smaller number of iterations, because we only need to refine the generator. Other paths and parameters are unchanged. We set the minibatch size to 4 here to speed up the process. Larger minibatch sizes offer more accurate gradient approximations and better results; smaller minibatch sizes offer more speed.

" ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [], "source": [ "#training configuration\n", "MINIBATCH_SIZE = 2 if isFast else 4\n", "NUM_MINIBATCHES = 200 if isFast else 100000" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Computation graph\n", "

The computation graph is now quite different from before. Both the generator and the discriminator have their own separate loss functions, and we use them to update their respective parameters. The discriminator needs to act on both real 224 x 224 images and on those generated by the generator. The standard way to enable this is to use the clone function. Effectively, we will have two discriminators, one which acts on the real images and the other on the synthetic images. However, because we used clone, they share parameters.

\n", "

The biggest challenge is in the loss functions. It is not easy to create a loss function which makes a GAN work well. Coefficients need to be carefully selected to ensure that our model does not diverge and start generating unusable images. There is no general rule for setting the coefficients of the different parts of the loss functions; it all depends on the problem we are tackling. Finding a good coefficient configuration usually comes after several failed attempts.

\n", "

If $G$ is our generator and $D$ our discriminator, following the Generative Adversarial Nets paper, $G$ and $D$ are playing the following min-max game:

" ] }, { "cell_type": "code", "execution_count": 34, "metadata": {}, "outputs": [ { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "execution_count": 34, "metadata": {}, "output_type": "execute_result" } ], "source": [ "Image(url = \"https://superresolution.blob.core.windows.net/superresolutionresources/min_max.PNG\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

Here, the role of $x$ is taken by real 224 x 224 images and the role of $z$ is taken by the 112 x 112 images that are the input of the generator. Therefore, the discriminator wants to return larger values (closer to 1) for real 224 x 224 images and smaller values (closer to 0) for synthetic images. Conversely, the generator wants to produce outputs for which the discriminator returns larger values (closer to 1).
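For reference, the standard objective from the Generative Adversarial Nets paper can be written as:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\bigl[\log D(x)\bigr] + \mathbb{E}_{z \sim p_z(z)}\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr]$$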

\n", "

Logarithms can be problematic because their gradients are unbounded. No matter how small the coefficient is in front of the adversarial loss of the generator or the discriminator, there is a chance that our gradients might explode. Once they do, it is very hard for the model to return to generating reasonable images.

\n", "

To address this, we adopt the idea from the CycleGAN paper, where the logarithm losses are replaced by square losses as shown below. Also notice that the game is now max-min instead of min-max.
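In the squared-loss form used in this notebook (matching D_loss and the adversarial part of G_loss in the code further below), the game becomes:

$$\max_G \min_D \; \mathbb{E}_{x \sim p_{\text{data}}(x)}\bigl[(1 - D(x))^{2}\bigr] + \mathbb{E}_{z \sim p_z(z)}\bigl[D(G(z))^{2}\bigr]$$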

" ] }, { "cell_type": "code", "execution_count": 35, "metadata": {}, "outputs": [ { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "execution_count": 35, "metadata": {}, "output_type": "execute_result" } ], "source": [ "Image(url = \"https://superresolution.blob.core.windows.net/superresolutionresources/square_min_max.PNG\")" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "

In addition to the adversarial and MSE losses, the generator's loss function will also have a perceptual loss term. A common way to introduce the perceptual loss is with the pre-trained VGG19 network, as mentioned in the SRGAN paper.

\n", "

The idea is to take the high resolution image and its generated counterpart and run them both through the VGG19 network. The mean square difference between those two evaluations at the layer relu5_4 is the perceptual loss. Introducing this loss can help GANs generate perceptually more pleasing results than using MSE loss only (together with the adversarial losses, of course).

\n", "

Now we are ready to define a function which builds our computation graph for SRGAN. The coefficients in the generator loss were determined with the help of several papers and trial and error until we got decent results. By trying out more values, we could probably make further improvements, and users are encouraged to experiment.
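For reference, the loss functions implemented in build_GAN_graph below can be written as follows, where $\hat{y} = G(x)$ is the generated image, $y$ the real high resolution image (both with pixel values scaled to $[0, 1]$) and $\phi$ the activation of the VGG19 relu5_4 layer:

$$\mathcal{L}_G = -0.001\,D(\hat{y})^{2} + \operatorname{mean}\bigl((y - \hat{y})^{2}\bigr) + 0.08\,\operatorname{mean}\bigl((\phi(255\,y) - \phi(255\,\hat{y}))^{2}\bigr)$$

$$\mathcal{L}_D = (1 - D(y))^{2} + D(\hat{y})^{2}$$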

" ] }, { "cell_type": "code", "execution_count": 36, "metadata": {}, "outputs": [], "source": [ "#gan computation graph\n", "def build_GAN_graph(genG, disc, VGG, real_X_scaled, real_Y_scaled, real_Y): \n", " #discriminator on real images\n", " D_real = discriminator(real_Y_scaled)\n", " \n", " #discriminator on fake images\n", " D_fake = D_real.clone(method = 'share', substitutions = {real_Y_scaled.output: genG.output})\n", " \n", " #VGG on real images\n", " VGG_real = VGG.clone(method = 'share', substitutions = {VGG.arguments[0]: real_Y})\n", " \n", " #VGG on fake images\n", " VGG_fake = VGG.clone(method = 'share', substitutions = {VGG.arguments[0]: 255 * genG.output})\n", " \n", " #generator loss: GAN loss + MSE loss + perceptual (VGG) loss\n", " G_loss = -C.square(D_fake)*0.001 + C.reduce_mean(C.square(real_Y_scaled - genG)) + C.reduce_mean(C.square(VGG_real - VGG_fake))*0.08\n", " \n", " #discriminator loss: loss on real + los on fake images\n", " D_loss = C.square(1.0 - D_real) + C.square(D_fake)\n", " \n", " G_optim = C.adam(G_loss.parameters,\n", " lr = C.learning_rate_schedule([(20, 0.0001), (20, 0.00001)], C.UnitType.minibatch, 5000),\n", " momentum = C.momentum_schedule(0.9), gradient_clipping_threshold_per_sample = 0.1)\n", " \n", " D_optim = C.adam(D_loss.parameters,\n", " lr = C.learning_rate_schedule([(20, 0.0001), (20, 0.00001)], C.UnitType.minibatch, 5000),\n", " momentum = C.momentum_schedule(0.9), gradient_clipping_threshold_per_sample = 0.1)\n", " \n", " G_trainer = C.Trainer(genG, (G_loss, None), G_optim)\n", " \n", " D_trainer = C.Trainer(D_real, (D_loss, None), D_optim)\n", " \n", " return (G_trainer, D_trainer)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Training the model\n", "

The train function is also considerably different now, since we have to alternate updates to the discriminator and the generator. Map files and image dimensions are the same as in the SRResNet model. Before the training, we download the pretrained VGG19 model. It might take a few minutes.

" ] }, { "cell_type": "code", "execution_count": 37, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Downloading VGG19 model...\n", "Done!\n" ] } ], "source": [ "data_dir = os.path.join(\"data\", \"BerkeleySegmentationDataset\")\n", "if not os.path.exists(data_dir):\n", " data_dir = os.makedirs(data_dir)\n", "\n", "models_dir = os.path.join(data_dir, \"PretrainedModels\")\n", "\n", "if not os.path.exists(models_dir):\n", " os.makedirs(models_dir)\n", "\n", "print(\"Downloading VGG19 model...\")\n", "urlretrieve(\"https://www.cntk.ai/Models/Caffe_Converted/VGG19_ImageNet_Caffe.model\",\n", " os.path.join(models_dir, \"VGG19_ImageNet_Caffe.model\"))\n", "print(\"Done!\")" ] }, { "cell_type": "code", "execution_count": 38, "metadata": {}, "outputs": [], "source": [ "def train_GAN(SRResNet_model, real_X, real_X_scaled, real_Y, real_Y_scaled):\n", " print(\"Starting training\")\n", "\n", " reader_train_X = create_mb_source(MAP_FILE_X, LR_W, LR_H)\n", " reader_train_Y = create_mb_source(MAP_FILE_Y, HR_W, HR_H)\n", " \n", " VGG19 = C.load_model(os.path.join(models_dir, \"VGG19_ImageNet_Caffe.model\"))\n", " print(\"Loaded VGG19 model.\")\n", "\n", " layer5_4 = VGG19.find_by_name('relu5_4')\n", " relu5_4 = C.combine([layer5_4.owner])\n", " \n", " G_trainer, D_trainer = build_GAN_graph(genG = SRResNet_model, disc = discriminator, VGG = relu5_4,\n", " real_X_scaled = real_X_scaled, real_Y_scaled = real_Y_scaled, real_Y = real_Y)\n", " \n", " print_frequency_mbsize = 50\n", " \n", " print(\"First row is discriminator loss, second row is generator loss:\")\n", " pp_D = C.logging.ProgressPrinter(print_frequency_mbsize)\n", " pp_G = C.logging.ProgressPrinter(print_frequency_mbsize)\n", "\n", " inp_map_X = {real_X: reader_train_X.streams.features}\n", " inp_map_Y = {real_Y: reader_train_Y.streams.features}\n", " \n", " for train_step in range(NUM_MINIBATCHES):\n", " X_data = reader_train_X.next_minibatch(MINIBATCH_SIZE, inp_map_X)\n", " batch_inps_X = {real_X: X_data[real_X].data}\n", " \n", " Y_data = reader_train_Y.next_minibatch(MINIBATCH_SIZE, inp_map_Y)\n", " batch_inps_X_Y = {real_X: X_data[real_X].data, real_Y : Y_data[real_Y].data}\n", " \n", " D_trainer.train_minibatch(batch_inps_X_Y)\n", " pp_D.update_with_trainer(D_trainer)\n", " D_trainer_loss = D_trainer.previous_minibatch_loss_average\n", "\n", " G_trainer.train_minibatch(batch_inps_X_Y)\n", " pp_G.update_with_trainer(G_trainer)\n", " G_trainer_loss = G_trainer.previous_minibatch_loss_average\n", " \n", " model = G_trainer.model\n", " \n", " return model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

Remember that the requirement for training SRGAN is the SRResNet model, so we pass it into the training function.

\n", "

The training with isFast set to True might take around 20 minutes. Since our SRResNet (which is used to initialize SRGAN) and SRGAN were both trained for very few iterations, the results of SRGAN will probably not be something to behold. We don't mind, since this is only meant to showcase the code and training procedure; the real models (trained for enough iterations) are used in tutorial CNTK 302 Part A.

" ] }, { "cell_type": "code", "execution_count": 39, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Starting training\n", "Loaded VGG19 model.\n", "Discriminator input shape: (3, 224, 224)\n", "First row is discriminator loss, second row is generator loss:\n", " Minibatch[ 1- 50]: loss = 0.971728 * 100;\n", " Minibatch[ 1- 50]: loss = 0.063828 * 100;\n", " Minibatch[ 51- 100]: loss = 1.000000 * 100;\n", " Minibatch[ 51- 100]: loss = 0.006320 * 100;\n", " Minibatch[ 101- 150]: loss = 1.000000 * 100;\n", " Minibatch[ 101- 150]: loss = 0.005728 * 100;\n", " Minibatch[ 151- 200]: loss = 1.000000 * 100;\n", " Minibatch[ 151- 200]: loss = 0.006376 * 100;\n" ] } ], "source": [ "SRGAN_model = train_GAN(SRResNet_model, real_X, real_X_scaled, real_Y, real_Y_scaled)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

Now these models can be used to evaluate any image following the description in CNTK 302 Part A. We don't recommend using the models trained here, since they have been trained for a very small number of iterations; we recommend the pre-trained models, which are used in CNTK 302 Part A. The training here was done only to showcase the training speed and procedure.

" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.2" } }, "nbformat": 4, "nbformat_minor": 2 }