Before installing nvidia-docker, you will need these first:
Then download these files:
Here's a simple implementation of bilinear interpolation on tensors using PyTorch.
I wrote this up since I ended up learning a lot about the options for interpolation in both the numpy and PyTorch ecosystems. Beyond interpolation itself, it's also a nice case study in how PyTorch can magically run very numpy-like code on the GPU (and, by the way, do autodiff for you too).
For interpolation in PyTorch, this open issue calls for more interpolation features. There is now an nn.functional.grid_sample() function, but at first it didn't look like what I needed (we'll come back to this later).
In particular, I wanted to take an image, W x H x C, and sample it many times at different random locations. Note also that this is different from upsampling, which samples exhaustively and also doesn't give us flexibility in where we sample.
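Below is a minimal sketch of that kind of sampler, assuming an (H, W, C) float tensor and 1-D tensors of continuous x/y coordinates (the function name and argument layout are mine, not a library API). It gathers the four neighboring pixels with integer indexing and blends them by the fractional offsets; since everything is tensor ops, it moves to the GPU and stays differentiable.

import torch

def bilinear_sample(image, xs, ys):
    # image: (H, W, C) float tensor; xs, ys: 1-D float tensors of coordinates
    H, W, _ = image.shape
    xs = xs.clamp(0, W - 1)
    ys = ys.clamp(0, H - 1)
    x0 = xs.floor().long()
    y0 = ys.floor().long()
    x1 = (x0 + 1).clamp(max=W - 1)
    y1 = (y0 + 1).clamp(max=H - 1)
    # fractional offsets inside each cell, shaped (N, 1) for broadcasting
    fx = (xs - x0.float()).unsqueeze(1)
    fy = (ys - y0.float()).unsqueeze(1)
    # blend the four neighbors: first along x, then along y
    top = (1 - fx) * image[y0, x0] + fx * image[y0, x1]
    bot = (1 - fx) * image[y1, x0] + fx * image[y1, x1]
    return (1 - fy) * top + fy * bot

With a batch dimension added and the coordinates normalized to [-1, 1], nn.functional.grid_sample() computes essentially the same thing, which is the connection we'll come back to.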
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.contrib.distributions import Bernoulli

class VariationalDense:
    """Variational Dense Layer Class"""
    def __init__(self, n_in, n_out, model_prob, model_lam):
        self.model_prob = model_prob
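The preview cuts off above, so here is a self-contained sketch (not the gist's code) of the mechanism the class name suggests: a dense layer whose inputs get a fresh Bernoulli dropout mask on every forward pass (MC dropout), with model_lam presumably scaling a weight regularizer. Every name below is illustrative.

import tensorflow as tf
from tensorflow.contrib.distributions import Bernoulli

def mc_dropout_dense(x, W, b, keep_prob):
    # Sample a fresh mask on each call, at train and test time alike,
    # so repeated forward passes give a distribution over outputs.
    mask = Bernoulli(probs=keep_prob, dtype=tf.float32).sample(tf.shape(x))
    return tf.matmul(x * mask, W) + b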
from __future__ import division
import numpy as np

class Edge(object):
    def __init__(self, p, q):
        self.xmin = np.min((p[0], q[0]))
        self.xmax = np.max((p[0], q[0]))
        self.ymin = np.min((p[1], q[1]))
import numpy as np
import tensorflow as tf

__author__ = "Sangwoong Yoon"

def np_to_tfrecords(X, Y, file_path_prefix, verbose=True):
    """
    Converts a Numpy array (or two Numpy arrays) into a tfrecord file.
    For supervised learning, feed training inputs to X and training labels to Y.
    For unsupervised learning, only feed training inputs to X, and feed None to Y.
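A quick usage sketch for the function above (the output filename is my assumption; a common convention is file_path_prefix + '.tfrecords'):

import numpy as np

X = np.random.randn(100, 10)
Y = np.random.randint(0, 2, size=(100, 1)).astype(np.float64)
np_to_tfrecords(X, Y, 'train_data', verbose=True)  # supervised: inputs and labels
np_to_tfrecords(X, None, 'unlabeled_data')         # unsupervised: inputs only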
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Example employing Lasagne for digit generation using the MNIST dataset and
Wasserstein Generative Adversarial Networks
(WGANs, see https://arxiv.org/abs/1701.07875 for the paper and
https://github.com/martinarjovsky/WassersteinGAN for the "official" code).
It is based on a DCGAN example:
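For reference, here is the WGAN objective the docstring points at, sketched in Theano tensor notation rather than quoted from the gist: the critic is trained to push real and generated scores apart, and its weights are clipped after every update.

import theano.tensor as T

real_out = T.vector('real_scores')  # critic scores on real digits
fake_out = T.vector('fake_scores')  # critic scores on generated digits

# The critic minimizes E[f(fake)] - E[f(real)];
# the generator minimizes -E[f(fake)].
critic_loss = fake_out.mean() - real_out.mean()
generator_loss = -fake_out.mean()
# After each critic update, clip its weights to [-c, c] (c = 0.01 in the paper).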
from __future__ import absolute_import
import json
import sys
import re
import requests
import urllib
import urlparse
from urllib2 import HTTPError
from urllib2 import URLError
from urllib2 import urlopen
# testing variable order init
import tensorflow as tf

def initialize_all_variables(sess=None):
    """Initializes all uninitialized variables in correct order. Initializers
    are only run for uninitialized variables, so it's safe to run this multiple
    times.
    Args:
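A hypothetical usage sketch, assuming the helper does what its docstring says (only not-yet-initialized variables have their initializers run, in dependency order):

import tensorflow as tf

v1 = tf.Variable(tf.zeros([3]))
sess = tf.Session()
initialize_all_variables(sess)  # runs v1's initializer
v2 = tf.Variable(v1 + 1)        # v2's initializer reads v1, so order matters
initialize_all_variables(sess)  # safe to call again; only v2 gets initialized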