
Enhanced Deep Residual Networks for Single Image Super-Resolution

The network structure is: a conv(3,3) layer, followed by N residual blocks, another conv(3,3) layer, and an upsample block.

residual block:

def resBlock(x, channels=64, kernel_size=[3,3], scale=1):
    # Two 3x3 convolutions with a ReLU in between; no batch norm, as in EDSR
    tmp = slim.conv2d(x, channels, kernel_size, activation_fn=None)
    tmp = tf.nn.relu(tmp)
    tmp = slim.conv2d(tmp, channels, kernel_size, activation_fn=None)
    # Residual scaling (scale < 1 stabilizes training of wide networks)
    tmp *= scale
    return x + tmp


upsample block:

def upsample(x, scale=2, features=64, activation=tf.nn.relu):
    assert scale in [2, 3, 4]
    x = slim.conv2d(x, features, [3,3], activation_fn=activation)
    if scale == 2:
        ps_features = 3*(scale**2)
        x = slim.conv2d(x, ps_features, [3,3], activation_fn=activation)
        #x = slim.conv2d_transpose(x, ps_features, 6, stride=1, activation_fn=activation)
        x = PS(x, 2, color=True)
    elif scale == 3:
        ps_features = 3*(scale**2)
        x = slim.conv2d(x, ps_features, [3,3], activation_fn=activation)
        #x = slim.conv2d_transpose(x, ps_features, 9, stride=1, activation_fn=activation)
        x = PS(x, 3, color=True)
    elif scale == 4:
        # x4 upscaling is built from two successive x2 pixel-shuffle stages
        ps_features = 3*(2**2)
        for i in range(2):
            x = slim.conv2d(x, ps_features, [3,3], activation_fn=activation)
            #x = slim.conv2d_transpose(x, ps_features, 6, stride=1, activation_fn=activation)
            x = PS(x, 2, color=True)
    return x
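
The PS (pixel shuffle / sub-pixel convolution) helper called above is not shown in the post. Below is a minimal sketch of an equivalent rearrangement using tf.depth_to_space; the exact channel ordering of the repo's phase-shift implementation may differ, so treat this as an illustration rather than the repo's code.

import tensorflow as tf

def PS(x, r, color=False):
    # Pixel shuffle: rearrange [N, H, W, C*r*r] feature maps into an
    # [N, H*r, W*r, C] image (C = 3 when color=True)
    if color:
        # Shuffle each of the three r*r channel groups separately, then re-stack
        channels = tf.split(x, 3, axis=3)
        x = tf.concat([tf.depth_to_space(c, r) for c in channels], axis=3)
    else:
        x = tf.depth_to_space(x, r)
    return x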


The full model code is:

def __init__(self, img_size=32, num_layers=32, feature_size=256, scale=2, output_channels=3):
    print("Building EDSR...")
    # Placeholder for low-resolution image inputs
    self.input = x = tf.placeholder(tf.float32, [None, img_size, img_size, output_channels])
    # Placeholder for the upscaled ground-truth images
    self.target = y = tf.placeholder(tf.float32, [None, img_size*scale, img_size*scale, output_channels])

    """
    Preprocessing as mentioned in the paper: subtract the mean.
    The paper subtracts the mean of the entire dataset; here the
    mean of each batch is subtracted instead.
    """
    mean_x = tf.reduce_mean(self.input)
    image_input = x - mean_x
    mean_y = tf.reduce_mean(self.target)
    image_target = y - mean_y

    # One convolution before the res blocks, to convert to the required feature depth
    x = slim.conv2d(image_input, feature_size, [3,3])

    # Store the output of the first convolution to add later
    conv_1 = x
    scaling_factor = 0.1

    # Add the residual blocks to the model
    for i in range(num_layers):
        x = utils.resBlock(x, feature_size, scale=scaling_factor)

    # One more convolution, then add the output of the first conv layer (long skip connection)
    x = slim.conv2d(x, feature_size, [3,3])
    x += conv_1

    # Upsample the output of the convolution
    x = utils.upsample(x, scale, feature_size, None)

    # A final convolution could map back to output_channels, but the upsample
    # block already produces a 3-channel output, so it is left commented out
    output = x  # slim.conv2d(x, output_channels, [3,3])
    self.out = tf.clip_by_value(output + mean_x, 0.0, 255.0)

    # L1 loss between the mean-subtracted target and the network output
    self.loss = loss = tf.reduce_mean(tf.losses.absolute_difference(image_target, output))


Data processing

Each image is cropped to a fixed size to obtain the target (high-resolution) image; the target image is then downsampled to obtain the source (low-resolution) image.

def get_batch(batch_size, original_size, shrunk_size):
    x = []
    y = []
    # Sample batch_size random images from the training set
    img_indices = random.sample(range(len(train_set)), batch_size)
    for i in range(len(img_indices)):
        index = img_indices[i]
        img = scipy.misc.imread(train_set[index])
        # Center-crop to the HR target size, then downsample to get the LR input
        img = crop_center(img, original_size, original_size)
        x_img = scipy.misc.imresize(img, (shrunk_size, shrunk_size))
        x.append(x_img)
        y.append(img)
    return x, y
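
crop_center is not defined in the post; a straightforward sketch of a center crop, assuming an H x W (x C) numpy image:

def crop_center(img, crop_h, crop_w):
    # Take the central crop_h x crop_w patch of the image
    h, w = img.shape[0], img.shape[1]
    start_h = (h - crop_h) // 2
    start_w = (w - crop_w) // 2
    return img[start_h:start_h + crop_h, start_w:start_w + crop_w]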


Training

git clone https://github.com/jmiller656/EDSR-Tensorflow 
cd EDSR-Tensorflow/

python train.py --dataset General-100
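
train.py wraps the actual optimization loop. Below is a minimal sketch of one way to train the graph defined above, using only the placeholders and loss from the model code; the Adam optimizer, learning rate, and the names model, num_steps, batch_size are assumptions for illustration, not taken from the repo.

optimizer = tf.train.AdamOptimizer(learning_rate=1e-4)
train_op = optimizer.minimize(model.loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(num_steps):
        # get_batch returns (low-res inputs, high-res targets) as defined above
        x_batch, y_batch = get_batch(batch_size, original_size, shrunk_size)
        _, l = sess.run([train_op, model.loss],
                        feed_dict={model.input: x_batch, model.target: y_batch})
        if step % 100 == 0:
            print("step %d, L1 loss %.4f" % (step, l))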


Input image: [image omitted]

Output image: [image omitted]

Ground truth: [image omitted]

GitHub code: https://github.com/jmiller656/EDSR-Tensorflow