Consider the problem of creating a dataset of small patches sampled at random from a directory of high-resolution images.
The TensorFlow Dataset API offers a very simple way to do this: build a dataset of image filenames, shuffle them, map them to loaded images, and then map those to randomly cropped patches.
However, this naive implementation is very inefficient, because a separate high-resolution image is loaded and cropped to produce each patch. Ideally, an image would be loaded once and reused to generate many patches.
One simple approach discussed previously is to generate multiple patches from each image and flatten them. Unfortunately, this has the effect of overly biasing the data: we want each training batch to draw its patches from different images.
Ideally, what I would like is a "random cache filter" transformation that takes an underlying dataset and caches N of its elements in memory. Its iterator returns a random element from the cache. In addition, at a predefined frequency, it replaces a random cached element with a fresh element from the underlying dataset. This filter would allow faster data access at the cost of less randomization and higher memory consumption.
Does such functionality exist?
If not, should it be implemented as a new dataset transformation or just as a new iterator? It seems only a new iterator is needed. Any pointers on how to create a new dataset iterator, ideally in C++?
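For reference, the "random cache filter" described above can be sketched in plain Python as a generator wrapping any underlying iterable. The names `random_cache`, `cache_size`, and `refresh_prob` are hypothetical, not an existing TensorFlow API; this is only a sketch of the intended semantics:

```python
import random

def random_cache(source, cache_size, refresh_prob=0.1, rng=None):
    """Yield random elements from a cache of `cache_size` items drawn from
    `source`; with probability `refresh_prob` per draw, first replace a
    random cached element with the next element of `source`."""
    rng = rng or random.Random()
    it = iter(source)
    cache = [next(it) for _ in range(cache_size)]  # fill the cache up front
    while True:
        if rng.random() < refresh_prob:
            try:
                cache[rng.randrange(cache_size)] = next(it)  # refresh one slot
            except StopIteration:
                pass  # underlying dataset exhausted; keep serving the cache
        yield cache[rng.randrange(cache_size)]
```

Each yielded element costs only a cache lookup; the underlying dataset is read at a rate controlled by `refresh_prob`, trading randomization for speed exactly as described above.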
Best answer
You should be able to get what you want using tf.data.Dataset.shuffle. A quick summary of the goals: load each big image only once, extract multiple patches from it, and make sure each training batch mixes patches coming from different images. You can achieve all of this with the tf.data API by performing the steps marked in the code. Here is the relevant code:
import tensorflow as tf

filenames = ...  # filenames containing the big images
num_samples = len(filenames)

# Parameters
num_patches = 100               # number of patches to extract from each image
patch_size = 32                 # size of the patches
buffer_size = 50 * num_patches  # shuffle patches from 50 different big images
num_parallel_calls = 4          # number of threads
batch_size = 10                 # size of the batch

get_patches_fn = lambda image: get_patches(image, num_patches=num_patches, patch_size=patch_size)

# Create a Dataset serving batches of random patches in our images
dataset = (tf.data.Dataset.from_tensor_slices(filenames)
    .shuffle(buffer_size=num_samples)  # step 1: all the filenames into the buffer ensures good shuffling
    .map(parse_fn, num_parallel_calls=num_parallel_calls)  # step 2: decode the images
    .map(get_patches_fn, num_parallel_calls=num_parallel_calls)  # step 3: extract the patches
    .apply(tf.contrib.data.unbatch())  # unbatch the patches we just produced
    .shuffle(buffer_size=buffer_size)  # step 4: shuffle the patches together
    .batch(batch_size)  # step 5: batch them
    .prefetch(1)  # step 6: make sure you always have one batch ready to serve
)

iterator = dataset.make_one_shot_iterator()
patches = iterator.get_next()  # shape [None, patch_size, patch_size, 3]

sess = tf.Session()
res = sess.run(patches)
The functions parse_fn and get_patches are defined below:

def parse_fn(filename):
    """Decode the jpeg image from the filename and convert to [0, 1]."""
    image_string = tf.read_file(filename)
    # Don't use tf.image.decode_image, or the output shape will be undefined
    image_decoded = tf.image.decode_jpeg(image_string, channels=3)
    # This will convert to float values in [0, 1]
    image = tf.image.convert_image_dtype(image_decoded, tf.float32)
    return image

def get_patches(image, num_patches=100, patch_size=16):
    """Get `num_patches` random crops from the image."""
    patches = []
    for i in range(num_patches):
        patch = tf.image.random_crop(image, [patch_size, patch_size, 3])
        patches.append(patch)
    patches = tf.stack(patches)
    assert patches.get_shape().dims == [num_patches, patch_size, patch_size, 3]
    return patches
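To see why buffer_size = 50 * num_patches is enough to mix images well, here is a small pure-Python simulation of a shuffle buffer. The `shuffle_buffer` helper below is a stand-in for the behavior of Dataset.shuffle, not TensorFlow code: patches arrive grouped by image, yet items drawn after the buffer come from many different images.

```python
import random

def shuffle_buffer(source, buffer_size, rng=None):
    """Approximate Dataset.shuffle: keep a buffer of `buffer_size`
    elements and emit a random buffered element for each new arrival."""
    rng = rng or random.Random()
    buf = []
    for item in source:
        buf.append(item)
        if len(buf) > buffer_size:
            i = rng.randrange(len(buf))
            buf[i], buf[-1] = buf[-1], buf[i]
            yield buf.pop()
    rng.shuffle(buf)
    yield from buf  # drain what is left at the end

# Patches arrive grouped by image: 100 patches from image 0, then image 1, ...
patches = [img for img in range(100) for _ in range(100)]

mixed = shuffle_buffer(patches, buffer_size=50 * 100, rng=random.Random(0))
first_batch = [next(mixed) for _ in range(10)]
print(sorted(set(first_batch)))  # a batch of 10 typically spans many distinct images
```

With a buffer of 50 * 100 elements, the first batch is drawn from roughly the first 50 images at once, which is the mixing the answer's step 4 relies on.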
Regarding "tensorflow - TF data API: how to efficiently sample small patches from images", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/48777889/