TensorFlow: tf.data.Dataset Usage and Model Training
Keywords: tensorflow, tf.estimator
Contents
- Introduction to tf.data.Dataset
- Using tf.data.Dataset.from_tensor_slices
- Combining shuffle, repeat, and batch in the right order
- Training and testing with a from_tensor_slices pipeline and from_structure
- Training and testing with a from_tensor_slices pipeline and the tf.estimator Estimator
Introduction to tf.data.Dataset
tf.data.Dataset takes in-memory training data (lists, tuples, dictionaries) as tensor objects, and its APIs cover mapping, shuffling, batching, and repeating the data. It feeds data through an input pipeline: instead of using placeholders and feed_dict to push Python objects into the static graph batch by batch, the pipeline converts the data to tensors internally and feeds them into the graph directly, which cuts the time compute resources sit idle waiting for input. In short, feeding training data through a tf.data.Dataset pipeline makes training more efficient.
Quick start
import tensorflow as tf
x = [[2.0, 3.3], [1.2, 3.2]]
y = [1, 0]
data = tf.data.Dataset.from_tensor_slices((x, y))  # feed as a tuple
data2 = tf.data.Dataset.from_tensor_slices({"x": x, "y": y})  # feed as a dictionary
iters = data.make_one_shot_iterator()  # convert to an iterator
iters2 = data2.make_one_shot_iterator()
with tf.Session() as sess:
    for i in range(2):
        one = iters.get_next()
        a = sess.run(one)
        print(a)
        two = iters2.get_next()
        b = sess.run(two)
        print(b)
The output is as follows:
(array([2. , 3.3], dtype=float32), 1)
{'x': array([2. , 3.3], dtype=float32), 'y': 1}
(array([1.2, 3.2], dtype=float32), 0)
{'x': array([1.2, 3.2], dtype=float32), 'y': 0}
Each pull from the pipeline yields one row each of the training x and y. With tuple input you access a feature or label by index; with dictionary input you access it by key. The pipeline yields tensors, which must be evaluated with run inside a Session.
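As a minimal sketch of the two access styles (reusing data and data2 from the quick start):
elem_tuple = data.make_one_shot_iterator().get_next()  # a (features, label) tuple of tensors
elem_dict = data2.make_one_shot_iterator().get_next()  # a dict of tensors
with tf.Session() as sess:
    features, label = sess.run(elem_tuple)  # tuple input: unpack or index by position
    print(features, label)                  # [2.  3.3] 1
    d = sess.run(elem_dict)
    print(d["x"], d["y"])                   # dict input: access by key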
What tf.data.Dataset.from_tensor_slices means and what it expects as input
This function feeds in-memory Python data into the pipeline. "Slices" means that list-like data is cut along the outermost (first) dimension, which serves as the boundary between samples (each slice is a new row). For example, let x be a rank-3 tensor of shape (3, 2, 2) and y a one-dimensional vector:
x = [[[2.0, 3.3], [1.2, 3.2]], [[1.0, -2.3], [1.0, 2.1]], [[-1.5, 0.7], [1.9, -0.2]]]
y = [1, 0, 1]
data = tf.data.Dataset.from_tensor_slices((x, y))
data_iter = data.make_one_shot_iterator()
with tf.Session() as sess:
    try:
        for i in range(3):
            one = data_iter.get_next()
            a = sess.run(one)
            print(a)
    except tf.errors.OutOfRangeError:
        print("End of data")
The output is as follows:
(array([[2. , 3.3],
[1.2, 3.2]], dtype=float32), 1)
(array([[ 1. , -2.3],
[ 1. , 2.1]], dtype=float32), 0)
(array([[-1.5, 0.7],
[ 1.9, -0.2]], dtype=float32), 1)
With dictionary input, each piece of data simply gains a user-defined key, while the values follow the same slicing rule as tuples. Changing the code to
data = tf.data.Dataset.from_tensor_slices({"x": x, "y": y})
the output becomes:
{'x': array([[2. , 3.3],
[1.2, 3.2]], dtype=float32), 'y': 1}
{'x': array([[ 1. , -2.3],
[ 1. , 2.1]], dtype=float32), 'y': 0}
{'x': array([[-1.5, 0.7],
[ 1.9, -0.2]], dtype=float32), 'y': 1}
Passing a tuple or dictionary to from_tensor_slices tells it that the inputs are separate columns; every column must be list-like, with one entry per sample along the first dimension.
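A quick way to see this column view is to inspect the dataset's shapes; a minimal sketch reusing x, y, and data from above (output_shapes is the TF 1.x property):
print(data.output_shapes)  # roughly ((2, 2), ()): the first dimension (3 samples) is sliced away
# All columns must agree on the first dimension, one entry per sample;
# mismatched lengths fail at construction time, e.g. (hypothetical):
# tf.data.Dataset.from_tensor_slices((x, [1, 0]))  # 3 samples vs. 2 labels -> error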
Fetching data from the pipeline
The DatasetV1Adapter object created by tf.data.Dataset is turned into an Iterator via make_one_shot_iterator or make_initializable_iterator; data is then fetched with the iterator's get_next method and arrives as tensors.
make_one_shot_iterator
Iterates once over the data; no explicit initialization is needed (it initializes automatically), and parameterization is not supported. For example:
x = [[2.0, 3.3], [1.2, 3.2], [1.0, -2.3], [1.0, 2.1], [-1.5, 0.7], [1.9, -0.2], [1.9, -0.3]]
y = [1, 0, 1, 1, 0, 1, 0]
data2 = tf.data.Dataset.from_tensor_slices({"x": x, "y": y})
iters2 = data2.make_one_shot_iterator()
with tf.Session() as sess:
    try:
        for i in range(7):
            one = iters2.get_next()
            a = sess.run(one)
    except tf.errors.OutOfRangeError:
        print("End of data")
The result of get_next must be passed to Session.run, and the iterator needs no initialization inside the Session. The get_next call may sit inside or outside the Session block with the same effect (though calling it once outside the loop is preferable, since every get_next() call adds a new op to the graph); the following is equivalent:
data2 = tf.data.Dataset.from_tensor_slices({"x": x, "y": y})
iters2 = data2.make_one_shot_iterator()
one = iters2.get_next()
with tf.Session() as sess:
    try:
        for i in range(7):
            a = sess.run(one)
    except tf.errors.OutOfRangeError:
        print("End of data")
make_initializable_iterator
Requires running the initializer op first (sess.run(iterator.initializer)); parameterization is supported, so a tf.placeholder() can pass arguments into the pipeline:
x = [[2.0, 3.3], [1.2, 3.2], [1.0, -2.3], [1.0, 2.1], [-1.5, 0.7], [1.9, -0.2], [1.9, -0.3]]
y = [1, 0, 1, 1, 0, 1, 0]
z = tf.placeholder(tf.float32, shape=[])
data2 = tf.data.Dataset.from_tensor_slices({"x": x, "y": y}).map(lambda x: {"x": x["x"] + z, "y": x["y"]})
iters2 = data2.make_initializable_iterator()
with tf.Session() as sess:
    sess.run(iters2.initializer, feed_dict={z: -10.0})
    try:
        for i in range(7):
            one = iters2.get_next()
            a = sess.run(one)
            print(a)
    except tf.errors.OutOfRangeError:
        print("End of data")
The output:
{'x': array([-8. , -6.7], dtype=float32), 'y': 1}
{'x': array([-8.8, -6.8], dtype=float32), 'y': 0}
{'x': array([ -9. , -12.3], dtype=float32), 'y': 1}
{'x': array([-9. , -7.9], dtype=float32), 'y': 1}
{'x': array([-11.5, -9.3], dtype=float32), 'y': 0}
{'x': array([ -8.1, -10.2], dtype=float32), 'y': 1}
{'x': array([ -8.1, -10.3], dtype=float32), 'y': 0}
Inside the Session the iterator's initializer is run, and at the same time the placeholder value is passed into the pipeline, where it serves as a parameter to the pipeline's map function; in this example it subtracts 10 from every element of x.
Operating on pipeline data
A pipeline created with tf.data.Dataset supports the operations training needs: repeating, shuffling, and batching the data.
The repeat operation
Copies the data, analogous to looping over epochs:
x = [[2.0, 3.3], [1.2, 3.2]]
y = [1, 0]
data2 = tf.data.Dataset.from_tensor_slices({"x": x, "y": y}).repeat(2)
iters2 = data2.make_one_shot_iterator()
one = iters2.get_next()
with tf.Session() as sess:
    for i in range(4):
        a = sess.run(one)
        print(a)
The output shows the whole dataset read through twice in total:
{'x': array([2. , 3.3], dtype=float32), 'y': 1}
{'x': array([1.2, 3.2], dtype=float32), 'y': 0}
{'x': array([2. , 3.3], dtype=float32), 'y': 1}
{'x': array([1.2, 3.2], dtype=float32), 'y': 0}
Calling repeat() with no argument repeats the sequence indefinitely; it never ends, so no tf.errors.OutOfRangeError is ever raised.
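With an unbounded repeat() the usual pattern is to bound the loop yourself, or cap the stream with take(); a minimal sketch reusing x and y from above:
data_inf = tf.data.Dataset.from_tensor_slices({"x": x, "y": y}).repeat()
one = data_inf.make_one_shot_iterator().get_next()
max_steps = 10  # explicit step budget, since OutOfRangeError will never fire
with tf.Session() as sess:
    for step in range(max_steps):
        a = sess.run(one)
# Alternatively, .repeat().take(10) restores a finite stream and the usual OutOfRangeError.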
The batch operation
Has the iterator return a mini-batch at a time instead of the whole dataset:
x = [[2.0, 3.3], [1.2, 3.2], [1.0, -2.3], [1.0, 2.1], [-1.5, 0.7], [1.9, -0.2]]
y = [1, 0, 1, 1, 0, 1]
data2 = tf.data.Dataset.from_tensor_slices({"x": x, "y": y}).batch(3)
iters2 = data2.make_one_shot_iterator()
one = iters2.get_next()
with tf.Session() as sess:
    for i in range(2):
        a = sess.run(one)
        print(a)
The dataset is split into groups of three; the output:
{'x': array([[ 2. , 3.3],
[ 1.2, 3.2],
[ 1. , -2.3]], dtype=float32), 'y': array([1, 0, 1], dtype=int32)}
{'x': array([[ 1. , 2.1],
[-1.5, 0.7],
[ 1.9, -0.2]], dtype=float32), 'y': array([1, 0, 1], dtype=int32)}
If the batch size does not divide the sample count evenly, the last batch holds whatever remains. For example, with groups of four:
data2 = tf.data.Dataset.from_tensor_slices({"x": x, "y": y}).batch(4)
the last group in the output has fewer than 4 samples:
{'x': array([[ 2. , 3.3],
[ 1.2, 3.2],
[ 1. , -2.3],
[ 1. , 2.1]], dtype=float32), 'y': array([1, 0, 1, 1], dtype=int32)}
{'x': array([[-1.5, 0.7],
[ 1.9, -0.2]], dtype=float32), 'y': array([0, 1], dtype=int32)}
Adding the drop_remainder argument drops the short final batch, and the number of available iterations decreases by 1 accordingly, since the last batch is removed:
data2 = tf.data.Dataset.from_tensor_slices({"x": x, "y": y}).batch(4, drop_remainder=True)
Batching reduces the number of available iterations. If the loop asks for more iterations than floor(total elements / batch size), an End of sequence error is raised; catching the exception stops the loop once the data runs out. Here 7 samples repeated twice give 14 elements, so batch size 3 with drop_remainder yields floor(14 / 3) = 4 batches, and the range(14) loop triggers the exception on the 5th call:
x = [[2.0, 3.3], [1.2, 3.2], [1.0, -2.3], [1.0, 2.1], [-1.5, 0.7], [1.9, -0.2], [1.9, -0.3]]
y = [1, 0, 1, 1, 0, 1, 0]
data2 = tf.data.Dataset.from_tensor_slices({"x": x, "y": y}).repeat(2).batch(3, drop_remainder=True)
iters2 = data2.make_one_shot_iterator()
one = iters2.get_next()
with tf.Session() as sess:
    try:
        for i in range(14):
            a = sess.run(one)
    except tf.errors.OutOfRangeError:
        print("End of data")
The shuffle operation
Shuffles the order of the whole dataset; the larger the buffer_size argument, the more thoroughly the data is mixed (a sketch after the output below illustrates the buffer effect):
x = [[2.0, 3.3], [1.2, 3.2], [1.0, -2.3], [1.0, 2.1], [-1.5, 0.7], [1.9, -0.2]]
y = [1, 0, 1, 1, 0, 1]
data2 = tf.data.Dataset.from_tensor_slices({"x": x, "y": y}).shuffle(100000)
iters2 = data2.make_one_shot_iterator()
one = iters2.get_next()
with tf.Session() as sess:
    for i in range(6):
        a = sess.run(one)
        print(a)
The output below is shuffled overall, yet every element still appears:
{'x': array([ 1.9, -0.2], dtype=float32), 'y': 1}
{'x': array([1.2, 3.2], dtype=float32), 'y': 0}
{'x': array([1. , 2.1], dtype=float32), 'y': 1}
{'x': array([2. , 3.3], dtype=float32), 'y': 1}
{'x': array([-1.5, 0.7], dtype=float32), 'y': 0}
{'x': array([ 1. , -2.3], dtype=float32), 'y': 1}
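Because shuffle fills a buffer of buffer_size elements and samples from it, a small buffer only mixes locally; a minimal sketch of the contrast (the printed orders are random, so the comments are only illustrative):
weak = tf.data.Dataset.from_tensor_slices(list(range(10))).shuffle(2)     # 2-element window: nearly sorted output
strong = tf.data.Dataset.from_tensor_slices(list(range(10))).shuffle(10)  # buffer >= dataset size: full shuffle
one_weak = weak.make_one_shot_iterator().get_next()
one_strong = strong.make_one_shot_iterator().get_next()
with tf.Session() as sess:
    print([sess.run(one_weak) for _ in range(10)])    # e.g. [1, 0, 2, 4, 3, 5, 6, 8, 7, 9]
    print([sess.run(one_strong) for _ in range(10)])  # e.g. [7, 2, 9, 0, 5, 1, 8, 3, 6, 4]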
Ordering requirements for repeat, batch, and shuffle
When the three are used together, the correct order is shuffle first, then repeat, then batch. For example:
x = [[2.0, 3.3], [1.2, 3.2], [1.0, -2.3], [1.0, 2.1], [-1.5, 0.7], [1.9, -0.2], [1.9, -0.3]]
y = [1, 0, 1, 1, 0, 1, 0]
data2 = tf.data.Dataset.from_tensor_slices({"x": x, "y": y}).shuffle(1000).repeat(2).batch(3, drop_remainder=True)
iters2 = data2.make_one_shot_iterator()
one = iters2.get_next()
with tf.Session() as sess:
    for i in range(4):
        a = sess.run(one)
- shuffle first: this guarantees each epoch is shuffled on its own. If repeat comes first, the shuffle spans the repeated copies, so a single sample can show up more than once within one epoch/batch.
- repeat before batch: if batch comes before repeat, the batched result is what gets repeated. When the number of samples per epoch is not divisible by the batch size, every epoch ends with a short leftover batch, and after repeat, training keeps hitting these under-sized batches, as the sketch below demonstrates.
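A minimal sketch of the batch-before-repeat pitfall, reusing the 7-sample x and y above with batch size 3 (each pass leaves a short batch of 1):
bad = tf.data.Dataset.from_tensor_slices({"x": x, "y": y}).batch(3).repeat(2)
one = bad.make_one_shot_iterator().get_next()
with tf.Session() as sess:
    try:
        while True:
            a = sess.run(one)
            print(a["y"].shape)  # (3,), (3,), (1,), (3,), (3,), (1,): the short batch recurs every epoch
    except tf.errors.OutOfRangeError:
        pass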
The map operation
Similar to Python's map, it applies a transformation to every element in the pipeline. It is not covered in depth here, but a small sketch follows.
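A minimal sketch of map, scaling the feature column element by element (the scaling constant is arbitrary):
x = [[2.0, 3.3], [1.2, 3.2]]
y = [1, 0]
scaled = tf.data.Dataset.from_tensor_slices({"x": x, "y": y}).map(
    lambda d: {"x": d["x"] / 10.0, "y": d["y"]})  # runs once per element, inside the graph
one = scaled.make_one_shot_iterator().get_next()
with tf.Session() as sess:
    print(sess.run(one))  # {'x': array([0.2 , 0.33], dtype=float32), 'y': 1}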
Summary of data flow through the pipeline
Take a feature-and-label input as an example:
x = [[[2.0, 3.3], [1.2, 3.2]], [[1.0, -2.3], [1.0, 2.1]], [[-1.5, 0.7], [1.9, -0.2]]]
y = [1, 0, 1]
Pipeline data flow: from_tensor_slices converts the Python tuple or dictionary into a DatasetV1Adapter object; the batch operation adds one dimension to the data; make_one_shot_iterator converts the DatasetV1Adapter into a TensorFlow iterator; get_next fetches the pipeline data; and the output is a TensorFlow Tensor in tuple or dictionary form. The sketch below inspects the shapes at each stage.
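A minimal sketch of that flow, using the TF 1.x shape properties with the x and y above:
data = tf.data.Dataset.from_tensor_slices((x, y))
print(data.output_shapes)     # roughly ((2, 2), ()): per-sample shapes, the first dimension is sliced away
batched = data.batch(2)
print(batched.output_shapes)  # roughly ((?, 2, 2), (?,)): batch prepends the batch dimension
one = batched.make_one_shot_iterator().get_next()  # a tuple of tf.Tensor, run it in a Session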
Training a model with tf.data.Dataset.from_tensor_slices
Because the pipeline outputs tensors directly, they can feed the network without feed_dict. Without the pipeline, a simple model network looks like this:
class Model(object):
    def __init__(self, num_class, feature_size, learning_rate=0.05, weight_decay=0.01, decay_learning_rate=0.99):
        # placeholders: data enters via feed_dict
        self.input_x = tf.placeholder(tf.float32, [None, feature_size], name="input_x")
        self.input_y = tf.placeholder(tf.float32, [None, num_class], name="input_y")
        self.dropout_keep_prob = tf.placeholder(tf.float32, name="dropout_keep_prob")
        self.global_step = tf.Variable(0, name="global_step", trainable=False)
        with tf.name_scope('layer_1'):
            dense_out_1 = tf.layers.dense(self.input_x, 64)
            dense_out_act_1 = tf.nn.relu(dense_out_1)
        with tf.name_scope('layer_2'):
            dense_out_2 = tf.layers.dense(dense_out_act_1, 32)
            dense_out_act_2 = tf.nn.relu(dense_out_2)
        with tf.name_scope('layer_out'):
            self.output = tf.layers.dense(dense_out_act_2, 2)
            self.probs = tf.nn.softmax(self.output, dim=1, name="probs")
        with tf.name_scope('loss'):
            # cross-entropy plus L2 weight decay
            self.loss = tf.reduce_mean(
                tf.nn.softmax_cross_entropy_with_logits_v2(logits=self.output, labels=self.input_y))
            vars = tf.trainable_variables()
            loss_l2 = tf.add_n([tf.nn.l2_loss(v) for v in vars if
                                v.name not in ['bias', 'gamma', 'b', 'g', 'beta']]) * weight_decay
            self.loss += loss_l2
        with tf.name_scope("optimizer"):
            if decay_learning_rate:
                learning_rate = tf.train.exponential_decay(learning_rate, self.global_step, 100, decay_learning_rate)
            optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
            self.train_step = optimizer.minimize(self.loss, global_step=self.global_step)
        with tf.name_scope("metrics"):
            self.accuracy = tf.reduce_mean(
                tf.cast(tf.equal(tf.argmax(self.probs, 1), tf.argmax(self.input_y, 1)), dtype=tf.float32))
Without the pipeline you have to hand-roll an iterator with yield to handle repeating, batching, and shuffling:
from random import shuffle

def get_batch(epochs, batch_size, features, labels):
    for epoch in range(epochs):
        # reshuffle the sample order at the start of every epoch
        tmp = list(zip(features, labels))
        shuffle(tmp)
        features, labels = zip(*tmp)
        for batch in range(0, len(features), batch_size):
            if batch + batch_size < len(features):
                batch_features = features[batch: (batch + batch_size)]
                batch_labels = labels[batch: (batch + batch_size)]
            else:
                batch_features = features[batch: len(features)]
                batch_labels = labels[batch: len(features)]
            yield epoch, batch_features, batch_labels
Then feed the data in with feed_dict inside the Session:
feed_dict = {model.input_x: batch_x, model.input_y: batch_y, model.dropout_keep_prob: 0.8}
_, step, loss_train, acc_train = sess.run([model.train_step, model.global_step, model.loss, model.accuracy], feed_dict=feed_dict)
With pipeline input, the code changes as follows:
# Feed the data through pipelines
train_data = tf.data.Dataset.from_tensor_slices({"feature": train_x, "label": train_y}).shuffle(1000).repeat(20).batch(128, drop_remainder=True)
test_data = tf.data.Dataset.from_tensor_slices({"feature": test_x, "label": test_y}).batch(len(test_x))
data = tf.data.Iterator.from_structure(train_data.output_types, train_data.output_shapes)
next_one = data.get_next()
train_init_op = data.make_initializer(train_data)
test_init_op = data.make_initializer(test_data)
# Build the network
dense_out_1 = tf.layers.dense(next_one["feature"], 64)
dense_out_act_1 = tf.nn.relu(dense_out_1)
dense_out_2 = tf.layers.dense(dense_out_act_1, 32)
dense_out_act_2 = tf.nn.relu(dense_out_2)
output = tf.layers.dense(dense_out_act_2, 2)
probs = tf.nn.softmax(output, dim=1, name="probs")
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(logits=output, labels=next_one["label"]))
vars = tf.trainable_variables()
loss_l2 = tf.add_n([tf.nn.l2_loss(v) for v in vars if
                    v.name not in ['bias', 'gamma', 'b', 'g', 'beta']]) * 0.001
loss += loss_l2
optimizer = tf.train.AdamOptimizer(learning_rate=0.005)
global_step = tf.Variable(0, name="global_step", trainable=False)
train_step = optimizer.minimize(loss, global_step=global_step)
accuracy = tf.reduce_mean(
    tf.cast(tf.equal(tf.argmax(probs, 1), tf.argmax(next_one["label"], 1)), dtype=tf.float32))
saver = tf.train.Saver(tf.global_variables(), max_to_keep=1)
with tf.Session() as sess:
    init_op = tf.group(tf.global_variables_initializer())
    sess.run(init_op)
    train_loss_list = []
    steps = []
    acc_list = []
    train_acc_list = []
    sess.run(train_init_op)  # switch the shared iterator to the training set
    while True:
        try:
            _, step, loss_val, acc_val = sess.run([train_step, global_step, loss, accuracy])
            train_loss_list.append(loss_val)
            steps.append(step)
            train_acc_list.append(acc_val)
            if step % 10 == 0:
                print("step:", step, "loss:", loss_val)
                # checkpoint
                saver.save(sess, os.path.join(BASIC_PATH, "./ckpt1/ckpt"))
        except tf.errors.OutOfRangeError:
            print("End of data")
            break
    # Evaluation
    sess.run(test_init_op)  # switch the shared iterator to the test set
    loss_val, acc_val = sess.run([loss, accuracy])
    print("{:-^30}".format("evaluation"))
    print("[evaluation]", "loss:", loss_val, "acc", acc_val)
Here tf.data.Iterator.from_structure feeds the training and test sets through a single iterator: make_initializer creates an op per dataset, and running the appropriate op switches state, so training reads the training set and evaluation reads the test set while the code shares one variable.
Training a model with tf.data.Dataset + tf.estimator.Estimator
tf.data.Dataset is most commonly used together with the tf.estimator.Estimator. Rewriting the code above in that style, first define the input functions for training, evaluation, and prediction:
def train_input_fn(train_x, train_y, batch_size):
    dataset = tf.data.Dataset.from_tensor_slices((train_x, train_y))
    dataset = dataset.shuffle(1000).repeat().batch(batch_size)
    return dataset

def eval_input_fn(data, label, batch=None):
    if label is None:
        # prediction: features only, no labels
        return tf.data.Dataset.from_tensor_slices(data).batch(batch)
    else:
        return tf.data.Dataset.from_tensor_slices((data, label)).batch(batch)
The network-structure function is defined as follows, with features and labels arriving directly as tensors:
def model(features: tf.Tensor, labels: tf.Tensor, mode: str, params: dict):
    # Define the network structure
    dense_out_1 = tf.layers.dense(features, params["hidden_1_dim"])
    dense_out_act_1 = tf.nn.relu(dense_out_1)
    dense_out_2 = tf.layers.dense(dense_out_act_1, params["hidden_2_dim"])
    dense_out_act_2 = tf.nn.relu(dense_out_2)
    output = tf.layers.dense(dense_out_act_2, params["output_dim"])
    probs = tf.nn.softmax(output, dim=1, name="probs")
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode, predictions=probs)
    accuracy = tf.metrics.accuracy(tf.argmax(probs, 1), tf.argmax(labels, 1))
    metrics = {"acc": accuracy}
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits_v2(logits=output, labels=labels))
    vars = tf.trainable_variables()
    loss_l2 = tf.add_n([tf.nn.l2_loss(v) for v in vars if
                        v.name not in ['bias', 'gamma', 'b', 'g', 'beta']]) * params["weight_decay"]
    loss += loss_l2
    if mode == tf.estimator.ModeKeys.EVAL:
        return tf.estimator.EstimatorSpec(mode, loss=loss, eval_metric_ops=metrics)
    assert mode == tf.estimator.ModeKeys.TRAIN
    optimizer = tf.train.AdamOptimizer(learning_rate=params["learning_rate"])
    train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)
Training, evaluation, and prediction then look like this:
params = {
    "learning_rate": 0.01,
    "weight_decay": 0.001,
    "hidden_1_dim": 64,
    "hidden_2_dim": 32,
    "output_dim": 2
}
config = tf.estimator.RunConfig()
# Define the estimator
estimator = tf.estimator.Estimator(model_fn=model, model_dir="./tf_estimator", params=params, config=config)
# Train
estimator.train(lambda: train_input_fn(train_x, train_y, 128), steps=200)
# Evaluate
train_metrics = estimator.evaluate(input_fn=lambda: eval_input_fn(test_x, test_y, len(test_x)))
print(train_metrics)
# Predict
predictions = estimator.predict(input_fn=lambda: eval_input_fn(test_x, None, len(test_x)))
Note that the input_fn accepted by the estimator's train, evaluate, and predict must be a zero-argument function, while train_input_fn and eval_input_fn take arguments, so each is wrapped in an anonymous function.
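Equivalently, functools.partial from the standard library can do the wrapping; a minimal sketch reusing the estimator and data above:
from functools import partial
# partial binds the arguments now and returns a zero-argument callable,
# which is the shape input_fn expects
estimator.train(partial(train_input_fn, train_x, train_y, 128), steps=200)
metrics = estimator.evaluate(input_fn=partial(eval_input_fn, test_x, test_y, len(test_x)))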