TensorFlow 1.12 queue notes

Published: 2019-06-12

 

Main reference: https://www.tensorflow.org/api_guides/python/threading_and_queues#Queue_usage_overview

 

Automatic approach

For most use cases, the automatic thread startup and management provided by tf.train.MonitoredSession is sufficient. In the rare case that it is not, TensorFlow provides tools for manually managing your threads and queues.

Combined with functions such as tf.read_file(), tf.image.decode_jpeg(), and the TFRecord API, this enables automatic, parallel reading of image streams.
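The TF 1.x image-reading ops themselves need a TensorFlow 1.x runtime, but the parallel-read idea mentioned above can be illustrated in plain Python: several worker threads read files concurrently while the main thread consumes the results. The file names and the read_file helper below are hypothetical stand-ins, not TF API.

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for image files: a few small binary files on disk.
tmpdir = tempfile.mkdtemp()
paths = []
for i in range(8):
    p = os.path.join(tmpdir, "img_%02d.bin" % i)
    with open(p, "wb") as f:
        f.write(bytes([i]) * 16)  # fake "image" payload
    paths.append(p)

def read_file(path):
    # Analogous in spirit to tf.read_file(): return the raw bytes of one file.
    with open(path, "rb") as f:
        return f.read()

# Four worker threads read files in parallel, like four enqueue threads
# feeding a queue; pool.map preserves the input order of the results.
with ThreadPoolExecutor(max_workers=4) as pool:
    blobs = list(pool.map(read_file, paths))

print(len(blobs), len(blobs[0]))  # 8 16
```

In the real pipeline the decoded images would then be pushed into a queue (or, today, flow through tf.data), rather than collected into a list.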

 

import tensorflow as tf

def simple_shuffle_batch(source, capacity, batch_size=10):
    # Create a random shuffle queue.
    queue = tf.RandomShuffleQueue(capacity=capacity,
                                  min_after_dequeue=int(0.9 * capacity),
                                  shapes=source.shape, dtypes=source.dtype)
    # Create an op to enqueue one item.
    enqueue = queue.enqueue(source)

    # Create a queue runner that, when started, will launch 4 threads applying
    # that enqueue op.
    num_threads = 4
    qr = tf.train.QueueRunner(queue, [enqueue] * num_threads)

    # Register the queue runner so it can be found and started by
    # tf.train.start_queue_runners later (the threads are not launched yet).
    tf.train.add_queue_runner(qr)

    # Create an op to dequeue a batch
    return queue.dequeue_many(batch_size)

# create a dataset that counts from 0 to 99
input = tf.constant(list(range(100)))
input = tf.data.Dataset.from_tensor_slices(input)
input = input.make_one_shot_iterator().get_next()

# Create a slightly shuffled batch from the sorted elements
get_batch = simple_shuffle_batch(input, capacity=20)

# `MonitoredSession` will start and manage the `QueueRunner` threads.
with tf.train.MonitoredSession() as sess:
    # Since the `QueueRunners` have been started, data is available in the
    # queue, so the `sess.run(get_batch)` call will not hang.
    while not sess.should_stop():
        print(sess.run(get_batch))
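The example above only runs under TensorFlow 1.x. The core behavior of RandomShuffleQueue, dequeue a random buffered element while keeping at least min_after_dequeue elements in the buffer, can be sketched in plain Python as a shuffle buffer. The function name mirrors the TF example, but the implementation below is my own reconstruction, not TF code.

```python
import random

def simple_shuffle_batch(source, capacity, batch_size=10):
    # Shuffle buffer mimicking RandomShuffleQueue: hold up to `capacity`
    # elements, and only dequeue randomly once more than min_after_dequeue
    # elements are buffered, so every dequeue has a decent mixing pool.
    min_after_dequeue = int(0.9 * capacity)
    buf, batch = [], []
    for item in source:
        buf.append(item)
        if len(buf) > min_after_dequeue:
            # Dequeue a random buffered element, as RandomShuffleQueue does.
            batch.append(buf.pop(random.randrange(len(buf))))
            if len(batch) == batch_size:
                yield batch
                batch = []
    # Drain the remainder once the source is exhausted.
    while buf:
        batch.append(buf.pop(random.randrange(len(buf))))
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch

batches = list(simple_shuffle_batch(range(100), capacity=20))
print(len(batches))                                   # 10
print(sorted(sum(batches, [])) == list(range(100)))   # True
```

Because min_after_dequeue is 18 of a capacity of 20, each dequeue picks randomly among only ~19 buffered elements, which is why the TF docs call the resulting batches only "slightly shuffled".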

 

Manual approach

The code below is the official example, lightly adjusted so that it runs. It currently runs and produces correct results, but it emits the following deprecation warning, which I have not yet resolved:

WARNING:tensorflow:From /home/work/Downloads/python_scripts/tensorflow_example/test_tf_queue_manual.py:52: QueueRunner.__init__ (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.

Instructions for updating:
To construct input pipelines, use the `tf.data` module.

import tensorflow as tf
# Using Python's threading library.
import threading
import time

batch_size = 10
thread_num = 3

print("-" * 50)

def MyLoop(coord, id):
    step = 0
    while not coord.should_stop():
        step += 1
        print("thread id: %02d, step: %02d, ...do something..." % (id, step))
        time.sleep(0.01)
        if step >= 5:
            coord.request_stop()

# Main thread: create a coordinator.
coord = tf.train.Coordinator()
# Create thread_num threads that run 'MyLoop()'
threads = [threading.Thread(target=MyLoop, args=(coord, i)) for i in range(thread_num)]
# Start the threads and wait for all of them to stop.
for t in threads:
    t.start()
coord.join(threads)

print("-" * 50)

# create a dataset that counts from 0 to 99
example = tf.constant(list(range(100)))
example = tf.data.Dataset.from_tensor_slices(example)
example = example.make_one_shot_iterator().get_next()

# Create a queue, and an op that enqueues examples one at a time in the queue.
queue = tf.RandomShuffleQueue(capacity=20,
                              min_after_dequeue=int(0.9 * 20),
                              shapes=example.shape,
                              dtypes=example.dtype)
enqueue_op = queue.enqueue(example)

# Create a training graph that starts by dequeueing a batch of examples.
inputs = queue.dequeue_many(batch_size)
train_op = inputs  # ...use 'inputs' to build the training part of the graph...

# Create a queue runner that will run thread_num threads in parallel to enqueue examples.
qr = tf.train.QueueRunner(queue, [enqueue_op] * thread_num)

# Launch the graph.
sess = tf.Session()
# Create a coordinator, launch the queue runner threads.
coord = tf.train.Coordinator()
enqueue_threads = qr.create_threads(sess, coord=coord, start=True)

# Run the training loop, controlling termination with the coordinator.
try:
    for step in range(1000000):
        if coord.should_stop():
            break
        y = sess.run(train_op)
        print(step, ",  y =", y)
except Exception as e:
    # Report exceptions to the coordinator.
    coord.request_stop(e)
finally:
    # Terminate as usual. It is safe to call `coord.request_stop()` twice.
    coord.request_stop()
    # Join the enqueue threads created above (the original joined the earlier
    # MyLoop `threads` list by mistake).
    coord.join(enqueue_threads)
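Under the hood, tf.train.Coordinator is just cooperative thread shutdown: should_stop() is polled by workers, request_stop() flips a shared flag, and join() waits for the workers to exit. The should_stop / request_stop / join contract can be sketched with the standard library alone. MiniCoordinator below is my own reconstruction for illustration, not the TF source.

```python
import threading
import time

class MiniCoordinator:
    # Minimal stand-in for tf.train.Coordinator (no exception handling).
    def __init__(self):
        self._stop_event = threading.Event()

    def should_stop(self):
        return self._stop_event.is_set()

    def request_stop(self):
        # Safe to call more than once, like the real Coordinator.
        self._stop_event.set()

    def join(self, threads):
        # Wait for every worker to observe the stop request and exit.
        for t in threads:
            t.join()

def my_loop(coord, worker_id, counts):
    step = 0
    while not coord.should_stop():
        step += 1
        counts[worker_id] = step
        time.sleep(0.001)
        if step >= 5:
            coord.request_stop()  # first worker to reach 5 stops everyone

coord = MiniCoordinator()
counts = [0, 0, 0]
threads = [threading.Thread(target=my_loop, args=(coord, i, counts))
           for i in range(3)]
for t in threads:
    t.start()
coord.join(threads)
print(coord.should_stop(), counts)  # True; every worker stopped at <= 5 steps
```

This mirrors the MyLoop section of the manual example: one request_stop() from any worker terminates all loops, and join() returns only after every thread has finished.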

 

Reposted from: https://www.cnblogs.com/xbit/p/10083516.html
