TensorFlow fails to detect GPU when called from a Ray worker

Tags: tensorflow machine-learning neural-network distributed-computing multi-gpu

When I try to use TensorFlow with Ray via the code example below, TensorFlow fails to detect the GPUs on my machine when it is called by a "remote" worker, but it does find them when called "locally". I put "remote" and "local" in quotes because everything runs on my desktop, which has two GPUs and runs Ubuntu 16.04, and I installed TensorFlow with the tensorflow-gpu Anaconda package.

The local_network seems to be responsible for these messages in the log:

2018-01-26 17:24:33.149634: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Quadro M5000, pci bus id: 0000:03:00.0)
2018-01-26 17:24:33.149642: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:1) -> (device: 1, name: Quadro M5000, pci bus id: 0000:04:00.0)

The remote_network seems to be responsible for this message:

2018-01-26 17:24:34.309270: E tensorflow/stream_executor/cuda/cuda_driver.cc:406] failed call to cuInit: CUDA_ERROR_NO_DEVICE

Why is TensorFlow able to detect the GPU in one case but not the other?
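For reference, one quick way to check which devices a given process can see (a minimal sketch, assuming TensorFlow 1.x) is to list the local devices from inside that process:

from tensorflow.python.client import device_lib
# Run this both in the driver and inside a Ray task/actor: the driver should
# list both Quadro M5000 GPUs, while a worker that was given no GPUs will
# report only the CPU device.
print(device_lib.list_local_devices())

The full script: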

import tensorflow as tf
import numpy as np
import ray

ray.init()

BATCH_SIZE = 100
NUM_BATCHES = 1
NUM_ITERS = 201

class Network(object):
    def __init__(self, x, y):
        # Seed TensorFlow to make the script deterministic.
        tf.set_random_seed(0)
        # Define the inputs.
        x_data = tf.constant(x, dtype=tf.float32)
        y_data = tf.constant(y, dtype=tf.float32)
        # Define the weights and computation.
        w = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
        b = tf.Variable(tf.zeros([1]))
        y = w * x_data + b
        # Define the loss.
        self.loss = tf.reduce_mean(tf.square(y - y_data))
        optimizer = tf.train.GradientDescentOptimizer(0.5)
        self.grads = optimizer.compute_gradients(self.loss)
        self.train = optimizer.apply_gradients(self.grads)
        # Define the weight initializer and session.
        init = tf.global_variables_initializer()
        self.sess = tf.Session()
        # Additional code for setting and getting the weights
        self.variables = ray.experimental.TensorFlowVariables(self.loss, self.sess)
        # Return all of the data needed to use the network.
        self.sess.run(init)

    # Define a remote function that trains the network for one step and returns the
    # new weights.
    def step(self, weights):
        # Set the weights in the network.
        self.variables.set_weights(weights)
        # Do one step of training. We only need the actual gradients so we filter over the list.
        actual_grads = self.sess.run([grad[0] for grad in self.grads])
        return actual_grads

    def get_weights(self):
        return self.variables.get_weights()

# Define a remote function for generating fake data.
@ray.remote(num_return_vals=2)
def generate_fake_x_y_data(num_data, seed=0):
    # Seed numpy to make the script deterministic.
    np.random.seed(seed)
    x = np.random.rand(num_data)
    y = x * 0.1 + 0.3
    return x, y

# Generate some training data.
batch_ids = [generate_fake_x_y_data.remote(BATCH_SIZE, seed=i) for i in range(NUM_BATCHES)]
x_ids = [x_id for x_id, y_id in batch_ids]
y_ids = [y_id for x_id, y_id in batch_ids]
# Generate some test data.
x_test, y_test = ray.get(generate_fake_x_y_data.remote(BATCH_SIZE, seed=NUM_BATCHES))

# Create actors to store the networks.
remote_network = ray.remote(Network)
actor_list = [remote_network.remote(x_ids[i], y_ids[i]) for i in range(NUM_BATCHES)]
local_network = Network(x_test, y_test)

# Get initial weights of local network.
weights = local_network.get_weights()

# Do some steps of training.
for iteration in range(NUM_ITERS):
    # Put the weights in the object store. This is optional. We could instead pass
    # the variable weights directly into step.remote, in which case it would be
    # placed in the object store under the hood. However, in that case multiple
    # copies of the weights would be put in the object store, so this approach is
    # more efficient.
    weights_id = ray.put(weights)
    # Call the remote function multiple times in parallel.
    gradients_ids = [actor.step.remote(weights_id) for actor in actor_list]
    # Get all of the weights.
    gradients_list = ray.get(gradients_ids)

    # Take the mean of the different gradients. Each element of gradients_list is a list
    # of gradients, and we want to take the mean of each one.
    mean_grads = [sum([gradients[i] for gradients in gradients_list]) / len(gradients_list) for i in range(len(gradients_list[0]))]

    feed_dict = {grad[0]: mean_grad for (grad, mean_grad) in zip(local_network.grads, mean_grads)}
    local_network.sess.run(local_network.train, feed_dict=feed_dict)
    weights = local_network.get_weights()

    # Print the current weights. They should converge roughly to the values 0.1
    # and 0.3 used in generate_fake_x_y_data.
    if iteration % 20 == 0:
        print("Iteration {}: weights are {}".format(iteration, weights))

Best answer

The GPUs get cut off by the ray.remote decorator itself. From its source code:

def remote(*args, **kwargs):
    ...
    num_cpus = kwargs["num_cpus"] if "num_cpus" in kwargs else 1
    num_gpus = kwargs["num_gpus"] if "num_gpus" in kwargs else 0  # !!!
    ...

So the following call effectively sets num_gpus=0:

remote_network = ray.remote(Network)
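One way to see the effect directly (a minimal sketch, not part of the original script; Ray restricts each worker's GPU visibility by setting CUDA_VISIBLE_DEVICES) is to ask a zero-GPU remote task which devices it has been given:

import os
import ray

ray.init(num_gpus=2)

@ray.remote  # default num_gpus=0, just like ray.remote(Network)
def visible_gpus():
    # A task declared with num_gpus=0 is given no CUDA devices at all,
    # which is why TensorFlow reports CUDA_ERROR_NO_DEVICE in that worker.
    return os.environ.get("CUDA_VISIBLE_DEVICES"), ray.get_gpu_ids()

print(ray.get(visible_gpus.remote()))  # expect no visible GPUs, e.g. ('', [])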

The Ray API is a bit odd in that you cannot simply write ray.remote(Network, num_gpus=2) (even though that is exactly what you want). Here is what I did instead, and it seems to work on my machine:

ray.init(num_gpus=2)

...

@ray.remote(num_gpus=2)
class RemoteNetwork(Network):
    pass

actor_list = [RemoteNetwork.remote(x_ids[i],y_ids[i]) for i in range(NUM_BATCHES)]
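One follow-up note (an untested sketch, not part of the fix above): Ray reserves the GPUs each actor requests, so with only two GPUs on the machine at most one actor asking for num_gpus=2 can be scheduled at a time. If you raise NUM_BATCHES and want the actors to train in parallel, giving each actor a single GPU should fit better:

@ray.remote(num_gpus=1)
class RemoteNetwork(Network):
    pass

# With two GPUs on the machine, up to two of these actors can run concurrently;
# each one sees only the single GPU that Ray assigned to it.
actor_list = [RemoteNetwork.remote(x_ids[i], y_ids[i]) for i in range(NUM_BATCHES)]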

Source question on Stack Overflow: https://stackoverflow.com/questions/48471716/
