machine-learning - Error defining a simple neural network in MXNet

Tags: machine-learning neural-network deep-learning mxnet gluon

I am building a simple neural network with MXNet, but I am running into a problem at the step() call:

x1.shape = (64, 1, 1000)
y1.shape = (64, 1, 10)

net = nn.Sequential()
net.add(nn.Dense(H, activation='relu'),
        nn.Dense(90, activation='relu'),
        nn.Dense(D_out))

for t in range(500):
    #y_pred = net(x1)
    #loss = loss_fn(y_pred, y)
    #for i in range(len(x1)):

    with autograd.record():
        output = net(x1)
        loss = loss_fn(output, y1)
    loss.backward()
    trainer.step(64)
    if t % 100 == 99:
        print(t, loss)
        #optimizer.zero_grad()

UserWarning: Gradient of Parameter dense30_weight on context cpu(0) has not been updated by backward since last step. This could mean a bug in your model that made it only use a subset of the Parameters (Blocks) for this iteration. If you are intentionally only using a subset, call step with ignore_stale_grad=True to suppress this warning and skip updating of Parameters with stale gradient
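A side note on the warning's own suggestion: if you are intentionally updating only a subset of parameters, you can pass ignore_stale_grad=True to step(). This only suppresses the message and skips parameters with stale gradients; it does not fix the underlying problem, which the accepted answer below addresses:

# Suppress the warning and skip parameters whose gradients were not
# refreshed by the last backward() call
trainer.step(64, ignore_stale_grad=True)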

Best answer

The warning indicates that some of the parameters you passed to the trainer are not part of the computational graph. You need to initialize the model's parameters and define a trainer over them. Unlike PyTorch, you do not need to call zero_grad() in MXNet, because by default new gradients overwrite the old ones rather than accumulating. The following code shows a simple neural network implemented with MXNet's Gluon API:

import mxnet as mx
from mxnet import autograd, gluon, nd

num_examples = 1000   # assumed value; the original snippet leaves it undefined
num_inputs = 2        # real_fn below uses two input features
model_ctx = mx.cpu()

# Define model
net = gluon.nn.Dense(1)
net.collect_params().initialize(mx.init.Normal(sigma=1.), ctx=model_ctx)
square_loss = gluon.loss.L2Loss()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.0001})

# Create random input and labels
def real_fn(X):
    return 2 * X[:, 0] - 3.4 * X[:, 1] + 4.2

X = nd.random_normal(shape=(num_examples, num_inputs))
noise = 0.01 * nd.random_normal(shape=(num_examples,))
y = real_fn(X) + noise

# Define DataLoader
batch_size = 4
train_data = gluon.data.DataLoader(gluon.data.ArrayDataset(X, y), batch_size=batch_size, shuffle=True)
num_batches = num_examples / batch_size

for e in range(10):
    cumulative_loss = 0   # reset the running loss at the start of each epoch

    # Iterate over training batches
    for i, (data, label) in enumerate(train_data):

        # Load data on the CPU
        data = data.as_in_context(mx.cpu())
        label = label.as_in_context(mx.cpu())

        with autograd.record():
            output = net(data)
            loss = square_loss(output, label)

        # Backpropagation
        loss.backward()
        trainer.step(batch_size)

        cumulative_loss += nd.mean(loss).asscalar()

    print("Epoch %s, loss: %s" % (e, cumulative_loss / num_examples))
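Applied back to the question's own network, a minimal sketch of the missing pieces might look like the following; H, D_out, and the random x1/y1 tensors are assumed stand-ins for the asker's undefined variables:

import mxnet as mx
from mxnet import autograd, gluon, nd
from mxnet.gluon import nn

H, D_out = 100, 10   # assumed sizes; the question never defines them

x1 = nd.random_normal(shape=(64, 1, 1000))   # stand-in for the asker's data
y1 = nd.random_normal(shape=(64, 10))        # Dense flattens its input, so 2-D labels

net = nn.Sequential()
net.add(nn.Dense(H, activation='relu'),
        nn.Dense(90, activation='relu'),
        nn.Dense(D_out))

# The steps missing from the question: initialize the parameters
# and build the trainer from this same network's parameters
net.collect_params().initialize(mx.init.Normal(sigma=1.), ctx=mx.cpu())
loss_fn = gluon.loss.L2Loss()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.0001})

for t in range(500):
    with autograd.record():
        output = net(x1)          # shape (64, D_out)
        loss = loss_fn(output, y1)
    loss.backward()
    trainer.step(64)
    if t % 100 == 99:
        print(t, nd.mean(loss).asscalar())

Because the trainer is built from the same net's collect_params() after initialization, every Dense layer receives a gradient in backward(), and the stale-gradient warning no longer fires.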

Regarding machine-learning - Error defining a simple neural network in MXNet, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/57720590/
