PyTorch Tutorial - 13.6. Concise Implementation for Multiple GPUs

jf_pJlTbmA9 · Source: PyTorch · Author: PyTorch · 2023-06-05 15:44

為每個(gè)新模型從頭開始實(shí)施并行性并不好玩。此外,優(yōu)化同步工具以獲得高性能有很大的好處。在下文中,我們將展示如何使用深度學(xué)習(xí)框架的高級(jí) API 來執(zhí)行此操作。數(shù)學(xué)和算法與第 13.5 節(jié)中的相同。毫不奇怪,您至少需要兩個(gè) GPU 才能運(yùn)行本節(jié)的代碼。

# PyTorch implementation
import torch
from torch import nn
from d2l import torch as d2l

# MXNet implementation
from mxnet import autograd, gluon, init, np, npx
from mxnet.gluon import nn
from d2l import mxnet as d2l

npx.set_np()
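
Since the code below assumes that at least two GPUs are visible, it can be worth verifying this up front. The following is a minimal sketch for the PyTorch setup (the torch.cuda calls are standard; the error message itself is only illustrative):

import torch

# Report the visible GPUs and fail early if fewer than two are available
num_gpus = torch.cuda.device_count()
if num_gpus < 2:
    raise RuntimeError(f'This section needs at least 2 GPUs, found {num_gpus}')
print('Visible GPUs:', [torch.cuda.get_device_name(i) for i in range(num_gpus)])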

13.6.1. A Toy Network

Let's use a network that is slightly more meaningful than LeNet from Section 13.5, yet still easy and quick enough to train: a ResNet-18 variant (He et al., 2016). Since the input images are tiny, we modify it slightly. In particular, the difference from Section 8.6 is that we use a smaller convolution kernel, stride, and padding at the beginning. Moreover, we remove the max-pooling layer.

# PyTorch implementation
#@save
def resnet18(num_classes, in_channels=1):
  """A slightly modified ResNet-18 model."""
  def resnet_block(in_channels, out_channels, num_residuals,
           first_block=False):
    blk = []
    for i in range(num_residuals):
      if i == 0 and not first_block:
        blk.append(d2l.Residual(out_channels, use_1x1conv=True,
                    strides=2))
      else:
        blk.append(d2l.Residual(out_channels))
    return nn.Sequential(*blk)

  # This model uses a smaller convolution kernel, stride, and padding and
  # removes the max-pooling layer
  net = nn.Sequential(
    nn.Conv2d(in_channels, 64, kernel_size=3, stride=1, padding=1),
    nn.BatchNorm2d(64),
    nn.ReLU())
  net.add_module("resnet_block1", resnet_block(64, 64, 2, first_block=True))
  net.add_module("resnet_block2", resnet_block(64, 128, 2))
  net.add_module("resnet_block3", resnet_block(128, 256, 2))
  net.add_module("resnet_block4", resnet_block(256, 512, 2))
  net.add_module("global_avg_pool", nn.AdaptiveAvgPool2d((1,1)))
  net.add_module("fc", nn.Sequential(nn.Flatten(),
                    nn.Linear(512, num_classes)))
  return net

# MXNet/Gluon implementation
#@save
def resnet18(num_classes):
  """A slightly modified ResNet-18 model."""
  def resnet_block(num_channels, num_residuals, first_block=False):
    blk = nn.Sequential()
    for i in range(num_residuals):
      if i == 0 and not first_block:
        blk.add(d2l.Residual(
          num_channels, use_1x1conv=True, strides=2))
      else:
        blk.add(d2l.Residual(num_channels))
    return blk

  net = nn.Sequential()
  # This model uses a smaller convolution kernel, stride, and padding and
  # removes the max-pooling layer
  net.add(nn.Conv2D(64, kernel_size=3, strides=1, padding=1),
      nn.BatchNorm(), nn.Activation('relu'))
  net.add(resnet_block(64, 2, first_block=True),
      resnet_block(128, 2),
      resnet_block(256, 2),
      resnet_block(512, 2))
  net.add(nn.GlobalAvgPool2D(), nn.Dense(num_classes))
  return net

13.6.2. Network Initialization

我們將在訓(xùn)練循環(huán)內(nèi)初始化網(wǎng)絡(luò)。有關(guān)初始化方法的復(fù)習(xí),請(qǐng)參閱第 5.4 節(jié)。

net = resnet18(10)
# Get a list of GPUs
devices = d2l.try_all_gpus()
# We will initialize the network inside the training loop

The initialize function allows us to initialize parameters on a device of our choice. For a refresher on initialization methods see Section 5.4. What is particularly convenient is that it also allows us to initialize the network on multiple devices simultaneously. Let's see how this works in practice.

net = resnet18(10)
# Get a list of GPUs
devices = d2l.try_all_gpus()
# Initialize all the parameters of the network
net.initialize(init=init.Normal(sigma=0.01), ctx=devices)

Using the split_and_load function introduced in Section 13.5 we can divide a minibatch of data and copy portions to the list of devices provided by the devices variable. The network instance automatically uses the appropriate GPU to compute the value of the forward propagation. Here we generate 4 observations and split them over the GPUs.

x = np.random.uniform(size=(4, 1, 28, 28))
x_shards = gluon.utils.split_and_load(x, devices)
net(x_shards[0]), net(x_shards[1])

[08:00:43] src/operator/nn/./cudnn/./cudnn_algoreg-inl.h:97: Running performance tests to find the best convolution algorithm, this can take a while... (set the environment variable MXNET_CUDNN_AUTOTUNE_DEFAULT to 0 to disable)

(array([[ 2.2610207e-06, 2.2045981e-06, -5.4046786e-06, 1.2869955e-06,
     5.1373163e-06, -3.8297967e-06, 1.4339059e-07, 5.4683451e-06,
     -2.8279192e-06, -3.9651104e-06],
    [ 2.0698672e-06, 2.0084667e-06, -5.6382510e-06, 1.0498458e-06,
     5.5506434e-06, -4.1065491e-06, 6.0830087e-07, 5.4521784e-06,
     -3.7365021e-06, -4.1891640e-06]], ctx=gpu(0)),
 array([[ 2.4629783e-06, 2.6015525e-06, -5.4362617e-06, 1.2938218e-06,
     5.6387889e-06, -4.1360108e-06, 3.5758853e-07, 5.5125256e-06,
     -3.1957325e-06, -4.2976326e-06],
    [ 1.9431673e-06, 2.2600434e-06, -5.2698201e-06, 1.4807417e-06,
     5.4830934e-06, -3.9678889e-06, 7.5751018e-08, 5.6764356e-06,
     -3.2530229e-06, -4.0943951e-06]], ctx=gpu(1)))

Once data passes through the network, the corresponding parameters are initialized on the device the data passed through. This means that initialization happens on a per-device basis. Since we picked GPU 0 and GPU 1 for initialization, the network is initialized only there, and not on the CPU. In fact, the parameters do not even exist on the CPU. We can verify this by printing out the parameters and observing any errors that might arise.

weight = net[0].params.get('weight')

try:
  weight.data()
except RuntimeError:
  print('not initialized on cpu')
weight.data(devices[0])[0], weight.data(devices[1])[0]

not initialized on cpu

(array([[[ 0.01382882, -0.01183044, 0.01417865],
     [-0.00319718, 0.00439528, 0.02562625],
     [-0.00835081, 0.01387452, -0.01035946]]], ctx=gpu(0)),
 array([[[ 0.01382882, -0.01183044, 0.01417865],
     [-0.00319718, 0.00439528, 0.02562625],
     [-0.00835081, 0.01387452, -0.01035946]]], ctx=gpu(1)))

Next, let's replace the code for evaluating accuracy with one that works in parallel across multiple devices. This serves as a replacement for the evaluate_accuracy_gpu function from Section 7.6. The main difference is that we split a minibatch before invoking the network. All else is essentially identical.

#@save
def evaluate_accuracy_gpus(net, data_iter, split_f=d2l.split_batch):
  """Compute the accuracy for a model on a dataset using multiple GPUs."""
  # Query the list of devices
  devices = list(net.collect_params().values())[0].list_ctx()
  # No. of correct predictions, no. of predictions
  metric = d2l.Accumulator(2)
  for features, labels in data_iter:
    X_shards, y_shards = split_f(features, labels, devices)
    # Run in parallel
    pred_shards = [net(X_shard) for X_shard in X_shards]
    metric.add(sum(float(d2l.accuracy(pred_shard, y_shard)) for
            pred_shard, y_shard in zip(
              pred_shards, y_shards)), labels.size)
  return metric[0] / metric[1]
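
As a quick, hypothetical usage sketch: because the helper reads the device list from the network's parameters, the network must already have been initialized on the GPUs, e.g. via net.initialize(..., ctx=devices), before it is called.

# Assumes `net` and `test_iter` are defined as above and that the parameters
# already live on the GPUs; the devices are inferred from the parameters
test_acc = evaluate_accuracy_gpus(net, test_iter)
print(f'test accuracy: {test_acc:.3f}')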

13.6.3. Training

As before, the training code needs to perform several basic functions for efficient parallelism:

Network parameters need to be initialized across all devices.

While iterating over the dataset, minibatches are divided across all devices.

We compute the loss and its gradient in parallel across devices.

Gradients are aggregated and parameters are updated accordingly.

In the end we compute the accuracy (again in parallel) to report the final performance of the network. The training routine is quite similar to implementations in previous chapters, except that we need to split and aggregate data.

# PyTorch implementation of the training function
def train(net, num_gpus, batch_size, lr):
  train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
  devices = [d2l.try_gpu(i) for i in range(num_gpus)]
  def init_weights(module):
    if type(module) in [nn.Linear, nn.Conv2d]:
      nn.init.normal_(module.weight, std=0.01)
  net.apply(init_weights)
  # Set the model on multiple GPUs
  net = nn.DataParallel(net, device_ids=devices)
  trainer = torch.optim.SGD(net.parameters(), lr)
  loss = nn.CrossEntropyLoss()
  timer, num_epochs = d2l.Timer(), 10
  animator = d2l.Animator('epoch', 'test acc', xlim=[1, num_epochs])
  for epoch in range(num_epochs):
    net.train()
    timer.start()
    for X, y in train_iter:
      trainer.zero_grad()
      X, y = X.to(devices[0]), y.to(devices[0])
      l = loss(net(X), y)
      l.backward()
      trainer.step()
    timer.stop()
    animator.add(epoch + 1, (d2l.evaluate_accuracy_gpu(net, test_iter),))
  print(f'test acc: {animator.Y[0][-1]:.2f}, {timer.avg():.1f} sec/epoch '
     f'on {str(devices)}')

# MXNet/Gluon implementation of the training function
def train(num_gpus, batch_size, lr):
  train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
  ctx = [d2l.try_gpu(i) for i in range(num_gpus)]
  net.initialize(init=init.Normal(sigma=0.01), ctx=ctx, force_reinit=True)
  trainer = gluon.Trainer(net.collect_params(), 'sgd',
              {'learning_rate': lr})
  loss = gluon.loss.SoftmaxCrossEntropyLoss()
  timer, num_epochs = d2l.Timer(), 10
  animator = d2l.Animator('epoch', 'test acc', xlim=[1, num_epochs])
  for epoch in range(num_epochs):
    timer.start()
    for features, labels in train_iter:
      X_shards, y_shards = d2l.split_batch(features, labels, ctx)
      with autograd.record():
        ls = [loss(net(X_shard), y_shard) for X_shard, y_shard
           in zip(X_shards, y_shards)]
      for l in ls:
        l.backward()
      trainer.step(batch_size)
    npx.waitall()
    timer.stop()
    animator.add(epoch + 1, (evaluate_accuracy_gpus(net, test_iter),))
  print(f'test acc: {animator.Y[0][-1]:.2f}, {timer.avg():.1f} sec/epoch '
     f'on {str(ctx)}')

Let's see how this works in practice. As a warm-up we train the network on a single GPU.

train(net, num_gpus=1, batch_size=256, lr=0.1)

test acc: 0.90, 14.0 sec/epoch on [device(type='cuda', index=0)]


train(num_gpus=1, batch_size=256, lr=0.1)

test acc: 0.93, 14.3 sec/epoch on [gpu(0)]


Next we use two GPUs for training. Compared with LeNet evaluated in Section 13.5, the ResNet-18 model is considerably more complex. This is where parallelization shows its advantage: the time for computation is meaningfully larger than the time for synchronizing parameters. This improves scalability since the overhead of parallelization is less of a factor.
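
To make this concrete, here is a back-of-the-envelope sketch with made-up timings (illustrative values only, not measurements from this run): if an epoch costs t_comp seconds of computation and t_sync seconds of parameter synchronization, two GPUs halve only the computation, so the speedup approaches 2x only when t_comp dominates t_sync.

# Hypothetical per-epoch timings in seconds (illustrative values only)
t_comp, t_sync = 12.0, 2.0
one_gpu = t_comp                  # no synchronization on a single GPU
two_gpus = t_comp / 2 + t_sync    # computation is split, synchronization is not
print(f'speedup with heavy sync: {one_gpu / two_gpus:.2f}x')               # ~1.50x

t_sync = 0.5                      # cheaper synchronization relative to computation
print(f'speedup with light sync: {t_comp / (t_comp / 2 + t_sync):.2f}x')   # ~1.85x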

train(net, num_gpus=2, batch_size=512, lr=0.2)

test acc: 0.89, 8.8 sec/epoch on [device(type='cuda', index=0), device(type='cuda', index=1)]


train(num_gpus=2, batch_size=512, lr=0.2)

test acc: 0.91, 14.2 sec/epoch on [gpu(0), gpu(1)]


13.6.4. Summary

Gluon provides primitives for model initialization across multiple devices by providing a context list.

Data is automatically evaluated on the devices where it can be found.

Take care to initialize the networks on each device before trying to access the parameters on that device; otherwise you will encounter an error.

The optimization algorithms automatically aggregate over multiple GPUs.

13.6.5. Exercises
