
PyTorch Study Notes: Deep Learning with PyTorch, Data Loading

2019-03-24

Note: make sure that torch and torchvision are installed.
Data Loading and Preprocessing
After finishing the 60-minute blitz, there are six more tutorials, plus five on text processing; I aim to work through one per day. The focus, though, remains on building neural networks and handling data.

Running any machine learning project means spending a lot of effort on preparing the data; after all, garbage in, garbage out. PyTorch provides many tools to make data loading easier. In this tutorial, we will see how to load and preprocess/augment data from a non-trivial dataset.
Prerequisite libraries: in addition to torch and torchvision, this tutorial uses scikit-image (for image I/O and transforms) and pandas (for easier CSV parsing).
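A quick sanity check (a minimal sketch) to confirm that the required packages are importable before continuing:

import torch
import torchvision
import skimage
import pandas

# Print the installed versions; an ImportError above means a package is missing.
print('torch:', torch.__version__)
print('torchvision:', torchvision.__version__)
print('scikit-image:', skimage.__version__)
print('pandas:', pandas.__version__)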

1. Reading the Data

1.1 Import the required libraries

from __future__ import print_function, division
import os
import torch
import pandas as pd
from skimage import io, transform
import numpy as np
import matplotlib.pyplot as plt
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils

import warnings
warnings.filterwarnings('ignore')

1.2 Read the CSV data with pandas

landmarks_frame = pd.read_csv('data/faces/face_landmarks.csv')
n = 10
img_name = landmarks_frame.iloc[n, 0]
landmarks = landmarks_frame.iloc[n, 1:].to_numpy().astype('float')
landmarks = landmarks.reshape(-1, 2)
print('image name: {}'.format(img_name))
print('landmarks shape: {}'.format(landmarks.shape))
print('first 4 landmarks: {}'.format(landmarks[:4]))

After reading the data, we can display a sample.

def show_landmarks(image, landmarks):
    plt.imshow(image)
    plt.scatter(landmarks[:,0],landmarks[:,1],s=10,marker='.',c='r')
    plt.pause(0.001)

plt.figure()
show_landmarks(io.imread(os.path.join('data/faces/',img_name)),landmarks)
plt.show()

2. The Dataset Class

torch.utils.data.Dataset is the abstract class representing a dataset. A custom dataset should inherit from Dataset and override the following two methods: __len__, so that len(dataset) returns the size of the dataset, and __getitem__, so that dataset[i] returns the i-th sample.

Each sample of our dataset is a dict of the form {'image': image, 'landmarks': landmarks}. The dataset also takes an optional argument transform, so that any required processing can be applied to a sample.

class FaceLandmarksDataset(Dataset):
    def __init__(self, csv_files, root_dir, transform=None):
        self.landmarks_frame = pd.read_csv(csv_files)
        self.root_dir = root_dir
        self.transform = transform

    def __len__(self):
        return len(self.landmarks_frame)

    def __getitem__(self, idx):
        # Read the image and its landmarks lazily, only when the sample is requested.
        img_name = os.path.join(self.root_dir, self.landmarks_frame.iloc[idx, 0])
        image = io.imread(img_name)
        landmarks = self.landmarks_frame.iloc[idx, 1:].to_numpy()
        landmarks = landmarks.astype('float').reshape(-1, 2)
        sample = {'image': image, 'landmarks': landmarks}

        if self.transform:
            sample = self.transform(sample)
        return sample

Now we can instantiate the class and display the first four samples together with their landmarks.

face_dataset=FaceLandmarksDataset(csv_files='data/faces/face_landmarks.csv',root_dir='data/faces')
fig=plt.figure()
for i in range(len(face_dataset)):
    sample=face_dataset[i]
    print(i,sample['image'].shape, sample['landmarks'].shape)
    ax=plt.subplot(1,4,i+1)
    plt.tight_layout()
    ax.set_title('sample #{}'.format(i))
    ax.axis('off')
    show_landmarks(**sample)

    if i==3:
        plt.show()
        break

0 (324, 215, 3) (68, 2)
1 (500, 333, 3) (68, 2)
2 (250, 258, 3) (68, 2)
3 (434, 290, 3) (68, 2)

3. Transform

The output above shows one problem: the samples are not all the same size, while neural networks expect inputs of a fixed size, so the data needs some preprocessing.
(In the earlier Transformer model, the mask plays a similar role as a transform, padding sentences of different lengths to a common length via masking.)
We therefore need three transforms: Rescale, to scale the image; RandomCrop, to crop the image at a random location (data augmentation); and ToTensor, to convert numpy images to torch tensors.

We will write them as callable classes instead of simple functions, so that the parameters of a transform do not have to be passed every time it is called. For this we just need to implement the __call__ method and, if required, the __init__ method. We can then use a transform as in the snippet below (the snippet is only illustrative and is not part of the working code):

tsfm = Transform(params)
transform_sample = tsfm(sample)

Below are the implementations of the three transforms described above.

class Rescale(object):
    '''
    Rescale the image in a sample to a given size.

    output_size (int or tuple): the desired output size. If a tuple, the output
    matches it exactly; if an int, the smaller edge of the image is scaled to
    this value, keeping the aspect ratio.
    '''

    def __init__(self, output_size):
        assert isinstance(output_size, (int, tuple))
        self.output_size = output_size

    def __call__(self, sample):
        image, landmarks = sample['image'], sample['landmarks']
        h, w = image.shape[:2]
        if isinstance(self.output_size, int):
            # Scale the shorter side to output_size and keep the aspect ratio.
            if h > w:
                new_h, new_w = self.output_size * h / w, self.output_size
            else:
                new_h, new_w = self.output_size, self.output_size * w / h
        else:
            new_h, new_w = self.output_size
        new_h, new_w = int(new_h), int(new_w)
        img = transform.resize(image, (new_h, new_w))

        # Scale the landmarks accordingly; x maps to the width axis, y to the height axis.
        landmarks = landmarks * [new_w / w, new_h / h]
        return {'image': img, 'landmarks': landmarks}


class RandomCrop(object):
    '''
    Crop the image in a sample at a random location. This acts as data augmentation.

    output_size (int or tuple): the desired output size; an int gives a square crop.
    '''

    def __init__(self, output_size):
        assert isinstance(output_size, (int, tuple))
        if isinstance(output_size, int):
            self.output_size = (output_size, output_size)
        else:
            assert len(output_size) == 2
            self.output_size = output_size

    def __call__(self, sample):
        image, landmarks = sample['image'], sample['landmarks']

        h, w = image.shape[:2]
        new_h, new_w = self.output_size
        top = np.random.randint(0, h - new_h)
        left = np.random.randint(0, w - new_w)
        img = image[top: top + new_h, left: left + new_w]

        # Landmarks move with the crop: subtract the crop offset.
        landmarks = landmarks - [left, top]
        return {'image': img, 'landmarks': landmarks}


class ToTensor(object):
    '''
    Convert the ndarrays in a sample to torch Tensors.
    '''

    def __call__(self, sample):
        image, landmarks = sample['image'], sample['landmarks']

        # swap color axis because
        # numpy image: H x W x C
        # torch image: C x H x W
        image = image.transpose((2, 0, 1))
        return {'image': torch.from_numpy(image),
                'landmarks': torch.from_numpy(landmarks)}

3.2 Composing Transforms

Say we want to rescale the shorter side of the image to 256 and then randomly crop a 224*224 patch from it. In other words, we want to compose the Rescale and RandomCrop transforms.
torchvision.transforms.Compose is a simple callable class that lets us chain several transforms together:

scale=Rescale(256)
crop=RandomCrop(128)
composed=transforms.Compose([Rescale(256),RandomCrop(224)])
fig=plt.figure()
sample=face_dataset[65]
for i ,tsfm in enumerate([scale, crop, composed]):
    transform_sample=tsfm(sample)
    ax=plt.subplot(1,3, i+1)
    plt.tight_layout()
    ax.set_title(type(tsfm).__name__)
    show_landmarks(**transform_sample)
plt.show()

4. Iterating Through the Dataset

Putting the classes together, we create a dataset with the composed transforms and iterate over it with a DataLoader:

if __name__ == '__main__':
    # Build the dataset with the composed transforms (Rescale -> RandomCrop -> ToTensor).
    transformed_dataset = FaceLandmarksDataset(csv_files='data/faces/face_landmarks.csv', root_dir='data/faces',
                                               transform=transforms.Compose([
                                                   Rescale(256),
                                                   RandomCrop(224),
                                                   ToTensor()
                                               ]))
    # DataLoader batches the transformed dataset, shuffles it, and loads it with multiple worker processes.
    dataloader = DataLoader(transformed_dataset, batch_size=4, shuffle=True, num_workers=4)


    def show_landmarks_batch(sample_batched):
        # Show the images of a batch as a grid, with their landmarks overlaid.
        images_batch, landmarks_batch = sample_batched['image'], sample_batched['landmarks']
        batch_size = len(images_batch)
        im_size = images_batch.size(2)
        grid_border_size = 2  # make_grid pads 2 pixels between images by default
        grid = utils.make_grid(images_batch)
        plt.imshow(grid.numpy().transpose((1, 2, 0)))

        for i in range(batch_size):
            # Shift each set of landmarks to its image's position in the grid.
            plt.scatter(landmarks_batch[i, :, 0].numpy() + i * im_size + (i + 1) * grid_border_size,
                        landmarks_batch[i, :, 1].numpy() + grid_border_size,
                        s=10, marker='.', c='r')


    for i_batch, sample_batched in enumerate(dataloader):
        print(i_batch, sample_batched['image'].size(), sample_batched['landmarks'].size())

        if i_batch == 3:
            plt.figure()
            show_landmarks_batch(sample_batched)
            plt.axis('off')
            plt.ioff()
            plt.show()
            break

To recap the data preprocessing pipeline: we read image names and landmark coordinates from a CSV file, wrapped them in a Dataset that returns {'image', 'landmarks'} dict samples, applied the composed Rescale, RandomCrop and ToTensor transforms on the fly, and used a DataLoader to draw shuffled batches, each containing image tensors of size (4, 3, 224, 224) and landmark tensors of size (4, 68, 2).
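torchvision itself already provides ready-made datasets and transforms that cover the common cases, so for plain image/label data the classes above often do not need to be written by hand. A minimal sketch, assuming a hypothetical data/images/ directory arranged as one sub-folder per class:

import torch
from torchvision import datasets, transforms

# Built-in transforms work on PIL images and can be chained with Compose.
data_transform = transforms.Compose([
    transforms.Resize(256),       # scale the shorter side to 256
    transforms.RandomCrop(224),   # random 224x224 crop (data augmentation)
    transforms.ToTensor()         # convert to a C x H x W float tensor
])

# ImageFolder expects a layout like data/images/<class_name>/<file>.jpg
image_dataset = datasets.ImageFolder(root='data/images', transform=data_transform)
dataloader = torch.utils.data.DataLoader(image_dataset, batch_size=4, shuffle=True)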
