python - How to convert a numpy array into an image dataset?

Tags: python numpy python-imaging-library

Here I use the PIL library to load an image, a single image rather than an image dataset, and convert it to a numpy array with the numpy library. This works fine for a single image.

Now I want to convert an entire image dataset into numpy arrays, split into training, test and validation data.

Below is the code I use to convert a single image into a numpy array.

Import the needed libraries

from PIL import Image
from numpy import asarray

Load the image

image = Image.open('flower/1.jpg')

Convert the image to a numpy array

data = asarray(image)
#data is the image as a numpy array

Best Answer

If you just want to convert the numpy array back to an image, the snippet below should work. If you want to repeat the process for an entire dataset, you need to call it on every image. How you do that depends on the model you are trying to build (image classification, object detection, etc.) and on what you are using to build it (TensorFlow, Theano, etc.).
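
To turn a whole folder of images into one numpy array and split it into train/validation/test sets, a minimal sketch could look like the following (the folder name 'flower', the target size and the split ratios are assumptions, adjust them to your data):

import os
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split

folder = 'flower'  # assumed folder that holds your images
size = (224, 224)  # assumed target size, pick what your model expects

images = []
for name in sorted(os.listdir(folder)):
    img = Image.open(os.path.join(folder, name)).convert('RGB').resize(size)
    images.append(np.asarray(img))  # each image becomes a (224, 224, 3) uint8 array

data = np.stack(images)  # shape: (num_images, 224, 224, 3)

# 80/10/10 split, purely as an example
train, rest = train_test_split(data, test_size=0.2, random_state=42)
valid, test = train_test_split(rest, test_size=0.5, random_state=42)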

Solution 1

from PIL import Image 
from numpy import asarray
image = Image.open('flower/1.jpg')
data = asarray(image)

img = Image.fromarray(data, 'RGB')
img.save('test.png')
img.show()
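
Note that Image.fromarray with mode 'RGB' expects a uint8 array of shape (height, width, 3), which is exactly what asarray returns for a JPEG, so the round trip back to a saved image needs no extra conversion.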

Since you are working on an image classification problem, the code below should serve you well. Customize it for your own problem; I have commented in the code where you need to make changes.

Solution 2

from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True

import os
import numpy as np
import pandas as pd
import cv2
from glob import glob

import tensorflow as tf
from tensorflow.keras.layers import *
from tensorflow.keras.applications import MobileNetV2 #Change Here: Select the classification architecture you need
from tensorflow.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau
from tensorflow.keras.optimizers import Adam

from sklearn.model_selection import train_test_split

def build_model(size, num_classes):
    inputs = Input((size, size, 3))
    backbone = MobileNetV2(input_tensor=inputs, include_top=False, weights="imagenet") #Change Here: Select the classification architecture you need
    backbone.trainable = True
    x = backbone.output
    x = GlobalAveragePooling2D()(x)
    x = Dropout(0.2)(x) #Change Here: Try different dropout values b/w .2 to .8
    x = Dense(1024, activation="relu")(x)
    x = Dense(num_classes, activation="softmax")(x)

    model = tf.keras.Model(inputs, x)
    return model
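
# Illustrative check of the builder (not part of the original answer): with
# size=224 and num_classes=120 this creates a MobileNetV2 backbone with the
# custom classification head defined above.
# model = build_model(224, 120)
# model.summary()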

def read_image(path, size):
    # Read a single image, resize it to the model input size and scale it to [0, 1]
    image = cv2.imread(path, cv2.IMREAD_COLOR)
    image = cv2.resize(image, (size, size))
    image = image / 255.0
    image = image.astype(np.float32)
    return image

# Note: the original snippet also built an ImageDataGenerator(rescale=1./255,
# rotation_range=30, zoom_range=0.3, width_shift_range=0.2, height_shift_range=0.2,
# horizontal_flip=True) here and called flow_from_directory(path, ...), but
# flow_from_directory expects a directory and returns an iterator, not an image,
# so the result cannot be resized with cv2. If you want that kind of augmentation,
# import ImageDataGenerator from tensorflow.keras.preprocessing.image and apply it
# to whole directories outside of this per-image reader.
# (Change Here: rotation b/w 10 to 90, width/height shift b/w .2 to .5, as per your images.)

def parse_data(x, y):
    x = x.decode()

    num_class = 120 #Change Here: num_class should be equal to the number of classes (labels) in your dataset
    size = 224 #Change Here: Select size as per your chosen model architecture 

    image = read_image(x, size)
    label = [0] * num_class
    label[y] = 1
    label = np.array(label)
    label = label.astype(np.int32)

    return image, label
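
# Illustrative note (not in the original answer): parse_data(b'path/to/img.jpg', 3)
# returns a (224, 224, 3) float32 image and a one-hot label vector of length
# num_class with index 3 set to 1.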

def tf_parse(x, y):
    x, y = tf.numpy_function(parse_data, [x, y], [tf.float32, tf.int32])
    x.set_shape((224, 224, 3))
    y.set_shape((120,))
    return x, y

def tf_dataset(x, y, batch=8): #Change Here: Choose default batch size as per your needs
    dataset = tf.data.Dataset.from_tensor_slices((x, y))
    dataset = dataset.map(tf_parse)
    dataset = dataset.batch(batch)
    dataset = dataset.repeat()
    return dataset
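
# Illustrative check of the input pipeline (not part of the original answer,
# assumes the 'ids' and 'labels' lists built in the main block below): every
# element of the dataset is a batch of images and one-hot labels.
# ds = tf_dataset(ids, labels, batch=8)
# for images, targets in ds.take(1):
#     print(images.shape, targets.shape)  # (8, 224, 224, 3) (8, 120)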

if __name__ == "__main__":
    path = "/content/gdrive/My Drive/Dog Breed Classification/" #Change Here: Give path to your parent directory
    train_path = os.path.join(path, "train/*")
    test_path = os.path.join(path, "test/*")
    labels_path = os.path.join(path, "labels.csv") #Change Here: Give name of your csv file

    labels_df = pd.read_csv(labels_path)
    breed = labels_df["breed"].unique() #Change Here: replace breed with the column name, denoting class, in your csv file
    print("Number of Breed: ", len(breed))

    breed2id = {name: i for i, name in enumerate(breed)} #Change Here: replace breed & id with the column names denoting class & image file in your csv file
                                                         #repeat the same every place where breed or id is mentioned

    ids = glob(train_path)
    labels = []

    for image_id in ids:
      # print(image_id,"\n\n\n")
      image_id = image_id.split("/")[-1] #Change Here: if your csv stores ids without the file extension, also strip it, e.g. add .split(".")[0]
      breed_name = list(labels_df[labels_df.id == image_id]["breed"])[0]
      breed_idx = breed2id[breed_name]
      labels.append(breed_idx)


    ## Splitting the dataset
    train_x, valid_x = train_test_split(ids, test_size=0.2, random_state=42) #Change Here: select test size as per your need. My advice go between .2 to .3
    train_y, valid_y = train_test_split(labels, test_size=0.2, random_state=42)

    ## Parameters
    size = 224    #Change Here: Select size as per your chosen model architecture 
    num_classes = 120 #Change Here: num_classes should be equal to the number of classes (labels) in your dataset
    lr = 1e-4 #Change Here: Select as per you need. My advice chose any where b/w 1e-4 to 1e-2
    batch = 16 #Change Here: Select as per your need
    epochs = 50 #Change Here: Select as per your need

    ## Model
    model = build_model(size, num_classes)
    model.compile(loss="categorical_crossentropy", optimizer=Adam(lr), metrics=["acc"])
    # model.summary()

    ## Dataset
    train_dataset = tf_dataset(train_x, train_y, batch=batch)
    valid_dataset = tf_dataset(valid_x, valid_y, batch=batch)

    ## Training
    callbacks = [
        ModelCheckpoint("/content/gdrive/My Drive/Dog Breed Classification/Model/model-1-{epoch:02d}.h5", #Change Here :Give the path where you want to store your model
                        verbose=1, save_best_only=True),
        ReduceLROnPlateau(factor=0.1, patience=5, min_lr=1e-6)] #Change Here: Set factor, patience, min_lr as per your need. My advice leave as it is and then change to see if model performance improves.
    train_steps = (len(train_x)//batch) + 1
    valid_steps = (len(valid_x)//batch) + 1
    model.fit(train_dataset,
        steps_per_epoch=train_steps,
        validation_steps=valid_steps,
        validation_data=valid_dataset,
        epochs=epochs,
        callbacks=callbacks)
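
The test_path defined at the top of the main block is never used in the answer. If you also want predictions for those images, a small sketch like the following could be appended at the end of the main block after training; it reuses read_image, breed and size from the script and assumes the test folder holds plain image files:

    ## Predict on the unused test folder (illustrative sketch, not part of the original answer)
    test_ids = glob(test_path)
    for image_path in test_ids[:5]: # first few images only
        image = read_image(image_path, size)
        probs = model.predict(np.expand_dims(image, axis=0))[0]
        print(image_path, "->", breed[np.argmax(probs)])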

Regarding python - How to convert a numpy array into an image dataset?, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/64328135/
