Deep Learning in Practice

Image Representation & Classification

2018-08-12  徐凯_xp

1. Computer Vision Pipeline


Preprocessing is mainly about standardizing the data, for example bringing the input images to a consistent size.

Separating Data
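The labeled data gets split into a training set (used to build the classifier) and a test set (used only to evaluate it); in the day/night example later in this post the two sets already live in separate directories. A minimal sketch of separating a list of (image, label) pairs yourself, with all names here assumed:

import random

def train_test_split(image_label_list, test_fraction=0.2):
    # Shuffle a copy so the original order is left untouched
    data = list(image_label_list)
    random.shuffle(data)
    # Keep the last test_fraction of the shuffled data for testing
    split_index = int(len(data) * (1 - test_fraction))
    return data[:split_index], data[split_index:]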

Images as Grids of Pixels

Import resources

import numpy as np
import matplotlib.image as mpimg  # for reading in images
import matplotlib.pyplot as plt
import cv2  # computer vision library
%matplotlib inline

Read in and display the image

# Read in the image
image = mpimg.imread('images/waymo_car.jpg')
# Print out the image dimensions
print('Image dimensions:', image.shape)
# Change from color to grayscale
gray_image = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
plt.imshow(gray_image, cmap='gray')
# Create a 5x5 image using just grayscale, numerical values
tiny_image = np.array([[0, 20, 30, 150, 120],
                      [200, 200, 250, 70, 3],
                      [50, 180, 85, 40, 90],
                      [240, 100, 50, 255, 10],
                      [30, 0, 75, 190, 220]])

# To show the pixel grid, use matshow
plt.matshow(tiny_image, cmap='gray')

RGB colorspace

import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
# Read in the image
image = mpimg.imread('images/wa_state_highway.jpg')
plt.imshow(image)

RGB channels
Visualize the levels of each color channel. Pay close attention to the traffic signs!

# Isolate RGB channels
r = image[:,:,0]
g = image[:,:,1]
b = image[:,:,2]

# Visualize the individual color channels
f, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(20,10))
ax1.set_title('R channel')
ax1.imshow(r, cmap='gray')
ax2.set_title('G channel')
ax2.imshow(g, cmap='gray')
ax3.set_title('B channel')
ax3.imshow(b, cmap='gray')

Coding a Blue Screen Application
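As a rough sketch of one way to code the blue-screen replacement described in the next section (the file names and RGB threshold values here are assumptions that may need tuning):

import cv2
import numpy as np
import matplotlib.image as mpimg

# Read in a foreground image shot against a blue screen (file name is an assumption)
image = mpimg.imread('images/pizza_bluescreen.jpg')

# Simple blue color threshold in RGB (assumed values)
lower_blue = np.array([0, 0, 200])
upper_blue = np.array([80, 80, 255])

# Mask of every pixel that falls inside the blue range
mask = cv2.inRange(image, lower_blue, upper_blue)

# Black out the blue-screen area in the foreground
masked_image = np.copy(image)
masked_image[mask != 0] = [0, 0, 0]

# Read in a background (assumed at least as large as the foreground),
# crop it to the foreground's size, and black out the area where the subject sits
background = mpimg.imread('images/space_background.jpg')
crop_background = np.copy(background)[0:image.shape[0], 0:image.shape[1]]
crop_background[mask == 0] = [0, 0, 0]

# Add the two together to place the subject on the new background
complete_image = masked_image + crop_background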

Color Spaces

We already know how to detect a blue-screen background, but that method has prerequisites: the scene must be well lit and the blue screen's color must be very consistent. What if the lighting changes and the wall ends up shadowed, mottled, or too bright? A simple blue threshold no longer works.
So how can we reliably detect objects under varying lighting conditions?
In fact, there are many ways to represent the colors in an image beyond RGB components.
These different color representations are commonly called "color spaces".


RGB

Represented by three coordinates, R, G, and B; for example, white is (255, 255, 255).

HSV

The three letters stand for hue, saturation, and value (brightness).

HLS

Stands for hue, lightness, and saturation.
These are the most commonly used color spaces in image processing.
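OpenCV can convert between these color spaces directly; a quick sketch, assuming the same image variable as in the RGB-channel example above:

import cv2

# Convert the RGB image into HSV and HLS representations
hsv_image = cv2.cvtColor(image, cv2.COLOR_RGB2HSV)
hls_image = cv2.cvtColor(image, cv2.COLOR_RGB2HLS)

# Each result still has shape (rows, cols, 3), but the three channels now mean
# hue/saturation/value or hue/lightness/saturation instead of red/green/blue
print(hsv_image.shape, hls_image.shape)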

Image Processing in the HSV Color Space

Separate out each pixel's Value (brightness); of the three channels, value is the one most affected by lighting conditions.
The H channel is largely unaffected by shadows or over-bright regions, so if we rely on the H channel and discard the V-channel information, we can detect colored objects more reliably than we could in the RGB color space.

Detecting a Pink Balloon with HSV
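A plausible way to do this detection is to threshold on hue with cv2.inRange; the file name and the HSV ranges for "pink" below are assumptions that may need tuning:

import cv2
import numpy as np
import matplotlib.image as mpimg
import matplotlib.pyplot as plt

# Read in a balloon image and convert it to HSV
image = mpimg.imread('images/water_balloons.jpg')
hsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV)

# Rough pink range on OpenCV's 0-179 hue scale (assumed values)
lower_pink = np.array([150, 60, 60])
upper_pink = np.array([180, 255, 255])

# Mask of pixels whose hue, saturation, and value fall in the pink range
mask = cv2.inRange(hsv, lower_pink, upper_pink)

# Keep only the pink pixels from the original image
pink_only = np.copy(image)
pink_only[mask == 0] = [0, 0, 0]
plt.imshow(pink_only)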

Standardizing the Output

Converting categorical labels to numerical values:

Integer encoding means assigning an integer value to each class, for example: day = 1; night = 0.
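A minimal sketch of such an encoding, assuming the loaded labels are the strings "day" and "night" (the actual helper lives in helpers.py and may differ):

def encode(label):
    # Map the string class name to an integer: 1 for "day", 0 for "night"
    return 1 if label == "day" else 0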

Standardizing the Data
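The helpers.standardize call used below presumably resizes every input image to one fixed size and pairs it with its numerical label; the 600 × 1100 area used by avg_brightness suggests that size. A rough sketch under those assumptions:

import cv2

def standardize_input(image):
    # Resize every image to the same fixed size
    # (assumed 600 rows x 1100 columns, matching the area used by avg_brightness below)
    return cv2.resize(image, (1100, 600))

def standardize(image_list):
    # Pair each resized image with its integer-encoded label
    # (encode is the label-encoding sketch from the previous section)
    return [(standardize_input(image), encode(label)) for image, label in image_list]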

Feature Extraction

We use the HSV color space to extract the average brightness as a feature. Specifically, we use the value (V) channel, which measures brightness: we sum the V-channel values over the whole image and then divide that sum by the image area to get the image's average brightness.

#RGB to HSV
image_num = 0
test_im = STANDARDIZED_LIST[image_num][0]
test_label = STANDARDIZED_LIST[image_num][1]

# Convert to HSV
hsv = cv2.cvtColor(test_im, cv2.COLOR_RGB2HSV)

# Print image label
print('Label: ' + str(test_label))

# HSV channels
h = hsv[:,:,0]
s = hsv[:,:,1]
v = hsv[:,:,2]

# Plot the original image and the three channels
f, (ax1, ax2, ax3, ax4) = plt.subplots(1, 4, figsize=(20,10))
ax1.set_title('Standardized image')
ax1.imshow(test_im)
ax2.set_title('H channel')
ax2.imshow(h, cmap='gray')
ax3.set_title('S channel')
ax3.imshow(s, cmap='gray')
ax4.set_title('V channel')
ax4.imshow(v, cmap='gray')

In this example we plot the H, S, and V channels separately for a daytime image. Across the H, S, and V channels we can see that the sky is especially bright in the V channel, so we use the V channel to determine the average brightness.
Define a function to find an image's average brightness. The function avg_brightness takes in an RGB image and:
1. Converts the image to the HSV color space
2. Sums all the pixel values in the V channel
3. Computes the image area, here 600 × 1100, and divides the brightness sum by that area

# Find the average Value or brightness of an image
def avg_brightness(rgb_image):
    
    # Convert image to HSV
    hsv = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV)

    # Add up all the pixel values in the V channel
    sum_brightness = np.sum(hsv[:,:,2])
    
    # Calculate the average brightness: divide the summed V values by the image area
    area = 600 * 1100
    avg =  sum_brightness / area
    
    return avg
# Testing average brightness levels
# Look at a number of different day and night images and think about 
# what average brightness value separates the two types of images
image_num = 190
test_im = STANDARDIZED_LIST[image_num][0]

avg = avg_brightness(test_im)
print('Avg brightness: ' + str(avg))
plt.imshow(test_im)

分类器

#Import resources
import cv2 # computer vision library
import helpers
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
# Training and Testing Data
# Image data directories
image_dir_training = "day_night_images/training/"
image_dir_test = "day_night_images/test/"
# Load the datasets
# Using the load_dataset function in helpers.py
# Load training data
IMAGE_LIST = helpers.load_dataset(image_dir_training)
# Standardize all training images
STANDARDIZED_LIST = helpers.standardize(IMAGE_LIST)
# Visualize the standardized data
# Display a standardized image and its label

# Select an image by index
image_num = 0
selected_image = STANDARDIZED_LIST[image_num][0]
selected_label = STANDARDIZED_LIST[image_num][1]

# Display image and data about it
plt.imshow(selected_image)
print("Shape: "+str(selected_image.shape))
print("Label [1 = day, 0 = night]: " + str(selected_label))

## Feature Extraction
def avg_brightness(rgb_image):
    # Convert image to HSV
    hsv = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV)

    # Add up all the pixel values in the V channel
    sum_brightness = np.sum(hsv[:,:,2])
    area = 600*1100.0  # pixels
    
    # find the avg
    avg = sum_brightness/area
    
    return avg
# Testing average brightness levels
# Look at a number of different day and night images and think about 
# what average brightness value separates the two types of images

# As an example, a "night" image is loaded in and its avg brightness is displayed
image_num = 190
test_im = STANDARDIZED_LIST[image_num][0]

avg = avg_brightness(test_im)
print('Avg brightness: ' + str(avg))
plt.imshow(test_im)
# This function takes in an RGB image and returns a predicted label (1 = day, 0 = night)
def estimate_label(rgb_image):
    
    # Extract the average brightness feature from the RGB image
    avg = avg_brightness(rgb_image)
    
    # Threshold that separates day from night images;
    # images brighter than this are classified as day (1)
    threshold = 110
    if avg > threshold:
        predicted_label = 1
    else:
        predicted_label = 0
    
    return predicted_label
# Test dataset
import random

# Using the load_dataset function in helpers.py
# Load test data
TEST_IMAGE_LIST = helpers.load_dataset(image_dir_test)

# Standardize the test data
STANDARDIZED_TEST_LIST = helpers.standardize(TEST_IMAGE_LIST)

# Shuffle the standardized test data
random.shuffle(STANDARDIZED_TEST_LIST)
def get_misclassified_images(test_images):
    # Track misclassified images by placing them into a list
    misclassified_images_labels = []

    # Iterate through all the test images
    # Classify each image and compare to the true label
    for image in test_images:

        # Get true data
        im = image[0]
        true_label = image[1]

        # Get predicted label from your classifier
        predicted_label = estimate_label(im)

        # Compare true and predicted labels 
        if(predicted_label != true_label):
            # If these labels are not equal, the image has been misclassified
            misclassified_images_labels.append((im, predicted_label, true_label))
            
    # Return the list of misclassified [image, predicted_label, true_label] values
    return misclassified_images_labels
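To turn this into an accuracy number, divide the count of correctly classified test images by the total; the MISCLASSIFIED variable name below is an assumption rather than part of helpers.py:

# Find all misclassified images in the test set
MISCLASSIFIED = get_misclassified_images(STANDARDIZED_TEST_LIST)

# Accuracy = correctly classified images / all test images
total = len(STANDARDIZED_TEST_LIST)
num_correct = total - len(MISCLASSIFIED)
accuracy = num_correct / total

print('Accuracy: ' + str(accuracy))
print('Number of misclassified images = ' + str(len(MISCLASSIFIED)) + ' out of ' + str(total))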