Image Processing in Practice: Answer Sheet Detection

2020-08-19  YvanYan

Complete code: https://github.com/YvanYan/image_processing/tree/master/answer_sheet


Workflow:
1. Image preprocessing
2. Detect the contour of the answer-sheet region and apply a perspective transform to it
3. Threshold the warped sheet and detect the contour of each answer bubble (option)
4. Build a mask and use it to decide which bubble in each row is filled in
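
The snippets below assume cv2 and numpy are imported and use a small cv_show helper to display intermediate results. The helper itself is not shown in the article; a minimal version matching how it is called here looks like this:

import cv2
import numpy as np

def cv_show(name, img):
    # Show an image in a window and wait for a key press before continuing.
    cv2.imshow(name, img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()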

1. Image preprocessing

# Read the test image and keep an untouched copy for drawing on later.
image = cv2.imread('images\\test_02.png')
img_copy = image.copy()
# Grayscale, Gaussian blur, then Canny edge detection.
img_gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
img_blur = cv2.GaussianBlur(img_gray, (5, 5), 0)
cv_show('img_blur', img_blur)
edge = cv2.Canny(img_blur, 75, 200)
cv_show('edge', edge)

Preprocessing consists of reading the image, converting it to grayscale, smoothing it with a Gaussian blur, and running Canny edge detection.


(Figures: img_blur.png, the blurred grayscale image; edge.png, the Canny edge map)
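
The Canny thresholds (75, 200) are hard-coded here. A common alternative, not used in the original script, is to derive them from the median intensity of the blurred image; a small hypothetical helper for that could look like this:

def auto_canny_thresholds(img, sigma=0.33):
    # Pick lower/upper Canny thresholds in a band around the median intensity.
    v = np.median(img)
    lower = int(max(0, (1.0 - sigma) * v))
    upper = int(min(255, (1.0 + sigma) * v))
    return lower, upper

low, high = auto_canny_thresholds(img_blur)
edge = cv2.Canny(img_blur, low, high)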

2. Detect the answer-sheet contour and apply a perspective transform to that region

# findContours returns (image, contours, hierarchy) in OpenCV 3.x and
# (contours, hierarchy) in OpenCV 4.x; indexing with [-2] picks the contour
# list in both versions.
cnts = cv2.findContours(edge.copy(), cv2.RETR_EXTERNAL,
                        cv2.CHAIN_APPROX_SIMPLE)[-2]
cv2.drawContours(img_copy, cnts, -1, (0, 0, 255), 3)
cv_show('drawCnts', img_copy)
cnt = None

if len(cnts) > 0:
    # Sort the contours by area, largest first.
    cnts = sorted(cnts, key=cv2.contourArea, reverse=True)

    for c in cnts:
        # Approximate the contour; the sheet outline should reduce to 4 corners.
        peri = cv2.arcLength(c, True)
        approx = cv2.approxPolyDP(c, 0.02 * peri, True)

        if len(approx) == 4:
            cnt = approx
            break

We first detect the answer-sheet outline: findContours returns all candidate contours on the edge map, and after sorting them by area in descending order, the largest contour whose polygon approximation has exactly four points is taken as the answer-sheet region.

(Figure: drawCnts.png, all detected contours drawn on the original image)
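
To see why the four-point test works, here is a tiny standalone demo (not part of the original script) that approximates a synthetic rotated rectangle down to its four corners:

import cv2
import numpy as np

# Draw a filled, rotated rectangle on a blank canvas.
canvas = np.zeros((300, 300), dtype='uint8')
box = cv2.boxPoints(((150, 150), (180, 120), 30)).astype(np.int32)
cv2.fillPoly(canvas, [box], 255)

cnts = cv2.findContours(canvas, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
c = max(cnts, key=cv2.contourArea)
peri = cv2.arcLength(c, True)
approx = cv2.approxPolyDP(c, 0.02 * peri, True)
print(len(approx))   # 4: this contour is a quadrilateral
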
def get_pts(pts):
    rect = np.zeros((4, 2), dtype='float32')

    # Order: top-left, top-right, bottom-right, bottom-left.
    # x + y is smallest at the top-left corner and largest at the bottom-right.
    s = pts.sum(axis=1)
    rect[0] = pts[np.argmin(s)]
    rect[2] = pts[np.argmax(s)]

    # y - x is smallest at the top-right corner and largest at the bottom-left.
    diff = np.diff(pts, axis=1)
    rect[1] = pts[np.argmin(diff)]
    rect[3] = pts[np.argmax(diff)]

    return rect


def get_pts_transform(image, pts):
    rect = get_pts(pts)
    (tl, tr, br, bl) = rect

    # Width of the output: the longer of the top and bottom edges.
    width_top = np.sqrt(((tr[0] - tl[0]) ** 2) + ((tr[1] - tl[1]) ** 2))
    width_bot = np.sqrt(((br[0] - bl[0]) ** 2) + ((br[1] - bl[1]) ** 2))
    widthMax = max(int(width_bot), int(width_top))

    # Height of the output: the longer of the left and right edges.
    height_left = np.sqrt(((bl[0] - tl[0]) ** 2) + ((bl[1] - tl[1]) ** 2))
    height_right = np.sqrt(((br[0] - tr[0]) ** 2) + ((br[1] - tr[1]) ** 2))
    heightMax = max(int(height_left), int(height_right))

    # Destination rectangle, then the perspective warp itself.
    dest = np.array([[0, 0], [widthMax - 1, 0], [widthMax - 1, heightMax - 1], [0, heightMax - 1]], dtype='float32')

    M = cv2.getPerspectiveTransform(rect, dest)
    warped = cv2.warpPerspective(image, M, (widthMax, heightMax))
    return warped

warped = get_pts_transform(img_gray, cnt.reshape(4, 2))
cv_show('warped', warped)

get_pts orders the four corners of the answer sheet. get_pts_transform then measures the sheet's width and height, builds the transform matrix M with getPerspectiveTransform, and warps the original image with it, producing warped, an image that contains only the answer-sheet region.

(Figure: warped.png, the answer sheet after the perspective transform)
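
The sum/diff trick inside get_pts is easy to verify on a made-up set of corner points (the values below are purely illustrative):

# Four corners in arbitrary order, as (x, y) pairs.
pts = np.array([[50, 300],    # bottom-left
                [40, 20],     # top-left
                [400, 30],    # top-right
                [410, 310]],  # bottom-right
               dtype='float32')

s = pts.sum(axis=1)        # x + y = [350, 60, 430, 720]
d = np.diff(pts, axis=1)   # y - x = [250, -20, -370, -100]

print(pts[np.argmin(s)], pts[np.argmax(s)])  # top-left [40, 20], bottom-right [410, 310]
print(pts[np.argmin(d)], pts[np.argmax(d)])  # top-right [400, 30], bottom-left [50, 300]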

3. Detect the contour of each answer bubble

# Binarise the warped sheet; THRESH_OTSU picks the threshold automatically,
# and THRESH_BINARY_INV makes the marks white on a black background.
thresh = cv2.threshold(warped, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
cv_show('thresh', thresh)
# Use a BGR copy so the red debug contours are actually visible.
thresh_copy = cv2.cvtColor(thresh, cv2.COLOR_GRAY2BGR)
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
cv2.drawContours(thresh_copy, cnts, -1, (0, 0, 255), 3)
cv_show('thresh_drawCnts', thresh_copy)
recordCnts = []

for c in cnts:
    # Keep contours that are large enough and roughly square: the circular bubbles.
    (x, y, w, h) = cv2.boundingRect(c)
    ratio = w / float(h)
    if w >= 20 and h >= 20 and 0.9 <= ratio <= 1.1:
        recordCnts.append(c)

The warped image is binarised, and all bubble candidates are again found with findContours. Because the options in this test image are circles, a contour is kept only if its bounding box is at least 20 x 20 pixels and its width/height ratio lies between 0.9 and 1.1. The surviving contours are stored in recordCnts.

(Figures: thresh.png, the binarised sheet; thresh_drawCnts.png, the detected bubble contours)
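
As a quick sanity check, which is not in the original script, the kept bubbles can be drawn back onto a colour copy of the warped sheet:

# Draw the bounding box of every kept bubble contour in green.
vis = cv2.cvtColor(warped, cv2.COLOR_GRAY2BGR)
for c in recordCnts:
    (x, y, w, h) = cv2.boundingRect(c)
    cv2.rectangle(vis, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv_show('bubbles', vis)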

4. Build a mask and use it to judge each bubble region

# Process one question per iteration: each row holds five bubbles.
for (q, i) in enumerate(np.arange(0, len(recordCnts), 5)):
    # Sort the row's bubbles left to right so index j matches the option index.
    cnts = sort_contours(recordCnts[i:i + 5])[0]
    bubbled = None

    for (j, c) in enumerate(cnts):
        # Mask out everything except the current bubble, then count the white
        # pixels that the thresholded image has inside it.
        mask = np.zeros(thresh.shape, dtype='uint8')
        cv2.drawContours(mask, [c], -1, 255, -1)
        cv_show('mask', mask)
        mask = cv2.bitwise_and(thresh, thresh, mask=mask)
        total = cv2.countNonZero(mask)

        # The bubble with the most white pixels is the one the student filled in.
        if bubbled is None or total > bubbled[0]:
            bubbled = (total, j)

    # Compare the filled-in bubble with the answer key and mark the correct
    # option in green (answered correctly) or red (answered incorrectly).
    k = ANSWER_KEY[q]
    if k == bubbled[1]:
        cv2.drawContours(warped, [cnts[k]], -1, (0, 255, 0), 3)
        result += 1
    else:
        cv2.drawContours(warped, [cnts[k]], -1, (0, 0, 255), 3)

In this test there are five options per row and five rows, so each row is processed as one batch. For every bubble in a row, a mask initialised to all zeros (all black) has that bubble's contour filled in white, and bitwise_and keeps only the thresholded pixels inside the bubble. A bubble the student has marked shows up as a large white area in the thresholded image, so after the AND the chosen option is simply the one with the most white pixels in its row.
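
A few pieces used above are defined elsewhere in the full script: sort_contours (most likely imutils.contours.sort_contours, which orders contours by their bounding boxes), the ANSWER_KEY dictionary mapping question index to correct option index, and the result counter. For the five-at-a-time batching to line up with question rows, recordCnts also has to be ordered top to bottom first. A minimal sketch of how these could look; the answer-key values are placeholders, not the real key:

def sort_contours(cnts, method='left-to-right'):
    # Minimal stand-in for imutils.contours.sort_contours: order contours by
    # the x (left-to-right) or y (top-to-bottom) coordinate of their bounding boxes.
    reverse = method in ('right-to-left', 'bottom-to-top')
    axis = 1 if method in ('top-to-bottom', 'bottom-to-top') else 0
    boxes = [cv2.boundingRect(c) for c in cnts]
    cnts, boxes = zip(*sorted(zip(cnts, boxes),
                              key=lambda pair: pair[1][axis], reverse=reverse))
    return cnts, boxes

# Order the bubbles row by row before taking them five at a time.
recordCnts = sort_contours(recordCnts, method='top-to-bottom')[0]

ANSWER_KEY = {0: 1, 1: 4, 2: 0, 3: 3, 4: 1}  # question index -> correct option (placeholder values)
result = 0                                    # count of correctly answered questions

After the grading loop, result / 5 gives the fraction of questions answered correctly.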

(Figures: mask.png, the mask for a single bubble; exam.png, the graded answer sheet)