iOS Black Tech (CoreImage): Static Face Detection (Part 1)

2017-11-22  TitanCoder

A quick look at how face detection works: every image is made up of pixels, and every pixel carries a color value (RGB, for example). Different facial features produce different color values, so face detection runs a large amount of processing over the pixels of a photo and ultimately derives the outlines of the facial features.

Figure: Core Image framework diagram

I. Main Classes

II. Project Code Walkthrough

1. Creating the Detector

1.1 First, a look at the detector types and option keys

//Face detector
public let CIDetectorTypeFace: String

//Rectangle detector
public let CIDetectorTypeRectangle: String

//QR code detector
public let CIDetectorTypeQRCode: String

//Text detector
public let CIDetectorTypeText: String

//Specifies the detection accuracy
public let CIDetectorAccuracy: String

//Enables feature tracking, similar to the face-tracking feature of the camera app
public let CIDetectorTracking: String

//Sets the minimum size of the features to detect
public let CIDetectorMinFeatureSize: String

//For the rectangle detector: sets the maximum number of rectangle features to return.
//The value is an NSNumber in the range 1 <= CIDetectorMaxFeatureCount <= 256; the default is 1
public let CIDetectorMaxFeatureCount: String

//Number of face angles; the value is an NSNumber equal to 1, 3, 5, 7, 9, or 11
public let CIDetectorNumberOfAngles: String

//Sets the image orientation; the value is an integer NSNumber from 1 to 8
public let CIDetectorImageOrientation: String

//If set to true (a Bool NSNumber), the detector also extracts eye-blink features
public let CIDetectorEyeBlink: String

//If set to true (a Bool NSNumber), the detector also extracts smile features
public let CIDetectorSmile: String

//Sets the focal length per frame; the value is a Float NSNumber
public let CIDetectorFocalLength: String

//Sets the aspect ratio of the rectangle; the value is a Float NSNumber
public let CIDetectorAspectRatio: String

//Controls whether the text detector should detect sub-features. The default is false; the value is a Bool NSNumber
public let CIDetectorReturnSubFeatures: String
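
One thing the list above does not make obvious: some keys (CIDetectorAccuracy, CIDetectorTracking, CIDetectorMinFeatureSize, CIDetectorMaxFeatureCount, ...) are passed when the detector is created, while others (CIDetectorSmile, CIDetectorEyeBlink, CIDetectorImageOrientation, ...) are passed per call to features(in:options:). A minimal sketch of the second case, assuming a detector and ciImage like the ones created below:

// Per-call options: ask the detector to also evaluate smiles and blinks,
// and tell it the image orientation (1 = "up")
let perCallOptions: [String: Any] = [
    CIDetectorSmile: true,
    CIDetectorEyeBlink: true,
    CIDetectorImageOrientation: 1
]
let features = detector?.features(in: ciImage, options: perCallOptions) ?? []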

//1. Create the context
let context = CIContext()

//2. Convert the UIImage to a CIImage
guard let image = imageView.image else { return }
guard let ciImage = CIImage(image: image) else { return }

//3. Set the options (detection accuracy)
let params = [CIDetectorAccuracy: CIDetectorAccuracyHigh]

//4. Create the detector
let detector = CIDetector(ofType: CIDetectorTypeFace, context: context, options: params)

2. Accuracy Options

//Low accuracy: less precise, but faster and cheaper
public let CIDetectorAccuracyLow: String

//High accuracy: more precise, but slower and more expensive
public let CIDetectorAccuracyHigh: String
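
Which accuracy to pick depends on the scenario. A hypothetical helper (not part of the original demo) that chooses low accuracy for real-time video frames and high accuracy for still photos:

import CoreImage

/// Hypothetical helper: high accuracy for still photos, low accuracy for real-time video frames.
func makeFaceDetector(forRealTimeVideo realTime: Bool, context: CIContext = CIContext()) -> CIDetector? {
    let accuracy = realTime ? CIDetectorAccuracyLow : CIDetectorAccuracyHigh
    return CIDetector(ofType: CIDetectorTypeFace,
                      context: context,
                      options: [CIDetectorAccuracy: accuracy])
}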

3. CIFaceFeature Overview

//Frame of the detected face within the image
open var bounds: CGRect { get }

//Whether the left eye position was detected
open var hasLeftEyePosition: Bool { get }

//Position of the left eye
open var leftEyePosition: CGPoint { get }

//Whether the right eye position was detected
open var hasRightEyePosition: Bool { get }

//Position of the right eye
open var rightEyePosition: CGPoint { get }

//Whether the mouth position was detected
open var hasMouthPosition: Bool { get }

//Position of the mouth
open var mouthPosition: CGPoint { get }

//Whether the face is tilted
open var hasFaceAngle: Bool { get }

//Tilt angle of the face
open var faceAngle: Float { get }

//Whether the face is smiling
open var hasSmile: Bool { get }

//Whether the left eye is closed
open var leftEyeClosed: Bool { get }

//Whether the right eye is closed
open var rightEyeClosed: Bool { get }
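
Note that hasSmile, leftEyeClosed and rightEyeClosed are only evaluated when CIDetectorSmile / CIDetectorEyeBlink are passed to features(in:options:). A short sketch (assuming the detector and ciImage created above) that reads these properties:

let faceFeatures = detector?.features(in: ciImage,
                                      options: [CIDetectorSmile: true,
                                                CIDetectorEyeBlink: true]) ?? []

for case let face as CIFaceFeature in faceFeatures {
    print("face at \(face.bounds)")
    if face.hasSmile { print("  smiling") }
    if face.leftEyeClosed || face.rightEyeClosed { print("  at least one eye closed") }
    if face.hasFaceAngle { print("  tilted by \(face.faceAngle) degrees") }
}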

4. The Core Image Coordinate System

Figure: the UIKit coordinate system (origin at the top left) compared with the Core Image coordinate system (origin at the bottom left)

Core Image reports feature positions in its bottom-left-origin coordinate system, while UIKit views use a top-left origin, so the demo simply flips the overlay view vertically:

resultView.transform = CGAffineTransform(scaleX: 1, y: -1)
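
An alternative (not used in the original demo) is to convert each feature rect into UIKit coordinates instead of flipping the whole overlay view. A sketch, assuming detection was run against the full-size image:

import UIKit

/// Converts a Core Image rect (bottom-left origin, image pixel coordinates)
/// into a UIKit rect (top-left origin) for an image of the given size.
func uiKitRect(fromCoreImageRect rect: CGRect, imageSize: CGSize) -> CGRect {
    var converted = rect
    converted.origin.y = imageSize.height - rect.origin.y - rect.height
    return converted
}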

5. Face Detection (Core Code)

/// Detects faces in the imageView's image via face recognition and marks them with red rectangles
static func faceImagesByFaceRecognition(imageView: UIImageView, resultCallback: @escaping ((_ count: Int) -> ())) {
    //0. Remove any previously added subviews
    let subViews = imageView.subviews
    for subview in subViews {
        if subview.isKind(of: UIView.self) {
            subview.removeFromSuperview()
        }
    }
    
    //1. Create the context
    let context = CIContext()
    
    //2. Convert the UIImage to a CIImage
    guard let image = imageView.image else { return }
    guard let ciImage = CIImage(image: image) else { return }
    
    //3. Set the options (detection accuracy)
    let params = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
    
    //4. Create the detector
    let detector = CIDetector(ofType: CIDetectorTypeFace, context: context, options: params)
    
    //5. Run the detector and collect the face features
    guard let faceArr = detector?.features(in: ciImage) else { return }
    
    //6. Add an overlay view that will hold the red rectangles
    let resultView = UIView(frame: CGRect(x: 0, y: 0, width: imageView.frame.width, height: imageView.frame.height))
    imageView.addSubview(resultView)
    
    //7. Iterate over the detected features
    for faceFeature in faceArr {
        resultView.addSubview(addRedrectangleView(rect: faceFeature.bounds))
        
        //7.1 Mark the eyes if they were detected
        guard let feature = faceFeature as? CIFaceFeature else { continue }
        //Left eye
        if feature.hasLeftEyePosition {
            let leftView = addRedrectangleView(rect: CGRect(x: 0, y: 0, width: 5, height: 5))
            leftView.center = feature.leftEyePosition
            resultView.addSubview(leftView)
        }
        //Right eye
        if feature.hasRightEyePosition {
            let rightView = addRedrectangleView(rect: CGRect(x: 0, y: 0, width: 5, height: 5))
            rightView.center = feature.rightEyePosition
            resultView.addSubview(rightView)
        }
        
        //7.2 Mark the mouth if it was detected
        if feature.hasMouthPosition {
            let mouthView = addRedrectangleView(rect: CGRect(x: 0, y: 0, width: 10, height: 5))
            mouthView.center = feature.mouthPosition
            resultView.addSubview(mouthView)
        }
    }
    
    //8. Flip resultView vertically to account for the Core Image coordinate system
    resultView.transform = CGAffineTransform(scaleX: 1, y: -1)
    
    //9. Report the result
    resultCallback(faceArr.count)
}
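
The helper addRedrectangleView(rect:) is not shown in the snippet above; a minimal sketch of what it presumably looks like (a transparent view outlined in red), assuming it lives in the same type as the function above:

/// Presumed helper: returns a transparent view outlined in red at the given rect.
static func addRedrectangleView(rect: CGRect) -> UIView {
    let view = UIView(frame: rect)
    view.backgroundColor = .clear
    view.layer.borderColor = UIColor.red.cgColor
    view.layer.borderWidth = 1
    return view
}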

6. Detection Result

Figure: screenshot of the detection result

7. Notes

One thing to watch out for: the detector works on the image at its full pixel size, so the coordinates it returns are in image pixels. When the image is shown in a UIImageView of a different size, they have to be scaled. The helper below computes the scale factor between the image and the view:

/// Returns the scale factor between the image's pixel size and the imageView's frame
static func getScale(imageView: UIImageView, image: UIImage) -> CGFloat {
    let viewSize = imageView.frame.size
    let imageSize = image.size
    
    let widthScale = imageSize.width / viewSize.width
    let heightScale = imageSize.height / viewSize.height
    
    //Use the larger of the two ratios
    return widthScale > heightScale ? widthScale : heightScale
}
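
A hypothetical use of this helper (not shown in the original demo), assuming the imageView, image and a faceFeature from the snippets above: dividing a detected face rect by the scale factor maps it from image pixels into the imageView's coordinate space.

let scale = getScale(imageView: imageView, image: image)
let faceRectInView = CGRect(x: faceFeature.bounds.origin.x / scale,
                            y: faceFeature.bounds.origin.y / scale,
                            width: faceFeature.bounds.width / scale,
                            height: faceFeature.bounds.height / scale)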

Next: iOS Black Tech (AVFoundation): Real-Time Face Detection (Part 2)


GitHub -- demo project link

