Implementing Face Detection on iOS with CoreImage

2024-03-10  zackzheng

There are several ways to implement face detection on iOS; CoreImage is one of the simplest.

1. Core Code

let ciImage = xxx // the CIImage to analyze
let options = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: options)
let faceList = faceDetector?.features(in: ciImage)

CIDetector scans the CIImage for faces according to the configured type and options.
faceList is an array of CIFeature; each element can be cast to CIFaceFeature, whose properties expose the detection results.
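Putting the two steps together, a minimal sketch of running the detector and reading back per-face results might look like this (the function name and the `print` output are illustrative, not part of the original code):

```swift
import CoreImage

// Run face detection on a CIImage and print what each CIFaceFeature reports.
func printFaceFeatures(in ciImage: CIImage) {
    let options = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
    let detector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: options)
    let features = detector?.features(in: ciImage) ?? []
    // Each feature of type face can be downcast to CIFaceFeature.
    for case let face as CIFaceFeature in features {
        print("face bounds:", face.bounds)
        if face.hasLeftEyePosition { print("left eye:", face.leftEyePosition) }
        if face.hasRightEyePosition { print("right eye:", face.rightEyePosition) }
        if face.hasMouthPosition { print("mouth:", face.mouthPosition) }
    }
}
```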

1.1 Converting a Camera Photo to a CIImage

For a UIImage that was not created from a CIImage (for example, a photo captured with AVKit), the ciImage property is nil, so we need to re-render the image ourselves and extract a CIImage from it.

extension UIImage {

    /// A CIImage for this image, rendering through a bitmap context
    /// when the system `ciImage` property is nil.
    var customCIImage: CIImage? {

        // If the UIImage was created from a CIImage, return it directly.
        if let systemCIImage = ciImage {
            return systemCIImage
        }

        // Otherwise re-draw the image into a bitmap context and extract a CGImage.
        let imageRect = CGRect(x: 0, y: 0, width: size.width, height: size.height)
        UIGraphicsBeginImageContextWithOptions(size, true, 0.0)
        defer { UIGraphicsEndImageContext() } // end the context even on the early return below
        draw(in: imageRect)
        guard let newImage = UIGraphicsGetImageFromCurrentImageContext(),
              let cgImage = newImage.cgImage else {
            return nil
        }

        return CIImage(cgImage: cgImage)
    }
}

2. What the Detector Can Report

Looking at the definition of CIFaceFeature shows which features the detector can report.

/** A face feature found by a CIDetector.
 All positions are relative to the original image. */
@available(iOS 5.0, *)
open class CIFaceFeature : CIFeature {

    
    
    /** coordinates of various cardinal points within a face.
     
     Note that the left eye is the eye on the left side of the face
     from the observer's perspective. It is not the left eye from
     the subject's perspective. */
    open var bounds: CGRect { get }

    open var hasLeftEyePosition: Bool { get }

    open var leftEyePosition: CGPoint { get }

    open var hasRightEyePosition: Bool { get }

    open var rightEyePosition: CGPoint { get }

    open var hasMouthPosition: Bool { get }

    open var mouthPosition: CGPoint { get }

    
    open var hasTrackingID: Bool { get }

    open var trackingID: Int32 { get }

    open var hasTrackingFrameCount: Bool { get }

    open var trackingFrameCount: Int32 { get }

    
    open var hasFaceAngle: Bool { get }

    open var faceAngle: Float { get }

    
    open var hasSmile: Bool { get }

    open var leftEyeClosed: Bool { get }

    open var rightEyeClosed: Bool { get }
}

2.1 Computing and Displaying the Face Position

CIImage has an extent.size property, which is the image's dimensions, and CIFaceFeature has a bounds property, which is the face's position within the image.
From these two you can convert the face rectangle into coordinates relative to whatever view you want to draw it in. Note that Core Image uses a bottom-left origin while UIKit views use a top-left origin, so the y-axis must be flipped.
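The conversion can be sketched as a small pure function; `faceRect` and its parameter names are illustrative. It scales `faceBounds` (from CIFaceFeature.bounds) by the view-to-image ratio and flips the y-axis:

```swift
import Foundation // CGRect / CGSize on all Swift platforms

// Map a face rectangle from Core Image coordinates (bottom-left origin,
// measured against imageSize) into a view's coordinates (top-left origin).
func faceRect(in viewSize: CGSize, faceBounds: CGRect, imageSize: CGSize) -> CGRect {
    let scaleX = viewSize.width / imageSize.width
    let scaleY = viewSize.height / imageSize.height
    // Flip the y-axis: Core Image's origin is bottom-left, UIKit's is top-left.
    let flippedY = imageSize.height - faceBounds.origin.y - faceBounds.height
    return CGRect(x: faceBounds.origin.x * scaleX,
                  y: flippedY * scaleY,
                  width: faceBounds.width * scaleX,
                  height: faceBounds.height * scaleY)
}
```

This assumes the view shows the whole image stretched to fill; for aspect-fit or aspect-fill display, the scale factors and an additional offset would need to account for the letterboxing.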

2.2 Actual Detection Output

- face.bounds
(593.9743650052696, 1409.8143731430173, 1052.3287265683757, 1016.9813878205605)
- face.hasLeftEyePosition
true
- face.leftEyePosition
(986.4649061334785, 2220.677064805641)
- face.hasRightEyePosition
true
- face.rightEyePosition
(1391.2080312655598, 1945.6312162347604)
- face.hasMouthPosition 
true
- face.mouthPosition
(883.7131392527372, 1705.263066681102)
- face.hasTrackingID
false
- face.trackingID
0
- face.hasTrackingFrameCount
false
- face.trackingFrameCount
0
- face.hasFaceAngle
true
- face.faceAngle
37.74833
- face.hasSmile
false
- face.leftEyeClosed
false
- face.rightEyeClosed
false

3. A Simple Wrapper

Here is a simple wrapper:

extension CIImage {

    enum FaceDetectResult {
        case noFace
        case oneFace(ratio: CGFloat, faceBounds: CGRect, imageSize: CGSize)
        case oneFaceShouldCrop(ratio: CGFloat, faceBounds: CGRect, imageSize: CGSize)
        case moreThanOneFace
    }

    var faceDetectResult: FaceDetectResult {
        let options = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
        let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: options)
        let faceList = faceDetector?.features(in: self) ?? []
        switch faceList.count {
        case 0:
            return .noFace
        case 1:
            guard let face = faceList.first as? CIFaceFeature else {
                return .noFace
            }
            // `self` is the CIImage being analyzed, so use its own extent.
            let imageSize = extent.size
            let faceBounds = face.bounds
            // Ratio of the face's area to the whole image's area.
            let imageArea = imageSize.width * imageSize.height
            let faceArea = faceBounds.width * faceBounds.height
            let ratio = faceArea / imageArea
            // 0.3 is an arbitrary threshold: below it the face is considered
            // too small relative to the image, and the image should be cropped.
            if ratio > 0.3 {
                return .oneFace(ratio: ratio, faceBounds: faceBounds, imageSize: imageSize)
            } else {
                return .oneFaceShouldCrop(ratio: ratio, faceBounds: faceBounds, imageSize: imageSize)
            }
        default:
            return .moreThanOneFace
        }
    }
}
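A usage sketch, switching over the wrapper's result (the `handle` function and the printed strings are illustrative; `ciImage` is assumed to come from `customCIImage` above):

```swift
import CoreImage

// React to each case of FaceDetectResult.
func handle(_ ciImage: CIImage) {
    switch ciImage.faceDetectResult {
    case .noFace:
        print("no face found")
    case .oneFace(let ratio, _, _):
        print("one face, area ratio \(ratio)")
    case .oneFaceShouldCrop(let ratio, let faceBounds, _):
        print("face too small (ratio \(ratio)), crop to \(faceBounds)")
    case .moreThanOneFace:
        print("more than one face")
    }
}
```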

4. References

Face Detection on iOS with Core Image
