

  • In recent years, as mobile hardware has kept getting better, all kinds of beauty-camera apps have appeared: skin smoothing, face slimming, adding stickers, and so on. Behind every one of those features sits a key technology: face detection.

The typical detection pipeline

  • 1. Face detection: determine whether the image contains any faces, and how many, and get the position of each face (a quick sketch of this step follows the list).
  • 2. Face landmark detection: using the face positions from step 1 together with the image data, extract the key points of each face.
  • 3. Processing: use those key points to build whatever feature you need.
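To make step 1 concrete, here is a minimal sketch that detects faces in a still image with Core Image's CIDetector. This is only an illustration: the rest of this post uses AVFoundation's metadata output on a live stream instead.

```objc
#import <CoreImage/CoreImage.h>
#import <UIKit/UIKit.h>

// Minimal still-image face detection; each returned CIFaceFeature carries
// bounds plus coarse landmarks (eye and mouth positions).
NSArray<CIFeature *> *DetectFaces(UIImage *image) {
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];
    CIImage *ciImage = [CIImage imageWithCGImage:image.CGImage];
    return [detector featuresInImage:ciImage];
}
```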

Let's look at a picture:

[Image 1: image.png, the 68-point facial landmark layout]

This picture describes the outline of a face with 68 points; those 68 points are the landmarks. There are also 5-point schemes and other specifications.

Today we'll use iOS's built-in AVFoundation framework to detect faces appearing in a video stream and draw the detected rectangles onto it. First, let's see what the result looks like:

[Image 2: IMB_hHOF7t.GIF, a demo of the live face-detection overlay]

Mars is probably too cool, he can't even be detected!

Ingredients

  • AVFoundation

  • opencv2.framework (download opencv2). Note: some OpenCV builds ship iOS helper methods and some versions do not (I forget which, so check when you download). If yours lacks them you can write the conversion methods yourself; they are mainly used for converting between image types. Also, your controller's .m file has to be renamed to .mm so it compiles as Objective-C++. (A hand-rolled conversion sketch follows this list.)
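If your opencv2.framework build does not include the UIImageToMat / MatToUIImage helpers from <opencv2/imgcodecs/ios.h>, a hand-rolled fallback can look roughly like this. This is a sketch: the function name UIImageToMatFallback is mine, and it assumes an 8-bit RGBA-compatible source image.

```objc
// Goes in the .mm file (Objective-C++), after importing <opencv2/opencv.hpp> and UIKit.
static cv::Mat UIImageToMatFallback(UIImage *image) {
    CGImageRef cgImage = image.CGImage;
    size_t cols = CGImageGetWidth(cgImage);
    size_t rows = CGImageGetHeight(cgImage);
    cv::Mat mat((int)rows, (int)cols, CV_8UC4); // 4 channels, 8 bits per channel
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Render the CGImage straight into the Mat's backing buffer.
    CGContextRef context = CGBitmapContextCreate(mat.data, cols, rows, 8, mat.step[0],
                                                 colorSpace,
                                                 kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault);
    CGContextDrawImage(context, CGRectMake(0, 0, cols, rows), cgImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    return mat;
}
```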
```objc
#import "ViewController.h"
#import <AVFoundation/AVFoundation.h>
#import <opencv2/imgproc/types_c.h>
#import <opencv2/imgproc/imgproc_c.h>
#import <opencv2/imgcodecs/ios.h>
#import <opencv2/opencv.hpp>

@interface ViewController () <AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureMetadataOutputObjectsDelegate>

@property (nonatomic, strong) AVCaptureSession *session;
@property (nonatomic, strong) UIImageView *cameraView;
@property (nonatomic, strong) dispatch_queue_t sample;
@property (nonatomic, strong) dispatch_queue_t faceQueue;
@property (nonatomic, copy) NSArray *currentMetadata; ///< If faces are detected the system hands back an array; we store it here

@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    _currentMetadata = [NSMutableArray arrayWithCapacity:0];
    [self.view addSubview:self.cameraView];

    _sample = dispatch_queue_create("sample", NULL);
    _faceQueue = dispatch_queue_create("face", NULL);

    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    AVCaptureDevice *deviceF;
    for (AVCaptureDevice *device in devices) {
        if (device.position == AVCaptureDevicePositionFront) {
            deviceF = device;
            break;
        }
    }

    AVCaptureDeviceInput *input = [[AVCaptureDeviceInput alloc] initWithDevice:deviceF error:nil];
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    [output setSampleBufferDelegate:self queue:_sample];

    AVCaptureMetadataOutput *metaout = [[AVCaptureMetadataOutput alloc] init];
    [metaout setMetadataObjectsDelegate:self queue:_faceQueue];

    self.session = [[AVCaptureSession alloc] init];
    [self.session beginConfiguration];
    if ([self.session canAddInput:input]) {
        [self.session addInput:input];
    }
    if ([self.session canSetSessionPreset:AVCaptureSessionPreset640x480]) {
        [self.session setSessionPreset:AVCaptureSessionPreset640x480];
    }
    if ([self.session canAddOutput:output]) {
        [self.session addOutput:output];
    }
    if ([self.session canAddOutput:metaout]) {
        [self.session addOutput:metaout];
    }
    [self.session commitConfiguration];

    NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
    NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
    [output setVideoSettings:videoSettings];

    // Here we tell the metadata output to react when it sees a face. QR codes
    // and other types can go in this array too; whenever the stream detects
    // what you asked for, the second delegate method below fires.
    [metaout setMetadataObjectTypes:@[AVMetadataObjectTypeFace]];

    AVCaptureSession *session = (AVCaptureSession *)self.session;
    // The front camera must be configured like this, otherwise the picture is mirrored
    for (AVCaptureOutput *anOutput in session.outputs) {
        for (AVCaptureConnection *av in anOutput.connections) {
            // Check whether the connection supports mirroring (front camera)
            if (av.supportsVideoMirroring) {
                // Mirroring settings
                av.videoOrientation = AVCaptureVideoOrientationPortrait;
                av.videoMirrored = YES;
            }
        }
    }
    [self.session startRunning];
}

#pragma mark - AVCaptureSession Delegate -

- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    NSMutableArray *bounds = [NSMutableArray arrayWithCapacity:0];
    // On every frame, check whether self.currentMetadata holds anything, then
    // convert each AVMetadataFaceObject into an AVMetadataObject whose bounds
    // is the face position, and collect those bounds into the array
    for (AVMetadataFaceObject *faceobject in self.currentMetadata) {
        AVMetadataObject *face = [output transformedMetadataObjectForMetadataObject:faceobject connection:connection];
        [bounds addObject:[NSValue valueWithCGRect:face.bounds]];
    }
}

- (void)captureOutput:(AVCaptureOutput *)output didOutputMetadataObjects:(NSArray<__kindof AVMetadataObject *> *)metadataObjects fromConnection:(AVCaptureConnection *)connection {
    // This callback fires when faces are detected
    _currentMetadata = metadataObjects;
}

- (UIImage *)imageFromPixelBuffer:(CMSampleBufferRef)p {
    CVImageBufferRef buffer = CMSampleBufferGetImageBuffer(p);
    CVPixelBufferLockBaseAddress(buffer, 0);

    uint8_t *base = (uint8_t *)CVPixelBufferGetBaseAddress(buffer);
    size_t width = CVPixelBufferGetWidth(buffer);
    size_t height = CVPixelBufferGetHeight(buffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(buffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef cgContext = CGBitmapContextCreate(base, width, height, 8, bytesPerRow,
                                                   colorSpace,
                                                   kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);

    CGImageRef cgImage = CGBitmapContextCreateImage(cgContext);
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CGContextRelease(cgContext);
    CVPixelBufferUnlockBaseAddress(buffer, 0);
    return image;
}

- (UIImageView *)cameraView {
    if (!_cameraView) {
        _cameraView = [[UIImageView alloc] initWithFrame:self.view.bounds];
        // Fill the view without stretching
        _cameraView.contentMode = UIViewContentModeScaleAspectFill;
    }
    return _cameraView;
}

@end
```
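One thing this demo glosses over: _currentMetadata is written on faceQueue and read on the sample-buffer queue with no synchronization. The smallest hedge (my suggestion, not in the original code) is to make the property atomic, so at least a consistent array reference is exchanged:

```objc
// Atomic so that writes from faceQueue and reads from the sample queue
// exchange a consistent array reference (set it via self., not the ivar).
@property (atomic, copy) NSArray *currentMetadata;
```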

Things to note

  • 1. Configure the output (setVideoSettings and the like) only after it has been added to the session.
  • 2. Info.plist needs the camera permission entry Privacy - Camera Usage Description. (A runtime permission sketch follows this list.)
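Besides the Info.plist key, you can also check and request camera access explicitly before starting the session. A minimal sketch:

```objc
#import <AVFoundation/AVFoundation.h>

// Ask for camera permission up front; only start the capture session once granted.
[AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo
                         completionHandler:^(BOOL granted) {
    if (granted) {
        dispatch_async(dispatch_get_main_queue(), ^{
            // Safe to configure and start the AVCaptureSession here.
        });
    }
}];
```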

Now we have the video stream, but nothing is displayed yet. Next we'll use OpenCV to draw the face rectangles onto the frames and show the processed images through the UIImageView.

Drawing the face rectangles onto the displayed video stream

First we write a method that converts a CMSampleBufferRef into a UIImage (you could actually convert the CMSampleBufferRef straight into a cv::Mat instead):

```objc
- (UIImage *)imageFromPixelBuffer:(CMSampleBufferRef)p {
    CVImageBufferRef buffer = CMSampleBufferGetImageBuffer(p);
    CVPixelBufferLockBaseAddress(buffer, 0);

    uint8_t *base = (uint8_t *)CVPixelBufferGetBaseAddress(buffer);
    size_t width = CVPixelBufferGetWidth(buffer);
    size_t height = CVPixelBufferGetHeight(buffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(buffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef cgContext = CGBitmapContextCreate(base, width, height, 8, bytesPerRow,
                                                   colorSpace,
                                                   kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);

    CGImageRef cgImage = CGBitmapContextCreateImage(cgContext);
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CGContextRelease(cgContext);
    CVPixelBufferUnlockBaseAddress(buffer, 0);
    return image;
}
```
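As the parenthetical above says, you can also skip UIImage and wrap the pixel buffer in a cv::Mat directly. Here is a sketch (the method name matFromSampleBuffer is mine), assuming the kCVPixelFormatType_32BGRA format we set on the video output:

```objc
- (cv::Mat)matFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    uint8_t *base = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    // Wrap the BGRA buffer in place (no copy), then clone so the Mat remains
    // valid after the pixel buffer is unlocked.
    cv::Mat wrapped((int)height, (int)width, CV_8UC4, base, bytesPerRow);
    cv::Mat mat = wrapped.clone();
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    return mat;
}
```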

Then we continue in the AVCaptureVideoDataOutputSampleBufferDelegate to process the video stream. We can already get the face information there, so we just draw it on:

```objc
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    NSMutableArray *bounds = [NSMutableArray arrayWithCapacity:0];
    for (AVMetadataFaceObject *faceobject in self.currentMetadata) {
        AVMetadataObject *face = [output transformedMetadataObjectForMetadataObject:faceobject connection:connection];
        [bounds addObject:[NSValue valueWithCGRect:face.bounds]];
    }

    // Convert to UIImage
    UIImage *image = [self imageFromPixelBuffer:sampleBuffer];
    cv::Mat mat;
    // Convert to cv::Mat
    UIImageToMat(image, mat);
    for (NSValue *rect in bounds) {
        CGRect r = [rect CGRectValue];
        // Draw the rectangle
        cv::rectangle(mat, cv::Rect(r.origin.x, r.origin.y, r.size.width, r.size.height), cv::Scalar(255, 0, 0, 1));
    }

    // Performance is not a concern here; just shove the image straight in
    dispatch_async(dispatch_get_main_queue(), ^{
        self.cameraView.image = MatToUIImage(mat);
    });
}
```
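The comment above admits this per-frame UIImage and cv::Mat round trip ignores performance. A common alternative, sketched below and not what this demo does, is to display the camera through an AVCaptureVideoPreviewLayer and stroke the rectangles in a CAShapeLayer overlay, letting the preview layer do the coordinate conversion:

```objc
// Setup (e.g. in viewDidLoad): show the session through a preview layer and
// keep a shape layer on top for the face rectangles. Assumes previewLayer and
// overlay are stored in properties so the callback below can reach them.
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.session];
previewLayer.frame = self.view.bounds;
[self.view.layer addSublayer:previewLayer];

CAShapeLayer *overlay = [CAShapeLayer layer];
overlay.frame = self.view.bounds;
overlay.strokeColor = [UIColor redColor].CGColor;
overlay.fillColor = [UIColor clearColor].CGColor;
[self.view.layer addSublayer:overlay];

// In captureOutput:didOutputMetadataObjects:fromConnection:, map each face
// into the layer's coordinate space and stroke it; no pixel copying involved.
UIBezierPath *path = [UIBezierPath bezierPath];
for (AVMetadataFaceObject *faceObject in metadataObjects) {
    AVMetadataObject *transformed = [previewLayer transformedMetadataObjectForMetadataObject:faceObject];
    [path appendPath:[UIBezierPath bezierPathWithRect:transformed.bounds]];
}
dispatch_async(dispatch_get_main_queue(), ^{
    overlay.path = path.CGPath;
});
```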

I almost forgot one thing: the dependency libraries. Don't forget to link them:

[Image 3: image.png, the dependency libraries linked in the Xcode project]

Finally got this uploaded. If you think it's decent, don't forget to star the demo on GitHub.
