iOS Screenshot Prevention: A Complete Solution
2021-11-10 · shyizne
A few lines of code solve the problem. There are two approaches; read on.
When it comes to preventing screenshots on iOS, most people will say it can't be done. But after I gave my boss exactly that answer, his reply was: iQIYI did it!!! (for its paid videos)
1. A bit of research makes it clear why iQIYI can block screenshots: its copyrighted videos have all been converted into encrypted video streams, so when you take a screenshot, the protected area doesn't show up. That is the anti-screenshot effect.
2. So all we have to do is turn the region we want to keep out of screenshots into such an encrypted-stream video.
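Before feeding anything into the video pipeline, you need the protected region as a UIImage. The post assumes you already have one; here is a minimal snapshot helper of my own (not part of the original code) that renders an arbitrary view into an image:

+ (UIImage *)snapshotOfView:(UIView *)view {
    // Render the view hierarchy into an offscreen image at screen scale.
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, [UIScreen mainScreen].scale);
    [view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return snapshot;
}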
First, I convert the image into the pixel buffers that become the frames of the video:
#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>

+ (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image {
    CGSize size = CGSizeMake(CGImageGetWidth(image), CGImageGetHeight(image));
    NSDictionary *options = @{
        (id)kCVPixelBufferCGImageCompatibilityKey: @YES,
        (id)kCVPixelBufferCGBitmapContextCompatibilityKey: @YES
    };
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                          size.width,
                                          size.height,
                                          kCVPixelFormatType_32ARGB,
                                          (__bridge CFDictionaryRef)options,
                                          &pxbuffer);
    if (status != kCVReturnSuccess) {
        NSLog(@"Failed to create pixel buffer");
        return NULL;
    }
    // Draw the CGImage straight into the pixel buffer's backing memory.
    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    // Use the buffer's real stride; rows may be padded beyond 4 * width.
    CGContextRef context = CGBitmapContextCreate(pxdata,
                                                 size.width,
                                                 size.height,
                                                 8,
                                                 CVPixelBufferGetBytesPerRow(pxbuffer),
                                                 rgbColorSpace,
                                                 kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer; // +1 retained; the caller must CVPixelBufferRelease it.
}
Next, I assemble it into a video:
+ (void)imageToMP4:(UIImage *)img completion:(void (^)(NSData *))handler {
    NSError *error = nil;
    NSFileManager *fileMgr = [NSFileManager defaultManager];
    NSString *tmpDirectory = [NSHomeDirectory() stringByAppendingPathComponent:@"tmp"];
    NSString *videoOutputPath = [tmpDirectory stringByAppendingPathComponent:@"test_output.mp4"];
    // Remove any leftover file from a previous run.
    if ([fileMgr fileExistsAtPath:videoOutputPath] &&
        ![fileMgr removeItemAtPath:videoOutputPath error:&error]) {
        NSLog(@"Unable to delete file: %@", [error localizedDescription]);
    }
    CGSize imageSize = CGSizeMake(img.size.width, img.size.height);
    NSUInteger fps = 5;
    NSArray *imageArray = @[img];
    NSLog(@"videoOutputPath========%@", videoOutputPath);
    NSLog(@"Start building video from defined frames.");
    // The output path ends in .mp4, so write an MPEG-4 container (the original
    // used AVFileTypeQuickTimeMovie, which mismatches the extension).
    AVAssetWriter *videoWriter = [[AVAssetWriter alloc]
        initWithURL:[NSURL fileURLWithPath:videoOutputPath]
           fileType:AVFileTypeMPEG4
              error:&error];
    /// !!! faststart must be enabled (moves the moov atom to the front of the file).
    videoWriter.shouldOptimizeForNetworkUse = YES;
    NSParameterAssert(videoWriter);
    NSDictionary *videoSettings = @{
        AVVideoCodecKey: AVVideoCodecTypeH264,
        AVVideoWidthKey: @((int)imageSize.width),
        AVVideoHeightKey: @((int)imageSize.height)
    };
    AVAssetWriterInput *videoWriterInput = [AVAssetWriterInput
        assetWriterInputWithMediaType:AVMediaTypeVideo
                       outputSettings:videoSettings];
    AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
        assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoWriterInput
                                   sourcePixelBufferAttributes:nil];
    NSParameterAssert(videoWriterInput);
    NSParameterAssert([videoWriter canAddInput:videoWriterInput]);
    videoWriterInput.expectsMediaDataInRealTime = YES;
    [videoWriter addInput:videoWriterInput];
    // Start a session:
    [videoWriter startWriting];
    [videoWriter startSessionAtSourceTime:kCMTimeZero];
    CVPixelBufferRef buffer = NULL;
    int frameCount = 0;
    double numberOfSecondsPerFrame = 1;
    // Duration of one frame, expressed in ticks at timescale `fps`.
    double frameDuration = fps * numberOfSecondsPerFrame;
    NSLog(@"**************************************************");
    for (UIImage *frameImage in imageArray) {
        buffer = [self pixelBufferFromCGImage:frameImage.CGImage];
        BOOL append_ok = NO;
        int j = 0;
        // Retry for up to ~3 seconds while the writer input catches up.
        while (!append_ok && j < 30) {
            if (adaptor.assetWriterInput.readyForMoreMediaData) {
                NSLog(@"Processing video frame (%d,%lu)", frameCount, (unsigned long)imageArray.count);
                CMTime frameTime = CMTimeMake(frameCount * frameDuration, (int32_t)fps);
                append_ok = [adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
                if (!append_ok) {
                    NSError *writerError = videoWriter.error;
                    if (writerError != nil) {
                        NSLog(@"Unresolved error %@, %@.", writerError, writerError.userInfo);
                    }
                }
            } else {
                printf("adaptor not ready %d, %d\n", frameCount, j);
                [NSThread sleepForTimeInterval:0.1];
            }
            j++;
        }
        if (!append_ok) {
            printf("error appending image %d after %d attempts\n", frameCount, j);
        }
        // pixelBufferFromCGImage: returns a +1 retained buffer; release it
        // here to avoid leaking one buffer per frame.
        if (buffer) {
            CVPixelBufferRelease(buffer);
            buffer = NULL;
        }
        frameCount++;
    }
    NSLog(@"**************************************************");
    // Finish the session:
    [videoWriterInput markAsFinished];
    [videoWriter finishWritingWithCompletionHandler:^{
        if (handler) {
            NSData *data = [NSData dataWithContentsOfFile:videoOutputPath];
            handler(data);
        }
    }];
}
And that's it. Some of you may find that the image comes out distorted. That is the biggest pitfall: the drawn image's width must be a multiple of 16. So, more code:
+ (UIImage *)composite_Picture:(UIImage *)image {
    // Round the width up to the next multiple of 16 and scale the height
    // proportionally so the image is not distorted.
    CGFloat targetWidth = ((int)image.size.width / 16 + 1) * 16;
    CGFloat targetHeight = targetWidth / (image.size.width / image.size.height);
    CGSize imageContSize = CGSizeMake(targetWidth, targetHeight);
    // Open a graphics context.
    UIGraphicsBeginImageContext(imageContSize);
    [image drawInRect:CGRectMake(0, 0, imageContSize.width, imageContSize.height)];
    // Grab the redrawn image.
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    // Close the context.
    UIGraphicsEndImageContext();
    NSLog(@"%f, %f", newImage.size.width, newImage.size.height);
    return newImage;
}
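To actually put the generated video on screen, you play it in an AVPlayerLayer placed over the region you want to protect. The post doesn't show this step, so here is a minimal wiring sketch of my own; AntiScreenshotTool is a hypothetical class assumed to hold the two class methods above:

- (void)showProtectedImage:(UIImage *)image inView:(UIView *)container {
    // Pad the width to a multiple of 16 first, then encode and play.
    UIImage *padded = [AntiScreenshotTool composite_Picture:image];
    [AntiScreenshotTool imageToMP4:padded completion:^(NSData *data) {
        NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"protected.mp4"];
        [data writeToFile:path atomically:YES];
        // The writer's completion handler runs on a background queue,
        // so hop back to the main queue before touching UIKit.
        dispatch_async(dispatch_get_main_queue(), ^{
            AVPlayer *player = [AVPlayer playerWithURL:[NSURL fileURLWithPath:path]];
            AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:player];
            playerLayer.frame = container.bounds;
            playerLayer.videoGravity = AVLayerVideoGravityResizeAspect;
            [container.layer addSublayer:playerLayer];
            [player play];
        });
    }];
}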
Please run it on a real device: open the demo, tap to generate the video first, then take a screenshot. If this helped you, please give it a like, thanks!
Demo: https://gitee.com/shyzine/i-os-anti-screenshot.git (no like, no reply, thanks)
Approach 2:
Any control added to the returned view is kept out of screenshots. This relies on a system behavior: the content of a UITextField with secure text entry enabled is hidden in screenshots and screen recordings, so we borrow the field's internal container view as a host for our own subviews.
Swift code:
static func makeSecView() -> UIView {
    // A text field with isSecureTextEntry = true is blanked out by the
    // system in screenshots and screen recordings.
    let field = UITextField()
    field.isSecureTextEntry = true
    // Grab the field's internal container view. This is a private subview,
    // so the trick may break in future iOS versions.
    guard let view = field.subviews.first else {
        return UIView()
    }
    view.subviews.forEach { $0.removeFromSuperview() }
    view.isUserInteractionEnabled = true
    return view
}
Objective-C code:
- (UIView *)getBgView {
    UITextField *bgTextField = [[UITextField alloc] init];
    // The content of a secure text field is excluded from screenshots.
    bgTextField.secureTextEntry = YES;
    // Its first subview is the private content container we reuse as a host.
    UIView *bgView = bgTextField.subviews.firstObject;
    [bgView setUserInteractionEnabled:YES];
    return bgView;
}
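A quick usage sketch (my own example, not from the post): anything placed inside the returned view disappears from captures.

UIView *secureView = [self getBgView];
secureView.frame = self.view.bounds;
[self.view addSubview:secureView];

// This label is visible on screen but blanked out in screenshots
// and screen recordings.
UILabel *secretLabel = [[UILabel alloc] initWithFrame:CGRectMake(20, 100, 280, 40)];
secretLabel.text = @"Sensitive content";
[secureView addSubview:secretLabel];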