Image/Video Upload & Video Content Rotation
Preface
I recently took over an iOS app for a smart set-top box. One of its features lets you cast photos and videos from your phone to the box you have bound to your account.
My predecessor's implementation required the phone and the box to be on the same LAN. When I took it over, the requirement became that it should also work across different networks, to improve the user experience.
So, let's get into it.
Part 1: Image Upload
I implemented image upload with AFNetworking. The idea is simple: take the photo you want to upload, compress it first if the backend has a file-size limit, and then upload it with AFHTTPSessionManager. Here is the code:
AFHTTPSessionManager *session = [AFHTTPSessionManager manager];
// Bump the request timeout to 60 seconds
[session.requestSerializer willChangeValueForKey:@"timeoutInterval"];
session.requestSerializer.timeoutInterval = 60.f;
[session.requestSerializer didChangeValueForKey:@"timeoutInterval"];
session.responseSerializer = [AFHTTPResponseSerializer serializer];

// Strip the ".jpg" extension so only the base name is sent as fileName
NSString *imageTitle = _imageSource[pageIndex].title;
imageTitle = [imageTitle substringToIndex:imageTitle.length - 4];

NSDictionary *params = @{
    @"fileType": @"jpg",
    @"fileName": imageTitle
};

// dict is assumed to map file names (NSString) to JPEG data (NSData) for the photos to upload
[session POST:@"your upload endpoint URL" parameters:params constructingBodyWithBlock:^(id<AFMultipartFormData> _Nonnull formData) {
    for (NSString *key in dict.allKeys) {
        [formData appendPartWithFileData:[dict objectForKey:key]
                                    name:@"myFile"
                                fileName:[NSString stringWithFormat:@"%@.jpg", key]
                                mimeType:@"image/jpeg"];
    }
} progress:nil success:^(NSURLSessionDataTask * _Nonnull task, id _Nullable responseObject) {
    // Handle a successful upload
} failure:^(NSURLSessionDataTask * _Nullable task, NSError * _Nonnull error) {
    // Handle the upload error
}];
The main thing here is just to agree on the parameter settings with your backend.
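If the backend does enforce a file-size limit, one simple approach is to lower the JPEG quality until the data is small enough. This is only a rough sketch; the method name compressedDataForImage:maxLength: and the quality step are my own illustration, not code from the project:
// Minimal sketch: keep lowering the JPEG quality until the data fits
// under maxLength bytes (or the quality floor is reached).
- (NSData *)compressedDataForImage:(UIImage *)image maxLength:(NSUInteger)maxLength {
    CGFloat quality = 0.9f;
    NSData *data = UIImageJPEGRepresentation(image, quality);
    while (data.length > maxLength && quality > 0.1f) {
        quality -= 0.1f;
        data = UIImageJPEGRepresentation(image, quality);
    }
    return data;
}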
Part 2: Video Upload
Video upload is essentially the same as image upload; only the parameters differ. Use the same approach as above, and again the key is getting the parameters right.
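For reference, a video version might look roughly like the following, reusing the session from above. This is only a sketch: videoURL, the fileType/fileName values, and the myFile field name are assumptions and have to match whatever your backend expects.
// Sketch of a video upload, mirroring the image upload above.
// videoURL points to the (possibly compressed) video file on disk.
NSDictionary *videoParams = @{
    @"fileType": @"mp4",
    @"fileName": @"myVideo"
};
[session POST:@"your upload endpoint URL" parameters:videoParams constructingBodyWithBlock:^(id<AFMultipartFormData> _Nonnull formData) {
    NSError *appendError = nil;
    [formData appendPartWithFileURL:videoURL
                               name:@"myFile"
                           fileName:@"myVideo.mp4"
                           mimeType:@"video/mp4"
                              error:&appendError];
} progress:nil success:^(NSURLSessionDataTask * _Nonnull task, id _Nullable responseObject) {
    // Handle a successful upload
} failure:^(NSURLSessionDataTask * _Nullable task, NSError * _Nonnull error) {
    // Handle the upload error
}];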
One thing I want to mention is that sometimes the video is too large and needs to be compressed first. I had little experience with video before this, so I searched around online and found a method like this:
// Requires AVFoundation: #import <AVFoundation/AVFoundation.h>
- (void)lowQuailtyWithInputURL:(NSURL *)inputURL
                     outputURL:(NSURL *)outputURL
                  blockHandler:(void (^)(AVAssetExportSession *))handler
{
    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:inputURL options:nil];
    // The medium-quality preset re-encodes the video at a lower bitrate/resolution
    AVAssetExportSession *session = [[AVAssetExportSession alloc] initWithAsset:asset presetName:AVAssetExportPresetMediumQuality];
    session.outputURL = outputURL;
    session.outputFileType = AVFileTypeQuickTimeMovie;
    [session exportAsynchronouslyWithCompletionHandler:^(void) {
        // Called on a background queue; check session.status inside the handler
        handler(session);
    }];
}
With this method you can compress the video as needed.
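Calling it might look like this (just a sketch: videoURL is the source file, and the temporary output path and the status check are my additions):
// Example call. AVAssetExportSession fails if the output file already exists,
// so remove any stale file first, then check the status in the handler.
NSURL *outputURL = [NSURL fileURLWithPath:[NSTemporaryDirectory() stringByAppendingPathComponent:@"compressed.mov"]];
[[NSFileManager defaultManager] removeItemAtURL:outputURL error:nil];
[self lowQuailtyWithInputURL:videoURL outputURL:outputURL blockHandler:^(AVAssetExportSession *exportSession) {
    if (exportSession.status == AVAssetExportSessionStatusCompleted) {
        // The compressed file is now at outputURL, ready to upload
    } else {
        NSLog(@"Video compression failed: %@", exportSession.error);
    }
}];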
Next comes the problem of rotating the video content.
When I first heard this requirement, my head was spinning. What kind of requirement is this?! You have no idea what I went through.
So I went searching online. I did find a blog post on the topic, but it opened with a huge wall of code; after skimming its length I had no patience to figure out how it actually worked, so I closed the tab. Unfortunately there was not much else to refer to. Just when I was running out of options, I came across a post pointing out that Apple ships its own demo for this. Out of curiosity I downloaded the demo and studied it carefully, and it did the job. So here is how it works.
Before rotating the video, we first need to determine how many degrees it has to be rotated.
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:inputURL options:nil];
NSArray *tracks = [asset tracksWithMediaType:AVMediaTypeVideo];
NSInteger degrees = 0;
if ([tracks count] > 0) {
    AVAssetTrack *videoTrack = [tracks objectAtIndex:0];
    // The recording orientation is encoded in the track's preferredTransform; map it to degrees
    CGAffineTransform t = videoTrack.preferredTransform;
    if (t.a == 0 && t.b == 1.0 && t.c == -1.0 && t.d == 0) {
        // Portrait
        degrees = 90;
    } else if (t.a == 0 && t.b == -1.0 && t.c == 1.0 && t.d == 0) {
        // PortraitUpsideDown
        degrees = 270;
    } else if (t.a == 1.0 && t.b == 0 && t.c == 0 && t.d == 1.0) {
        // LandscapeRight
        degrees = 0;
    } else if (t.a == -1.0 && t.b == 0 && t.c == 0 && t.d == -1.0) {
        // LandscapeLeft
        degrees = 180;
    }
}
Now we can use degrees to determine how much the video has to be rotated.
Next, let's start rotating the video.
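The code below is adapted from Apple's demo. It relies on a degreesToRadians helper and keeps two composition objects around between edits; something like the following is assumed (the macro and the two declarations are my additions so the snippet can compile, they are not part of the snippet itself):
// Helper and state assumed by the snippet below.
#define degreesToRadians(x) (M_PI * (x) / 180.0)

// In Apple's sample these live on the editor object so that
// later edits can build on top of earlier ones.
AVMutableComposition *mutableComposition;
AVMutableVideoComposition *mutableVideoComposition;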
AVMutableVideoCompositionInstruction *instruction = nil;
AVMutableVideoCompositionLayerInstruction *layerInstruction = nil;
CGAffineTransform t1;
CGAffineTransform t2;
AVAssetTrack *assetVideoTrack = nil;
AVAssetTrack *assetAudioTrack = nil;
// Check whether the asset contains video and audio tracks
if ([[asset tracksWithMediaType:AVMediaTypeVideo] count] != 0) {
    assetVideoTrack = [asset tracksWithMediaType:AVMediaTypeVideo][0];
}
if ([[asset tracksWithMediaType:AVMediaTypeAudio] count] != 0) {
    assetAudioTrack = [asset tracksWithMediaType:AVMediaTypeAudio][0];
}
CMTime insertionPoint = kCMTimeZero;
NSError *error = nil;

// Step 1
// Create a composition with the given asset and insert its audio and video tracks
if (!mutableComposition) {
    // Check whether a composition has already been created, i.e. some other tool has already been applied
    // Create a new composition
    mutableComposition = [AVMutableComposition composition];
    // Insert the video and audio tracks from the AVAsset
    if (assetVideoTrack != nil) {
        AVMutableCompositionTrack *compositionVideoTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
        [compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:assetVideoTrack atTime:insertionPoint error:&error];
    }
    if (assetAudioTrack != nil) {
        AVMutableCompositionTrack *compositionAudioTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
        [compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:assetAudioTrack atTime:insertionPoint error:&error];
    }
}
// Step 2
// Translate the composition to compensate for the movement caused by rotation
// (otherwise the rotated frame would end up outside the render area), then rotate
// by the angle detected above
if (degrees == 0) {
    t1 = CGAffineTransformMakeTranslation(0.0, 0.0);
    t2 = CGAffineTransformRotate(t1, degreesToRadians(0));
} else if (degrees == 90) {
    t1 = CGAffineTransformMakeTranslation(assetVideoTrack.naturalSize.height, 0.0);
    t2 = CGAffineTransformRotate(t1, degreesToRadians(90));
} else if (degrees == 180) {
    t1 = CGAffineTransformMakeTranslation(assetVideoTrack.naturalSize.width, assetVideoTrack.naturalSize.height);
    t2 = CGAffineTransformRotate(t1, degreesToRadians(180));
} else if (degrees == 270) {
    // Rotating by -90° maps y into [-width, 0], so shift by the natural width to keep the frame in view
    t1 = CGAffineTransformMakeTranslation(0.0, assetVideoTrack.naturalSize.width);
    t2 = CGAffineTransformRotate(t1, degreesToRadians(-90));
}
// Step 3
// Set the appropriate render size and rotational transform
if (!mutableVideoComposition) {
    // Create a new video composition
    mutableVideoComposition = [AVMutableVideoComposition videoComposition];
    // A 90°/270° rotation swaps the output width and height
    if (degrees == 0 || degrees == 180) {
        mutableVideoComposition.renderSize = CGSizeMake(assetVideoTrack.naturalSize.width, assetVideoTrack.naturalSize.height);
    } else {
        mutableVideoComposition.renderSize = CGSizeMake(assetVideoTrack.naturalSize.height, assetVideoTrack.naturalSize.width);
    }
    mutableVideoComposition.frameDuration = CMTimeMake(1, 30);
    // The rotation transform is set on a layer instruction
    instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
    instruction.timeRange = CMTimeRangeMake(kCMTimeZero, [mutableComposition duration]);
    layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:(mutableComposition.tracks)[0]];
    [layerInstruction setTransform:t2 atTime:kCMTimeZero];
} else {
    mutableVideoComposition.renderSize = CGSizeMake(mutableVideoComposition.renderSize.height, mutableVideoComposition.renderSize.width);
    // Extract the existing layer instruction from the mutableVideoComposition
    instruction = (mutableVideoComposition.instructions)[0];
    layerInstruction = (instruction.layerInstructions)[0];
    // Check whether a transform already exists on this layer instruction; this is done to add the current transform on top of previous edits
    CGAffineTransform existingTransform;
    if (![layerInstruction getTransformRampForTime:[mutableComposition duration] startTransform:&existingTransform endTransform:NULL timeRange:NULL]) {
        [layerInstruction setTransform:t2 atTime:kCMTimeZero];
    } else {
        // Note: the point of origin for rotation is the upper-left corner of the composition; t3 compensates for the origin
        CGAffineTransform t3 = CGAffineTransformMakeTranslation(-1 * assetVideoTrack.naturalSize.height / 2, 0.0);
        CGAffineTransform newTransform = CGAffineTransformConcat(existingTransform, CGAffineTransformConcat(t2, t3));
        [layerInstruction setTransform:newTransform atTime:kCMTimeZero];
    }
}

// Step 4
// Add the transform instruction to the video composition
instruction.layerInstructions = @[layerInstruction];
mutableVideoComposition.instructions = @[instruction];
At this point the video rotation itself is done.
Remember the video compression method we wrote above? Just assign the mutableVideoComposition we built here to the export session with session.videoComposition = mutableVideoComposition;. That way the video is rotated and then compressed in a single export, which is exactly what we need.
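Put together, the final export might look roughly like this. Note that the export session is created from mutableComposition (the composition built during rotation) rather than the raw asset; outputURL is the destination file URL, as in the compression example:
// Sketch: rotate and compress in a single export.
// The export reads from mutableComposition and applies mutableVideoComposition.
AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:mutableComposition
                                                                       presetName:AVAssetExportPresetMediumQuality];
exportSession.outputURL = outputURL;
exportSession.outputFileType = AVFileTypeQuickTimeMovie;
exportSession.videoComposition = mutableVideoComposition; // apply the rotation built above
[exportSession exportAsynchronouslyWithCompletionHandler:^{
    if (exportSession.status == AVAssetExportSessionStatusCompleted) {
        // The file at outputURL is rotated and compressed, ready to upload
    } else {
        NSLog(@"Export failed: %@", exportSession.error);
    }
}];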