
Building a Complete Instant-Messaging Page (Part 3)

2016-09-28 · CoderFM

Building a Complete Instant-Messaging Page (Part 1)
Building a Complete Instant-Messaging Page (Part 2)
Picking up where those two left off.

The previous two posts mostly covered the UI. This one walks through how messages are sent and received (including recording voice and sending video and photos), with some simple encapsulation along the way.

Message bodies are assembled almost entirely inside the IMChatToolBar class. Creating a message model is exposed as a class method: pass in the required parameters and you get the model back. Not all of this code is directly reusable, of course, but the logic is much the same everywhere.

Let's start with voice recording. The main code lives in IMAudioTool, which exposes two important APIs:

- (void)playWithFileName:(NSString *)fileName returnBeforeFileName:(ReplacePlayFileName)replaceBlock withFinishBlock:(FinishPlayBlock)playFinish;

- (void)recorderVoiceVolumeView:(IMVoiceVolumeView *)volumeView finishBlock:(FinishEncodeBlock)finishBlock;

@end

Here is the implementation:

- (void)recorderVoiceVolumeView:(IMVoiceVolumeView *)volumeView finishBlock:(FinishEncodeBlock)finishBlock{
    
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        self.isCancel = NO;
        self.seconds = 0;
        self.finishBlock = finishBlock;
        self.volumeView = volumeView;
        // Restart the lazily created timer
        [self.timer invalidate];
        self.timer = nil;
        [self timer];
        
        // NSNumericSearch so that e.g. "10.0" compares correctly against "7.0"
        if ([[[UIDevice currentDevice] systemVersion] compare:@"7.0" options:NSNumericSearch] != NSOrderedAscending) {
            // On iOS 7+ the first run prompts the user for microphone access
            AVAudioSession *session = [AVAudioSession sharedInstance];
            NSError *sessionError;
            // AVAudioSessionCategoryPlayAndRecord allows both recording and playback
            [session setCategory:AVAudioSessionCategoryPlayAndRecord error:&sessionError];
            if (sessionError != nil)
                NSLog(@"Error configuring session: %@", [sessionError description]);
            else
                [session setActive:YES error:nil];
        }
        
        // Timestamp plus a random suffix gives a unique file name
        NSString *fileName = [NSString stringWithFormat:@"%d%u", (int)[[NSDate date] timeIntervalSince1970], arc4random() % 100000];
        
        self.oldFileName = fileName;
        
        [self audioRecorderWithFileName:fileName];
        
        [self.audioRecorder record];
        
    });
    
}

The implementation of [self audioRecorderWithFileName:fileName]:

- (void)audioRecorderWithFileName:(NSString *)fileName{
    
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    
    NSError *error1;
    
    [audioSession setCategory:AVAudioSessionCategoryRecord error:&error1];
    
    NSString *filePath = [[IMBaseAttribute dataAudioPath] stringByAppendingPathComponent:fileName];
    
    // Recording settings
    NSMutableDictionary *recordSetting = [[NSMutableDictionary alloc] init];
    // Recording format: AVFormatIDKey == kAudioFormatLinearPCM
    [recordSetting setValue:[NSNumber numberWithInt:kAudioFormatLinearPCM] forKey:AVFormatIDKey];
    // Sample rate (Hz), e.g. 8000/44100/96000 (affects audio quality)
    [recordSetting setValue:[NSNumber numberWithFloat:11025.0] forKey:AVSampleRateKey];
    // Number of channels: 1 or 2
    [recordSetting setValue:[NSNumber numberWithInt:2] forKey:AVNumberOfChannelsKey];
//    // Linear PCM bit depth: 8, 16, 24, or 32
//    [recordSetting setValue:[NSNumber numberWithInt:16] forKey:AVLinearPCMBitDepthKey];
    // Encoder quality
    [recordSetting setValue:[NSNumber numberWithInt:AVAudioQualityMedium] forKey:AVEncoderAudioQualityKey];
    
    NSError *error;
    // Initialize the recorder
    self.audioRecorder = [[AVAudioRecorder alloc] initWithURL:[NSURL fileURLWithPath:filePath] settings:recordSetting error:&error];
    // Enable metering so the volume view can be driven from the timer
    self.audioRecorder.meteringEnabled = YES;
    self.audioRecorder.delegate = self;
    // Cap the recording at IMAudioMaxDurtion seconds
    [self.audioRecorder recordForDuration:(NSTimeInterval)IMAudioMaxDurtion];
}

When tuning the sample rate, the recording kept coming out with a distorted, "voice-changed" sound, and I arrived at the value above by trial and error. That makes me suspect voice-changer apps get their effect by tweaking the sample rate, though I haven't dug into it.
The recording format has to match between Android and iOS, so both sides transcode to a unified MP3 format. Transcoding uses the lame library; you can Google it, and if the build you download lacks 64-bit support you can grab the one from my demo. The transcoding code itself is in the demo, so I won't paste it here. After transcoding, the file is uploaded to the server, and once that succeeds the message is sent to the chat server.
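
For reference, here is a minimal sketch of how the lame C API is usually driven for this step. It is not the demo's exact code: it assumes the recorder above produced a linear-PCM WAV file with the canonical 44-byte header, and the paths are placeholders.

#import "lame/lame.h"

// Sketch: encode a 2-channel, 11025 Hz linear-PCM WAV file to MP3 with lame.
// The settings mirror the recorder configuration shown earlier.
- (void)encodeWavAtPath:(NSString *)wavPath toMp3AtPath:(NSString *)mp3Path {
    FILE *pcm = fopen(wavPath.UTF8String, "rb");
    FILE *mp3 = fopen(mp3Path.UTF8String, "wb");
    if (!pcm || !mp3) { if (pcm) fclose(pcm); if (mp3) fclose(mp3); return; }
    fseek(pcm, 44, SEEK_SET); // skip the canonical 44-byte WAV header

    lame_t lame = lame_init();
    lame_set_in_samplerate(lame, 11025); // must match AVSampleRateKey
    lame_set_num_channels(lame, 2);      // must match AVNumberOfChannelsKey
    lame_set_VBR(lame, vbr_default);
    lame_init_params(lame);

    const int bufferSize = 8192;
    short pcmBuffer[bufferSize * 2];     // interleaved stereo samples
    unsigned char mp3Buffer[bufferSize];
    size_t readCount;
    int writeCount;

    do {
        readCount = fread(pcmBuffer, 2 * sizeof(short), bufferSize, pcm);
        if (readCount == 0)
            writeCount = lame_encode_flush(lame, mp3Buffer, bufferSize);
        else
            writeCount = lame_encode_buffer_interleaved(lame, pcmBuffer, (int)readCount, mp3Buffer, bufferSize);
        fwrite(mp3Buffer, writeCount, 1, mp3);
    } while (readCount != 0);

    lame_close(lame);
    fclose(mp3);
    fclose(pcm);
}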

Now let's look at IMImagePickerManager, which handles shooting video and photos.

/**
 *  Video and photo picking
 *
 *  @param souceType      the picker's souceType
 *  @param viewController a controller to present from
 *  @param finishAction   callback invoked after a successful pick
 */
+ (void)showImagePickerWithSouceType:(ImagePickerSouceType)souceType withViewController:(UIViewController *)viewController finishAction:(IMImagePickerFinishAction)finishAction;

// Implementation (kUTTypeImage/kUTTypeMovie require #import <MobileCoreServices/MobileCoreServices.h>)
+ (void)showImagePickerWithSouceType:(ImagePickerSouceType)souceType withViewController:(UIViewController *)viewController finishAction:(IMImagePickerFinishAction)finishAction{
    [IMImagePickerManager shareInstance].souceType = souceType;
    [IMImagePickerManager shareInstance].finishAction = finishAction;
    UIImagePickerController *picker = [[UIImagePickerController alloc] init];
    picker.delegate = [IMImagePickerManager shareInstance];
    switch (souceType) {
        case ImagePickerSoucePhotoType:
            picker.sourceType = UIImagePickerControllerSourceTypePhotoLibrary;
            break;
        case ImagePickerSouceCameraType:
            picker.sourceType = UIImagePickerControllerSourceTypeCamera;
            picker.mediaTypes = @[(NSString *)kUTTypeImage];
            break;
        case ImagePickerSouceVedioType:
            // sourceType takes a source-type constant; the video capture mode
            // is a separate property, and mediaTypes must include movie first
            picker.sourceType = UIImagePickerControllerSourceTypeCamera;
            picker.mediaTypes = @[(NSString *)kUTTypeMovie];
            picker.cameraCaptureMode = UIImagePickerControllerCameraCaptureModeVideo;
            picker.videoMaximumDuration = 10;
            break;
        default:
            break;
    }
    [viewController presentViewController:picker animated:YES completion:nil];
}
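
A possible call site, assuming IMImagePickerFinishAction hands back the saved file name as described below (the block signature is my guess, not pasted from the demo):

[IMImagePickerManager showImagePickerWithSouceType:ImagePickerSouceVedioType
                                withViewController:self
                                      finishAction:^(NSString *fileName) {
    // fileName gets appended to the chat-files directory and uploaded
    NSLog(@"picked file: %@", fileName);
}];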

mediaTypes is an array, so you can pass several types and get the system camera behavior of swiping left/right to switch modes. Video capture can be capped with a maximum duration, though there's no countdown hint: when time is up, recording simply stops and an alert appears. UIImagePickerController also can't be subclassed, so custom UI is off the table; this will do for now.
Picking a photo is simpler. After selection I first write the file into the local chat-files directory, then call back with the file name; concatenate it with the matching directory path and upload the image.
When shooting photos in landscape, the result comes back rotated 90°. Here is the fix-up code (found via Google):

- (UIImage *)fixOrientation:(UIImage *)aImage {
    
    // No-op if the orientation is already correct
    if (aImage.imageOrientation == UIImageOrientationUp)
        return aImage;
    
    // We need to calculate the proper transformation to make the image upright.
    // We do it in 2 steps: Rotate if Left/Right/Down, and then flip if Mirrored.
    CGAffineTransform transform = CGAffineTransformIdentity;
    
    switch (aImage.imageOrientation) {
        case UIImageOrientationDown:
        case UIImageOrientationDownMirrored:
            transform = CGAffineTransformTranslate(transform, aImage.size.width, aImage.size.height);
            transform = CGAffineTransformRotate(transform, M_PI);
            break;
            
        case UIImageOrientationLeft:
        case UIImageOrientationLeftMirrored:
            transform = CGAffineTransformTranslate(transform, aImage.size.width, 0);
            transform = CGAffineTransformRotate(transform, M_PI_2);
            break;
            
        case UIImageOrientationRight:
        case UIImageOrientationRightMirrored:
            transform = CGAffineTransformTranslate(transform, 0, aImage.size.height);
            transform = CGAffineTransformRotate(transform, -M_PI_2);
            break;
        default:
            break;
    }
    
    switch (aImage.imageOrientation) {
        case UIImageOrientationUpMirrored:
        case UIImageOrientationDownMirrored:
            transform = CGAffineTransformTranslate(transform, aImage.size.width, 0);
            transform = CGAffineTransformScale(transform, -1, 1);
            break;
            
        case UIImageOrientationLeftMirrored:
        case UIImageOrientationRightMirrored:
            transform = CGAffineTransformTranslate(transform, aImage.size.height, 0);
            transform = CGAffineTransformScale(transform, -1, 1);
            break;
        default:
            break;
    }
    
    // Now we draw the underlying CGImage into a new context, applying the transform
    // calculated above.
    CGContextRef ctx = CGBitmapContextCreate(NULL, aImage.size.width, aImage.size.height,
                                             CGImageGetBitsPerComponent(aImage.CGImage), 0,
                                             CGImageGetColorSpace(aImage.CGImage),
                                             CGImageGetBitmapInfo(aImage.CGImage));
    CGContextConcatCTM(ctx, transform);
    switch (aImage.imageOrientation) {
        case UIImageOrientationLeft:
        case UIImageOrientationLeftMirrored:
        case UIImageOrientationRight:
        case UIImageOrientationRightMirrored:
            // Grr...
            CGContextDrawImage(ctx, CGRectMake(0,0,aImage.size.height,aImage.size.width), aImage.CGImage);
            break;
            
        default:
            CGContextDrawImage(ctx, CGRectMake(0,0,aImage.size.width,aImage.size.height), aImage.CGImage);
            break;
    }
    
    // And now we just create a new UIImage from the drawing context
    CGImageRef cgimg = CGBitmapContextCreateImage(ctx);
    UIImage *img = [UIImage imageWithCGImage:cgimg];
    CGContextRelease(ctx);
    CGImageRelease(cgimg);
    return img;
}

The basic idea: use imageOrientation to figure out what rotation was applied, rotate the image back to the upright orientation, and redraw it (if that's wrong, just pretend I'm talking nonsense 😀).

Video, like voice, needs transcoding to a single agreed-upon format (without it the files are unbearably large 😖). The transcoding code and the code that grabs the video's first frame aren't pasted here; see the demo.
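
As a rough sketch of those two steps (not the demo's code; placeholder URLs, and the export preset is my choice), AVFoundation's AVAssetExportSession and AVAssetImageGenerator typically look like this:

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>

// Sketch: compress a recorded video and grab its first frame as a thumbnail.
// inputURL/outputURL stand in for the picked file and the transcoded file.
- (void)transcodeVideoAtURL:(NSURL *)inputURL toURL:(NSURL *)outputURL {
    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:inputURL options:nil];

    // 1. Re-encode into a smaller MP4
    AVAssetExportSession *export = [[AVAssetExportSession alloc] initWithAsset:asset
                                                                    presetName:AVAssetExportPresetMediumQuality];
    export.outputURL = outputURL;
    export.outputFileType = AVFileTypeMPEG4;
    export.shouldOptimizeForNetworkUse = YES;
    [export exportAsynchronouslyWithCompletionHandler:^{
        if (export.status == AVAssetExportSessionStatusCompleted) {
            // upload the file at outputURL here
        }
    }];

    // 2. First frame for the message-bubble thumbnail
    AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    generator.appliesPreferredTrackTransform = YES; // respect rotation metadata
    CGImageRef cgImage = [generator copyCGImageAtTime:kCMTimeZero actualTime:NULL error:NULL];
    if (cgImage) {
        UIImage *thumbnail = [UIImage imageWithCGImage:cgImage];
        CGImageRelease(cgImage);
        // hand `thumbnail` to the message model here
        (void)thumbnail;
    }
}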

Now let's look at the file-upload class (built on AFNetworking 2.6; the annoyance is that, unlike 3.0, there is no block-based progress callback). The focus here is monitoring upload progress.

@interface IMUploadProgressDelegate : NSObject
// Progress callback block
@property(nonatomic, copy)void(^progressBlock)(CGFloat progress);
// Fetch an observer by its key
+ (instancetype)uploadProgressDelegateWithKey:(NSString *)key;
@end

@implementation IMUploadProgressDelegate

- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(NSProgress *)object change:(NSDictionary<NSString *,id> *)change context:(void *)context{
    // Unregister once the upload reaches 100%
    if ([change[NSKeyValueChangeNewKey] floatValue] == 1) {
        [object removeObserver:self forKeyPath:keyPath];
    }
    if (self.progressBlock) {
        self.progressBlock([change[NSKeyValueChangeNewKey] floatValue]);
    }
}

+ (instancetype)uploadProgressDelegateWithKey:(NSString *)key{
    id obj = [[IMBaseAttribute shareIMBaseAttribute].uploadProgressDict valueForKey:key];
    
    if (obj && [obj isKindOfClass:[self class]]) {
        return obj;
    } else {
        return nil;
    }
}

@end

Below is the code that registers this observer; it lives in the file-upload method:

IMUploadProgressDelegate *progressDelegate = [[IMUploadProgressDelegate alloc] init];

// Observe the NSProgress handed back by the upload task
[progress addObserver:progressDelegate forKeyPath:@"fractionCompleted" options:NSKeyValueObservingOptionNew context:nil];

// Keep the observer alive, keyed by file name, so it can be fetched later
[[IMBaseAttribute shareIMBaseAttribute].uploadProgressDict setValue:progressDelegate forKey:fileName];
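
For context, a sketch of where that `progress` object comes from with AFNetworking 2.x's AFURLSessionManager (the demo's actual request construction may differ):

AFURLSessionManager *manager = [[AFURLSessionManager alloc] initWithSessionConfiguration:
                                [NSURLSessionConfiguration defaultSessionConfiguration]];
NSProgress *progress = nil;
NSURLSessionUploadTask *task =
    [manager uploadTaskWithRequest:request
                          fromFile:[NSURL fileURLWithPath:filePath]
                          progress:&progress
                 completionHandler:^(NSURLResponse *response, id responseObject, NSError *error) {
                     // mark the message as sent or failed here
                 }];
[task resume];
// `progress` is the NSProgress whose fractionCompleted is observed above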

Upload progress for images, voice, and video all flows through this observer. The delegate object is stored and fetched by file name; when progress changes, the callback updates the model and refreshes that row:

[[IMUploadProgressDelegate uploadProgressDelegateWithKey:soucePath] setProgressBlock:^(CGFloat progress) {
    item.uploadProgress = progress;
    MainQueueBlock(^{
        if ([weakSelf.delegate respondsToSelector:@selector(IMChatToolBar:imBaseItem:)]) {
            [weakSelf.delegate IMChatToolBar:weakSelf imBaseItem:item];
        }
    });
}];

IMBaseAttribute plays the housekeeper in this project; a lot of configuration goes through it:

@property(nonatomic, assign)CGFloat normalMargin;

@property(nonatomic, assign)CGFloat headImageViewWidth;

@property(nonatomic, strong)UIFont *nameLabelFont;

@property(nonatomic, strong)UIColor *nameLabelTextColor;

@property(nonatomic, strong)UIFont *nameInfoLabelFont;

@property(nonatomic, strong)UIColor *nameInfoLabelTextColor;

@property(nonatomic, strong)UIFont *contentTextFont;

@property(nonatomic, strong)UIColor *contentTextColor;

@property(nonatomic, assign)CGFloat headImageViewCornerRadius;

@property(nonatomic, assign)CGFloat bufferMaxWidth;

@property(nonatomic, assign)CGFloat contentTextInsetMargin;

@property(nonatomic, assign)CGFloat messageBodyImageWidth;

@property(nonatomic, assign)CGFloat messageBodyVoiceWidth;

@property(nonatomic, assign)CGFloat messageBodyVoiceHeight;

@property(nonatomic, assign)CGFloat messageBodyVedioHeight;

@property(nonatomic, assign)CGFloat messageBodyVedioWidth;

@property(nonatomic, assign)CGFloat timeViewHeight;

@property(nonatomic, strong)NSMutableDictionary *downloadProgressDict;

@property(nonatomic, strong)NSMutableDictionary *uploadProgressDict;

+ (NSString*)dataVedioPath;

+ (NSString*)dataAudioPath;

+ (NSString*)dataPicturePath;

The upload and download progress observers live in those two dictionaries, and most of the font, color, and spacing configuration sits here too. The property defaults are set up in the + (void)load method.
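
A sketch of what that +load setup might look like (the concrete values here are illustrative, not the demo's):

+ (void)load {
    IMBaseAttribute *attr = [IMBaseAttribute shareIMBaseAttribute];
    attr.normalMargin = 10;                              // illustrative default
    attr.headImageViewWidth = 40;
    attr.nameLabelFont = [UIFont systemFontOfSize:12];
    attr.contentTextFont = [UIFont systemFontOfSize:16];
    attr.downloadProgressDict = [NSMutableDictionary dictionary];
    attr.uploadProgressDict = [NSMutableDictionary dictionary];
}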

Download monitoring works the same way. Tapping a cell of a given message type downloads its file; while a download is in flight, further taps must be ignored, so that state has to be checked, as shown in the sketch below.
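
The in-flight check can be as simple as testing whether an observer for that file is already registered (a sketch using the downloadProgressDict above):

// Ignore the tap if this file is already downloading
if ([[IMBaseAttribute shareIMBaseAttribute].downloadProgressDict valueForKey:fileName]) {
    return;
}
// otherwise register a progress observer and start the download ...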

That's about it for sending messages.

I'll skip video playback and image viewing. For voice playback, though, there is a proximity-sensor listener that switches between the earpiece and the speaker:

#pragma mark - Route audio to the earpiece or the speaker
- (void)handleNotification:(BOOL)state
{
    // Enables the proximity (infrared) sensor; set YES before playback starts and NO when it ends
    [[UIDevice currentDevice] setProximityMonitoringEnabled:state];
    
    if (state) // register the observer
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(sensorStateChange:)
                                                     name:UIDeviceProximityStateDidChangeNotification
                                                   object:nil];
    else // remove the observer
        [[NSNotificationCenter defaultCenter] removeObserver:self name:UIDeviceProximityStateDidChangeNotification object:nil];
}

// Handle the proximity change
- (void)sensorStateChange:(NSNotification *)notification
{
    // With the phone held to the ear, route audio to the earpiece and let the screen dim (saves power)
    if ([[UIDevice currentDevice] proximityState] == YES)
    {
        NSLog(@"Device is close to user");
        [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];
    }
    else
    {
        NSLog(@"Device is not close to user");
        [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback error:nil];
    }
}

Finally, a word about caching:

Chat content obviously needs a table, and you can also use several. In my demo I create one table per conversation, keyed by an integer. Deleting one conversation's history then just means dropping its table, leaving everyone else's records untouched; with a single table, deleting one person's history might not be as fast. I notice WeChat and QQ both clear all chat history at once, perhaps because everything sits in one database and they simply drop that table (just a guess).
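
A sketch of the table-per-conversation idea using raw sqlite3 (the table and column names are illustrative; the demo keys tables by an integer conversation id):

#import <sqlite3.h>

// Sketch: one table per conversation, named by the conversation's integer key.
// Dropping the table deletes that history without touching other conversations.
static void im_createChatTable(sqlite3 *db, NSInteger conversationId) {
    NSString *sql = [NSString stringWithFormat:
        @"CREATE TABLE IF NOT EXISTS chat_record_%ld "
        @"(id INTEGER PRIMARY KEY AUTOINCREMENT, body TEXT, type INTEGER, time REAL);",
        (long)conversationId];
    sqlite3_exec(db, sql.UTF8String, NULL, NULL, NULL);
}

static void im_deleteChatHistory(sqlite3 *db, NSInteger conversationId) {
    NSString *sql = [NSString stringWithFormat:@"DROP TABLE IF EXISTS chat_record_%ld;", (long)conversationId];
    sqlite3_exec(db, sql.UTF8String, NULL, NULL, NULL);
}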

Chat file caches are stored per conversation in my demo as well. WeChat deletes chat attachments per conversation; QQ only offers clearing the whole cache, with no per-person deletion. How exactly they implement it, I can't say.

Summary

Only by writing it yourself do you find out where the pitfalls are; wade through them and you grow.
This demo still has plenty of rough edges.
For instance, the input box doesn't adjust its height for multi-line input, and there is no way to pause a download when the network drops. Those details will get polished when I find the time 😀
