
GPUImage Framework: Documentation Translation 04

2017-06-23  by CC老师_HelloCoder

Documentation

Documentation is generated from header comments using appledoc. To build the documentation, switch to the "Documentation" scheme in Xcode. You should ensure that "APPLEDOC_PATH" (a User-Defined build setting) points to an appledoc binary, available on GitHub or through Homebrew. It will also build and install a .docset file, which you can view with your favorite documentation tool.


Performing common tasks

Filtering live video

To filter live video from an iOS device's camera, you can use code like the following:


GPUImageVideoCamera *videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

GPUImageFilter *customFilter = [[GPUImageFilter alloc] initWithFragmentShaderFromFile:@"CustomShader"];
GPUImageView *filteredVideoView = [[GPUImageView alloc] initWithFrame:CGRectMake(0.0, 0.0, viewWidth, viewHeight)];

// Add the view somewhere so it's visible

[videoCamera addTarget:customFilter];
[customFilter addTarget:filteredVideoView];

[videoCamera startCameraCapture];

This sets up a video source coming from the iOS device's back-facing camera, using a preset that tries to capture at 640x480. This video is captured with the interface being in portrait mode, where the landscape-left-mounted camera needs to have its video frames rotated before display. A custom filter, using code from the file CustomShader.fsh, is then set as the target for the video frames from the camera. These filtered video frames are finally displayed onscreen with the help of a UIView subclass that can present the filtered OpenGL ES texture that results from this pipeline.


The fill mode of the GPUImageView can be altered by setting its fillMode property, so that if the aspect ratio of the source video is different from that of the view, the video will either be stretched, centered with black bars, or zoomed to fill.

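As a quick sketch of that property in use (assuming the filteredVideoView instance from the earlier snippet), the fill mode could be set like this:

// Preserve the source aspect ratio, zooming in and cropping edges to fill the view
filteredVideoView.fillMode = kGPUImageFillModePreserveAspectRatioAndFill;

// Other available modes:
//   kGPUImageFillModeStretch              - stretch the video to fill the view exactly
//   kGPUImageFillModePreserveAspectRatio  - fit within the view, adding bars as needed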

For blending filters and others that take in more than one image, you can create multiple outputs and add a single filter as a target for both of these outputs. The order with which the outputs are added as targets will affect the order in which the input images are blended or otherwise processed.

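A hedged sketch of that pattern, reusing the videoCamera and filteredVideoView from above and assuming a hypothetical overlayImage (a UIImage) to blend with the camera feed:

// Two inputs feeding one blend filter; the camera is added first,
// so it becomes the first (base) input of the blend
GPUImagePicture *overlayPicture = [[GPUImagePicture alloc] initWithImage:overlayImage];
GPUImageAlphaBlendFilter *blendFilter = [[GPUImageAlphaBlendFilter alloc] init];
blendFilter.mix = 0.5; // 0.0 shows only the camera, 1.0 only the overlay

[videoCamera addTarget:blendFilter];    // first input
[overlayPicture addTarget:blendFilter]; // second input
[overlayPicture processImage];          // push the still image through the pipeline once
[blendFilter addTarget:filteredVideoView];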

Also, if you wish to enable microphone audio capture for recording to a movie, you'll need to set the audioEncodingTarget of the camera to be your movie writer, as in the following:


videoCamera.audioEncodingTarget = movieWriter;
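For context, a minimal recording setup around that line might look like the following sketch; the output path and size here are illustrative, and customFilter is the filter from the earlier live-video example:

NSURL *movieURL = [NSURL fileURLWithPath:[NSTemporaryDirectory() stringByAppendingPathComponent:@"movie.m4v"]];
GPUImageMovieWriter *movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(480.0, 640.0)];

[customFilter addTarget:movieWriter];
videoCamera.audioEncodingTarget = movieWriter; // route microphone audio into the writer

[movieWriter startRecording];
// ... and later, when done:
// [movieWriter finishRecording];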

Capturing and filtering a still photo

To capture and filter still photos, you can use a process similar to the one for filtering video. Instead of a GPUImageVideoCamera, you use a GPUImageStillCamera:


stillCamera = [[GPUImageStillCamera alloc] init];
stillCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

filter = [[GPUImageGammaFilter alloc] init];
[stillCamera addTarget:filter];
GPUImageView *filterView = (GPUImageView *)self.view;
[filter addTarget:filterView];

[stillCamera startCameraCapture];

This will give you a live, filtered feed of the still camera's preview video. Note that this preview video is only provided on iOS 4.3 and higher, so you may need to set that as your deployment target if you wish to have this functionality.


Once you want to capture a photo, you use a callback block like the following:


[stillCamera capturePhotoProcessedUpToFilter:filter withCompletionHandler:^(UIImage *processedImage, NSError *error){
    NSData *dataForJPEGFile = UIImageJPEGRepresentation(processedImage, 0.8);

    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];

    NSError *error2 = nil;
    if (![dataForJPEGFile writeToFile:[documentsDirectory stringByAppendingPathComponent:@"FilteredPhoto.jpg"] options:NSAtomicWrite error:&error2])
    {
        return;
    }
}];

The above code captures a full-size photo processed by the same filter chain used in the preview view and saves that photo to disk as a JPEG in the application's documents directory.


Note that the framework currently can't handle images larger than 2048 pixels wide or high on older devices (those before the iPhone 4S, iPad 2, or Retina iPad) due to texture size limitations. This means that the iPhone 4, whose camera outputs still photos larger than this, won't be able to capture photos like this. A tiling mechanism is being implemented to work around this. All other devices should be able to capture and filter photos using this method.

