
iOS 8 H.264 Hardware Decoding

2015-12-29 13:40

Extracting the SPS and PPS

When transporting H.264 over RTP, the stream must be described with SDP, and two of its fields come from the bitstream itself: the Sequence Parameter Set (SPS) and the Picture Parameter Set (PPS). Where do these come from? They are extracted from the H.264 byte stream. In an Annex B stream, every NAL unit begins with a start code of "0x00 0x00 0x01" or "0x00 0x00 0x00 0x01". After finding a start code, examine the low 5 bits of the first byte that follows it: a value of 7 marks an SPS and 8 a PPS, i.e. (with a 4-byte start code) data[4] & 0x1F == 7 || data[4] & 0x1F == 8. Strip the start code from each such NAL unit and Base64-encode the remainder; the resulting strings can then be placed in the SDP, with the SPS and PPS separated by a comma.

int naluType = ((uint8_t)pFrameData[4] & 0x1F);
if ((naluType == 7 || naluType == 8) && videoFormatDescription == NULL) {
    if (naluType == 7) {
        spsData = [NSData dataWithBytes:pFrameData + 4 length:length - 4];
    }
    if (naluType == 8) {
        ppsData = [NSData dataWithBytes:pFrameData + 4 length:length - 4];
    }
}
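The extraction described above can be sketched in plain C. This is a minimal sketch, not the article's code: the helper names (b64, next_start, sprop_from_stream) are hypothetical, and the article's snippet works on pFrameData with a known 4-byte start code instead of scanning. The sketch scans for 3- or 4-byte start codes, classifies each NAL unit by the low five bits of its header byte, and Base64-encodes the SPS and PPS payloads, joined with a comma as SDP expects.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Base64-encode len bytes into a freshly malloc'd, NUL-terminated string. */
static char *b64(const uint8_t *in, size_t len) {
    static const char tbl[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    char *out = malloc(4 * ((len + 2) / 3) + 1), *p = out;
    for (size_t i = 0; i < len; i += 3) {
        uint32_t v = (uint32_t)in[i] << 16;
        if (i + 1 < len) v |= (uint32_t)in[i + 1] << 8;
        if (i + 2 < len) v |= in[i + 2];
        *p++ = tbl[(v >> 18) & 63];
        *p++ = tbl[(v >> 12) & 63];
        *p++ = (i + 1 < len) ? tbl[(v >> 6) & 63] : '=';
        *p++ = (i + 2 < len) ? tbl[v & 63] : '=';
    }
    *p = '\0';
    return out;
}

/* Return the offset of the next 3- or 4-byte start code at or after `off`,
   or `len` if there is none; *sclen receives the start code's length. */
static size_t next_start(const uint8_t *d, size_t len, size_t off, size_t *sclen) {
    for (size_t i = off; i + 3 <= len; i++) {
        if (d[i] == 0 && d[i + 1] == 0) {
            if (d[i + 2] == 1) { *sclen = 3; return i; }
            if (i + 4 <= len && d[i + 2] == 0 && d[i + 3] == 1) { *sclen = 4; return i; }
        }
    }
    *sclen = 0;
    return len;
}

/* Scan an Annex B buffer, Base64-encode the first SPS and PPS found, and
   write the comma-separated sprop-parameter-sets value into out.
   Returns 0 on success, -1 if either set is missing or out is too small. */
static int sprop_from_stream(const uint8_t *d, size_t len, char *out, size_t outcap) {
    char *sps = NULL, *pps = NULL;
    size_t sc, pos = next_start(d, len, 0, &sc);
    while (pos < len) {
        size_t payload = pos + sc;                /* first byte of the NAL unit */
        size_t scNext, end = next_start(d, len, payload, &scNext);
        int naluType = d[payload] & 0x1F;         /* low 5 bits of the NAL header */
        if (naluType == 7 && !sps) sps = b64(d + payload, end - payload);
        if (naluType == 8 && !pps) pps = b64(d + payload, end - payload);
        pos = end; sc = scNext;
    }
    int ok = (sps && pps && strlen(sps) + strlen(pps) + 2 <= outcap);
    if (ok) snprintf(out, outcap, "%s,%s", sps, pps);  /* SPS,PPS for the SDP */
    free(sps); free(pps);
    return ok ? 0 : -1;
}
```

Fed a stream containing a fabricated SPS (0x67 0x42 0x00 0x0A) and PPS (0x68 0xCE 0x38 0x80), this produces "Z0IACg==,aM44gA==", the value that would go after sprop-parameter-sets= in the SDP.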


Main Functions

Starting with iOS 8, Apple opened its hardware decoding and encoding APIs in VideoToolbox.framework. They require iOS 8 or later and cannot be used on iOS 7.x.

The interfaces are packaged in VideoToolbox.framework; import the framework before using them.

Decoding mainly requires the following three functions:

VTDecompressionSessionCreate — create the decoding session

VTDecompressionSessionDecodeFrame — decode one frame

VTDecompressionSessionInvalidate — destroy the decoding session

1: Creating a session with VTDecompressionSessionCreate

The SPS and PPS carry the parameters needed to interpret the frame and picture data, so first call CMVideoFormatDescriptionCreateFromH264ParameterSets to build a format description from them, then create the session.

if (spsData != nil && ppsData != nil) {
    const uint8_t * const parameterSetPointers[2] = { (const uint8_t *)[spsData bytes], (const uint8_t *)[ppsData bytes] };
    const size_t parameterSetSizes[2] = { spsData.length, ppsData.length };
    // construct the H.264 parameter set
    CMVideoFormatDescriptionRef formatDesc = NULL;
    OSStatus formatCreateResult = CMVideoFormatDescriptionCreateFromH264ParameterSets(kCFAllocatorDefault, 2, parameterSetPointers, parameterSetSizes, 4, &formatDesc);
    if (formatCreateResult == noErr) {
        videoFormatDescription = formatDesc;
        if (decompressionSession == NULL || VTDecompressionSessionCanAcceptFormatDescription(decompressionSession, formatDesc) == NO) {
            [self createDecompSession];
        }
    }
}


Creating the session:

VTDecompressionSessionCreate(
    CM_NULLABLE CFAllocatorRef allocator,
    CM_NONNULL CMVideoFormatDescriptionRef videoFormatDescription,
    CM_NULLABLE CFDictionaryRef videoDecoderSpecification,
    CM_NULLABLE CFDictionaryRef destinationImageBufferAttributes,
    const VTDecompressionOutputCallbackRecord * CM_NULLABLE outputCallback,
    CM_RETURNS_RETAINED_PARAMETER CM_NULLABLE VTDecompressionSessionRef * CM_NONNULL decompressionSessionOut) __OSX_AVAILABLE_STARTING(__MAC_10_8, __IPHONE_8_0);

The first parameter can simply take the default. The second is the format description created earlier from the SPS and PPS. The part to watch is the callback: we must implement a C function ourselves, which the session invokes with each decompressed frame.

-(void) createDecompSession
{
    // make sure to destroy the old VTD session before creating a new one
    if (decompressionSession != NULL) {
        VTDecompressionSessionInvalidate(decompressionSession);
        CFRelease(decompressionSession);
    }
    decompressionSession = NULL;

    VTDecompressionOutputCallbackRecord callBackRecord;
    callBackRecord.decompressionOutputCallback = decompressionSessionDecodeFrameCallback;
    // necessary if you need to call back into Objective-C "self" from within the C callback
    callBackRecord.decompressionOutputRefCon = (__bridge void *)self;

    // optional attributes for the destination pixel buffers;
    // pass NULL instead if the decoder's native output format is acceptable
    NSDictionary *destinationImageBufferAttributes =
        [NSDictionary dictionaryWithObjectsAndKeys:
            [NSNumber numberWithBool:NO], (id)kCVPixelBufferOpenGLESCompatibilityKey,
            [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey,
            nil];

    OSStatus status = VTDecompressionSessionCreate(kCFAllocatorDefault,
                                                   videoFormatDescription,
                                                   NULL,
                                                   (__bridge CFDictionaryRef)destinationImageBufferAttributes,
                                                   &callBackRecord,
                                                   &decompressionSession);
}
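The article registers decompressionSessionDecodeFrameCallback but never shows it. Below is a sketch of the required shape. The typedefs are stand-ins only so the sketch compiles outside iOS; in a real project they come from the VideoToolbox and CoreMedia headers, and DecoderContext is a hypothetical stand-in for the bridged Objective-C self passed as decompressionOutputRefCon.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Stand-in typedefs so the sketch is self-contained; on iOS these come
   from <VideoToolbox/VideoToolbox.h> and <CoreMedia/CoreMedia.h>. */
typedef int32_t OSStatus;
typedef uint32_t VTDecodeInfoFlags;
typedef void *CVImageBufferRef;
typedef struct { int64_t value; int32_t timescale; uint32_t flags; int64_t epoch; } CMTime;
enum { noErr = 0 };

/* Hypothetical stand-in for whatever decompressionOutputRefCon points at;
   in the article's code that is the bridged Objective-C `self`. */
typedef struct {
    CVImageBufferRef lastFrame;
    int frameCount;
} DecoderContext;

/* The parameter list must match the VTDecompressionOutputCallback type
   expected by the callBackRecord in createDecompSession. */
static void decompressionSessionDecodeFrameCallback(void *decompressionOutputRefCon,
                                                    void *sourceFrameRefCon,
                                                    OSStatus status,
                                                    VTDecodeInfoFlags infoFlags,
                                                    CVImageBufferRef imageBuffer,
                                                    CMTime presentationTimeStamp,
                                                    CMTime presentationDuration)
{
    DecoderContext *ctx = (DecoderContext *)decompressionOutputRefCon;
    (void)sourceFrameRefCon; (void)infoFlags;
    (void)presentationTimeStamp; (void)presentationDuration;
    if (status != noErr || imageBuffer == NULL) {
        fprintf(stderr, "decode callback failed: %d\n", (int)status);
        return;   /* nothing to render for this frame */
    }
    ctx->lastFrame = imageBuffer;   /* hand the decoded frame to the render path */
    ctx->frameCount++;
}
```

The key point is the refCon round-trip: whatever pointer is stored in callBackRecord.decompressionOutputRefCon at session creation arrives back as the first argument, which is how the C callback reaches object state.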


2: Creating a sample buffer for display

if ((naluType == 1 || naluType == 5) && videoFormatDescription) {
    // overwrite the 4-byte start code with the NAL unit length in big-endian order
    uint32_t dataLength32 = htonl(length - 4);
    memcpy(pFrameData, &dataLength32, sizeof(uint32_t));

    CMBlockBufferRef blockBuffer = NULL;
    OSStatus status = CMBlockBufferCreateWithMemoryBlock(NULL, pFrameData, length, kCFAllocatorNull, NULL, 0, length, kCMBlockBufferAlwaysCopyDataFlag, &blockBuffer);
    if (status == kCMBlockBufferNoErr) {
        const size_t sampleSize = length; // CMBlockBufferGetDataLength(blockBuffer);
        CMSampleBufferRef sampBuf = NULL;
        status = CMSampleBufferCreate(kCFAllocatorDefault,
                                      blockBuffer,
                                      true,
                                      NULL,
                                      NULL,
                                      videoFormatDescription,
                                      1,
                                      0,
                                      NULL,
                                      1,
                                      &sampleSize,
                                      &sampBuf);
        if (status == noErr) {
            // enqueue the resulting buffer on an AVSampleBufferDisplayLayer to render it
        }
    }
}
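The htonl + memcpy at the top of this block rewrites the NAL unit from Annex B framing (start code) to AVCC framing (big-endian length prefix), which is what VideoToolbox expects in a sample buffer. A minimal stand-alone sketch of that rewrite (the function name is hypothetical):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Rewrite one NAL unit, in place, from Annex B framing (4-byte start code
   00 00 00 01) to AVCC framing (4-byte big-endian length prefix).
   Returns 0 on success, -1 if the buffer does not begin with a 4-byte
   start code. Writing byte-by-byte produces network byte order directly,
   equivalent to the article's htonl + memcpy. */
static int annexb_to_avcc(uint8_t *nalu, size_t length) {
    static const uint8_t start4[4] = { 0, 0, 0, 1 };
    if (length < 5 || memcmp(nalu, start4, 4) != 0)
        return -1;
    uint32_t payload = (uint32_t)(length - 4);   /* NAL size without the prefix */
    nalu[0] = (uint8_t)(payload >> 24);
    nalu[1] = (uint8_t)(payload >> 16);
    nalu[2] = (uint8_t)(payload >> 8);
    nalu[3] = (uint8_t)(payload);
    return 0;
}
```

For example, a 7-byte buffer holding the start code plus a 3-byte NAL unit ends up prefixed with 00 00 00 03 followed by the unchanged payload.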