
x264 Code Analysis (7): The x264_encoder_encode() Function Inside encode()

2016-03-10 11:32


encode() is the backbone function of x264. It consists of four major parts: x264_encoder_open(), x264_encoder_headers(), x264_encoder_encode() and x264_encoder_close(). Of these, x264_encoder_encode() is the core: the actual H.264 video encoding algorithm lives in this module. The previous two posts analysed x264_encoder_open() and x264_encoder_headers(); this post studies x264_encoder_encode().
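
Before looking inside, it helps to recall how x264_encoder_encode() is driven from the outside. The snippet below is a minimal sketch (not x264's own code) of the calling pattern that encode()/encode_frame() implement on top of the public libx264 API: one call per input picture, then flushing with a NULL input until no delayed frames remain. The helper names write_frame() and read_yuv_frame() are made up for this illustration.

#include <stdio.h>
#include <x264.h>

/* Minimal sketch: encode one picture, or flush buffered frames when pic_in == NULL.
 * Returns the size of the encoded frame in bytes, 0 if the frame is still buffered, <0 on error. */
static int write_frame( x264_t *h, FILE *out, x264_picture_t *pic_in )
{
    x264_picture_t pic_out;
    x264_nal_t *nal;
    int i_nal;

    int i_frame_size = x264_encoder_encode( h, &nal, &i_nal, pic_in, &pic_out );
    if( i_frame_size < 0 )
        return -1;
    if( i_frame_size > 0 )
        fwrite( nal[0].p_payload, 1, i_frame_size, out ); /* all NAL payloads are laid out contiguously */
    return i_frame_size;
}

/* Typical driver loop:
 *     while( read_yuv_frame( &pic ) )             // hypothetical input routine
 *         write_frame( h, out, &pic );
 *     while( x264_encoder_delayed_frames( h ) )   // drain frames buffered by the lookahead / B-frame delay
 *         write_frame( h, out, NULL );
 */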

As introduced in "x264 Code Analysis (3): main(), the Parsing Function parse() and the Encoding Function encode()", x264_encoder_encode() is called by encode_frame(), which is called by encode(), which in turn is called by main(). Since main(), encode() and encode_frame() have already been analysed, this post focuses on x264_encoder_encode(), which encodes one YUV frame into an H.264 bitstream. It mainly calls the following functions:

x264_frame_pop_unused(): obtains one x264_frame_t structure, fenc. If the frames.unused[] queue is not empty, x264_frame_pop() takes a ready-made frame from it; otherwise x264_frame_new() allocates a new one (see the pooling sketch after this list).

x264_frame_copy_picture(): copies the input picture data into fenc.

x264_lookahead_put_frame(): puts fenc into the lookahead.next.list[] queue, where it waits for its frame type to be decided.

x264_lookahead_get_frames(): analyses frame types through the lookahead. It calls x264_slicetype_decide(), x264_slicetype_analyse(), x264_slicetype_frame_cost() and others. After this series of analyses the frame type is finally determined and the frame is moved to the frames.current[] queue.

x264_frame_shift(): takes one frame out of the frames.current[] queue for encoding.

x264_reference_update(): updates the reference frame queue.

x264_reference_reset(): called for IDR frames to clear the reference frame list.

x264_reference_hierarchy_reset(): called for non-IDR I frames, for P frames, and for B frames that can serve as references.

x264_reference_build_list(): builds the reference frame lists list0 and list1.

x264_ratecontrol_start(): starts rate control.

x264_slice_init(): creates the slice header.

x264_slices_write(): encodes the data (the most important step). It calls x264_slice_write() to do the actual encoding work (note that "x264_slices_write()" and "x264_slice_write()" differ only by an "s").

x264_encoder_frame_end(): performs post-encoding work such as recording statistics. It calls x264_encoder_encapsulate_nals() to encapsulate the NALUs (adding start codes), x264_frame_push_unused() to return fenc to the frames.unused[] queue, and x264_ratecontrol_end() to finish rate control.
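
The x264_frame_pop_unused()/x264_frame_push_unused() pair described above amounts to a simple frame pool: frames are recycled through frames.unused[] instead of being reallocated for every picture. A conceptual sketch of that pattern, using hypothetical stand-in types rather than x264's actual structures:

#include <stdlib.h>

/* Hypothetical stand-ins; x264 uses x264_frame_t and h->frames.unused[] instead. */
typedef struct { unsigned char *plane[3]; } my_frame_t;
typedef struct { my_frame_t *unused[8]; int n_unused; } my_frame_pool_t;

static my_frame_t *pool_pop_unused( my_frame_pool_t *pool )
{
    if( pool->n_unused > 0 )
        return pool->unused[--pool->n_unused]; /* reuse a recycled frame, like x264_frame_pop() */
    return calloc( 1, sizeof(my_frame_t) );    /* otherwise allocate a new one, like x264_frame_new() */
}

static void pool_push_unused( my_frame_pool_t *pool, my_frame_t *frame )
{
    if( pool->n_unused < 8 )
        pool->unused[pool->n_unused++] = frame; /* hand the frame back for reuse */
    else
        free( frame );
}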



The source code of x264_encoder_encode() is analysed below:

/******************************************************************/
/******************************************************************/
/*
======Analysed by RuiDong Fang
======Csdn Blog:http://blog.csdn.net/frd2009041510
======Date:2016.03.10
*/
/******************************************************************/
/******************************************************************/

/************====== The x264_encoder_encode() function ======************/
/*
Purpose: encode one frame of data
*/
/****************************************************************************
* x264_encoder_encode:
*  XXX: i_poc   : is the poc of the current given picture
*       i_frame : is the number of the frame being coded
*  ex:  type frame poc
*       I      0   2*0
*       P      1   2*3
*       B      2   2*1
*       B      3   2*2
*       P      4   2*6
*       B      5   2*4
*       B      6   2*5
****************************************************************************/
int     x264_encoder_encode( x264_t *h,
x264_nal_t **pp_nal, int *pi_nal,
x264_picture_t *pic_in,
x264_picture_t *pic_out )
{
x264_t *thread_current, *thread_prev, *thread_oldest;
int i_nal_type, i_nal_ref_idc, i_global_qp;
int overhead = NALU_OVERHEAD;

#if HAVE_OPENCL
if( h->opencl.b_fatal_error )
return -1;
#endif

if( h->i_thread_frames > 1 )
{
thread_prev    = h->thread[ h->i_thread_phase ];
h->i_thread_phase = (h->i_thread_phase + 1) % h->i_thread_frames;
thread_current = h->thread[ h->i_thread_phase ];
thread_oldest  = h->thread[ (h->i_thread_phase + 1) % h->i_thread_frames ];
x264_thread_sync_context( thread_current, thread_prev );
x264_thread_sync_ratecontrol( thread_current, thread_prev, thread_oldest );
h = thread_current;
}
else
{
thread_current =
thread_oldest  = h;
}
h->i_cpb_delay_pir_offset = h->i_cpb_delay_pir_offset_next;

/* no data out */
*pi_nal = 0;
*pp_nal = NULL;

/* ------------------- Setup new frame from picture -------------------- */
if( pic_in != NULL )
{
if( h->lookahead->b_exit_thread )
{
x264_log( h, X264_LOG_ERROR, "lookahead thread is already stopped\n" );
return -1;
}

/* 1: Copy the picture to a frame and move it to a buffer */
x264_frame_t *fenc = x264_frame_pop_unused( h, 0 );//////////////////Step 1: fenc stores the frame to be encoded (obtain space for one frame, fenc, to hold the frame awaiting encoding)
if( !fenc )
return -1;

if( x264_frame_copy_picture( h, fenc, pic_in ) < 0 )//////////////////copy external pixel data into the encoder: from pic_in (external structure x264_picture_t) to fenc (internal structure x264_frame_t)
return -1;

//Make sure width and height are both multiples of 16 (multiples of the macroblock width)
if( h->param.i_width != 16 * h->mb.i_mb_width ||
h->param.i_height != 16 * h->mb.i_mb_height )
x264_frame_expand_border_mod16( h, fenc );//pad to a multiple of 16

fenc->i_frame = h->frames.i_input++;

if( fenc->i_frame == 0 )
h->frames.i_first_pts = fenc->i_pts;
if( h->frames.i_bframe_delay && fenc->i_frame == h->frames.i_bframe_delay )
h->frames.i_bframe_delay_time = fenc->i_pts - h->frames.i_first_pts;

if( h->param.b_vfr_input && fenc->i_pts <= h->frames.i_largest_pts )
x264_log( h, X264_LOG_WARNING, "non-strictly-monotonic PTS\n" );

h->frames.i_second_largest_pts = h->frames.i_largest_pts;
h->frames.i_largest_pts = fenc->i_pts;

if( (fenc->i_pic_struct < PIC_STRUCT_AUTO) || (fenc->i_pic_struct > PIC_STRUCT_TRIPLE) )
fenc->i_pic_struct = PIC_STRUCT_AUTO;

if( fenc->i_pic_struct == PIC_STRUCT_AUTO )
{
#if HAVE_INTERLACED
int b_interlaced = fenc->param ? fenc->param->b_interlaced : h->param.b_interlaced;
#else
int b_interlaced = 0;
#endif
if( b_interlaced )
{
int b_tff = fenc->param ? fenc->param->b_tff : h->param.b_tff;
fenc->i_pic_struct = b_tff ? PIC_STRUCT_TOP_BOTTOM : PIC_STRUCT_BOTTOM_TOP;
}
else
fenc->i_pic_struct = PIC_STRUCT_PROGRESSIVE;
}

if( h->param.rc.b_mb_tree && h->param.rc.b_stat_read )
{
if( x264_macroblock_tree_read( h, fenc, pic_in->prop.quant_offsets ) )
return -1;
}
else
x264_stack_align( x264_adaptive_quant_frame, h, fenc, pic_in->prop.quant_offsets );

if( pic_in->prop.quant_offsets_free )
pic_in->prop.quant_offsets_free( pic_in->prop.quant_offsets );

//Build the reduced-resolution (half-size) planes using linear interpolation
//Note: this is not the 6-tap half-pel interpolation filter
if( h->frames.b_have_lowres )
x264_frame_init_lowres( h, fenc );

/* 2: Place the frame into the queue for its slice type decision */
x264_lookahead_put_frame( h, fenc );//////////////////Step 2: put fenc into the lookahead.next.list[] queue to wait for its frame type decision

if( h->frames.i_input <= h->frames.i_delay + 1 - h->i_thread_frames )
{
/* Nothing yet to encode, waiting for filling of buffers */
pic_out->i_type = X264_TYPE_AUTO;
return 0;
}
}
else
{
/* signal kills for lookahead thread */
x264_pthread_mutex_lock( &h->lookahead->ifbuf.mutex );
h->lookahead->b_exit_thread = 1;
x264_pthread_cond_broadcast( &h->lookahead->ifbuf.cv_fill );
x264_pthread_mutex_unlock( &h->lookahead->ifbuf.mutex );
}

h->i_frame++;
/* 3: The picture is analyzed in the lookahead */
if( !h->frames.current[0] )
x264_lookahead_get_frames( h );//////////////////Step 3: analyse the frame type through the lookahead

if( !h->frames.current[0] && x264_lookahead_is_empty( h ) )
return x264_encoder_frame_end( thread_oldest, thread_current, pp_nal, pi_nal, pic_out );

/* ------------------- Get frame to be encoded ------------------------- */
/* 4: get picture to encode */
h->fenc = x264_frame_shift( h->frames.current );//////////////////Step 4: take one frame ([0]) out of the frames.current[] queue for encoding

/* If applicable, wait for previous frame reconstruction to finish */
if( h->param.b_sliced_threads )
if( x264_threadpool_wait_all( h ) < 0 )
return -1;

if( h->i_frame == 0 )
h->i_reordered_pts_delay = h->fenc->i_reordered_pts;
if( h->reconfig )
{
x264_encoder_reconfig_apply( h, &h->reconfig_h->param );
h->reconfig = 0;
}
if( h->fenc->param )
{
x264_encoder_reconfig_apply( h, h->fenc->param );
if( h->fenc->param->param_free )
{
h->fenc->param->param_free( h->fenc->param );
h->fenc->param = NULL;
}
}

// ok to call this before encoding any frames, since the initial values of fdec have b_kept_as_ref=0
//Update the reference frame queue frames.reference[]; no update for B frames
//The reconstructed frame fdec is moved into the reference list and a new fdec is allocated
if( x264_reference_update( h ) )//////////////////update the reference frame queue
return -1;
h->fdec->i_lines_completed = -1;

if( !IS_X264_TYPE_I( h->fenc->i_type ) )
{
int valid_refs_left = 0;
for( int i = 0; h->frames.reference[i]; i++ )
if( !h->frames.reference[i]->b_corrupt )
valid_refs_left++;
/* No valid reference frames left: force an IDR. */
if( !valid_refs_left )
{
h->fenc->b_keyframe = 1;
h->fenc->i_type = X264_TYPE_IDR;
}
}

if( h->fenc->b_keyframe )
{
h->frames.i_last_keyframe = h->fenc->i_frame;
if( h->fenc->i_type == X264_TYPE_IDR )
{
h->i_frame_num = 0;
h->frames.i_last_idr = h->fenc->i_frame;
}
}
h->sh.i_mmco_command_count =
h->sh.i_mmco_remove_from_end = 0;
h->b_ref_reorder[0] =
h->b_ref_reorder[1] = 0;
h->fdec->i_poc =
h->fenc->i_poc = 2 * ( h->fenc->i_frame - X264_MAX( h->frames.i_last_idr, 0 ) );

/* ------------------- Setup frame context ----------------------------- */
/* 5: Init data dependent of frame type */
//Step 5: branch on the frame type
if( h->fenc->i_type == X264_TYPE_IDR )
{
//Difference between I and IDR:
//an IDR frame clears the reference frame list, while a plain I frame does not,
//so pictures following a non-IDR I frame may still use pictures before it as motion references
/* reset ref pictures */
i_nal_type    = NAL_SLICE_IDR;
i_nal_ref_idc = NAL_PRIORITY_HIGHEST;
h->sh.i_type = SLICE_TYPE_I;
x264_reference_reset( h );//////////////////for an IDR frame, clear all reference frames
h->frames.i_poc_last_open_gop = -1;
}
else if( h->fenc->i_type == X264_TYPE_I )
{
//Difference between I and IDR:
//an IDR frame clears the reference frame list, while a plain I frame does not,
//so pictures following a non-IDR I frame may still use pictures before it as motion references
i_nal_type    = NAL_SLICE;
i_nal_ref_idc = NAL_PRIORITY_HIGH; /* Not completely true but for now it is (as all I/P are kept as ref)*/
h->sh.i_type = SLICE_TYPE_I;
x264_reference_hierarchy_reset( h );//////////////////called for a non-IDR I frame
if( h->param.b_open_gop )
h->frames.i_poc_last_open_gop = h->fenc->b_keyframe ? h->fenc->i_poc : -1;
}
else if( h->fenc->i_type == X264_TYPE_P )
{
i_nal_type    = NAL_SLICE;
i_nal_ref_idc = NAL_PRIORITY_HIGH; /* Not completely true but for now it is (as all I/P are kept as ref)*/
h->sh.i_type = SLICE_TYPE_P;
x264_reference_hierarchy_reset( h );//////////////////called for a P frame (non-IDR)
h->frames.i_poc_last_open_gop = -1;
}
else if( h->fenc->i_type == X264_TYPE_BREF )
{
//A B frame that can itself be used as a reference, a distinctive x264 feature
i_nal_type    = NAL_SLICE;
i_nal_ref_idc = h->param.i_bframe_pyramid == X264_B_PYRAMID_STRICT ? NAL_PRIORITY_LOW : NAL_PRIORITY_HIGH;
h->sh.i_type = SLICE_TYPE_B;
x264_reference_hierarchy_reset( h );//////////////////called for a non-IDR B frame that can be used as a reference
}
else    /* B frame */
{
//The ordinary case: a non-reference B frame
i_nal_type    = NAL_SLICE;
i_nal_ref_idc = NAL_PRIORITY_DISPOSABLE;
h->sh.i_type = SLICE_TYPE_B;
}

//Copy fields from the frame being encoded (fenc) to the reconstructed frame (fdec)
h->fdec->i_type = h->fenc->i_type;
h->fdec->i_frame = h->fenc->i_frame;
h->fenc->b_kept_as_ref =
h->fdec->b_kept_as_ref = i_nal_ref_idc != NAL_PRIORITY_DISPOSABLE && h->param.i_keyint_max > 1;

h->fdec->mb_info = h->fenc->mb_info;
h->fdec->mb_info_free = h->fenc->mb_info_free;
h->fenc->mb_info = NULL;
h->fenc->mb_info_free = NULL;

h->fdec->i_pts = h->fenc->i_pts;
if( h->frames.i_bframe_delay )
{
int64_t *prev_reordered_pts = thread_current->frames.i_prev_reordered_pts;
h->fdec->i_dts = h->i_frame > h->frames.i_bframe_delay
? prev_reordered_pts[ (h->i_frame - h->frames.i_bframe_delay) % h->frames.i_bframe_delay ]
: h->fenc->i_reordered_pts - h->frames.i_bframe_delay_time;
prev_reordered_pts[ h->i_frame % h->frames.i_bframe_delay ] = h->fenc->i_reordered_pts;
}
else
h->fdec->i_dts = h->fenc->i_reordered_pts;
if( h->fenc->i_type == X264_TYPE_IDR )
h->i_last_idr_pts = h->fdec->i_pts;

/* ------------------- Init                ----------------------------- */
/* build ref list 0/1 */
x264_reference_build_list( h, h->fdec->i_poc );//////////////////build reference frame lists list0 and list1

/* ---------------------- Write the bitstream -------------------------- */
/* Init bitstream context */
//used for output
if( h->param.b_sliced_threads )
{
for( int i = 0; i < h->param.i_threads; i++ )
{
bs_init( &h->thread[i]->out.bs, h->thread[i]->out.p_bitstream, h->thread[i]->out.i_bitstream );
h->thread[i]->out.i_nal = 0;
}
}
else
{
bs_init( &h->out.bs, h->out.p_bitstream, h->out.i_bitstream );
h->out.i_nal = 0;
}

if( h->param.b_aud )
{
int pic_type;

if( h->sh.i_type == SLICE_TYPE_I )
pic_type = 0;
else if( h->sh.i_type == SLICE_TYPE_P )
pic_type = 1;
else if( h->sh.i_type == SLICE_TYPE_B )
pic_type = 2;
else
pic_type = 7;

x264_nal_start( h, NAL_AUD, NAL_PRIORITY_DISPOSABLE );
bs_write( &h->out.bs, 3, pic_type );
bs_rbsp_trailing( &h->out.bs );
if( x264_nal_end( h ) )
return -1;
overhead += h->out.nal[h->out.i_nal-1].i_payload + NALU_OVERHEAD;
}

h->i_nal_type = i_nal_type;
h->i_nal_ref_idc = i_nal_ref_idc;

if( h->param.b_intra_refresh )
{
if( IS_X264_TYPE_I( h->fenc->i_type ) )
{
h->fdec->i_frames_since_pir = 0;
h->b_queued_intra_refresh = 0;
/* PIR is currently only supported with ref == 1, so any intra frame effectively refreshes
* the whole frame and counts as an intra refresh. */
h->fdec->f_pir_position = h->mb.i_mb_width;
}
else if( h->fenc->i_type == X264_TYPE_P )
{
int pocdiff = (h->fdec->i_poc - h->fref[0][0]->i_poc)/2;
float increment = X264_MAX( ((float)h->mb.i_mb_width-1) / h->param.i_keyint_max, 1 );
h->fdec->f_pir_position = h->fref[0][0]->f_pir_position;
h->fdec->i_frames_since_pir = h->fref[0][0]->i_frames_since_pir + pocdiff;
if( h->fdec->i_frames_since_pir >= h->param.i_keyint_max ||
(h->b_queued_intra_refresh && h->fdec->f_pir_position + 0.5 >= h->mb.i_mb_width) )
{
h->fdec->f_pir_position = 0;
h->fdec->i_frames_since_pir = 0;
h->b_queued_intra_refresh = 0;
h->fenc->b_keyframe = 1;
}
h->fdec->i_pir_start_col = h->fdec->f_pir_position+0.5;
h->fdec->f_pir_position += increment * pocdiff;
h->fdec->i_pir_end_col = h->fdec->f_pir_position+0.5;
/* If our intra refresh has reached the right side of the frame, we're done. */
if( h->fdec->i_pir_end_col >= h->mb.i_mb_width - 1 )
{
h->fdec->f_pir_position = h->mb.i_mb_width;
h->fdec->i_pir_end_col = h->mb.i_mb_width - 1;
}
}
}

if( h->fenc->b_keyframe )
{
//SPS and PPS are repeated before every keyframe
/* Write SPS and PPS */
if( h->param.b_repeat_headers )
{
/* generate sequence parameters */
x264_nal_start( h, NAL_SPS, NAL_PRIORITY_HIGHEST );
x264_sps_write( &h->out.bs, h->sps );
if( x264_nal_end( h ) )
return -1;
/* Pad AUD/SPS to 256 bytes like Panasonic */
if( h->param.i_avcintra_class )
h->out.nal[h->out.i_nal-1].i_padding = 256 - bs_pos( &h->out.bs ) / 8 - 2*NALU_OVERHEAD;
overhead += h->out.nal[h->out.i_nal-1].i_payload + h->out.nal[h->out.i_nal-1].i_padding + NALU_OVERHEAD;

/* generate picture parameters */
x264_nal_start( h, NAL_PPS, NAL_PRIORITY_HIGHEST );
x264_pps_write( &h->out.bs, h->sps, h->pps );
if( x264_nal_end( h ) )
return -1;
if( h->param.i_avcintra_class )
h->out.nal[h->out.i_nal-1].i_padding = 256 - h->out.nal[h->out.i_nal-1].i_payload - NALU_OVERHEAD;
overhead += h->out.nal[h->out.i_nal-1].i_payload + h->out.nal[h->out.i_nal-1].i_padding + NALU_OVERHEAD;
}

/* when frame threading is used, buffering period sei is written in x264_encoder_frame_end */
if( h->i_thread_frames == 1 && h->sps->vui.b_nal_hrd_parameters_present )
{
x264_hrd_fullness( h );
x264_nal_start( h, NAL_SEI, NAL_PRIORITY_DISPOSABLE );
x264_sei_buffering_period_write( h, &h->out.bs );
if( x264_nal_end( h ) )
return -1;
overhead += h->out.nal[h->out.i_nal-1].i_payload + SEI_OVERHEAD;
}
}

/* write extra sei */
//The large block of code below writes SEI messages (partly for compatibility with other decoders)
for( int i = 0; i < h->fenc->extra_sei.num_payloads; i++ )
{
x264_nal_start( h, NAL_SEI, NAL_PRIORITY_DISPOSABLE );
x264_sei_write( &h->out.bs, h->fenc->extra_sei.payloads[i].payload, h->fenc->extra_sei.payloads[i].payload_size,
h->fenc->extra_sei.payloads[i].payload_type );
if( x264_nal_end( h ) )
return -1;
overhead += h->out.nal[h->out.i_nal-1].i_payload + SEI_OVERHEAD;
if( h->fenc->extra_sei.sei_free )
{
h->fenc->extra_sei.sei_free( h->fenc->extra_sei.payloads[i].payload );
h->fenc->extra_sei.payloads[i].payload = NULL;
}
}

if( h->fenc->extra_sei.sei_free )
{
h->fenc->extra_sei.sei_free( h->fenc->extra_sei.payloads );
h->fenc->extra_sei.payloads = NULL;
h->fenc->extra_sei.sei_free = NULL;
}

//Special SEI messages (needed by decoders such as Avid)
if( h->fenc->b_keyframe )
{
/* Avid's decoder strictly wants two SEIs for AVC-Intra so we can't insert the x264 SEI */
if( h->param.b_repeat_headers && h->fenc->i_frame == 0 && !h->param.i_avcintra_class )
{
/* identify ourself */
x264_nal_start( h, NAL_SEI, NAL_PRIORITY_DISPOSABLE );
if( x264_sei_version_write( h, &h->out.bs ) )
return -1;
if( x264_nal_end( h ) )
return -1;
overhead += h->out.nal[h->out.i_nal-1].i_payload + SEI_OVERHEAD;
}

if( h->fenc->i_type != X264_TYPE_IDR )
{
int time_to_recovery = h->param.b_open_gop ? 0 : X264_MIN( h->mb.i_mb_width - 1, h->param.i_keyint_max ) + h->param.i_bframe - 1;
x264_nal_start( h, NAL_SEI, NAL_PRIORITY_DISPOSABLE );
x264_sei_recovery_point_write( h, &h->out.bs, time_to_recovery );
if( x264_nal_end( h ) )
return -1;
overhead += h->out.nal[h->out.i_nal-1].i_payload + SEI_OVERHEAD;
}
}

if( h->param.i_frame_packing >= 0 && (h->fenc->b_keyframe || h->param.i_frame_packing == 5) )
{
x264_nal_start( h, NAL_SEI, NAL_PRIORITY_DISPOSABLE );
x264_sei_frame_packing_write( h, &h->out.bs );
if( x264_nal_end( h ) )
return -1;
overhead += h->out.nal[h->out.i_nal-1].i_payload + SEI_OVERHEAD;
}

/* generate sei pic timing */
if( h->sps->vui.b_pic_struct_present || h->sps->vui.b_nal_hrd_parameters_present )
{
x264_nal_start( h, NAL_SEI, NAL_PRIORITY_DISPOSABLE );
x264_sei_pic_timing_write( h, &h->out.bs );
if( x264_nal_end( h ) )
return -1;
overhead += h->out.nal[h->out.i_nal-1].i_payload + SEI_OVERHEAD;
}

/* As required by Blu-ray. */
if( !IS_X264_TYPE_B( h->fenc->i_type ) && h->b_sh_backup )
{
h->b_sh_backup = 0;
x264_nal_start( h, NAL_SEI, NAL_PRIORITY_DISPOSABLE );
x264_sei_dec_ref_pic_marking_write( h, &h->out.bs );
if( x264_nal_end( h ) )
return -1;
overhead += h->out.nal[h->out.i_nal-1].i_payload + SEI_OVERHEAD;
}

if( h->fenc->b_keyframe && h->param.b_intra_refresh )
h->i_cpb_delay_pir_offset_next = h->fenc->i_cpb_delay;

/* Filler space: 10 or 18 SEIs' worth of space, depending on resolution */
if( h->param.i_avcintra_class )
{
/* Write an empty filler NAL to mimic the AUD in the P2 format*/
x264_nal_start( h, NAL_FILLER, NAL_PRIORITY_DISPOSABLE );
x264_filler_write( h, &h->out.bs, 0 );
if( x264_nal_end( h ) )
return -1;
overhead += h->out.nal[h->out.i_nal-1].i_payload + NALU_OVERHEAD;

/* All lengths are magic lengths that decoders expect to see */
/* "UMID" SEI */
x264_nal_start( h, NAL_SEI, NAL_PRIORITY_DISPOSABLE );
if( x264_sei_avcintra_umid_write( h, &h->out.bs ) < 0 )
return -1;
if( x264_nal_end( h ) )
return -1;
overhead += h->out.nal[h->out.i_nal-1].i_payload + SEI_OVERHEAD;

int unpadded_len;
int total_len;
if( h->param.i_height == 1080 )
{
unpadded_len = 5780;
total_len = 17*512;
}
else
{
unpadded_len = 2900;
total_len = 9*512;
}
/* "VANC" SEI */
x264_nal_start( h, NAL_SEI, NAL_PRIORITY_DISPOSABLE );
if( x264_sei_avcintra_vanc_write( h, &h->out.bs, unpadded_len ) < 0 )
return -1;
if( x264_nal_end( h ) )
return -1;

h->out.nal[h->out.i_nal-1].i_padding = total_len - h->out.nal[h->out.i_nal-1].i_payload - SEI_OVERHEAD;
overhead += h->out.nal[h->out.i_nal-1].i_payload + h->out.nal[h->out.i_nal-1].i_padding + SEI_OVERHEAD;
}//end of SEI writing

/* Init the rate control */
/* FIXME: Include slice header bit cost. */
x264_ratecontrol_start( h, h->fenc->i_qpplus1, overhead*8 );////////////////////////rate control unit starts here
i_global_qp = x264_ratecontrol_qp( h );

pic_out->i_qpplus1 =
h->fdec->i_qpplus1 = i_global_qp + 1;

if( h->param.rc.b_stat_read && h->sh.i_type != SLICE_TYPE_I )
{
x264_reference_build_list_optimal( h );
x264_reference_check_reorder( h );
}

if( h->i_ref[0] )
h->fdec->i_poc_l0ref0 = h->fref[0][0]->i_poc;

/* ------------------------ Create slice header  ----------------------- */
x264_slice_init( h, i_nal_type, i_global_qp );//////////////////////////create the slice header

/*------------------------- Weights -------------------------------------*/
//weighted prediction
if( h->sh.i_type == SLICE_TYPE_B )
x264_macroblock_bipred_init( h );

x264_weighted_pred_init( h );

if( i_nal_ref_idc != NAL_PRIORITY_DISPOSABLE )
h->i_frame_num++;

/* Write frame */
h->i_threadslice_start = 0;
h->i_threadslice_end = h->mb.i_mb_height;
if( h->i_thread_frames > 1 )
{
x264_threadpool_run( h->threadpool, (void*)x264_slices_write, h );
h->b_thread_active = 1;
}
else if( h->param.b_sliced_threads )
{
if( x264_threaded_slices_write( h ) )
return -1;
}
else
if( (intptr_t)x264_slices_write( h ) )////////////////////////the actual encoding: encode one picture (note the "s" in "slices")
return -1;

return x264_encoder_frame_end( thread_oldest, thread_current, pp_nal, pi_nal, pic_out );//////////////////wrap-up at the end: record statistics, output the NALUs and the reconstructed frame
}
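
The POC example in the doc comment at the top of the function follows directly from the assignment h->fdec->i_poc = h->fenc->i_poc = 2 * ( h->fenc->i_frame - X264_MAX( h->frames.i_last_idr, 0 ) ) seen above: i_frame is the display-order index of the picture, so POC is simply twice its display-order distance from the last IDR. A small standalone sketch that reproduces the example table (the arrays below are just the example data from the comment, not anything read out of x264):

#include <stdio.h>

int main( void )
{
    /* Coding order I P B B P B B with one IDR at display index 0, as in the comment above. */
    const char *type[]      = { "I", "P", "B", "B", "P", "B", "B" };
    const int display_idx[] = {  0,   3,   1,   2,   6,   4,   5  }; /* display-order index of each coded frame */
    const int i_last_idr    = 0;

    for( int coding_idx = 0; coding_idx < 7; coding_idx++ )
    {
        int i_poc = 2 * ( display_idx[coding_idx] - i_last_idr );
        printf( "type %s   frame %d   poc %d\n", type[coding_idx], coding_idx, i_poc );
    }
    /* Prints poc 0, 6, 2, 4, 12, 8, 10, i.e. 2*0, 2*3, 2*1, 2*2, 2*6, 2*4, 2*5 as in the table. */
    return 0;
}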


From the source code, the flow of x264_encoder_encode() is roughly as follows:

(1) Call x264_frame_pop_unused() to obtain an empty fenc (of type x264_frame_t) for one frame of pixel data to be encoded.

(2) Call x264_frame_copy_picture() to copy the data of the external structure pic_in (of type x264_picture_t) into the internal structure fenc (of type x264_frame_t).

(3) Call x264_lookahead_put_frame() to put fenc into the lookahead module's queue, where it waits for its frame type to be decided.

(4) Call x264_lookahead_get_frames() to analyse the frame type of a frame in the lookahead module. Analysed frames are stored in frames.current[].

(5) Call x264_frame_shift() to take the type-decided fenc out of frames.current[].

(6) Call x264_reference_update() to update the reference frame queue frames.reference[].

(7) If the frame fenc being encoded is an IDR frame, call x264_reference_reset() to clear the reference frame queue frames.reference[].

(8) Call x264_reference_build_list() to build the reference frame lists List0 and List1.

(9) Apply some option-dependent configuration:

a) If b_aud is non-zero, output an AUD NALU.

b) If the current frame is a keyframe and b_repeat_headers is non-zero, call x264_sps_write() and x264_pps_write() to output the SPS and PPS.

c) Output some special SEI messages used to accommodate various decoders.

(10) Call x264_slice_init() to initialise the slice header information.

(11) Call x264_slices_write() to perform the encoding. This part is the core of libx264 and will be analysed in detail in later posts.

(12) Call x264_encoder_frame_end() to perform post-encoding work (a sketch of the NALU encapsulation done in this step follows below).
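
As a rough illustration of step (12): the "encapsulate NALU (add start code)" part of x264_encoder_encapsulate_nals() boils down to prefixing each raw NAL payload with an Annex-B start code. The sketch below shows only that part; the real function also inserts emulation-prevention bytes and optional padding, and the buffer handling here is a hypothetical simplification.

#include <string.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch of Annex-B encapsulation: copy a start code followed by the raw NAL
 * payload into dst and return the number of bytes written. Not x264's actual code. */
static size_t annexb_append( uint8_t *dst, const uint8_t *nal, size_t nal_size, int b_long_startcode )
{
    static const uint8_t startcode[4] = { 0x00, 0x00, 0x00, 0x01 };
    size_t pos;

    if( b_long_startcode )
    {
        memcpy( dst, startcode, 4 );     /* 4-byte start code, typically for SPS/PPS and the first NAL of an access unit */
        pos = 4;
    }
    else
    {
        memcpy( dst, startcode + 1, 3 ); /* 3-byte start code for the remaining NALs */
        pos = 3;
    }
    memcpy( dst + pos, nal, nal_size );
    return pos + nal_size;
}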