Android Camera Data Flow Analysis: A Complete Walkthrough (Overlay Mode, Part 1)
2016-08-31 19:29
Why study the overlay path at all? The Android camera has to move a large amount of data between the driver and the app layer. If a non-overlay path were used to carry that data from the driver up to the app, system performance would take a noticeable hit: everything slows down and power consumption rises. During camera preview we usually have no need to keep the captured data (unlike video recording, where the data must be saved), so the overlay path displays the driver's buffers directly, without passing the data back up the stack. With this approach the app cannot access the pixel data at all, which is why it is used for preview.
This write-up targets Android 4.0. The overlay machinery has changed a great deal compared with Android 2.x; if you need the older behavior you can dig into it yourself, and I won't cover it here.
I will skim over the opening steps. When the camera is first opened, the system reaches the app's onCreate method, which does roughly the following:
1. Starts an openCamera thread to open the camera.
2. Instantiates the many objects the camera needs while working.
3. Instantiates the SurfaceView and SurfaceHolder, and implements the surfaceCreated, surfaceChanged and surfaceDestroyed callbacks.
4. Starts a preview thread for the preview pass.
Steps 3 and 4 are the ones to focus on here: the SurfaceView instantiated above determines whether the overlay path will be used.
After step 3 completes, the system automatically invokes surfaceChanged. This method is called whenever the display area changes; when the camera is first opened the display area goes from nothing to something, so surfaceChanged is bound to be called.
Let's see what it does:
public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
// Make sure we have a surface in the holder before proceeding.
if (holder.getSurface() == null) {
Log.d(TAG, "holder.getSurface() == null");
return;
}
Log.v(TAG, "surfaceChanged. w=" + w + ". h=" + h);
// We need to save the holder for later use, even when the mCameraDevice
// is null. This could happen if onResume() is invoked after this
// function.
mSurfaceHolder = holder;
// The mCameraDevice will be null if it fails to connect to the camera
// hardware. In this case we will show a dialog and then finish the
// activity, so it's OK to ignore it.
if (mCameraDevice == null) return;
// Sometimes surfaceChanged is called after onPause or before onResume.
// Ignore it.
if (mPausing || isFinishing()) return;
setSurfaceLayout();
// Set preview display if the surface is being created. Preview was
// already started. Also restart the preview if display rotation has
// changed. Sometimes this happens when the device is held in portrait
// and camera app is opened. Rotation animation takes some time and
// display rotation in onCreate may not be what we want.
if (mCameraState == PREVIEW_STOPPED) { // first time the camera is opened: start the preview
startPreview(true);
startFaceDetection();
} else { // the display changed while the camera was already open (e.g. a portrait/landscape switch), so only the preview display needs to be reset
if (Util.getDisplayRotation(this) != mDisplayRotation) {
setDisplayOrientation();
}
if (holder.isCreating()) {
// Set preview display if the surface is being created and preview
// was already started. That means preview display was set to null
// and we need to set it now.
setPreviewDisplay(holder);
}
}
// If first time initialization is not finished, send a message to do
// it later. We want to finish surfaceChanged as soon as possible to let
// user see preview first.
if (!mFirstTimeInitialized) {
mHandler.sendEmptyMessage(FIRST_TIME_INIT);
} else {
initializeSecondTime();
}
SurfaceView preview = (SurfaceView) findViewById(R.id.camera_preview);
CameraInfo info = CameraHolder.instance().getCameraInfo()[mCameraId];
boolean mirror = (info.facing == CameraInfo.CAMERA_FACING_FRONT);
int displayRotation = Util.getDisplayRotation(this);
int displayOrientation = Util.getDisplayOrientation(displayRotation, mCameraId);
mTouchManager.initialize(preview.getHeight() / 3, preview.getHeight() / 3,
preview, this, mirror, displayOrientation);
}
From the code above, note that setPreviewDisplay must be called whenever the surface changes. As we will see later, the startPreview method likewise calls setPreviewDisplay right before the actual preview starts. setPreviewDisplay performs a great deal of initialization, and it is where the decision to use the overlay path is made. Let's look at startPreview first:
private void startPreview(boolean updateAll) {
if (mPausing || isFinishing()) return;
mFocusManager.resetTouchFocus();
mCameraDevice.setErrorCallback(mErrorCallback);
// If we're previewing already, stop the preview first (this will blank
// the screen).
if (mCameraState != PREVIEW_STOPPED) stopPreview();
setPreviewDisplay(mSurfaceHolder);
setDisplayOrientation();
if (!mSnapshotOnIdle) {
// If the focus mode is continuous autofocus, call cancelAutoFocus to
// resume it because it may have been paused by autoFocus call.
if (Parameters.FOCUS_MODE_CONTINUOUS_PICTURE.equals(mFocusManager.getFocusMode())) {
mCameraDevice.cancelAutoFocus();
}
mFocusManager.setAeAwbLock(false); // Unlock AE and AWB.
}
if ( updateAll ) {
Log.v(TAG, "Updating all parameters!");
setCameraParameters(UPDATE_PARAM_INITIALIZE | UPDATE_PARAM_ZOOM | UPDATE_PARAM_PREFERENCE);
} else {
setCameraParameters(UPDATE_PARAM_MODE);
}
//setCameraParameters(UPDATE_PARAM_ALL);
// Inform the main thread to go on with the UI initialization.
if (mCameraPreviewThread != null) {
synchronized (mCameraPreviewThread) {
mCameraPreviewThread.notify();
}
}
try {
Log.v(TAG, "startPreview");
mCameraDevice.startPreview();
} catch (Throwable ex) {
closeCamera();
throw new RuntimeException("startPreview failed", ex);
}
mZoomState = ZOOM_STOPPED;
setCameraState(IDLE);
mFocusManager.onPreviewStarted();
if ( mTempBracketingEnabled ) {
mFocusManager.setTempBracketingState(FocusManager.TempBracketingStates.ACTIVE);
}
if (mSnapshotOnIdle) {
mHandler.post(mDoSnapRunnable);
}
}
As you can see above, setPreviewDisplay is called first, and finally mCameraDevice.startPreview() starts the preview.
The call path is: app --> frameworks --> JNI --> camera client --> camera service --> hardware interface --> HAL
1. When setPreviewDisplay is first called in the app layer, the parameter passed in is a SurfaceHolder.
2. By the JNI layer, setPreviewDisplay already takes a Surface.
3. In the camera service layer:
sp<IBinder> binder(surface != 0 ? surface->asBinder() : 0);
sp<ANativeWindow> window(surface);
return setPreviewWindow(binder, window);
This conversion forwards the call to an overload with the same name but different parameters; by this point the arguments have become an IBinder and an ANativeWindow.
4. The hardware interface's setPreviewWindow(window) is called, which takes a single ANativeWindow parameter.
5. At the camerahal_module relay station the type changes once more; as the definition below shows, the parameter becomes a preview_stream_ops structure:
int camera_set_preview_window(struct camera_device * device, struct preview_stream_ops *window)
The parameter type keeps changing along the way, yet from the app layer all the way down to here it always refers to the same underlying memory: like a man who has changed his clothes but is still the same man.
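That "same man in different clothes" point can be made concrete in a few lines of C++. The types below are hypothetical stand-ins, not the real framework classes: each layer wraps the window in a different interface, but the underlying object never changes.

```cpp
#include <cassert>

// Hypothetical stand-in for the window object handed down the stack.
struct NativeWindowStub { int id; };

// "Upper layer": pass the window down as an opaque handle.
void* toHandle(NativeWindowStub* w) { return static_cast<void*>(w); }

// "HAL layer": reinterpret the same handle as its own type.
NativeWindowStub* fromHandle(void* h) { return static_cast<NativeWindowStub*>(h); }
```

The pointer value survives every cast, so whichever wrapper type a layer uses, it is still operating on the very same object.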
Now let's go straight to the HAL-layer implementation.
/**
@brief Sets ANativeWindow object.
Preview buffers provided to CameraHal via this object. DisplayAdapter will be interfacing with it
to render buffers to display.
@param[in] window The ANativeWindow object created by Surface Flinger
@return NO_ERROR If the ANativeWindow object passes validation criteria
@todo Define validation criteria for ANativeWindow object. Define error codes for scenarios
*/
status_t CameraHal::setPreviewWindow(struct preview_stream_ops *window)
{
status_t ret = NO_ERROR;
CameraAdapter::BuffersDescriptor desc;
LOG_FUNCTION_NAME;
mSetPreviewWindowCalled = true;
///If the Camera service passes a null window, we destroy existing window and free the DisplayAdapter
if(!window) // window is null here: the overlay path is not used, so no DisplayAdapter needs to be created
{
if(mDisplayAdapter.get() != NULL)
{
///NULL window passed, destroy the display adapter if present
CAMHAL_LOGD("NULL window passed, destroying display adapter");
mDisplayAdapter.clear();
///@remarks If there was a window previously existing, we usually expect another valid window to be passed by the client
///@remarks so, we will wait until it passes a valid window to begin the preview again
mSetPreviewWindowCalled = false;
}
CAMHAL_LOGD("NULL ANativeWindow passed to setPreviewWindow");
return NO_ERROR;
}else if(mDisplayAdapter.get() == NULL) // the window is valid but no DisplayAdapter has been created for the overlay path yet, so create one
{
// Need to create the display adapter since it has not been created
// Create display adapter
mDisplayAdapter = new ANativeWindowDisplayAdapter();
ret = NO_ERROR;
if(!mDisplayAdapter.get() || ((ret=mDisplayAdapter->initialize())!=NO_ERROR))
{
if(ret!=NO_ERROR)
{
mDisplayAdapter.clear();
CAMHAL_LOGEA("DisplayAdapter initialize failed");
LOG_FUNCTION_NAME_EXIT;
return ret;
}
else
{
CAMHAL_LOGEA("Couldn't create DisplayAdapter");
LOG_FUNCTION_NAME_EXIT;
return NO_MEMORY;
}
}
// DisplayAdapter needs to know where to get the CameraFrames from in order to display
// Since CameraAdapter is the one that provides the frames, set it as the frame provider for DisplayAdapter
mDisplayAdapter->setFrameProvider(mCameraAdapter);
// Any dynamic errors that happen during the camera use case have to be propagated back to the application
// via CAMERA_MSG_ERROR. AppCallbackNotifier is the class that notifies such errors to the application
// Set it as the error handler for the DisplayAdapter
mDisplayAdapter->setErrorHandler(mAppCallbackNotifier.get());
// Update the display adapter with the new window that is passed from CameraService
ret = mDisplayAdapter->setPreviewWindow(window);
if(ret!=NO_ERROR)
{
CAMHAL_LOGEB("DisplayAdapter setPreviewWindow returned error %d", ret);
}
if(mPreviewStartInProgress)
{
CAMHAL_LOGDA("setPreviewWindow called when preview running");
// Start the preview since the window is now available
ret = startPreview();
}
} else { // the window is valid and the DisplayAdapter already exists, so just associate the new window with it
// Update the display adapter with the new window that is passed from CameraService
ret = mDisplayAdapter->setPreviewWindow(window);
if ( (NO_ERROR == ret) && previewEnabled() ) {
restartPreview();
} else if (ret == ALREADY_EXISTS) {
// ALREADY_EXISTS should be treated as a noop in this case
ret = NO_ERROR;
}
}
LOG_FUNCTION_NAME_EXIT;
return ret;
}
Let's focus on how the DisplayAdapter is created:
1. Instantiate an ANativeWindowDisplayAdapter object.
2. mDisplayAdapter->initialize()
3. mDisplayAdapter->setFrameProvider(mCameraAdapter) // this step is the key one; we will run into it again later
4. mDisplayAdapter->setErrorHandler(mAppCallbackNotifier.get())
5. mDisplayAdapter->setPreviewWindow(window)
With these steps done, the next stage is startPreview.
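The five-step setup sequence can be sketched structurally. Every type below is a placeholder stub for illustration only, not the real CameraHal class:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Minimal stand-ins; the real classes live in the TI CameraHal sources.
struct StubFrameProvider {};
struct StubErrorHandler {};
struct StubWindow {};

class StubDisplayAdapter {
public:
    bool initialize()                     { log.push_back("initialize"); return true; }
    void setFrameProvider(StubFrameProvider*) { log.push_back("setFrameProvider"); }
    void setErrorHandler(StubErrorHandler*)   { log.push_back("setErrorHandler"); }
    bool setPreviewWindow(StubWindow*)    { log.push_back("setPreviewWindow"); return true; }
    std::vector<std::string> log;         // records the call order
};

// The setup order used by CameraHal::setPreviewWindow, condensed.
StubDisplayAdapter makeDisplayAdapter(StubFrameProvider* cam,
                                      StubErrorHandler* notifier,
                                      StubWindow* win) {
    StubDisplayAdapter da;       // 1. construct
    da.initialize();             // 2. initialize
    da.setFrameProvider(cam);    // 3. who supplies frames
    da.setErrorHandler(notifier);// 4. who receives errors
    da.setPreviewWindow(win);    // 5. where frames are rendered
    return da;
}
```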
/**
@brief Start preview mode.
@param none
@return NO_ERROR Camera switched to VF mode
@todo Update function header with the different errors that are possible
*/
status_t CameraHal::startPreview() {
LOG_FUNCTION_NAME;
// When tunneling is enabled during VTC, startPreview happens in 2 steps:
// When the application sends the command CAMERA_CMD_PREVIEW_INITIALIZATION,
// cameraPreviewInitialization() is called, which in turn causes the CameraAdapter
// to move from loaded to idle state. And when the application calls startPreview,
// the CameraAdapter moves from idle to executing state.
//
// If the application calls startPreview() without sending the command
// CAMERA_CMD_PREVIEW_INITIALIZATION, then the function cameraPreviewInitialization()
// AND startPreview() are executed. In other words, if the application calls
// startPreview() without sending the command CAMERA_CMD_PREVIEW_INITIALIZATION,
// then the CameraAdapter moves from loaded to idle to executing state in one shot.
status_t ret = cameraPreviewInitialization();
// The flag mPreviewInitializationDone is set to true at the end of the function
// cameraPreviewInitialization(). Therefore, if everything goes alright, then the
// flag will be set. Sometimes, the function cameraPreviewInitialization() may
// return prematurely if all the resources are not available for starting preview.
// For example, if the preview window is not set, then it would return NO_ERROR.
// Under such circumstances, one should return from startPreview as well and should
// not continue execution. That is why, we check the flag and not the return value.
if (!mPreviewInitializationDone) return ret;
// Once startPreview is called, there is no need to continue to remember whether
// the function cameraPreviewInitialization() was called earlier or not. And so
// the flag mPreviewInitializationDone is reset here. Plus, this preserves the
// current behavior of startPreview under the circumstances where the application
// calls startPreview twice or more.
mPreviewInitializationDone = false;
///Enable the display adapter if present, actual overlay enable happens when we post the buffer
if(mDisplayAdapter.get() != NULL) {
CAMHAL_LOGDA("Enabling display");
int width, height;
mParameters.getPreviewSize(&width, &height);
#if PPM_INSTRUMENTATION || PPM_INSTRUMENTATION_ABS
ret = mDisplayAdapter->enableDisplay(width, height, &mStartPreview);
#else
ret = mDisplayAdapter->enableDisplay(width, height, NULL);
#endif
if ( ret != NO_ERROR ) {
CAMHAL_LOGEA("Couldn't enable display");
// FIXME: At this stage mStateSwitchLock is locked and unlock is supposed to be called
// only from mCameraAdapter->sendCommand(CameraAdapter::CAMERA_START_PREVIEW)
// below. But this will never happen because of goto error. Thus at next
// startPreview() call CameraHAL will be deadlocked.
// Need to revisit mStateSwitch lock, for now just abort the process.
CAMHAL_ASSERT_X(false,
"At this stage mCameraAdapter->mStateSwitchLock is still locked, "
"deadlock is guaranteed");
goto error;
}
}
///Send START_PREVIEW command to adapter
CAMHAL_LOGDA("Starting CameraAdapter preview mode");
ret = mCameraAdapter->sendCommand(CameraAdapter::CAMERA_START_PREVIEW);
if(ret!=NO_ERROR) {
CAMHAL_LOGEA("Couldn't start preview w/ CameraAdapter");
goto error;
}
CAMHAL_LOGDA("Started preview");
mPreviewEnabled = true;
mPreviewStartInProgress = false;
return ret;
error:
CAMHAL_LOGEA("Performing cleanup after error");
//Do all the cleanup
freePreviewBufs();
mCameraAdapter->sendCommand(CameraAdapter::CAMERA_STOP_PREVIEW);
if(mDisplayAdapter.get() != NULL) {
mDisplayAdapter->disableDisplay(false);
}
mAppCallbackNotifier->stop();
mPreviewStartInProgress = false;
mPreviewEnabled = false;
LOG_FUNCTION_NAME_EXIT;
return ret;
}
The cameraPreviewInitialization() method called above is also very important; as mentioned before, I will come back to it later if needed.
"Enable the display adapter if present, actual overlay enable happens when we post the buffer" -- this comment tells us that when the display adapter is not null it is enabled here, and the overlay path is up and running.
Let's continue and see how the data captured by the driver is actually handled. startPreview goes through CameraHal --> CameraAdapter --> V4LCameraAdapter and reaches the V4L2 layer's startPreview, whose implementation follows:
status_t V4LCameraAdapter::startPreview()
{
status_t ret = NO_ERROR;
LOG_FUNCTION_NAME;
Mutex::Autolock lock(mPreviewBufsLock);
if(mPreviewing) {
ret = BAD_VALUE;
goto EXIT;
}
for (int i = 0; i < mPreviewBufferCountQueueable; i++) {
mVideoInfo->buf.index = i;
mVideoInfo->buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
mVideoInfo->buf.memory = V4L2_MEMORY_MMAP;
ret = v4lIoctl(mCameraHandle, VIDIOC_QBUF, &mVideoInfo->buf); // queue the buffer into the driver
if (ret < 0) {
CAMHAL_LOGEA("VIDIOC_QBUF Failed");
goto EXIT;
}
nQueued++;
}
ret = v4lStartStreaming();
// Create and start preview thread for receiving buffers from V4L Camera
if(!mCapturing) {
mPreviewThread = new PreviewThread(this); // start the PreviewThread
CAMHAL_LOGDA("Created preview thread");
}
//Update the flag to indicate we are previewing
mPreviewing = true;
mCapturing = false;
EXIT:
LOG_FUNCTION_NAME_EXIT;
return ret;
}
int V4LCameraAdapter::previewThread()
{
status_t ret = NO_ERROR;
int width, height;
CameraFrame frame;
void *y_uv[2];
int index = 0;
int stride = 4096;
char *fp = NULL;
mParams.getPreviewSize(&width, &height);
if (mPreviewing) {
fp = this->GetFrame(index);
if(!fp) {
ret = BAD_VALUE;
goto EXIT;
}
CameraBuffer *buffer = mPreviewBufs.keyAt(index); // fetch the CameraBuffer
CameraFrame *lframe = (CameraFrame *)mFrameQueue.valueFor(buffer); // fetch the CameraFrame
if (!lframe) {
ret = BAD_VALUE;
goto EXIT;
}
debugShowFPS();
if ( mFrameSubscribers.size() == 0 ) {
ret = BAD_VALUE;
goto EXIT;
}
y_uv[0] = (void*) lframe->mYuv[0];
//y_uv[1] = (void*) lframe->mYuv[1];
//y_uv[1] = (void*) (lframe->mYuv[0] + height*stride);
convertYUV422ToNV12Tiler ( (unsigned char*)fp, (unsigned char*)y_uv[0], width, height); // convert the data
CAMHAL_LOGVB("##...index= %d.;camera buffer= 0x%x; y= 0x%x; UV= 0x%x.",index, buffer, y_uv[0], y_uv[1] );
#ifdef SAVE_RAW_FRAMES
unsigned char* nv12_buff = (unsigned char*) malloc(width*height*3/2);
//Convert yuv422i to yuv420sp(NV12) & dump the frame to a file
convertYUV422ToNV12 ( (unsigned char*)fp, nv12_buff, width, height);
saveFile( nv12_buff, ((width*height)*3/2) ); // save the frame here if you want to keep the data
free (nv12_buff);
#endif
// fill in the frame structure for downstream processing
frame.mFrameType = CameraFrame::PREVIEW_FRAME_SYNC;
frame.mBuffer = buffer;
frame.mLength = width*height*3/2;
frame.mAlignment = stride;
frame.mOffset = 0;
frame.mTimestamp = systemTime(SYSTEM_TIME_MONOTONIC);
frame.mFrameMask = (unsigned int)CameraFrame::PREVIEW_FRAME_SYNC;
if (mRecording)
{
frame.mFrameMask |= (unsigned int)CameraFrame::VIDEO_FRAME_SYNC;
mFramesWithEncoder++;
}
// This is the crux: the calls below decide whether the frame is delivered back via data callbacks or displayed directly through the overlay path
ret = setInitFrameRefCount(frame.mBuffer, frame.mFrameMask);
if (ret != NO_ERROR) {
CAMHAL_LOGDB("Error in setInitFrameRefCount %d", ret);
} else {
ret = sendFrameToSubscribers(&frame);
}
}
EXIT:
return ret;
}
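The previewThread above leans on convertYUV422ToNV12Tiler to repack the driver's frames. Here is a minimal sketch of the plain (non-Tiler) YUYV-to-NV12 conversion, assuming tightly packed buffers and even width/height; the real HAL version additionally handles Tiler strides:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Convert a YUV422 interleaved (YUYV) frame to NV12 (Y plane followed by an
// interleaved UV plane, chroma subsampled 2x2 by keeping even rows only).
void yuyvToNV12(const uint8_t* src, uint8_t* dst, int width, int height) {
    uint8_t* yPlane  = dst;
    uint8_t* uvPlane = dst + width * height;        // NV12: Y plane, then UV plane
    for (int row = 0; row < height; ++row) {
        const uint8_t* in = src + row * width * 2;  // YUYV packs 2 bytes per pixel
        for (int col = 0; col < width; col += 2) {
            // Packed layout per pixel pair: Y0 U Y1 V
            yPlane[row * width + col]     = in[col * 2 + 0];
            yPlane[row * width + col + 1] = in[col * 2 + 2];
            if ((row & 1) == 0) {                   // 4:2:0: keep chroma of even rows
                uint8_t* uv = uvPlane + (row / 2) * width + col;
                uv[0] = in[col * 2 + 1];            // U
                uv[1] = in[col * 2 + 3];            // V
            }
        }
    }
}
```

The output size is width*height*3/2 bytes, which matches the frame.mLength set in previewThread above.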
Now let's see what setInitFrameRefCount does.
int BaseCameraAdapter::setInitFrameRefCount(CameraBuffer * buf, unsigned int mask)
{
int ret = NO_ERROR;
unsigned int lmask;
LOG_FUNCTION_NAME;
if (buf == NULL)
{
return -EINVAL;
}
for( lmask = 1; lmask < CameraFrame::ALL_FRAMES; lmask <<= 1){
if( lmask & mask ){
switch( lmask ){
case CameraFrame::IMAGE_FRAME:
{
setFrameRefCount(buf, CameraFrame::IMAGE_FRAME, (int) mImageSubscribers.size());
}
break;
case CameraFrame::RAW_FRAME:
{
setFrameRefCount(buf, CameraFrame::RAW_FRAME, mRawSubscribers.size());
}
break;
case CameraFrame::PREVIEW_FRAME_SYNC:
{
setFrameRefCount(buf, CameraFrame::PREVIEW_FRAME_SYNC, mFrameSubscribers.size()); // the keys of mFrameSubscribers hold the corresponding callback methods
}
break;
case CameraFrame::SNAPSHOT_FRAME:
{
setFrameRefCount(buf, CameraFrame::SNAPSHOT_FRAME, mSnapshotSubscribers.size());
}
break;
case CameraFrame::VIDEO_FRAME_SYNC:
{
setFrameRefCount(buf,CameraFrame::VIDEO_FRAME_SYNC, mVideoSubscribers.size());
}
break;
case CameraFrame::FRAME_DATA_SYNC:
{
setFrameRefCount(buf, CameraFrame::FRAME_DATA_SYNC, mFrameDataSubscribers.size());
}
break;
case CameraFrame::REPROCESS_INPUT_FRAME:
{
setFrameRefCount(buf,CameraFrame::REPROCESS_INPUT_FRAME, mVideoInSubscribers.size());
}
break;
default:
CAMHAL_LOGEB("FRAMETYPE NOT SUPPORTED 0x%x", lmask);
break;
}//SWITCH
mask &= ~lmask;
}//IF
}//FOR
LOG_FUNCTION_NAME_EXIT;
return ret;
}
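The bit-walking loop above can be isolated into a small demo. The frame-type constants below are made up for illustration; only the mask-scanning pattern mirrors the real code:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative frame-type bits (not the real CameraFrame values).
enum FrameTypeBit : uint32_t {
    PREVIEW = 1u << 0,
    VIDEO   = 1u << 1,
    IMAGE   = 1u << 2,
    ALL     = 1u << 3,  // upper bound, like CameraFrame::ALL_FRAMES
};

// Walk every bit of the mask, handling each set bit once, as in
// setInitFrameRefCount.
std::vector<uint32_t> expandMask(uint32_t mask) {
    std::vector<uint32_t> hits;
    for (uint32_t lmask = 1; lmask < ALL; lmask <<= 1) {
        if (lmask & mask) {
            hits.push_back(lmask); // real code: setFrameRefCount(buf, lmask, ...)
            mask &= ~lmask;        // clear the handled bit
        }
    }
    return hits;
}
```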
The part I highlighted above works because enableMsgType performs mFrameSubscribers.add, storing each callback under its key and thereby establishing the association. Symmetrically, disableMsgType performs mFrameSubscribers.removeItem. Where exactly enableMsgType and disableMsgType get called will be explained later.
void BaseCameraAdapter::setFrameRefCount(CameraBuffer * frameBuf, CameraFrame::FrameType frameType, int refCount)
{
LOG_FUNCTION_NAME;
switch ( frameType )
{
case CameraFrame::IMAGE_FRAME:
case CameraFrame::RAW_FRAME:
{
Mutex::Autolock lock(mCaptureBufferLock);
mCaptureBuffersAvailable.replaceValueFor(frameBuf, refCount);
}
break;
case CameraFrame::SNAPSHOT_FRAME:
{
Mutex::Autolock lock(mSnapshotBufferLock);
mSnapshotBuffersAvailable.replaceValueFor( ( unsigned int ) frameBuf, refCount);
}
break;
case CameraFrame::PREVIEW_FRAME_SYNC:
{
Mutex::Autolock lock(mPreviewBufferLock);
mPreviewBuffersAvailable.replaceValueFor(frameBuf, refCount); // as I understand it, refCount is bound to frameBuf here, i.e. the count is stored in mPreviewBuffersAvailable under the buffer's key
}
break;
case CameraFrame::FRAME_DATA_SYNC:
{
Mutex::Autolock lock(mPreviewDataBufferLock);
mPreviewDataBuffersAvailable.replaceValueFor(frameBuf, refCount);
}
break;
case CameraFrame::VIDEO_FRAME_SYNC:
{
Mutex::Autolock lock(mVideoBufferLock);
mVideoBuffersAvailable.replaceValueFor(frameBuf, refCount);
}
break;
case CameraFrame::REPROCESS_INPUT_FRAME: {
Mutex::Autolock lock(mVideoInBufferLock);
mVideoInBuffersAvailable.replaceValueFor(frameBuf, refCount);
}
break;
default:
break;
};
LOG_FUNCTION_NAME_EXIT;
}
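The replaceValueFor binding described in the comment behaves like insert-or-overwrite on a map. A minimal analogue using std::map (CameraBufferStub and the function names are placeholders, not the real KeyedVector API):

```cpp
#include <cassert>
#include <map>

struct CameraBufferStub {};

// One reference count per buffer, keyed by buffer address, mirroring
// mPreviewBuffersAvailable.replaceValueFor(frameBuf, refCount).
std::map<CameraBufferStub*, int> gPreviewBuffersAvailable;

void setRefCount(CameraBufferStub* buf, int refCount) {
    gPreviewBuffersAvailable[buf] = refCount; // insert or replace
}

int getRefCount(CameraBufferStub* buf) {
    auto it = gPreviewBuffersAvailable.find(buf);
    return it == gPreviewBuffersAvailable.end() ? 0 : it->second;
}
```

The overwrite semantics matter: re-queuing the same buffer simply resets its count instead of accumulating stale entries.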
Next, let's walk through the implementation of sendFrameToSubscribers.
status_t BaseCameraAdapter::sendFrameToSubscribers(CameraFrame *frame)
{
status_t ret = NO_ERROR;
unsigned int mask;
if ( NULL == frame )
{
CAMHAL_LOGEA("Invalid CameraFrame");
return -EINVAL;
}
for( mask = 1; mask < CameraFrame::ALL_FRAMES; mask <<= 1){
if( mask & frame->mFrameMask ){
switch( mask ){
case CameraFrame::IMAGE_FRAME:
{
#if PPM_INSTRUMENTATION || PPM_INSTRUMENTATION_ABS
CameraHal::PPM("Shot to Jpeg: ", &mStartCapture);
#endif
ret = __sendFrameToSubscribers(frame, &mImageSubscribers, CameraFrame::IMAGE_FRAME);
}
break;
case CameraFrame::RAW_FRAME:
{
ret = __sendFrameToSubscribers(frame, &mRawSubscribers, CameraFrame::RAW_FRAME);
}
break;
case CameraFrame::PREVIEW_FRAME_SYNC:
{
ret = __sendFrameToSubscribers(frame, &mFrameSubscribers, CameraFrame::PREVIEW_FRAME_SYNC);
}
break;
case CameraFrame::SNAPSHOT_FRAME:
{
ret = __sendFrameToSubscribers(frame, &mSnapshotSubscribers, CameraFrame::SNAPSHOT_FRAME);
}
break;
case CameraFrame::VIDEO_FRAME_SYNC:
{
ret = __sendFrameToSubscribers(frame, &mVideoSubscribers, CameraFrame::VIDEO_FRAME_SYNC);
}
break;
case CameraFrame::FRAME_DATA_SYNC:
{
ret = __sendFrameToSubscribers(frame, &mFrameDataSubscribers, CameraFrame::FRAME_DATA_SYNC);
}
break;
case CameraFrame::REPROCESS_INPUT_FRAME:
{
ret = __sendFrameToSubscribers(frame, &mVideoInSubscribers, CameraFrame::REPROCESS_INPUT_FRAME);
}
break;
default:
CAMHAL_LOGEB("FRAMETYPE NOT SUPPORTED 0x%x", mask);
break;
}//SWITCH
frame->mFrameMask &= ~mask;
if (ret != NO_ERROR) {
goto EXIT;
}
}//IF
}//FOR
EXIT:
return ret;
}
status_t BaseCameraAdapter::__sendFrameToSubscribers(CameraFrame* frame,
KeyedVector<int, frame_callback> *subscribers,
CameraFrame::FrameType frameType)
{
size_t refCount = 0;
status_t ret = NO_ERROR;
frame_callback callback = NULL;
frame->mFrameType = frameType;
if ( (frameType == CameraFrame::PREVIEW_FRAME_SYNC) ||
(frameType == CameraFrame::VIDEO_FRAME_SYNC) ||
(frameType == CameraFrame::SNAPSHOT_FRAME) ){
if (mFrameQueue.size() > 0){
CameraFrame *lframe = (CameraFrame *)mFrameQueue.valueFor(frame->mBuffer);
frame->mYuv[0] = lframe->mYuv[0];
frame->mYuv[1] = frame->mYuv[0] + (frame->mLength + frame->mOffset)*2/3;
}
else{
CAMHAL_LOGDA("Empty Frame Queue");
return -EINVAL;
}
}
if (NULL != subscribers) {
refCount = getFrameRefCount(frame->mBuffer, frameType); // this refCount drives the loop below that looks up and invokes the corresponding callback methods
if (refCount == 0) {
CAMHAL_LOGDA("Invalid ref count of 0");
return -EINVAL;
}
if (refCount > subscribers->size()) {
CAMHAL_LOGEB("Invalid ref count for frame type: 0x%x", frameType);
return -EINVAL;
}
CAMHAL_LOGVB("Type of Frame: 0x%x address: 0x%x refCount start %d",
frame->mFrameType,
( uint32_t ) frame->mBuffer,
refCount);
for ( unsigned int i = 0 ; i < refCount; i++ ) {
frame->mCookie = ( void * ) subscribers->keyAt(i);
callback = (frame_callback) subscribers->valueAt(i);
if (!callback) {
CAMHAL_LOGEB("callback not set for frame type: 0x%x", frameType);
return -EINVAL;
}
callback(frame);
}
} else {
CAMHAL_LOGEA("Subscribers is null??");
return -EINVAL;
}
return ret;
}
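The dispatch loop above, which stores the subscriber's cookie into the frame and then invokes that subscriber's callback, can be reduced to a few lines. All names here are illustrative stand-ins for CameraFrame, frame_callback and the KeyedVector:

```cpp
#include <cassert>
#include <map>
#include <vector>

struct FrameStub { void* cookie = nullptr; };
using FrameCb = void (*)(FrameStub*);

std::vector<void*> gDelivered; // records the cookie used for each delivery

void displayCallbackStub(FrameStub* f) { gDelivered.push_back(f->cookie); }

// Same shape as the loop in __sendFrameToSubscribers.
void dispatch(FrameStub* frame, const std::map<void*, FrameCb>& subscribers) {
    for (const auto& entry : subscribers) {
        frame->cookie = entry.first; // subscribers->keyAt(i)
        entry.second(frame);         // callback(frame)
    }
}
```

The cookie is how the callee (e.g. the display adapter) recovers its own object pointer inside a plain C-style callback.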
We will leave the rest aside for now, but where the callback comes from has to be made clear.
When the DisplayAdapter was instantiated earlier there was this step: 3. mDisplayAdapter->setFrameProvider(mCameraAdapter) // the key step mentioned before.
Let's look at how setFrameProvider is implemented:
int ANativeWindowDisplayAdapter::setFrameProvider(FrameNotifier *frameProvider)
{
LOG_FUNCTION_NAME;
// Check for NULL pointer
if ( !frameProvider ) {
CAMHAL_LOGEA("NULL passed for frame provider");
LOG_FUNCTION_NAME_EXIT;
return BAD_VALUE;
}
//Release any previous frame providers
if ( NULL != mFrameProvider ) {
delete mFrameProvider;
}
/** Dont do anything here, Just save the pointer for use when display is
actually enabled or disabled
*/
mFrameProvider = new FrameProvider(frameProvider, this, frameCallbackRelay); // instantiate a FrameProvider; one of its parameters is crucial: frameCallbackRelay, whose definition is given below
LOG_FUNCTION_NAME_EXIT;
return NO_ERROR;
}
void ANativeWindowDisplayAdapter::frameCallbackRelay(CameraFrame* caFrame)
{
if ( NULL != caFrame )
{
if ( NULL != caFrame->mCookie )
{
ANativeWindowDisplayAdapter *da = (ANativeWindowDisplayAdapter*) caFrame->mCookie;
da->frameCallback(caFrame);
}
else
{
CAMHAL_LOGEB("Invalid Cookie in Camera Frame = %p, Cookie = %p", caFrame, caFrame->mCookie);
}
}
else
{
CAMHAL_LOGEB("Invalid Camera Frame = %p", caFrame);
}
}
void ANativeWindowDisplayAdapter::frameCallback(CameraFrame* caFrame)
{
///Call queueBuffer of overlay in the context of the callback thread
DisplayFrame df;
df.mBuffer = caFrame->mBuffer;
df.mType = (CameraFrame::FrameType) caFrame->mFrameType;
df.mOffset = caFrame->mOffset;
df.mWidthStride = caFrame->mAlignment;
df.mLength = caFrame->mLength;
df.mWidth = caFrame->mWidth;
df.mHeight = caFrame->mHeight;
PostFrame(df);
}
The callback is registered here and then waits for frame data. It is well worth looking at the FrameProvider constructor to understand how other code ends up invoking this callback:
FrameProvider(FrameNotifier *fn, void* cookie, frame_callback frameCallback)
    : mFrameNotifier(fn), mCookie(cookie), mFrameCallback(frameCallback) { }
This constructor is interesting: it has an empty body and merely initializes three members from the parameters passed in:
1. mFrameNotifier(fn) // mFrameNotifier is the CameraAdapter
2. mCookie(cookie)
3. mFrameCallback(frameCallback) // mFrameCallback points at the callback method we defined above
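A minimal reproduction of that constructor pattern, with placeholder types (MiniFrameProvider is not the real class, just the same shape):

```cpp
#include <cassert>

struct NotifierStub {};
using frame_callback_t = void (*)(void*);

// All the work happens in the member initializer list; the body is empty,
// exactly like FrameProvider's constructor.
class MiniFrameProvider {
public:
    MiniFrameProvider(NotifierStub* fn, void* cookie, frame_callback_t cb)
        : mFrameNotifier(fn), mCookie(cookie), mFrameCallback(cb) {} // no body
    NotifierStub* mFrameNotifier;
    void* mCookie;
    frame_callback_t mFrameCallback;
};

void relayStub(void*) {} // stands in for frameCallbackRelay
```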
Next we need to go into the cameraPreviewInitialization method already mentioned inside startPreview.
/**
@brief Set preview mode related initialization
-> Camera Adapter set params
-> Allocate buffers
-> Set use buffers for preview
@param none
@return NO_ERROR
@todo Update function header with the different errors that are possible
*/
status_t CameraHal::cameraPreviewInitialization()
{
status_t ret = NO_ERROR;
CameraAdapter::BuffersDescriptor desc;
CameraFrame frame;
unsigned int required_buffer_count;
unsigned int max_queueble_buffers;
#if PPM_INSTRUMENTATION || PPM_INSTRUMENTATION_ABS
gettimeofday(&mStartPreview, NULL);
#endif
LOG_FUNCTION_NAME;
if (mPreviewInitializationDone) {
return NO_ERROR;
}
if ( mPreviewEnabled ){
CAMHAL_LOGDA("Preview already running");
LOG_FUNCTION_NAME_EXIT;
return ALREADY_EXISTS;
}
if ( NULL != mCameraAdapter ) {
ret = mCameraAdapter->setParameters(mParameters);
}
if ((mPreviewStartInProgress == false) && (mDisplayPaused == false)){
ret = mCameraAdapter->sendCommand(CameraAdapter::CAMERA_QUERY_RESOLUTION_PREVIEW,( int ) &frame);
if ( NO_ERROR != ret ){
CAMHAL_LOGEB("Error: CAMERA_QUERY_RESOLUTION_PREVIEW %d", ret);
return ret;
}
///Update the current preview width and height
mPreviewWidth = frame.mWidth;
mPreviewHeight = frame.mHeight;
}
///If we don't have the preview callback enabled and display adapter,
if(!mSetPreviewWindowCalled || (mDisplayAdapter.get() == NULL)){
CAMHAL_LOGD("Preview not started. Preview in progress flag set");
mPreviewStartInProgress = true;
ret = mCameraAdapter->sendCommand(CameraAdapter::CAMERA_SWITCH_TO_EXECUTING);
if ( NO_ERROR != ret ){
CAMHAL_LOGEB("Error: CAMERA_SWITCH_TO_EXECUTING %d", ret);
return ret;
}
return NO_ERROR;
}
if( (mDisplayAdapter.get() != NULL) && ( !mPreviewEnabled ) && ( mDisplayPaused ) )
{
CAMHAL_LOGDA("Preview is in paused state");
mDisplayPaused = false;
mPreviewEnabled = true;
if ( NO_ERROR == ret )
{
ret = mDisplayAdapter->pauseDisplay(mDisplayPaused);
if ( NO_ERROR != ret )
{
CAMHAL_LOGEB("Display adapter resume failed %x", ret);
}
}
//restart preview callbacks
if(mMsgEnabled & CAMERA_MSG_PREVIEW_FRAME)
{
mAppCallbackNotifier->enableMsgType (CAMERA_MSG_PREVIEW_FRAME);
}
signalEndImageCapture();
return ret;
}
required_buffer_count = atoi(mCameraProperties->get(CameraProperties::REQUIRED_PREVIEW_BUFS));
///Allocate the preview buffers
ret = allocPreviewBufs(mPreviewWidth, mPreviewHeight, mParameters.getPreviewFormat(), required_buffer_count, max_queueble_buffers);
if ( NO_ERROR != ret )
{
CAMHAL_LOGEA("Couldn't allocate buffers for Preview");
goto error;
}
if ( mMeasurementEnabled )
{
ret = mCameraAdapter->sendCommand(CameraAdapter::CAMERA_QUERY_BUFFER_SIZE_PREVIEW_DATA,
( int ) &frame,
required_buffer_count);
if ( NO_ERROR != ret )
{
return ret;
}
///Allocate the preview data buffers
ret = allocPreviewDataBufs(frame.mLength, required_buffer_count);
if ( NO_ERROR != ret ) {
CAMHAL_LOGEA("Couldn't allocate preview data buffers");
goto error;
}
if ( NO_ERROR == ret )
{
desc.mBuffers = mPreviewDataBuffers;
desc.mOffsets = mPreviewDataOffsets;
desc.mFd = mPreviewDataFd;
desc.mLength = mPreviewDataLength;
desc.mCount = ( size_t ) required_buffer_count;
desc.mMaxQueueable = (size_t) required_buffer_count;
mCameraAdapter->sendCommand(CameraAdapter::CAMERA_USE_BUFFERS_PREVIEW_DATA,
( int ) &desc);
}
}
///Pass the buffers to Camera Adapter
desc.mBuffers = mPreviewBuffers;
desc.mOffsets = mPreviewOffsets;
desc.mFd = mPreviewFd;
desc.mLength = mPreviewLength;
desc.mCount = ( size_t ) required_buffer_count;
desc.mMaxQueueable = (size_t) max_queueble_buffers;
ret = mCameraAdapter->sendCommand(CameraAdapter::CAMERA_USE_BUFFERS_PREVIEW,
( int ) &desc);
if ( NO_ERROR != ret )
{
CAMHAL_LOGEB("Failed to register preview buffers: 0x%x", ret);
freePreviewBufs();
return ret;
}
mAppCallbackNotifier->startPreviewCallbacks(mParameters, mPreviewBuffers, mPreviewOffsets, mPreviewFd, mPreviewLength, required_buffer_count);
///Start the callback notifier
ret = mAppCallbackNotifier->start();
if( ALREADY_EXISTS == ret )
{
//Already running, do nothing
CAMHAL_LOGDA("AppCallbackNotifier already running");
ret = NO_ERROR;
}
else if ( NO_ERROR == ret ) {
CAMHAL_LOGDA("Started AppCallbackNotifier..");
mAppCallbackNotifier->setMeasurements(mMeasurementEnabled);
}
else
{
CAMHAL_LOGDA("Couldn't start AppCallbackNotifier");
goto error;
}
if (ret == NO_ERROR) mPreviewInitializationDone = true;
return ret;
error:
CAMHAL_LOGEA("Performing cleanup after error");
//Do all the cleanup
freePreviewBufs();
mCameraAdapter->sendCommand(CameraAdapter::CAMERA_STOP_PREVIEW);
if(mDisplayAdapter.get() != NULL)
{
mDisplayAdapter->disableDisplay(false);
}
mAppCallbackNotifier->stop();
mPreviewStartInProgress = false;
mPreviewEnabled = false;
LOG_FUNCTION_NAME_EXIT;
return ret;
}
Now let's look at the implementation of this call: mAppCallbackNotifier->enableMsgType(CAMERA_MSG_PREVIEW_FRAME);
status_t AppCallbackNotifier::enableMsgType(int32_t msgType)
{
if( msgType & CAMERA_MSG_PREVIEW_FRAME ) {
mFrameProvider->enableFrameNotification(CameraFrame::PREVIEW_FRAME_SYNC);
}
if( msgType & CAMERA_MSG_POSTVIEW_FRAME ) {
mFrameProvider->enableFrameNotification(CameraFrame::SNAPSHOT_FRAME);
}
if(msgType & CAMERA_MSG_RAW_IMAGE) {
mFrameProvider->enableFrameNotification(CameraFrame::RAW_FRAME);
}
return NO_ERROR;
}
int FrameProvider::enableFrameNotification(int32_t frameTypes)
{
LOG_FUNCTION_NAME;
status_t ret = NO_ERROR;
///Enable the frame notification to CameraAdapter (which implements FrameNotifier interface)
mFrameNotifier->enableMsgType(frameTypes<<MessageNotifier::FRAME_BIT_FIELD_POSITION, mFrameCallback, NULL, mCookie);
LOG_FUNCTION_NAME_EXIT;
return ret;
}
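The shift used above (frameTypes << MessageNotifier::FRAME_BIT_FIELD_POSITION) and the shift-and-mask that undoes it in enableMsgType can be shown in isolation. The positions and mask below are illustrative, not the real MessageNotifier constants:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative bit-field layout: frame types in the low half-word,
// event types in the high half-word.
constexpr int FRAME_BIT_FIELD_POSITION = 0;
constexpr int EVENT_BIT_FIELD_POSITION = 16;
constexpr uint32_t FIELD_MASK = 0xFFFF;

// Sender side: shift the frame type into the frame bit field.
uint32_t packFrameMsg(uint32_t frameType) {
    return frameType << FRAME_BIT_FIELD_POSITION;
}

// Receiver side: shift it back out and mask off the other field.
uint32_t unpackFrameMsg(uint32_t msgs) {
    return (msgs >> FRAME_BIT_FIELD_POSITION) & FIELD_MASK;
}

uint32_t unpackEventMsg(uint32_t msgs) {
    return (msgs >> EVENT_BIT_FIELD_POSITION) & FIELD_MASK;
}
```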
This enableMsgType is the very method mentioned earlier: it adds the callback under the corresponding key.
The mFrameNotifier here is a FrameNotifier object, and FrameNotifier inherits from MessageNotifier.
Meanwhile BaseCameraAdapter inherits from CameraAdapter, which in turn inherits from FrameNotifier, so the enableMsgType invoked through mFrameNotifier is a virtual function,
and the call ultimately resolves to the enableMsgType defined in BaseCameraAdapter. Let's look at its implementation:
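That virtual-dispatch chain boils down to an ordinary virtual call through a base-class pointer; a miniature version (the Mini* names are illustrative):

```cpp
#include <cassert>
#include <string>

// Base interface, like FrameNotifier's virtual enableMsgType.
struct MiniFrameNotifier {
    virtual ~MiniFrameNotifier() = default;
    virtual std::string enableMsgType() = 0;
};

// Derived class providing the real implementation, like BaseCameraAdapter.
struct MiniBaseCameraAdapter : MiniFrameNotifier {
    std::string enableMsgType() override { return "BaseCameraAdapter"; }
};

// Caller only sees the base type, yet the derived override runs.
std::string callThroughBase(MiniFrameNotifier* n) {
    return n->enableMsgType(); // resolved at runtime to the derived override
}
```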
void BaseCameraAdapter::enableMsgType(int32_t msgs, frame_callback callback, event_callback eventCb, void* cookie)
{
Mutex::Autolock lock(mSubscriberLock);
LOG_FUNCTION_NAME;
int32_t frameMsg = ((msgs >> MessageNotifier::FRAME_BIT_FIELD_POSITION) & EVENT_MASK);
int32_t eventMsg = ((msgs >> MessageNotifier::EVENT_BIT_FIELD_POSITION) & EVENT_MASK);
if ( frameMsg != 0 )
{
CAMHAL_LOGVB("Frame message type id=0x%x subscription request", frameMsg);
switch ( frameMsg )
{
case CameraFrame::PREVIEW_FRAME_SYNC:
mFrameSubscribers.add((int) cookie, callback);
break;
case CameraFrame::FRAME_DATA_SYNC:
mFrameDataSubscribers.add((int) cookie, callback);
break;
case CameraFrame::SNAPSHOT_FRAME:
mSnapshotSubscribers.add((int) cookie, callback);
break;
case CameraFrame::IMAGE_FRAME:
mImageSubscribers.add((int) cookie, callback);
break;
case CameraFrame::RAW_FRAME:
mRawSubscribers.add((int) cookie, callback);
break;
case CameraFrame::VIDEO_FRAME_SYNC:
mVideoSubscribers.add((int) cookie, callback);
break;
case CameraFrame::REPROCESS_INPUT_FRAME:
mVideoInSubscribers.add((int) cookie, callback);
break;
default:
CAMHAL_LOGEA("Frame message type id=0x%x subscription no supported yet!", frameMsg);
break;
}
}
if ( eventMsg != 0)
{
CAMHAL_LOGVB("Event message type id=0x%x subscription request", eventMsg);
if ( CameraHalEvent::ALL_EVENTS == eventMsg )
{
mFocusSubscribers.add((int) cookie, eventCb);
mShutterSubscribers.add((int) cookie, eventCb);
mZoomSubscribers.add((int) cookie, eventCb);
mMetadataSubscribers.add((int) cookie, eventCb);
}
else
{
CAMHAL_LOGEA("Event message type id=0x%x subscription no supported yet!", eventMsg);
}
}
LOG_FUNCTION_NAME_EXIT;
}
Here mFrameSubscribers.add((int) cookie, callback) associates the mFrameCallback function with its key.
That is why, earlier, callback = (frame_callback) subscribers->valueAt(i)
could retrieve the callback: the association was established here, and the frame data therefore flows on to the display through the method analyzed above:
void ANativeWindowDisplayAdapter::frameCallback(CameraFrame* caFrame)
{
///Call queueBuffer of overlay in the context of the callback thread
DisplayFrame df;
df.mBuffer = caFrame->mBuffer;
df.mType = (CameraFrame::FrameType) caFrame->mFrameType;
df.mOffset = caFrame->mOffset;
df.mWidthStride = caFrame->mAlignment;
df.mLength = caFrame->mLength;
df.mWidth = caFrame->mWidth;
df.mHeight = caFrame->mHeight;
PostFrame(df);// fill in the DisplayFrame structure and call PostFrame to display it
}
PostFrame is now the main thing to study: once the data has been packed into a DisplayFrame structure, how does it actually get displayed?
status_t ANativeWindowDisplayAdapter::PostFrame(ANativeWindowDisplayAdapter::DisplayFrame &dispFrame)
{
status_t ret = NO_ERROR;
uint32_t actualFramesWithDisplay = 0;
android_native_buffer_t *buffer = NULL;
GraphicBufferMapper &mapper = GraphicBufferMapper::get();
int i;
///@todo Do cropping based on the stabilized frame coordinates
///@todo Insert logic to drop frames here based on refresh rate of
///display or rendering rate whichever is lower
///Queue the buffer to overlay
if ( NULL == mANativeWindow ) {
return NO_INIT;
}
if (!mBuffers || !dispFrame.mBuffer) {
CAMHAL_LOGEA("NULL sent to PostFrame");
return BAD_VALUE;
}
for ( i = 0; i < mBufferCount; i++ )
{
if ( dispFrame.mBuffer == &mBuffers[i] )
{
break;
}
}
mFramesType.add( (int)mBuffers[i].opaque ,dispFrame.mType );
if ( mDisplayState == ANativeWindowDisplayAdapter::DISPLAY_STARTED &&
(!mPaused || CameraFrame::CameraFrame::SNAPSHOT_FRAME == dispFrame.mType) &&
!mSuspend)
{
Mutex::Autolock lock(mLock);
uint32_t xOff = (dispFrame.mOffset% PAGE_SIZE);
uint32_t yOff = (dispFrame.mOffset / PAGE_SIZE);
// Set crop only if current x and y offsets do not match with frame offsets
if((mXOff!=xOff) || (mYOff!=yOff))
{
CAMHAL_LOGDB("Offset %d xOff = %d, yOff = %d", dispFrame.mOffset, xOff, yOff);
uint8_t bytesPerPixel;
///Calculate bytes per pixel based on the pixel format
if(strcmp(mPixelFormat, (const char *) CameraParameters::PIXEL_FORMAT_YUV422I) == 0)
{
bytesPerPixel = 2;
}
else if(strcmp(mPixelFormat, (const char *) CameraParameters::PIXEL_FORMAT_RGB565) == 0)
{
bytesPerPixel = 2;
}
else if(strcmp(mPixelFormat, (const char *) CameraParameters::PIXEL_FORMAT_YUV420SP) == 0)
{
bytesPerPixel = 1;
}
else
{
bytesPerPixel = 1;
}
CAMHAL_LOGVB(" crop.left = %d crop.top = %d crop.right = %d crop.bottom = %d",
xOff/bytesPerPixel, yOff , (xOff/bytesPerPixel)+mPreviewWidth, yOff+mPreviewHeight);
// We'll ignore any errors here, if the surface is
// already invalid, we'll know soon enough.
mANativeWindow->set_crop(mANativeWindow, xOff/bytesPerPixel, yOff,
(xOff/bytesPerPixel)+mPreviewWidth, yOff+mPreviewHeight);
///Update the current x and y offsets
mXOff = xOff;
mYOff = yOff;
}
{
buffer_handle_t *handle = (buffer_handle_t *) mBuffers[i].opaque;
// unlock buffer before sending to display
mapper.unlock(*handle);
ret = mANativeWindow->enqueue_buffer(mANativeWindow, handle);
}
if ( NO_ERROR != ret ) {
CAMHAL_LOGE("Surface::queueBuffer returned error %d", ret);
}
mFramesWithCameraAdapterMap.removeItem((buffer_handle_t *) dispFrame.mBuffer->opaque);
// HWComposer has no minimum buffer requirement. We should be able to dequeue
// the buffer immediately
TIUTILS::Message msg;
mDisplayQ.put(&msg);
#if PPM_INSTRUMENTATION || PPM_INSTRUMENTATION_ABS
if ( mMeasureStandby )
{
CameraHal::PPM("Standby to first shot: Sensor Change completed - ", &mStandbyToShot);
mMeasureStandby = false;
}
else if (CameraFrame::CameraFrame::SNAPSHOT_FRAME == dispFrame.mType)
{
CameraHal::PPM("Shot to snapshot: ", &mStartCapture);
mShotToShot = true;
}
else if ( mShotToShot )
{
CameraHal::PPM("Shot to shot: ", &mStartCapture);
mShotToShot = false;
}
#endif
}
else
{
Mutex::Autolock lock(mLock);
buffer_handle_t *handle = (buffer_handle_t *) mBuffers[i].opaque;
// unlock buffer before giving it up
mapper.unlock(*handle);
// cancel buffer and dequeue another one
ret = mANativeWindow->cancel_buffer(mANativeWindow, handle);
if ( NO_ERROR != ret ) {
CAMHAL_LOGE("Surface::cancelBuffer returned error %d", ret);
}
mFramesWithCameraAdapterMap.removeItem((buffer_handle_t *) dispFrame.mBuffer->opaque);
TIUTILS::Message msg;
mDisplayQ.put(&msg);
ret = NO_ERROR;
}
return ret;
}
This display path is still fairly involved, and it deserves more study later.
To be continued...
Android Camera Data Flow Analysis (overlay mode)
Continuing inside the surfaceChanged() method shown earlier:
if (mCameraState == PREVIEW_STOPPED) {// the camera is being opened for the first time, so call startPreview
startPreview(true);
startFaceDetection();
} else {// the display changed while the camera was already open (e.g. a portrait/landscape switch), so we only need to set the preview display again
if (Util.getDisplayRotation(this) != mDisplayRotation) {
setDisplayOrientation();
}
if (holder.isCreating()) {
// Set preview display if the surface is being created and preview
// was already started. That means preview display was set to null
// and we need to set it now.
setPreviewDisplay(holder);
}
}
// If first time initialization is not finished, send a message to do
// it later. We want to finish surfaceChanged as soon as possible to let
// user see preview first.
if (!mFirstTimeInitialized) {
mHandler.sendEmptyMessage(FIRST_TIME_INIT);
} else {
initializeSecondTime();
}
SurfaceView preview = (SurfaceView) findViewById(R.id.camera_preview);
CameraInfo info = CameraHolder.instance().getCameraInfo()[mCameraId];
boolean mirror = (info.facing == CameraInfo.CAMERA_FACING_FRONT);
int displayRotation = Util.getDisplayRotation(this);
int displayOrientation = Util.getDisplayOrientation(displayRotation, mCameraId);
mTouchManager.initialize(preview.getHeight() / 3, preview.getHeight() / 3,
preview, this, mirror, displayOrientation);
}
From the code above we can see that setPreviewDisplay must be called whenever the surface changes. As we will learn later, the startPreview method also calls setPreviewDisplay right before actually starting the preview. A lot of initialization happens inside setPreviewDisplay, and it is also where the decision to use overlay mode is made. Let's look at the startPreview method first.
private void startPreview(boolean updateAll) {
if (mPausing || isFinishing()) return;
mFocusManager.resetTouchFocus();
mCameraDevice.setErrorCallback(mErrorCallback);
// If we're previewing already, stop the preview first (this will blank
// the screen).
if (mCameraState != PREVIEW_STOPPED) stopPreview();
setPreviewDisplay(mSurfaceHolder);
setDisplayOrientation();
if (!mSnapshotOnIdle) {
// If the focus mode is continuous autofocus, call cancelAutoFocus to
// resume it because it may have been paused by autoFocus call.
if (Parameters.FOCUS_MODE_CONTINUOUS_PICTURE.equals(mFocusManager.getFocusMode())) {
mCameraDevice.cancelAutoFocus();
}
mFocusManager.setAeAwbLock(false); // Unlock AE and AWB.
}
if ( updateAll ) {
Log.v(TAG, "Updating all parameters!");
setCameraParameters(UPDATE_PARAM_INITIALIZE | UPDATE_PARAM_ZOOM | UPDATE_PARAM_PREFERENCE);
} else {
setCameraParameters(UPDATE_PARAM_MODE);
}
//setCameraParameters(UPDATE_PARAM_ALL);
// Inform the main thread to go on with the UI initialization.
if (mCameraPreviewThread != null) {
synchronized (mCameraPreviewThread) {
mCameraPreviewThread.notify();
}
}
try {
Log.v(TAG, "startPreview");
mCameraDevice.startPreview();
} catch (Throwable ex) {
closeCamera();
throw new RuntimeException("startPreview failed", ex);
}
mZoomState = ZOOM_STOPPED;
setCameraState(IDLE);
mFocusManager.onPreviewStarted();
if ( mTempBracketingEnabled ) {
mFocusManager.setTempBracketingState(FocusManager.TempBracketingStates.ACTIVE);
}
if (mSnapshotOnIdle) {
mHandler.post(mDoSnapRunnable);
}
}
As you can see above, setPreviewDisplay is called first, and finally mCameraDevice.startPreview() starts the preview.
The call path is: app --> frameworks --> JNI --> camera client --> camera service --> hardware interface --> HAL
1. At the app layer, setPreviewDisplay is initially called with a SurfaceHolder.
2. By the JNI layer, the setPreviewDisplay parameter has already become a Surface.
3. At the camera service layer:
sp<IBinder> binder(surface != 0 ? surface->asBinder() : 0);
sp<ANativeWindow> window(surface);
return setPreviewWindow(binder, window);
This conversion calls an overload of the same name; by this point the parameters have become an IBinder and an ANativeWindow.
4. The hardware interface's setPreviewWindow(window) is then called with a single ANativeWindow parameter.
5. At the camerahal_module shim the type changes again; as the definition below shows, the parameter is now a preview_stream_ops structure:
int camera_set_preview_window(struct camera_device * device, struct preview_stream_ops *window)
The parameter type keeps changing along the way, but what travels from the app layer all the way down is the same underlying object, like someone changing clothes: still the same person.
Now let's look directly at the HAL-layer implementation.
/**
@brief Sets ANativeWindow object.
Preview buffers provided to CameraHal via this object. DisplayAdapter will be interfacing with it
to render buffers to display.
@param[in] window The ANativeWindow object created by Surface flinger
@return NO_ERROR If the ANativeWindow object passes validation criteria
@todo Define validation criteria for ANativeWindow object. Define error codes for scenarios
*/
status_t CameraHal::setPreviewWindow(struct preview_stream_ops *window)
{
status_t ret = NO_ERROR;
CameraAdapter::BuffersDescriptor desc;
LOG_FUNCTION_NAME;
mSetPreviewWindowCalled = true;
///If the Camera service passes a null window, we destroy existing window and free the DisplayAdapter
if(!window)// window is NULL here: overlay mode is not being used, so there is no need to create a DisplayAdapter
{
if(mDisplayAdapter.get() != NULL)
{
///NULL window passed, destroy the display adapter if present
CAMHAL_LOGD("NULL window passed, destroying display adapter");
mDisplayAdapter.clear();
///@remarks If there was a window previously existing, we usually expect another valid window to be passed by the client
///@remarks so, we will wait until it passes a valid window to begin the preview again
mSetPreviewWindowCalled = false;
}
CAMHAL_LOGD("NULL ANativeWindow passed to setPreviewWindow");
return NO_ERROR;
}else if(mDisplayAdapter.get() == NULL)// window is non-NULL but no DisplayAdapter has been created for overlay mode yet, so create one
{
// Need to create the display adapter since it has not been created
// Create display adapter
mDisplayAdapter = new ANativeWindowDisplayAdapter();
ret = NO_ERROR;
if(!mDisplayAdapter.get() || ((ret=mDisplayAdapter->initialize())!=NO_ERROR))
{
if(ret!=NO_ERROR)
{
mDisplayAdapter.clear();
CAMHAL_LOGEA("DisplayAdapter initialize failed");
LOG_FUNCTION_NAME_EXIT;
return ret;
}
else
{
CAMHAL_LOGEA("Couldn't create DisplayAdapter");
LOG_FUNCTION_NAME_EXIT;
return NO_MEMORY;
}
}
// DisplayAdapter needs to know where to get the CameraFrames from in order to display
// Since CameraAdapter is the one that provides the frames, set it as the frame provider for DisplayAdapter
mDisplayAdapter->setFrameProvider(mCameraAdapter);
// Any dynamic errors that happen during the camera use case has to be propagated back to the application
// via CAMERA_MSG_ERROR. AppCallbackNotifier is the class that notifies such errors to the application
// Set it as the error handler for the DisplayAdapter
mDisplayAdapter->setErrorHandler(mAppCallbackNotifier.get());
// Update the display adapter with the new window that is passed from CameraService
ret = mDisplayAdapter->setPreviewWindow(window);
if(ret!=NO_ERROR)
{
CAMHAL_LOGEB("DisplayAdapter setPreviewWindow returned error %d", ret);
}
if(mPreviewStartInProgress)
{
CAMHAL_LOGDA("setPreviewWindow called when preview running");
// Start the preview since the window is now available
ret = startPreview();
}
} else {// window is non-NULL and the DisplayAdapter already exists, so just attach the new window to the existing DisplayAdapter
// Update the display adapter with the new window that is passed from CameraService
ret = mDisplayAdapter->setPreviewWindow(window);
if ( (NO_ERROR == ret) && previewEnabled() ) {
restartPreview();
} else if (ret == ALREADY_EXISTS) {
// ALREADY_EXISTS should be treated as a noop in this case
ret = NO_ERROR;
}
}
LOG_FUNCTION_NAME_EXIT;
return ret;
}
Let's focus on the steps taken to create the DisplayAdapter:
1. Instantiate an ANativeWindowDisplayAdapter object
2. mDisplayAdapter->initialize()
3. mDisplayAdapter->setFrameProvider(mCameraAdapter)// this step is key, we will come back to it
4. mDisplayAdapter->setErrorHandler(mAppCallbackNotifier.get())
5. mDisplayAdapter->setPreviewWindow(window)
After these steps comes startPreview.
/**
@brief Start preview mode.
@param none
@return NO_ERROR Camera switched to VF mode
@todo Update function header with the different errors that are possible
*/
status_t CameraHal::startPreview() {
LOG_FUNCTION_NAME;
// When tunneling is enabled during VTC, startPreview happens in 2 steps:
// When the application sends the command CAMERA_CMD_PREVIEW_INITIALIZATION,
// cameraPreviewInitialization() is called, which in turn causes the CameraAdapter
// to move from loaded to idle state. And when the application calls startPreview,
// the CameraAdapter moves from idle to executing state.
//
// If the application calls startPreview() without sending the command
// CAMERA_CMD_PREVIEW_INITIALIZATION, then the function cameraPreviewInitialization()
// AND startPreview() are executed. In other words, if the application calls
// startPreview() without sending the command CAMERA_CMD_PREVIEW_INITIALIZATION,
// then the CameraAdapter moves from loaded to idle to executing state in one shot.
status_t ret = cameraPreviewInitialization();
// The flag mPreviewInitializationDone is set to true at the end of the function
// cameraPreviewInitialization(). Therefore, if everything goes alright, then the
// flag will be set. Sometimes, the function cameraPreviewInitialization() may
// return prematurely if all the resources are not available for starting preview.
// For example, if the preview window is not set, then it would return NO_ERROR.
// Under such circumstances, one should return from startPreview as well and should
// not continue execution. That is why, we check the flag and not the return value.
if (!mPreviewInitializationDone) return ret;
// Once startPreview is called, there is no need to continue to remember whether
// the function cameraPreviewInitialization() was called earlier or not. And so
// the flag mPreviewInitializationDone is reset here. Plus, this preserves the
// current behavior of startPreview under the circumstances where the application
// calls startPreview twice or more.
mPreviewInitializationDone = false;
///Enable the display adapter if present, actual overlay enable happens when we post the buffer
if(mDisplayAdapter.get() != NULL) {
CAMHAL_LOGDA("Enabling display");
int width, height;
mParameters.getPreviewSize(&width, &height);
#if PPM_INSTRUMENTATION || PPM_INSTRUMENTATION_ABS
ret = mDisplayAdapter->enableDisplay(width, height, &mStartPreview);
#else
ret = mDisplayAdapter->enableDisplay(width, height, NULL);
#endif
if ( ret != NO_ERROR ) {
CAMHAL_LOGEA("Couldn't enable display");
// FIXME: At this stage mStateSwitchLock is locked and unlock is supposed to be called
// only from mCameraAdapter->sendCommand(CameraAdapter::CAMERA_START_PREVIEW)
// below. But this will never happen because of goto error. Thus at next
// startPreview() call CameraHAL will be deadlocked.
// Need to revisit mStateSwitch lock, for now just abort the process.
CAMHAL_ASSERT_X(false,
"At this stage mCameraAdapter->mStateSwitchLock is still locked, "
"deadlock is guaranteed");
goto error;
}
}
///Send START_PREVIEW command to adapter
CAMHAL_LOGDA("Starting CameraAdapter preview mode");
ret = mCameraAdapter->sendCommand(CameraAdapter::CAMERA_START_PREVIEW);
if(ret!=NO_ERROR) {
CAMHAL_LOGEA("Couldn't start preview w/ CameraAdapter");
goto error;
}
CAMHAL_LOGDA("Started preview");
mPreviewEnabled = true;
mPreviewStartInProgress = false;
return ret;
error:
CAMHAL_LOGEA("Performing cleanup after error");
//Do all the cleanup
freePreviewBufs();
mCameraAdapter->sendCommand(CameraAdapter::CAMERA_STOP_PREVIEW);
if(mDisplayAdapter.get() != NULL) {
mDisplayAdapter->disableDisplay(false);
}
mAppCallbackNotifier->stop();
mPreviewStartInProgress = false;
mPreviewEnabled = false;
LOG_FUNCTION_NAME_EXIT;
return ret;
}
The cameraPreviewInitialization() call highlighted above is also critical; as mentioned earlier, it will be covered later if needed.
"Enable the display adapter if present, actual overlay enable happens when we post the buffer" tells us that if the display adapter is not null it gets enabled here, and overlay mode is up and running.
Next, let's see how the data captured by the driver is actually handled. startPreview goes through CameraHal --> CameraAdapter --> V4LCameraAdapter down to the V4L2-level startPreview; here is its implementation.
status_t V4LCameraAdapter::startPreview()
{
status_t ret = NO_ERROR;
LOG_FUNCTION_NAME;
Mutex::Autolock lock(mPreviewBufsLock);
if(mPreviewing) {
ret = BAD_VALUE;
goto EXIT;
}
for (int i = 0; i < mPreviewBufferCountQueueable; i++) {
mVideoInfo->buf.index = i;
mVideoInfo->buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
mVideoInfo->buf.memory = V4L2_MEMORY_MMAP;
ret = v4lIoctl(mCameraHandle, VIDIOC_QBUF, &mVideoInfo->buf);// queue the buffer to the driver
if (ret < 0) {
CAMHAL_LOGEA("VIDIOC_QBUF Failed");
goto EXIT;
}
nQueued++;
}
ret = v4lStartStreaming();
// Create and start preview thread for receiving buffers from V4L Camera
if(!mCapturing) {
mPreviewThread = new PreviewThread(this);// start the PreviewThread
CAMHAL_LOGDA("Created preview thread");
}
//Update the flag to indicate we are previewing
mPreviewing = true;
mCapturing = false;
EXIT:
LOG_FUNCTION_NAME_EXIT;
return ret;
}
int V4LCameraAdapter::previewThread()
{
status_t ret = NO_ERROR;
int width, height;
CameraFrame frame;
void *y_uv[2];
int index = 0;
int stride = 4096;
char *fp = NULL;
mParams.getPreviewSize(&width, &height);
if (mPreviewing) {
fp = this->GetFrame(index);
if(!fp) {
ret = BAD_VALUE;
goto EXIT;
}
CameraBuffer *buffer = mPreviewBufs.keyAt(index);// get the CameraBuffer
CameraFrame *lframe = (CameraFrame *)mFrameQueue.valueFor(buffer);// get the CameraFrame
if (!lframe) {
ret = BAD_VALUE;
goto EXIT;
}
debugShowFPS();
if ( mFrameSubscribers.size() == 0 ) {
ret = BAD_VALUE;
goto EXIT;
}
y_uv[0] = (void*) lframe->mYuv[0];
//y_uv[1] = (void*) lframe->mYuv[1];
//y_uv[1] = (void*) (lframe->mYuv[0] + height*stride);
convertYUV422ToNV12Tiler ( (unsigned char*)fp, (unsigned char*)y_uv[0], width, height);// convert the data
CAMHAL_LOGVB("##...index= %d.;camera buffer= 0x%x; y= 0x%x; UV= 0x%x.",index, buffer, y_uv[0], y_uv[1] );
#ifdef SAVE_RAW_FRAMES
unsigned char* nv12_buff = (unsigned char*) malloc(width*height*3/2);
//Convert yuv422i to yuv420sp(NV12) & dump the frame to a file
convertYUV422ToNV12 ( (unsigned char*)fp, nv12_buff, width, height);
saveFile( nv12_buff, ((width*height)*3/2) );// if you want to save the data, save it here
free (nv12_buff);
#endif
// fill in the frame structure for later processing
frame.mFrameType = CameraFrame::PREVIEW_FRAME_SYNC;
frame.mBuffer = buffer;
frame.mLength = width*height*3/2;
frame.mAlignment = stride;
frame.mOffset = 0;
frame.mTimestamp = systemTime(SYSTEM_TIME_MONOTONIC);
frame.mFrameMask = (unsigned int)CameraFrame::PREVIEW_FRAME_SYNC;
if (mRecording)
{
frame.mFrameMask |= (unsigned int)CameraFrame::VIDEO_FRAME_SYNC;
mFramesWithEncoder++;
}
// This is the key point: whether the data is sent back via callbacks or displayed through overlay is decided by the calls below
ret = setInitFrameRefCount(frame.mBuffer, frame.mFrameMask);
if (ret != NO_ERROR) {
CAMHAL_LOGDB("Error in setInitFrameRefCount %d", ret);
} else {
ret = sendFrameToSubscribers(&frame);
}
}
EXIT:
return ret;
}
Now let's see what the setInitFrameRefCount method does.
int BaseCameraAdapter::setInitFrameRefCount(CameraBuffer * buf, unsigned int mask)
{
int ret = NO_ERROR;
unsigned int lmask;
LOG_FUNCTION_NAME;
if (buf == NULL)
{
return -EINVAL;
}
for( lmask = 1; lmask < CameraFrame::ALL_FRAMES; lmask <<= 1){
if( lmask & mask ){
switch( lmask ){
case CameraFrame::IMAGE_FRAME:
{
setFrameRefCount(buf, CameraFrame::IMAGE_FRAME, (int) mImageSubscribers.size());
}
break;
case CameraFrame::RAW_FRAME:
{
setFrameRefCount(buf, CameraFrame::RAW_FRAME, mRawSubscribers.size());
}
break;
case CameraFrame::PREVIEW_FRAME_SYNC:
{
setFrameRefCount(buf, CameraFrame::PREVIEW_FRAME_SYNC, mFrameSubscribers.size());// the corresponding callback method is stored under the matching key of mFrameSubscribers
}
break;
case CameraFrame::SNAPSHOT_FRAME:
{
setFrameRefCount(buf, CameraFrame::SNAPSHOT_FRAME, mSnapshotSubscribers.size());
}
break;
case CameraFrame::VIDEO_FRAME_SYNC:
{
setFrameRefCount(buf,CameraFrame::VIDEO_FRAME_SYNC, mVideoSubscribers.size());
}
break;
case CameraFrame::FRAME_DATA_SYNC:
{
setFrameRefCount(buf, CameraFrame::FRAME_DATA_SYNC, mFrameDataSubscribers.size());
}
break;
case CameraFrame::REPROCESS_INPUT_FRAME:
{
setFrameRefCount(buf,CameraFrame::REPROCESS_INPUT_FRAME, mVideoInSubscribers.size());
}
break;
default:
CAMHAL_LOGEB("FRAMETYPE NOT SUPPORTED 0x%x", lmask);
break;
}//SWITCH
mask &= ~lmask;
}//IF
}//FOR
LOG_FUNCTION_NAME_EXIT;
return ret;
}
The part I highlighted above is set up through enableMsgType, which performs mFrameSubscribers.add to attach the callback to the corresponding key and establish the association.
Likewise, disableMsgType performs mFrameSubscribers.removeItem. Exactly where enableMsgType and disableMsgType are called will be explained later.
void BaseCameraAdapter::setFrameRefCount(CameraBuffer * frameBuf, CameraFrame::FrameType frameType, int refCount)
{
LOG_FUNCTION_NAME;
switch ( frameType )
{
case CameraFrame::IMAGE_FRAME:
case CameraFrame::RAW_FRAME:
{
Mutex::Autolock lock(mCaptureBufferLock);
mCaptureBuffersAvailable.replaceValueFor(frameBuf, refCount);
}
break;
case CameraFrame::SNAPSHOT_FRAME:
{
Mutex::Autolock lock(mSnapshotBufferLock);
mSnapshotBuffersAvailable.replaceValueFor( ( unsigned int ) frameBuf, refCount);
}
break;
case CameraFrame::PREVIEW_FRAME_SYNC:
{
Mutex::Autolock lock(mPreviewBufferLock);
mPreviewBuffersAvailable.replaceValueFor(frameBuf, refCount);// my understanding: refCount is bound to frameBuf here, i.e. the camera buffer is stored in mPreviewBuffersAvailable under the corresponding key
}
break;
case CameraFrame::FRAME_DATA_SYNC:
{
Mutex::Autolock lock(mPreviewDataBufferLock);
mPreviewDataBuffersAvailable.replaceValueFor(frameBuf, refCount);
}
break;
case CameraFrame::VIDEO_FRAME_SYNC:
{
Mutex::Autolock lock(mVideoBufferLock);
mVideoBuffersAvailable.replaceValueFor(frameBuf, refCount);
}
break;
case CameraFrame::REPROCESS_INPUT_FRAME: {
Mutex::Autolock lock(mVideoInBufferLock);
mVideoInBuffersAvailable.replaceValueFor(frameBuf, refCount);
}
break;
default:
break;
};
LOG_FUNCTION_NAME_EXIT;
}
Next, let's look at how the sendFrameToSubscribers method is implemented.
status_t BaseCameraAdapter::sendFrameToSubscribers(CameraFrame *frame)
{
status_t ret = NO_ERROR;
unsigned int mask;
if ( NULL == frame )
{
CAMHAL_LOGEA("Invalid CameraFrame");
return -EINVAL;
}
for( mask = 1; mask < CameraFrame::ALL_FRAMES; mask <<= 1){
if( mask & frame->mFrameMask ){
switch( mask ){
case CameraFrame::IMAGE_FRAME:
{
#if PPM_INSTRUMENTATION || PPM_INSTRUMENTATION_ABS
CameraHal::PPM("Shot to Jpeg: ", &mStartCapture);
#endif
ret = __sendFrameToSubscribers(frame, &mImageSubscribers, CameraFrame::IMAGE_FRAME);
}
break;
case CameraFrame::RAW_FRAME:
{
ret = __sendFrameToSubscribers(frame, &mRawSubscribers, CameraFrame::RAW_FRAME);
}
break;
case CameraFrame::PREVIEW_FRAME_SYNC:
{
ret = __sendFrameToSubscribers(frame, &mFrameSubscribers, CameraFrame::PREVIEW_FRAME_SYNC);
}
break;
case CameraFrame::SNAPSHOT_FRAME:
{
ret = __sendFrameToSubscribers(frame, &mSnapshotSubscribers, CameraFrame::SNAPSHOT_FRAME);
}
break;
case CameraFrame::VIDEO_FRAME_SYNC:
{
ret = __sendFrameToSubscribers(frame, &mVideoSubscribers, CameraFrame::VIDEO_FRAME_SYNC);
}
break;
case CameraFrame::FRAME_DATA_SYNC:
{
ret = __sendFrameToSubscribers(frame, &mFrameDataSubscribers, CameraFrame::FRAME_DATA_SYNC);
}
break;
case CameraFrame::REPROCESS_INPUT_FRAME:
{
ret = __sendFrameToSubscribers(frame, &mVideoInSubscribers, CameraFrame::REPROCESS_INPUT_FRAME);
}
break;
default:
CAMHAL_LOGEB("FRAMETYPE NOT SUPPORTED 0x%x", mask);
break;
}//SWITCH
frame->mFrameMask &= ~mask;
if (ret != NO_ERROR) {
goto EXIT;
}
}//IF
}//FOR
EXIT:
return ret;
}
status_t BaseCameraAdapter::__sendFrameToSubscribers(CameraFrame* frame,
KeyedVector<int, frame_callback> *subscribers,
CameraFrame::FrameType frameType)
{
size_t refCount = 0;
status_t ret = NO_ERROR;
frame_callback callback = NULL;
frame->mFrameType = frameType;
if ( (frameType == CameraFrame::PREVIEW_FRAME_SYNC) ||
(frameType == CameraFrame::VIDEO_FRAME_SYNC) ||
(frameType == CameraFrame::SNAPSHOT_FRAME) ){
if (mFrameQueue.size() > 0){
CameraFrame *lframe = (CameraFrame *)mFrameQueue.valueFor(frame->mBuffer);
frame->mYuv[0] = lframe->mYuv[0];
frame->mYuv[1] = frame->mYuv[0] + (frame->mLength + frame->mOffset)*2/3;
}
else{
CAMHAL_LOGDA("Empty Frame Queue");
return -EINVAL;
}
}
if (NULL != subscribers) {
refCount = getFrameRefCount(frame->mBuffer, frameType);// with this refCount the corresponding callback methods can be found
if (refCount == 0) {
CAMHAL_LOGDA("Invalid ref count of 0");
return -EINVAL;
}
if (refCount > subscribers->size()) {
CAMHAL_LOGEB("Invalid ref count for frame type: 0x%x", frameType);
return -EINVAL;
}
CAMHAL_LOGVB("Type of Frame: 0x%x address: 0x%x refCount start %d",
frame->mFrameType,
( uint32_t ) frame->mBuffer,
refCount);
for ( unsigned int i = 0 ; i < refCount; i++ ) {
frame->mCookie = ( void * ) subscribers->keyAt(i);
callback = (frame_callback) subscribers->valueAt(i);
if (!callback) {
CAMHAL_LOGEB("callback not set for frame type: 0x%x", frameType);
return -EINVAL;
}
callback(frame);
}
} else {
CAMHAL_LOGEA("Subscribers is null??");
return -EINVAL;
}
return ret;
}
We'll set everything else aside for now, but where the callback comes from has to be made clear.
When the DisplayAdapter was instantiated above there was this step: 3. mDisplayAdapter->setFrameProvider(mCameraAdapter)// this step is key
Let's look at how the setFrameProvider method is implemented:
int ANativeWindowDisplayAdapter::setFrameProvider(FrameNotifier *frameProvider)
{
LOG_FUNCTION_NAME;
// Check for NULL pointer
if ( !frameProvider ) {
CAMHAL_LOGEA("NULL passed for frame provider");
LOG_FUNCTION_NAME_EXIT;
return BAD_VALUE;
}
//Release any previous frame providers
if ( NULL != mFrameProvider ) {
delete mFrameProvider;
}
/** Dont do anything here, Just save the pointer for use when display is actually enabled or disabled
*/
mFrameProvider = new FrameProvider(frameProvider, this, frameCallbackRelay);// instantiate a FrameProvider; one of its parameters is crucial: frameCallbackRelay, whose definition is given below
LOG_FUNCTION_NAME_EXIT;
return NO_ERROR;
}
void ANativeWindowDisplayAdapter::frameCallbackRelay(CameraFrame* caFrame)
{
if ( NULL != caFrame )
{
if ( NULL != caFrame->mCookie )
{
ANativeWindowDisplayAdapter *da = (ANativeWindowDisplayAdapter*) caFrame->mCookie;
da->frameCallback(caFrame);
}
else
{
CAMHAL_LOGEB("Invalid Cookie in Camera Frame = %p, Cookie = %p", caFrame, caFrame->mCookie);
}
}
else
{
CAMHAL_LOGEB("Invalid Camera Frame = %p", caFrame);
}
}
void ANativeWindowDisplayAdapter::frameCallback(CameraFrame* caFrame)
{
///Call queueBuffer of overlay in the context of the callback thread
DisplayFrame df;
df.mBuffer = caFrame->mBuffer;
df.mType = (CameraFrame::FrameType) caFrame->mFrameType;
df.mOffset = caFrame->mOffset;
df.mWidthStride = caFrame->mAlignment;
df.mLength = caFrame->mLength;
df.mWidth = caFrame->mWidth;
df.mHeight = caFrame->mHeight;
PostFrame(df);
}
The callback is registered here and waits for frame data to arrive. It is well worth looking at the FrameProvider constructor to see how other methods end up invoking this callback.
FrameProvider(FrameNotifier *fn, void* cookie, frame_callback frameCallback)
:mFrameNotifier(fn), mCookie(cookie), mFrameCallback(frameCallback) { }
This constructor is quite interesting: it has no body at all and simply initializes three members from the parameters passed in:
1. mFrameNotifier(fn) // here mFrameNotifier is the CameraAdapter
2. mCookie(cookie)
3. mFrameCallback(frameCallback) // mFrameCallback points to the callback method we defined
Next we need to go into the cameraPreviewInitialization method already mentioned in startPreview.
/**
@brief Set preview mode related initialization
-> Camera Adapter set params
-> Allocate buffers
-> Set use buffers for preview
@param none
@return NO_ERROR
@todo Update function header with the different errors that are possible
*/
status_t CameraHal::cameraPreviewInitialization()
{
status_t ret = NO_ERROR;
CameraAdapter::BuffersDescriptor desc;
CameraFrame frame;
unsigned int required_buffer_count;
unsigned int max_queueble_buffers;
#if PPM_INSTRUMENTATION || PPM_INSTRUMENTATION_ABS
gettimeofday(&mStartPreview, NULL);
#endif
LOG_FUNCTION_NAME;
if (mPreviewInitializationDone) {
return NO_ERROR;
}
if ( mPreviewEnabled ){
CAMHAL_LOGDA("Preview already running");
LOG_FUNCTION_NAME_EXIT;
return ALREADY_EXISTS;
}
if ( NULL != mCameraAdapter ) {
ret = mCameraAdapter->setParameters(mParameters);
}
if ((mPreviewStartInProgress == false) && (mDisplayPaused == false)){
ret = mCameraAdapter->sendCommand(CameraAdapter::CAMERA_QUERY_RESOLUTION_PREVIEW,( int ) &frame);
if ( NO_ERROR != ret ){
CAMHAL_LOGEB("Error: CAMERA_QUERY_RESOLUTION_PREVIEW %d", ret);
return ret;
}
///Update the current preview width and height
mPreviewWidth = frame.mWidth;
mPreviewHeight = frame.mHeight;
}
///If we don't have the preview callback enabled and display adapter,
if(!mSetPreviewWindowCalled || (mDisplayAdapter.get() == NULL)){
CAMHAL_LOGD("Preview not started. Preview in progress flag set");
mPreviewStartInProgress = true;
ret = mCameraAdapter->sendCommand(CameraAdapter::CAMERA_SWITCH_TO_EXECUTING);
if ( NO_ERROR != ret ){
CAMHAL_LOGEB("Error: CAMERA_SWITCH_TO_EXECUTING %d", ret);
return ret;
}
return NO_ERROR;
}
if( (mDisplayAdapter.get() != NULL) && ( !mPreviewEnabled ) && ( mDisplayPaused ) )
{
CAMHAL_LOGDA("Preview is in paused state");
mDisplayPaused = false;
mPreviewEnabled = true;
if ( NO_ERROR == ret )
{
ret = mDisplayAdapter->pauseDisplay(mDisplayPaused);
if ( NO_ERROR != ret )
{
CAMHAL_LOGEB("Display adapter resume failed %x", ret);
}
}
//restart preview callbacks
if(mMsgEnabled & CAMERA_MSG_PREVIEW_FRAME)
{
mAppCallbackNotifier->enableMsgType (CAMERA_MSG_PREVIEW_FRAME);
}
signalEndImageCapture();
return ret;
}
required_buffer_count = atoi(mCameraProperties->get(CameraProperties::REQUIRED_PREVIEW_BUFS));
///Allocate the preview buffers
ret = allocPreviewBufs(mPreviewWidth, mPreviewHeight, mParameters.getPreviewFormat(), required_buffer_count, max_queueble_buffers);
if ( NO_ERROR != ret )
{
CAMHAL_LOGEA("Couldn't allocate buffers for Preview");
goto error;
}
if ( mMeasurementEnabled )
{
ret = mCameraAdapter->sendCommand(CameraAdapter::CAMERA_QUERY_BUFFER_SIZE_PREVIEW_DATA,
( int ) &frame,
required_buffer_count);
if ( NO_ERROR != ret )
{
return ret;
}
///Allocate the preview data buffers
ret = allocPreviewDataBufs(frame.mLength, required_buffer_count);
if ( NO_ERROR != ret ) {
CAMHAL_LOGEA("Couldn't allocate preview data buffers");
goto error;
}
if ( NO_ERROR == ret )
{
desc.mBuffers = mPreviewDataBuffers;
desc.mOffsets = mPreviewDataOffsets;
desc.mFd = mPreviewDataFd;
desc.mLength = mPreviewDataLength;
desc.mCount = ( size_t ) required_buffer_count;
desc.mMaxQueueable = (size_t) required_buffer_count;
mCameraAdapter->sendCommand(CameraAdapter::CAMERA_USE_BUFFERS_PREVIEW_DATA,
( int ) &desc);
}
}
///Pass the buffers to Camera Adapter
desc.mBuffers = mPreviewBuffers;
desc.mOffsets = mPreviewOffsets;
desc.mFd = mPreviewFd;
desc.mLength = mPreviewLength;
desc.mCount = ( size_t ) required_buffer_count;
desc.mMaxQueueable = (size_t) max_queueble_buffers;
ret = mCameraAdapter->sendCommand(CameraAdapter::CAMERA_USE_BUFFERS_PREVIEW,
( int ) &desc);
if ( NO_ERROR != ret )
{
CAMHAL_LOGEB("Failed to register preview buffers: 0x%x", ret);
freePreviewBufs();
return ret;
}
mAppCallbackNotifier->startPreviewCallbacks(mParameters, mPreviewBuffers, mPreviewOffsets, mPreviewFd, mPreviewLength, required_buffer_count);
///Start the callback notifier
ret = mAppCallbackNotifier->start();
if( ALREADY_EXISTS == ret )
{
//Already running, do nothing
CAMHAL_LOGDA("AppCallbackNotifier already running");
ret = NO_ERROR;
}
else if ( NO_ERROR == ret ) {
CAMHAL_LOGDA("Started AppCallbackNotifier..");
mAppCallbackNotifier->setMeasurements(mMeasurementEnabled);
}
else
{
CAMHAL_LOGDA("Couldn't start AppCallbackNotifier");
goto error;
}
if (ret == NO_ERROR) mPreviewInitializationDone = true;
return ret;
error:
CAMHAL_LOGEA("Performing cleanup after error");
//Do all the cleanup
freePreviewBufs();
mCameraAdapter->sendCommand(CameraAdapter::CAMERA_STOP_PREVIEW);
if(mDisplayAdapter.get() != NULL)
{
mDisplayAdapter->disableDisplay(false);
}
mAppCallbackNotifier->stop();
mPreviewStartInProgress = false;
mPreviewEnabled = false;
LOG_FUNCTION_NAME_EXIT;
return ret;
}
Let's look at the implementation of mAppCallbackNotifier->enableMsgType(CAMERA_MSG_PREVIEW_FRAME):
status_t AppCallbackNotifier::enableMsgType(int32_t msgType)
{
if( msgType & CAMERA_MSG_PREVIEW_FRAME ) {
mFrameProvider->enableFrameNotification(CameraFrame::PREVIEW_FRAME_SYNC);
}
if( msgType & CAMERA_MSG_POSTVIEW_FRAME ) {
mFrameProvider->enableFrameNotification(CameraFrame::SNAPSHOT_FRAME);
}
if(msgType & CAMERA_MSG_RAW_IMAGE) {
mFrameProvider->enableFrameNotification(CameraFrame::RAW_FRAME);
}
return NO_ERROR;
}
int FrameProvider::enableFrameNotification(int32_t frameTypes)
{
LOG_FUNCTION_NAME;
status_t ret = NO_ERROR;
///Enable the frame notification to CameraAdapter (which implements FrameNotifier interface)
mFrameNotifier->enableMsgType(frameTypes<<MessageNotifier::FRAME_BIT_FIELD_POSITION, mFrameCallback, NULL, mCookie);
LOG_FUNCTION_NAME_EXIT;
return ret;
}
This enableMsgType is the same method mentioned earlier; it adds the callback under the corresponding key.
Here mFrameNotifier is a FrameNotifier object, and the FrameNotifier class inherits from MessageNotifier.
BaseCameraAdapter inherits from CameraAdapter, and CameraAdapter in turn inherits from FrameNotifier, so the enableMsgType called through mFrameNotifier is a virtual function:
the call ultimately dispatches to the enableMsgType defined in BaseCameraAdapter. Let's look at its implementation:
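The packing and unpacking of that 32-bit message word (the `frameTypes << FRAME_BIT_FIELD_POSITION` shift in enableFrameNotification, and the two shifts at the top of BaseCameraAdapter::enableMsgType) can be sketched in isolation. The field positions and mask value below are assumptions chosen for illustration, not the actual header constants:

```cpp
#include <cstdint>

namespace sketch {
// Assumed layout: frame types occupy the upper 16 bits of the message
// word, event types the lower 16. The real MessageNotifier constants
// may use different positions.
const int32_t EVENT_BIT_FIELD_POSITION = 0;
const int32_t FRAME_BIT_FIELD_POSITION = 16;
const int32_t EVENT_MASK = 0xFFFF;

// What enableFrameNotification does before calling enableMsgType():
// shift the frame-type bits into the frame field of the word.
int32_t packFrameTypes(int32_t frameTypes) {
    return frameTypes << FRAME_BIT_FIELD_POSITION;
}

// What enableMsgType() does on entry: split the word back into its
// frame half and event half, each masked to 16 bits.
int32_t frameHalf(int32_t msgs) {
    return (msgs >> FRAME_BIT_FIELD_POSITION) & EVENT_MASK;
}
int32_t eventHalf(int32_t msgs) {
    return (msgs >> EVENT_BIT_FIELD_POSITION) & EVENT_MASK;
}
} // namespace sketch
```

A frame-only subscription therefore decodes to a non-zero frameMsg and a zero eventMsg, which is why only the first `if` branch of enableMsgType runs for preview frames.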
void BaseCameraAdapter::enableMsgType(int32_t msgs, frame_callback callback, event_callback eventCb, void* cookie)
{
Mutex::Autolock lock(mSubscriberLock);
LOG_FUNCTION_NAME;
int32_t frameMsg = ((msgs >> MessageNotifier::FRAME_BIT_FIELD_POSITION) & EVENT_MASK);
int32_t eventMsg = ((msgs >> MessageNotifier::EVENT_BIT_FIELD_POSITION) & EVENT_MASK);
if ( frameMsg != 0 )
{
CAMHAL_LOGVB("Frame message type id=0x%x subscription request", frameMsg);
switch ( frameMsg )
{
case CameraFrame::PREVIEW_FRAME_SYNC:
mFrameSubscribers.add((int) cookie, callback);
break;
case CameraFrame::FRAME_DATA_SYNC:
mFrameDataSubscribers.add((int) cookie, callback);
break;
case CameraFrame::SNAPSHOT_FRAME:
mSnapshotSubscribers.add((int) cookie, callback);
break;
case CameraFrame::IMAGE_FRAME:
mImageSubscribers.add((int) cookie, callback);
break;
case CameraFrame::RAW_FRAME:
mRawSubscribers.add((int) cookie, callback);
break;
case CameraFrame::VIDEO_FRAME_SYNC:
mVideoSubscribers.add((int) cookie, callback);
break;
case CameraFrame::REPROCESS_INPUT_FRAME:
mVideoInSubscribers.add((int) cookie, callback);
break;
default:
CAMHAL_LOGEA("Frame message type id=0x%x subscription no supported yet!", frameMsg);
break;
}
}
if ( eventMsg != 0)
{
CAMHAL_LOGVB("Event message type id=0x%x subscription request", eventMsg);
if ( CameraHalEvent::ALL_EVENTS == eventMsg )
{
mFocusSubscribers.add((int) cookie, eventCb);
mShutterSubscribers.add((int) cookie, eventCb);
mZoomSubscribers.add((int) cookie, eventCb);
mMetadataSubscribers.add((int) cookie, eventCb);
}
else
{
CAMHAL_LOGEA("Event message type id=0x%x subscription no supported yet!", eventMsg);
}
}
LOG_FUNCTION_NAME_EXIT;
}
Here mFrameSubscribers.add((int) cookie, callback) associates the mFrameCallback callback with its key,
which is why callback = (frame_callback) subscribers->valueAt(i); can later retrieve the callback.
With that association in place, the frame data continues down the display path analyzed earlier.
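The cookie-to-callback bookkeeping behind add() and valueAt() can be sketched with an ordinary std::map standing in for Android's KeyedVector. All type and function names here are illustrative, not the real CameraHal ones:

```cpp
#include <cstdint>
#include <map>

// Callback signature, analogous to frame_callback in the original code.
typedef void (*frame_callback)(void* frame);

// Minimal stand-in for the KeyedVector that backs mFrameSubscribers:
// each subscriber registers a callback under its cookie (an opaque
// pointer cast to an integer in the original), and frame delivery
// later looks the callback back up by the same key.
class SubscriberTable {
public:
    void add(intptr_t cookie, frame_callback cb) { mSubscribers[cookie] = cb; }
    frame_callback valueFor(intptr_t cookie) const {
        std::map<intptr_t, frame_callback>::const_iterator it =
            mSubscribers.find(cookie);
        return (it == mSubscribers.end()) ? 0 : it->second;
    }
private:
    std::map<intptr_t, frame_callback> mSubscribers;
};

static int gFramesDelivered = 0;
static void countFrame(void* /*frame*/) { ++gFramesDelivered; }

// Register under a cookie, look the callback back up, and invoke it --
// the same add()/lookup round trip the adapter performs per frame.
int demoSubscriberRoundTrip() {
    SubscriberTable table;
    table.add(0x1234, countFrame);
    frame_callback cb = table.valueFor(0x1234);
    if (cb) cb(0);
    return gFramesDelivered;
}
```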
void ANativeWindowDisplayAdapter::frameCallback(CameraFrame* caFrame)
{
///Call queueBuffer of overlay in the context of the callback thread
DisplayFrame df;
df.mBuffer = caFrame->mBuffer;
df.mType = (CameraFrame::FrameType) caFrame->mFrameType;
df.mOffset = caFrame->mOffset;
df.mWidthStride = caFrame->mAlignment;
df.mLength = caFrame->mLength;
df.mWidth = caFrame->mWidth;
df.mHeight = caFrame->mHeight;
PostFrame(df); // the DisplayFrame structure is filled in and PostFrame is called to perform the display
}
PostFrame is now the main thing to study: once the data has been packed into a DisplayFrame structure, how exactly does it get displayed?
status_t ANativeWindowDisplayAdapter::PostFrame(ANativeWindowDisplayAdapter::DisplayFrame &dispFrame)
{
status_t ret = NO_ERROR;
uint32_t actualFramesWithDisplay = 0;
android_native_buffer_t *buffer = NULL;
GraphicBufferMapper &mapper = GraphicBufferMapper::get();
int i;
///@todo Do cropping based on the stabilized frame coordinates
///@todo Insert logic to drop frames here based on refresh rate of
///display or rendering rate whichever is lower
///Queue the buffer to overlay
if ( NULL == mANativeWindow ) {
return NO_INIT;
}
if (!mBuffers || !dispFrame.mBuffer) {
CAMHAL_LOGEA("NULL sent to PostFrame");
return BAD_VALUE;
}
for ( i = 0; i < mBufferCount; i++ )
{
if ( dispFrame.mBuffer == &mBuffers[i] )
{
break;
}
}
mFramesType.add((int) mBuffers[i].opaque, dispFrame.mType);
if ( mDisplayState == ANativeWindowDisplayAdapter::DISPLAY_STARTED &&
(!mPaused || CameraFrame::CameraFrame::SNAPSHOT_FRAME == dispFrame.mType) &&
!mSuspend)
{
Mutex::Autolock lock(mLock);
uint32_t xOff = (dispFrame.mOffset % PAGE_SIZE);
uint32_t yOff = (dispFrame.mOffset / PAGE_SIZE);
// Set crop only if current x and y offsets do not match with frame offsets
if((mXOff!=xOff) || (mYOff!=yOff))
{
CAMHAL_LOGDB("Offset %d xOff = %d, yOff = %d", dispFrame.mOffset, xOff, yOff);
uint8_t bytesPerPixel;
///Calculate bytes per pixel based on the pixel format
if(strcmp(mPixelFormat, (const char *) CameraParameters::PIXEL_FORMAT_YUV422I) == 0)
{
bytesPerPixel = 2;
}
else if(strcmp(mPixelFormat, (const char *) CameraParameters::PIXEL_FORMAT_RGB565) == 0)
{
bytesPerPixel = 2;
}
else if(strcmp(mPixelFormat, (const char *) CameraParameters::PIXEL_FORMAT_YUV420SP) == 0)
{
bytesPerPixel = 1;
}
else
{
bytesPerPixel = 1;
}
CAMHAL_LOGVB(" crop.left = %d crop.top = %d crop.right = %d crop.bottom = %d",
xOff/bytesPerPixel, yOff , (xOff/bytesPerPixel)+mPreviewWidth, yOff+mPreviewHeight);
// We'll ignore any errors here, if the surface is
// already invalid, we'll know soon enough.
mANativeWindow->set_crop(mANativeWindow, xOff/bytesPerPixel, yOff,
(xOff/bytesPerPixel)+mPreviewWidth, yOff+mPreviewHeight);
///Update the current x and y offsets
mXOff = xOff;
mYOff = yOff;
}
{
buffer_handle_t *handle = (buffer_handle_t *) mBuffers[i].opaque;
// unlock buffer before sending to display
mapper.unlock(*handle);
ret = mANativeWindow->enqueue_buffer(mANativeWindow, handle);
}
if ( NO_ERROR != ret ) {
CAMHAL_LOGE("Surface::queueBuffer returned error %d", ret);
}
mFramesWithCameraAdapterMap.removeItem((buffer_handle_t *) dispFrame.mBuffer->opaque);
// HWComposer has no minimum buffer requirement. We should be able to
// dequeue the buffer immediately
TIUTILS::Message msg;
mDisplayQ.put(&msg);
#if PPM_INSTRUMENTATION || PPM_INSTRUMENTATION_ABS
if ( mMeasureStandby )
{
CameraHal::PPM("Standby to first shot: Sensor Change completed - ", &mStandbyToShot);
mMeasureStandby = false;
}
else if (CameraFrame::CameraFrame::SNAPSHOT_FRAME == dispFrame.mType)
{
CameraHal::PPM("Shot to snapshot: ", &mStartCapture);
mShotToShot = true;
}
else if ( mShotToShot )
{
CameraHal::PPM("Shot to shot: ", &mStartCapture);
mShotToShot = false;
}
#endif
}
else
{
Mutex::Autolock lock(mLock);
buffer_handle_t *handle = (buffer_handle_t *) mBuffers[i].opaque;
// unlock buffer before giving it up
mapper.unlock(*handle);
// cancel buffer and dequeue another one
ret = mANativeWindow->cancel_buffer(mANativeWindow, handle);
if ( NO_ERROR != ret ) {
CAMHAL_LOGE("Surface::cancelBuffer returned error %d", ret);
}
mFramesWithCameraAdapterMap.removeItem((buffer_handle_t *) dispFrame.mBuffer->opaque);
TIUTILS::Message msg;
mDisplayQ.put(&msg);
ret = NO_ERROR;
}
return ret;
}
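The offset-to-crop arithmetic at the heart of PostFrame can be isolated into a small sketch: the byte offset of the frame within its buffer is split into an x part (offset within a page) and a y part (whole pages, i.e. rows), and x is converted from bytes to pixels before the crop rectangle is set. PAGE_SIZE is assumed to be 4096 here, and the format strings are meant to mirror the CameraParameters constants; treat the details as illustrative:

```cpp
#include <cstdint>
#include <cstring>

// Assumed page size; on a real device this comes from the kernel.
static const uint32_t kPageSize = 4096;

struct CropRect { uint32_t left, top, right, bottom; };

// Pixel formats are matched by string in the original: 2 bytes/pixel
// for YUV422I and RGB565, 1 for the YUV420SP and default branches.
uint32_t bytesPerPixel(const char* fmt) {
    if (std::strcmp(fmt, "yuv422i-yuyv") == 0) return 2;
    if (std::strcmp(fmt, "rgb565") == 0) return 2;
    return 1; // "yuv420sp" and everything else
}

// Rebuild the crop rectangle the way PostFrame computes it before
// calling mANativeWindow->set_crop().
CropRect cropForOffset(uint32_t offset, const char* fmt,
                       uint32_t previewW, uint32_t previewH) {
    uint32_t xOff = offset % kPageSize; // byte offset within a page -> x
    uint32_t yOff = offset / kPageSize; // whole pages -> y (rows)
    uint32_t bpp = bytesPerPixel(fmt);
    CropRect r;
    r.left   = xOff / bpp;              // convert bytes to pixels
    r.top    = yOff;
    r.right  = xOff / bpp + previewW;
    r.bottom = yOff + previewH;
    return r;
}
```

Because PostFrame caches mXOff/mYOff, this computation (and the set_crop call) only happens when the frame's offset actually changes between frames.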
This display path is still fairly involved; it will take some more time to work through.
To be continued...