
Android Camera takePicture Process Analysis

2014-11-22 10:58
The Camera subsystem uses a client/server architecture: the client and the server live in two different processes and communicate through Android's binder mechanism.

This series of articles analyzes the camera system step by step, from the Android camera application down to the hardware abstraction layer. We start with the CameraService initialization process, then study the full camera subsystem from the upper-layer app: opening the camera, preview, taking pictures, and focusing.

1. CameraService Initialization

frameworks/base/media/mediaserver/Main_MediaServer.cpp

CameraService is initialized in MediaServer: the camera service is brought up inside MediaServer's main function, as described in the previous article.

CameraService's instantiate method creates the CameraService instance and performs the corresponding initialization. This function is defined in its parent class BinderService: frameworks/base/include/binder/BinderService.h

The camera service initialization first creates the CameraService instance and then registers it with ServiceManager. Its startup happens in init.rc, where CameraService is started through mediaserver. The relevant entries are:

system/core/rootdir/init.rc

service servicemanager /system/bin/servicemanager
    class core
    user system
    group system
    critical
    onrestart restart zygote
    onrestart restart media
    onrestart restart surfaceflinger
    onrestart restart drm

service media /system/bin/mediaserver
    class main
    user media
    group audio camera inet net_bt net_bt_admin net_bw_acct drmrpc
    ioprio rt 4

During registration and startup, CameraService performs some initialization work of its own, mainly the following:

frameworks/base/services/camera/libcameraservice/CameraService.cpp

CameraService::CameraService()
    : mSoundRef(0), mModule(0)
{
    LOGI("CameraService started (pid=%d)", getpid());
    gCameraService = this;
}

void CameraService::onFirstRef()
{
    BnCameraService::onFirstRef();

    if (hw_get_module(CAMERA_HARDWARE_MODULE_ID,
                (const hw_module_t **)&mModule) < 0) {
        LOGE("Could not load camera HAL module");
        mNumberOfCameras = 0;
    } else {
        mNumberOfCameras = mModule->get_number_of_cameras();
        if (mNumberOfCameras > MAX_CAMERAS) {
            LOGE("Number of cameras(%d) > MAX_CAMERAS(%d).",
                    mNumberOfCameras, MAX_CAMERAS);
            mNumberOfCameras = MAX_CAMERAS;
        }
        for (int i = 0; i < mNumberOfCameras; i++) {
            setCameraFree(i);
        }
    }
}

In the initialization code above, the camera HAL module is first loaded through the hardware abstraction layer library, and the number of supported cameras is queried and stored in mNumberOfCameras (clamped to MAX_CAMERAS).

2. How the Application Connects to the Camera Service

When the camera application starts, it first establishes a connection with CameraService. The application code itself is not analyzed here, as it was covered in the previous article; a simple flow chart is shown below.



As the flow chart shows, the framework-level Camera class is used at several points while requesting the service. This class is defined in frameworks/base/core/java/android/hardware/Camera.java and is exactly the interface between the APP layer and the JNI layer in the Camera subsystem. Upward, it provides the application with the various methods for operating the camera; downward, it accesses JNI to implement its own interface. The Camera class is defined as follows:

public class Camera {

    public static Camera open(int cameraId) {
        return new Camera(cameraId);
    }

    // .................

    Camera(int cameraId) {
        Looper looper;
        if ((looper = Looper.myLooper()) != null) {
            mEventHandler = new EventHandler(this, looper);
        } else if ((looper = Looper.getMainLooper()) != null) {
            mEventHandler = new EventHandler(this, looper);
        } else {
            mEventHandler = null;
        }
        native_setup(new WeakReference<Camera>(this), cameraId);
    }
}
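The constructor's looper selection ("use the calling thread's looper if it has one, otherwise fall back to the main looper") can be mimicked outside Android. A minimal sketch, where the ThreadLocal registry and sMainLooper are hypothetical stand-ins for android.os.Looper:

```java
public class LooperSelect {
    // Hypothetical stand-in for android.os.Looper's per-thread registry.
    static final ThreadLocal<Object> sThreadLooper = new ThreadLocal<>();
    static final Object sMainLooper = new Object();

    // Mirrors the Camera(int) constructor: prefer the calling thread's
    // looper (Looper.myLooper()), else the process main looper.
    static Object chooseLooper() {
        Object looper = sThreadLooper.get();
        if (looper != null) {
            return looper;      // Looper.myLooper() != null
        }
        return sMainLooper;     // Looper.getMainLooper()
    }
}
```

The point of the fallback is that callbacks always land on some message loop, so an app thread without its own looper still receives events on the main thread.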

Now let's trace the whole camera takePicture flow, starting from takePicture in the app.

In the app, takePicture is invoked inside the capture method: packages/apps/OMAPCamera/src/com/ti/omap4/android/camera/Camera.java (the APK's own Camera class)

@Override
public boolean capture() {
    synchronized (mCameraStateLock) {
        // If we are already in the middle of taking a snapshot then ignore.
        if (mCameraState == SNAPSHOT_IN_PROGRESS || mCameraDevice == null) {
            return false;
        }
        mCaptureStartTime = System.currentTimeMillis();
        mPostViewPictureCallbackTime = 0;
        mJpegImageData = null;

        // Set rotation and gps data.
        Util.setRotationParameter(mParameters, mCameraId, mOrientation);
        Location loc = mLocationManager.getCurrentLocation();
        Util.setGpsParameters(mParameters, loc);
        if (canSetParameters()) {
            mCameraDevice.setParameters(mParameters);
        }

        try {
            mCameraDevice.takePicture(mShutterCallback, mRawPictureCallback,
                    mPostViewPictureCallback, new JpegPictureCallback(loc));
        } catch (RuntimeException e) {
            e.printStackTrace();
            return false;
        }
        mFaceDetectionStarted = false;
        setCameraState(SNAPSHOT_IN_PROGRESS);
        return true;
    }
}

The takePicture called here is defined in the framework layer: frameworks/base/core/java/android/hardware/Camera.java

public final void takePicture(ShutterCallback shutter, PictureCallback raw,
        PictureCallback postview, PictureCallback jpeg) {
    mShutterCallback = shutter;
    mRawImageCallback = raw;
    mPostviewCallback = postview;
    mJpegCallback = jpeg;

    // If callback is not set, do not send me callbacks.
    int msgType = 0;
    if (mShutterCallback != null) {
        msgType |= CAMERA_MSG_SHUTTER;
    }
    if (mRawImageCallback != null) {
        msgType |= CAMERA_MSG_RAW_IMAGE;
    }
    if (mPostviewCallback != null) {
        msgType |= CAMERA_MSG_POSTVIEW_FRAME;
    }
    if (mJpegCallback != null) {
        msgType |= CAMERA_MSG_COMPRESSED_IMAGE;
    }

    native_takePicture(msgType);
}
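The msgType bitmask above can be reproduced in isolation. A minimal sketch of the same logic; the concrete flag values are taken from AOSP's camera message constants but should be treated as illustrative, since they can differ across releases:

```java
public class TakePictureMsg {
    // Illustrative flag values (hedged; the real constants live in
    // frameworks/base/include/camera/Camera.h).
    static final int CAMERA_MSG_SHUTTER          = 0x0002;
    static final int CAMERA_MSG_POSTVIEW_FRAME   = 0x0040;
    static final int CAMERA_MSG_RAW_IMAGE        = 0x0080;
    static final int CAMERA_MSG_COMPRESSED_IMAGE = 0x0100;

    // Same idea as takePicture(): each non-null callback turns its bit on,
    // so the lower layers only deliver messages somebody will consume.
    static int buildMsgType(boolean shutter, boolean raw,
                            boolean postview, boolean jpeg) {
        int msgType = 0;
        if (shutter)  msgType |= CAMERA_MSG_SHUTTER;
        if (raw)      msgType |= CAMERA_MSG_RAW_IMAGE;
        if (postview) msgType |= CAMERA_MSG_POSTVIEW_FRAME;
        if (jpeg)     msgType |= CAMERA_MSG_COMPRESSED_IMAGE;
        return msgType;
    }
}
```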

Here the callback functions are stored, and takePicture is then invoked through JNI: frameworks/base/core/jni/android_hardware_Camera.cpp

static void android_hardware_Camera_takePicture(JNIEnv *env, jobject thiz, int msgType)
{
    LOGV("takePicture");
    JNICameraContext* context;
    sp<Camera> camera = get_native_camera(env, thiz, &context);
    if (camera == 0) return;

    /*
     * When CAMERA_MSG_RAW_IMAGE is requested, if the raw image callback
     * buffer is available, CAMERA_MSG_RAW_IMAGE is enabled to get the
     * notification _and_ the data; otherwise, CAMERA_MSG_RAW_IMAGE_NOTIFY
     * is enabled to receive the callback notification but no data.
     *
     * Note that CAMERA_MSG_RAW_IMAGE_NOTIFY is not exposed to the
     * Java application.
     */
    if (msgType & CAMERA_MSG_RAW_IMAGE) {
        LOGV("Enable raw image callback buffer");
        if (!context->isRawImageCallbackBufferAvailable()) {
            LOGV("Enable raw image notification, since no callback buffer exists");
            msgType &= ~CAMERA_MSG_RAW_IMAGE;
            msgType |= CAMERA_MSG_RAW_IMAGE_NOTIFY;
        }
    }

    if (camera->takePicture(msgType) != NO_ERROR) {
        jniThrowRuntimeException(env, "takePicture failed");
        return;
    }
}
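The RAW_IMAGE-to-RAW_IMAGE_NOTIFY downgrade is a pure bit manipulation and is easy to check on its own. A sketch of the same adjustment (flag values illustrative, as above):

```java
public class RawImageSwap {
    // Illustrative flag values (hedged; see camera/Camera.h in AOSP).
    static final int CAMERA_MSG_RAW_IMAGE        = 0x0080;
    static final int CAMERA_MSG_RAW_IMAGE_NOTIFY = 0x0200;

    // Mirrors the JNI logic: if RAW_IMAGE was requested but no callback
    // buffer exists, downgrade to the notification-only message.
    static int adjust(int msgType, boolean rawBufferAvailable) {
        if ((msgType & CAMERA_MSG_RAW_IMAGE) != 0 && !rawBufferAvailable) {
            msgType &= ~CAMERA_MSG_RAW_IMAGE;
            msgType |= CAMERA_MSG_RAW_IMAGE_NOTIFY;
        }
        return msgType;
    }
}
```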

This calls Camera's takePicture, i.e. the camera client's takePicture method: frameworks/base/libs/camera/Camera.cpp

status_t Camera::takePicture(int msgType, const String8& params)
{
    LOGV("takePicture: 0x%x", msgType);
    sp<ICamera> c = mCamera;
    if (c == 0) return NO_INIT;
    return c->takePicture(msgType, params);
}

The client-side takePicture in turn calls the camera server's takePicture: frameworks/base/services/camera/libcameraservice/CameraService.cpp

// take a picture - image is returned in callback
#ifdef OMAP_ENHANCEMENT_CPCAM
status_t CameraService::Client::takePicture(int msgType, const String8& params) {
#else
status_t CameraService::Client::takePicture(int msgType) {
#endif
    LOG1("takePicture (pid %d): 0x%x", getCallingPid(), msgType);

    Mutex::Autolock lock(mLock);
    status_t result = checkPidAndHardware();
    if (result != NO_ERROR) return result;

    if ((msgType & CAMERA_MSG_RAW_IMAGE) &&
        (msgType & CAMERA_MSG_RAW_IMAGE_NOTIFY)) {
        LOGE("CAMERA_MSG_RAW_IMAGE and CAMERA_MSG_RAW_IMAGE_NOTIFY"
                " cannot be both enabled");
        return BAD_VALUE;
    }

    // We only accept picture related message types
    // and ignore other types of messages for takePicture().
    int picMsgType = msgType & (CAMERA_MSG_SHUTTER |
                                CAMERA_MSG_POSTVIEW_FRAME |
                                CAMERA_MSG_RAW_IMAGE |
#ifdef OMAP_ENHANCEMENT
                                CAMERA_MSG_RAW_BURST |
#endif
                                CAMERA_MSG_RAW_IMAGE_NOTIFY |
                                CAMERA_MSG_COMPRESSED_IMAGE);
#ifdef OMAP_ENHANCEMENT
    picMsgType |= CAMERA_MSG_COMPRESSED_BURST_IMAGE;
#endif

    enableMsgType(picMsgType);

#ifdef OMAP_ENHANCEMENT
    // make sure the other capture messages are disabled
    picMsgType = ~picMsgType & (CAMERA_MSG_SHUTTER |
                                CAMERA_MSG_POSTVIEW_FRAME |
                                CAMERA_MSG_RAW_IMAGE |
                                CAMERA_MSG_RAW_BURST |
                                CAMERA_MSG_RAW_IMAGE_NOTIFY |
                                CAMERA_MSG_COMPRESSED_IMAGE |
                                CAMERA_MSG_COMPRESSED_BURST_IMAGE);
    disableMsgType(picMsgType);
#endif

#ifdef OMAP_ENHANCEMENT_CPCAM
    return mHardware->takePicture(params);
#else
    return mHardware->takePicture();
#endif
}
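The service filters the requested messages down to picture-related bits before enabling them, so a caller cannot sneak a preview or video message into a takePicture call. A sketch of that masking step (flag values illustrative, as before):

```java
public class PicMsgFilter {
    // Illustrative flag values (hedged; real values live in camera/Camera.h).
    static final int SHUTTER          = 0x0002;
    static final int PREVIEW_FRAME    = 0x0010;
    static final int POSTVIEW_FRAME   = 0x0040;
    static final int RAW_IMAGE        = 0x0080;
    static final int COMPRESSED_IMAGE = 0x0100;
    static final int RAW_IMAGE_NOTIFY = 0x0200;

    // The set of message types takePicture() is willing to enable.
    static final int PICTURE_MASK = SHUTTER | POSTVIEW_FRAME | RAW_IMAGE
            | RAW_IMAGE_NOTIFY | COMPRESSED_IMAGE;

    // Same filtering as the service: keep only picture-related bits.
    static int filter(int msgType) {
        return msgType & PICTURE_MASK;
    }
}
```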

After some initialization, the server-side takePicture finally calls the HAL (hardware interface layer) takePicture: frameworks/base/services/camera/libcameraservice/CameraHardwareInterface.h

/**
 * Take a picture.
 */
#ifdef OMAP_ENHANCEMENT_CPCAM
status_t takePicture(const ShotParameters &params)
{
    LOGV("%s(%s)", __FUNCTION__, mName.string());
    if (mDevice->ops->take_picture)
        return mDevice->ops->take_picture(mDevice, params.flatten().string());
    return INVALID_OPERATION;
}
#else
status_t takePicture()
{
    LOGV("%s(%s)", __FUNCTION__, mName.string());
    if (mDevice->ops->take_picture)
        // From here the call goes through the V4L2 subsystem down to the
        // kernel driver; that part will be studied in detail later.
        return mDevice->ops->take_picture(mDevice);
    return INVALID_OPERATION;
}
#endif

The next focus is the data callback path. This is the heart of the camera flow and, in my view, the hardest part to understand, so it deserves extra effort. Let's begin.

First we must go back to when the camera client connects to the service. From the previous initialization article, we know that when client and server connect, the client's connect method is called first; it obtains the service and then calls the server-side connect. For clarity, that code is shown again here:

The server's connect() is defined at: frameworks/base/services/camera/libcameraservice/CameraService.cpp

sp<ICamera> CameraService::connect(
        const sp<ICameraClient>& cameraClient, int cameraId)
{
    int callingPid = getCallingPid();
    sp<CameraHardwareInterface> hardware = NULL;

    LOG1("CameraService::connect E (pid %d, id %d)", callingPid, cameraId);

    if (!mModule) {
        LOGE("Camera HAL module not loaded");
        return NULL;
    }

    sp<Client> client;
    if (cameraId < 0 || cameraId >= mNumberOfCameras) {
        LOGE("CameraService::connect X (pid %d) rejected (invalid cameraId %d).",
                callingPid, cameraId);
        return NULL;
    }

    char value[PROPERTY_VALUE_MAX];
    property_get("sys.secpolicy.camera.disabled", value, "0");
    if (strcmp(value, "1") == 0) {
        // Camera is disabled by DevicePolicyManager.
        LOGI("Camera is disabled. connect X (pid %d) rejected", callingPid);
        return NULL;
    }

    Mutex::Autolock lock(mServiceLock);
    if (mClient[cameraId] != 0) {
        client = mClient[cameraId].promote();
        if (client != 0) {
            if (cameraClient->asBinder() == client->getCameraClient()->asBinder()) {
                LOG1("CameraService::connect X (pid %d) (the same client)",
                        callingPid);
                return client;
            } else {
                LOGW("CameraService::connect X (pid %d) rejected (existing client).",
                        callingPid);
                return NULL;
            }
        }
        mClient[cameraId].clear();
    }

    if (mBusy[cameraId]) {
        LOGW("CameraService::connect X (pid %d) rejected"
                " (camera %d is still busy).", callingPid, cameraId);
        return NULL;
    }

    struct camera_info info;
    if (mModule->get_camera_info(cameraId, &info) != OK) {
        LOGE("Invalid camera id %d", cameraId);
        return NULL;
    }

    char camera_device_name[10];
    snprintf(camera_device_name, sizeof(camera_device_name), "%d", cameraId);

    hardware = new CameraHardwareInterface(camera_device_name);
    if (hardware->initialize(&mModule->common) != OK) {
        hardware.clear();
        return NULL;
    }

    client = new Client(this, cameraClient, hardware,
            cameraId, info.facing, callingPid);
    mClient[cameraId] = client;
    LOG1("CameraService::connect X");
    return client;
}

The key part is near the end: after a successful connect, a new Client is created. Client is an inner class of CameraService, so its constructor runs at this point, and that is exactly where our callback functions are set. Look at the code:

CameraService::Client::Client(const sp<CameraService>& cameraService,
        const sp<ICameraClient>& cameraClient,
        const sp<CameraHardwareInterface>& hardware,
        int cameraId, int cameraFacing, int clientPid)
{
    int callingPid = getCallingPid();
    LOG1("Client::Client E (pid %d)", callingPid);

    mCameraService = cameraService;
    mCameraClient = cameraClient;
    mHardware = hardware;
    mCameraId = cameraId;
    mCameraFacing = cameraFacing;
    mClientPid = clientPid;
    mMsgEnabled = 0;
    mSurface = 0;
    mPreviewWindow = 0;
#ifdef OMAP_ENHANCEMENT_CPCAM
    mTapin = 0;
    mTapinClient = 0;
    mTapout = 0;
    mTapoutClient = 0;
#endif

    mHardware->setCallbacks(notifyCallback,
            dataCallback,
            dataCallbackTimestamp,
            (void *)cameraId);

    // Enable zoom, error, focus, and metadata messages by default
    enableMsgType(CAMERA_MSG_ERROR | CAMERA_MSG_ZOOM | CAMERA_MSG_FOCUS |
            CAMERA_MSG_PREVIEW_METADATA);

    // Callback is disabled by default
    mPreviewCallbackFlag = CAMERA_FRAME_CALLBACK_FLAG_NOOP;
    mOrientation = getOrientation(0, mCameraFacing == CAMERA_FACING_FRONT);
    mPlayShutterSound = true;
    cameraService->setCameraBusy(cameraId);
    cameraService->loadSound();
    LOG1("Client::Client X (pid %d)", callingPid);
}

The code above registers three callbacks with the HAL: notifyCallback, dataCallback, and dataCallbackTimestamp, which return data from the lower layers for processing. We will walk through dataCallback first; the other callbacks follow the same pattern. This callback lives in the camera server layer: frameworks/base/services/camera/libcameraservice/CameraService.cpp

void CameraService::Client::dataCallback(int32_t msgType,
        const sp<IMemory>& dataPtr, camera_frame_metadata_t *metadata, void* user)
{
    LOG2("dataCallback(%d)", msgType);

    sp<Client> client = getClientFromCookie(user);
    if (client == 0) return;
    if (!client->lockIfMessageWanted(msgType)) return;

    if (dataPtr == 0 && metadata == NULL) {
        LOGE("Null data returned in data callback");
        client->handleGenericNotify(CAMERA_MSG_ERROR, UNKNOWN_ERROR, 0);
        return;
    }

    switch (msgType & ~CAMERA_MSG_PREVIEW_METADATA) {
        case CAMERA_MSG_PREVIEW_FRAME:
            client->handlePreviewData(msgType, dataPtr, metadata);
            break;
        case CAMERA_MSG_POSTVIEW_FRAME:
            client->handlePostview(dataPtr);
            break;
        case CAMERA_MSG_RAW_IMAGE:
            client->handleRawPicture(dataPtr);
            break;
        case CAMERA_MSG_COMPRESSED_IMAGE:
            client->handleCompressedPicture(dataPtr);
            break;
#ifdef OMAP_ENHANCEMENT
        case CAMERA_MSG_COMPRESSED_BURST_IMAGE:
            client->handleCompressedBurstPicture(dataPtr);
            break;
#endif
        default:
            client->handleGenericData(msgType, dataPtr, metadata);
            break;
    }
}
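Note that the switch masks out CAMERA_MSG_PREVIEW_METADATA first, so a frame that carries both preview data and face metadata still routes by its data type. A sketch of that dispatch (flag values illustrative; handler names stand in for the real member functions):

```java
public class DataDispatch {
    // Illustrative flag values (hedged; see camera/Camera.h in AOSP).
    static final int PREVIEW_FRAME    = 0x0010;
    static final int COMPRESSED_IMAGE = 0x0100;
    static final int PREVIEW_METADATA = 0x0400;

    // Strip the metadata bit, then route by the remaining data type,
    // just like CameraService::Client::dataCallback.
    static String route(int msgType) {
        switch (msgType & ~PREVIEW_METADATA) {
            case PREVIEW_FRAME:    return "handlePreviewData";
            case COMPRESSED_IMAGE: return "handleCompressedPicture";
            default:               return "handleGenericData";
        }
    }
}
```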

Here the messages are dispatched by type. Because the preview path carries a large volume of data and is easy to follow, we analyze the preview data callback path:

// preview callback - frame buffer update
void CameraService::Client::handlePreviewData(int32_t msgType,
        const sp<IMemory>& mem,
        camera_frame_metadata_t *metadata) {
    ssize_t offset;
    size_t size;
    sp<IMemoryHeap> heap = mem->getMemory(&offset, &size);

    // local copy of the callback flags
    int flags = mPreviewCallbackFlag;

    // is callback enabled?
    if (!(flags & CAMERA_FRAME_CALLBACK_FLAG_ENABLE_MASK)) {
        // If the enable bit is off, the copy-out and one-shot bits are ignored
        LOG2("frame callback is disabled");
        mLock.unlock();
        return;
    }

    // hold a strong pointer to the client
    sp<ICameraClient> c = mCameraClient;

    // clear callback flags if no client or one-shot mode
    if (c == 0 || (mPreviewCallbackFlag & CAMERA_FRAME_CALLBACK_FLAG_ONE_SHOT_MASK)) {
        LOG2("Disable preview callback");
        mPreviewCallbackFlag &= ~(CAMERA_FRAME_CALLBACK_FLAG_ONE_SHOT_MASK |
                CAMERA_FRAME_CALLBACK_FLAG_COPY_OUT_MASK |
                CAMERA_FRAME_CALLBACK_FLAG_ENABLE_MASK);
        disableMsgType(CAMERA_MSG_PREVIEW_FRAME);
    }

    if (c != 0) {
        // Is the received frame copied out or not?
        if (flags & CAMERA_FRAME_CALLBACK_FLAG_COPY_OUT_MASK) {
            LOG2("frame is copied");
            copyFrameAndPostCopiedFrame(msgType, c, heap, offset, size, metadata);
        } else {
            LOG2("frame is forwarded");
            mLock.unlock();
            c->dataCallback(msgType, mem, metadata);
        }
    } else {
        mLock.unlock();
    }
}

There are two paths here: the frame is either copied out or forwarded directly. copyFrameAndPostCopiedFrame posts preview data between two buffers; from its implementation we can see that it too ends up calling dataCallback to continue into the client-side callback, so we analyze copyFrameAndPostCopiedFrame directly:

void CameraService::Client::copyFrameAndPostCopiedFrame(
        int32_t msgType, const sp<ICameraClient>& client,
        const sp<IMemoryHeap>& heap, size_t offset, size_t size,
        camera_frame_metadata_t *metadata)
{
    LOG2("copyFrameAndPostCopiedFrame");
    // It is necessary to copy out of pmem before sending this to
    // the callback. For efficiency, reuse the same MemoryHeapBase
    // provided it's big enough. Don't allocate the memory or
    // perform the copy if there's no callback.
    // hold the preview lock while we grab a reference to the preview buffer
    sp<MemoryHeapBase> previewBuffer;

    if (mPreviewBuffer == 0) {
        mPreviewBuffer = new MemoryHeapBase(size, 0, NULL);
    } else if (size > mPreviewBuffer->virtualSize()) {
        mPreviewBuffer.clear();
        mPreviewBuffer = new MemoryHeapBase(size, 0, NULL);
    }
    if (mPreviewBuffer == 0) {
        LOGE("failed to allocate space for preview buffer");
        mLock.unlock();
        return;
    }
    previewBuffer = mPreviewBuffer;

    memcpy(previewBuffer->base(), (uint8_t *)heap->base() + offset, size);

    sp<MemoryBase> frame = new MemoryBase(previewBuffer, 0, size);
    if (frame == 0) {
        LOGE("failed to allocate space for frame callback");
        mLock.unlock();
        return;
    }

    mLock.unlock();
    client->dataCallback(msgType, frame, metadata);
}
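The grow-only buffer reuse here ("allocate on first use, reallocate only when the incoming frame is larger") is a general pattern worth isolating. A minimal Java sketch of the same policy, using a plain byte array heap in place of IMemoryHeap:

```java
import java.nio.ByteBuffer;

public class ReusableFrameBuffer {
    private ByteBuffer buf; // reused across frames, like mPreviewBuffer

    // Copy 'size' bytes starting at 'offset' out of 'heap', reallocating
    // only when the current buffer is missing or too small.
    ByteBuffer copyFrame(byte[] heap, int offset, int size) {
        if (buf == null || buf.capacity() < size) {
            buf = ByteBuffer.allocate(size); // grow-only reallocation
        }
        buf.clear();
        buf.put(heap, offset, size);
        buf.flip();
        return buf;
    }
}
```

Reusing the buffer this way avoids one heap allocation per preview frame, which matters at 30 frames per second.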

From here, the callback enters the camera client side: frameworks/base/libs/camera/Camera.cpp

// callback from camera service when frame or image is ready
void Camera::dataCallback(int32_t msgType, const sp<IMemory>& dataPtr,
        camera_frame_metadata_t *metadata)
{
    sp<CameraListener> listener;
    {
        Mutex::Autolock _l(mLock);
        listener = mListener;
    }
    if (listener != NULL) {
        listener->postData(msgType, dataPtr, metadata);
    }
}

What exactly is this listener? Remember that a listener was set in the JNI layer during initialization. Let's look at that again: frameworks/base/core/jni/android_hardware_Camera.cpp

// connect to camera service
static void android_hardware_Camera_native_setup(JNIEnv *env, jobject thiz,
        jobject weak_this, jint cameraId)
{
    sp<Camera> camera = Camera::connect(cameraId);

    if (camera == NULL) {
        jniThrowRuntimeException(env, "Fail to connect to camera service");
        return;
    }

    // make sure camera hardware is alive
    if (camera->getStatus() != NO_ERROR) {
        jniThrowRuntimeException(env, "Camera initialization failed");
        return;
    }

    jclass clazz = env->GetObjectClass(thiz);
    if (clazz == NULL) {
        jniThrowRuntimeException(env, "Can't find android/hardware/Camera");
        return;
    }

    // We use a weak reference so the Camera object can be garbage collected.
    // The reference is only used as a proxy for callbacks.
    sp<JNICameraContext> context = new JNICameraContext(env, weak_this, clazz, camera);
    context->incStrong(thiz);
    camera->setListener(context);

    // save context in opaque field
    env->SetIntField(thiz, fields.context, (int)context.get());
}

As shown above, JNICameraContext is a listener class, installed via setListener. It is defined in: frameworks/base/core/jni/android_hardware_Camera.cpp

// provides persistent context for calls from native code to Java
class JNICameraContext: public CameraListener
{
public:
    JNICameraContext(JNIEnv* env, jobject weak_this, jclass clazz,
            const sp<Camera>& camera);
    ~JNICameraContext() { release(); }
    virtual void notify(int32_t msgType, int32_t ext1, int32_t ext2);
    virtual void postData(int32_t msgType, const sp<IMemory>& dataPtr,
            camera_frame_metadata_t *metadata);
    virtual void postDataTimestamp(nsecs_t timestamp, int32_t msgType,
            const sp<IMemory>& dataPtr);
    void postMetadata(JNIEnv *env, int32_t msgType, camera_frame_metadata_t *metadata);
    void addCallbackBuffer(JNIEnv *env, jbyteArray cbb, int msgType);
    void setCallbackMode(JNIEnv *env, bool installed, bool manualMode);
    sp<Camera> getCamera() { Mutex::Autolock _l(mLock); return mCamera; }
    bool isRawImageCallbackBufferAvailable() const;
    void release();

private:
    void copyAndPost(JNIEnv* env, const sp<IMemory>& dataPtr, int msgType);
    void clearCallbackBuffers_l(JNIEnv *env, Vector<jbyteArray> *buffers);
    void clearCallbackBuffers_l(JNIEnv *env);
    jbyteArray getCallbackBuffer(JNIEnv *env, Vector<jbyteArray> *buffers,
            size_t bufferSize);

    jobject mCameraJObjectWeak;  // weak reference to java object
    jclass mCameraJClass;        // strong reference to java class
    sp<Camera> mCamera;          // strong reference to native object
    jclass mFaceClass;           // strong reference to Face class
    jclass mRectClass;           // strong reference to Rect class
    Mutex mLock;

    /*
     * Global reference application-managed raw image buffer queue.
     *
     * Manual-only mode is supported for raw image callbacks, which is
     * set whenever method addCallbackBuffer() with msgType =
     * CAMERA_MSG_RAW_IMAGE is called; otherwise, null is returned
     * with raw image callbacks.
     */
    Vector<jbyteArray> mRawImageCallbackBuffers;

    /*
     * Application-managed preview buffer queue and the flags
     * associated with the usage of the preview buffer callback.
     */
    Vector<jbyteArray> mCallbackBuffers; // Global reference application managed byte[]
    bool mManualBufferMode;              // Whether to use application managed buffers.
    bool mManualCameraCallbackSet;       // Whether the callback has been set, used to
                                         // reduce unnecessary calls to set the callback.
};

The postData declared above is the method reached through the listener. Let's look at its implementation:

void JNICameraContext::postData(int32_t msgType, const sp<IMemory>& dataPtr,
        camera_frame_metadata_t *metadata)
{
    // VM pointer will be NULL if object is released
    Mutex::Autolock _l(mLock);
    JNIEnv *env = AndroidRuntime::getJNIEnv();
    if (mCameraJObjectWeak == NULL) {
        LOGW("callback on dead camera object");
        return;
    }

    int32_t dataMsgType = msgType & ~CAMERA_MSG_PREVIEW_METADATA;

    // return data based on callback type
    switch (dataMsgType) {
        case CAMERA_MSG_VIDEO_FRAME:
            // should never happen
            break;

        // For backward-compatibility purpose, if there is no callback
        // buffer for raw image, the callback returns null.
        case CAMERA_MSG_RAW_IMAGE:
            LOGV("rawCallback");
            if (mRawImageCallbackBuffers.isEmpty()) {
                env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
                        mCameraJObjectWeak, dataMsgType, 0, 0, NULL);
            } else {
                copyAndPost(env, dataPtr, dataMsgType);
            }
            break;

        // There is no data.
        case 0:
            break;

        default:
            LOGV("dataCallback(%d, %p)", dataMsgType, dataPtr.get());
            copyAndPost(env, dataPtr, dataMsgType);
            break;
    }

    // post frame metadata to Java
    if (metadata && (msgType & CAMERA_MSG_PREVIEW_METADATA)) {
        postMetadata(env, CAMERA_MSG_PREVIEW_METADATA, metadata);
    }
}

Next, the copyAndPost method:

void JNICameraContext::copyAndPost(JNIEnv* env, const sp<IMemory>& dataPtr, int msgType)
{
    jbyteArray obj = NULL;

    // allocate Java byte array and copy data
    if (dataPtr != NULL) {
        ssize_t offset;
        size_t size;
        sp<IMemoryHeap> heap = dataPtr->getMemory(&offset, &size);
        LOGV("copyAndPost: off=%ld, size=%d", offset, size);
        uint8_t *heapBase = (uint8_t*)heap->base();

        if (heapBase != NULL) {
            const jbyte* data = reinterpret_cast<const jbyte*>(heapBase + offset);

            if (msgType == CAMERA_MSG_RAW_IMAGE) {
                obj = getCallbackBuffer(env, &mRawImageCallbackBuffers, size);
            } else if (msgType == CAMERA_MSG_PREVIEW_FRAME && mManualBufferMode) {
                obj = getCallbackBuffer(env, &mCallbackBuffers, size);

                if (mCallbackBuffers.isEmpty()) {
                    LOGV("Out of buffers, clearing callback!");
                    mCamera->setPreviewCallbackFlags(CAMERA_FRAME_CALLBACK_FLAG_NOOP);
                    mManualCameraCallbackSet = false;

                    if (obj == NULL) {
                        return;
                    }
                }
            } else {
                LOGV("Allocating callback buffer");
                obj = env->NewByteArray(size);
            }

            if (obj == NULL) {
                LOGE("Couldn't allocate byte array for JPEG data");
                env->ExceptionClear();
            } else {
                env->SetByteArrayRegion(obj, 0, size, data);
            }
        } else {
            LOGE("image heap is NULL");
        }
    }

    // post image data to Java
    env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
            mCameraJObjectWeak, msgType, 0, 0, obj);
    if (obj) {
        env->DeleteLocalRef(obj);
    }
}

The code above first obtains a Java byte array obj and copies the data buffer into it. CallStaticVoidMethod is a C-to-Java call; the method ultimately executed is postEventFromNative() in the framework's Camera.java. From here, the callback enters the camera framework layer:

frameworks/base/core/java/android/hardware/Camera.java

private static void postEventFromNative(Object camera_ref,
        int what, int arg1, int arg2, Object obj) {
    Camera c = (Camera)((WeakReference)camera_ref).get();
    if (c == null)
        return;

    if (c.mEventHandler != null) {
        Message m = c.mEventHandler.obtainMessage(what, arg1, arg2, obj);
        c.mEventHandler.sendMessage(m);
    }
}
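The weak-reference guard above is what lets a native callback that arrives late fail harmlessly after the Java Camera object has been garbage collected. The same pattern outside Android, with a hypothetical Target class standing in for the Camera object:

```java
import java.lang.ref.WeakReference;
import java.util.ArrayList;
import java.util.List;

public class WeakDispatch {
    // Hypothetical stand-in for the Java-side Camera object.
    static class Target {
        final List<Integer> received = new ArrayList<>();
        void handle(int what) { received.add(what); }
    }

    // Mirrors postEventFromNative: deliver only if the referent is alive.
    static boolean post(WeakReference<Target> ref, int what) {
        Target t = ref.get();
        if (t == null) return false;  // object collected: drop the event
        t.handle(what);
        return true;
    }
}
```

Because the native side holds only a weak reference, it never keeps the Camera object alive on its own; the events simply stop being delivered once the app releases it.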

After sendMessage, the handler processes the message. It is likewise defined in the framework layer:

private class EventHandler extends Handler
{
    private Camera mCamera;

    public EventHandler(Camera c, Looper looper) {
        super(looper);
        mCamera = c;
    }

    @Override
    public void handleMessage(Message msg) {
        switch(msg.what) {
        case CAMERA_MSG_SHUTTER:
            if (mShutterCallback != null) {
                mShutterCallback.onShutter();
            }
            return;

        case CAMERA_MSG_RAW_IMAGE:
            if (mRawImageCallback != null) {
                mRawImageCallback.onPictureTaken((byte[])msg.obj, mCamera);
            }
            return;

        case CAMERA_MSG_COMPRESSED_IMAGE:
            if (mJpegCallback != null) {
                mJpegCallback.onPictureTaken((byte[])msg.obj, mCamera);
            }
            return;

        case CAMERA_MSG_PREVIEW_FRAME:
            if (mPreviewCallback != null) {
                PreviewCallback cb = mPreviewCallback;
                if (mOneShot) {
                    // Clear the callback variable before the callback
                    // in case the app calls setPreviewCallback from
                    // the callback function
                    mPreviewCallback = null;
                } else if (!mWithBuffer) {
                    // We're faking the camera preview mode to prevent
                    // the app from being flooded with preview frames.
                    // Set to oneshot mode again.
                    setHasPreviewCallback(true, false);
                }
                cb.onPreviewFrame((byte[])msg.obj, mCamera);
            }
            return;

        case CAMERA_MSG_POSTVIEW_FRAME:
            if (mPostviewCallback != null) {
                mPostviewCallback.onPictureTaken((byte[])msg.obj, mCamera);
            }
            return;

        case CAMERA_MSG_FOCUS:
            if (mAutoFocusCallback != null) {
                mAutoFocusCallback.onAutoFocus(msg.arg1 == 0 ? false : true, mCamera);
            }
            return;

        case CAMERA_MSG_ZOOM:
            if (mZoomListener != null) {
                mZoomListener.onZoomChange(msg.arg1, msg.arg2 != 0, mCamera);
            }
            return;

        case CAMERA_MSG_PREVIEW_METADATA:
            if (mFaceListener != null) {
                mFaceListener.onFaceDetection((Face[])msg.obj, mCamera);
            }
            return;

        case CAMERA_MSG_ERROR:
            Log.e(TAG, "Error " + msg.arg1);
            if (mErrorCallback != null) {
                mErrorCallback.onError(msg.arg1, mCamera);
            }
            return;

        default:
            Log.e(TAG, "Unknown message type " + msg.what);
            return;
        }
    }
}

As shown above, all the callbacks are handled here: the shutter callback mShutterCallback.onShutter(), the picture data callback mRawImageCallback.onPictureTaken(), the auto-focus callback, and so on.

By default there is no preview callback unless the app calls setPreviewCallback. Preview data can still be delivered upward; the system simply does not deliver it by default. Let's dig a bit deeper:

As the CAMERA_MSG_PREVIEW_FRAME branch shows, the framework checks whether a PreviewCallback has been installed via setPreviewCallback and invokes it only if so. The onPreviewFrame method of this interface must be implemented by the developer; there is no default implementation, so apps that need the frames add their own (this is my own understanding). The PreviewCallback interface is defined at: frameworks/base/core/java/android/hardware/Camera.java

/**
 * Callback interface used to deliver copies of preview frames as
 * they are displayed.
 *
 * @see #setPreviewCallback(Camera.PreviewCallback)
 * @see #setOneShotPreviewCallback(Camera.PreviewCallback)
 * @see #setPreviewCallbackWithBuffer(Camera.PreviewCallback)
 * @see #startPreview()
 */
public interface PreviewCallback
{
    /**
     * Called as preview frames are displayed. This callback is invoked
     * on the event thread {@link #open(int)} was called from.
     *
     * @param data the contents of the preview frame in the format defined
     *  by {@link android.graphics.ImageFormat}, which can be queried
     *  with {@link android.hardware.Camera.Parameters#getPreviewFormat()}.
     *  If {@link android.hardware.Camera.Parameters#setPreviewFormat(int)}
     *  is never called, the default will be the YCbCr_420_SP
     *  (NV21) format.
     * @param camera the Camera service object.
     */
    void onPreviewFrame(byte[] data, Camera camera);
};

Note also that posting preview data between the capture buffer and the display buffer, i.e. the real-time preview display itself, is done in the HAL layer.

The takePicture() handling is much like preview, with the addition of saving the image when the callback returns. Let's analyze the takePicture path:

case CAMERA_MSG_COMPRESSED_IMAGE:
    if (mJpegCallback != null) {
        mJpegCallback.onPictureTaken((byte[])msg.obj, mCamera);
    }
    return;

mJpegCallback is declared as:

private PictureCallback mJpegCallback;

At this point we have to go back to how takePicture was originally invoked:

try {
    mCameraDevice.takePicture(mShutterCallback, mRawPictureCallback,
            mPostViewPictureCallback, new JpegPictureCallback(loc));
} catch (RuntimeException e) {
    e.printStackTrace();
    return false;
}

The jpeg slot here is what mJpegCallback ends up holding, but the object passed in is a JpegPictureCallback, while mJpegCallback is declared as PictureCallback. They are not the same class, so this deserves a closer look. Here is the definition of JpegPictureCallback:

private final class JpegPictureCallback implements PictureCallback {

    Location mLocation;

    public JpegPictureCallback(Location loc) {
        mLocation = loc;
    }

    public void onPictureTaken(
            final byte[] jpegData, final android.hardware.Camera camera) {
        if (mPausing) {
            if (mBurstImages > 0) {
                resetBurst();
                mBurstImages = 0;
                mHandler.sendEmptyMessageDelayed(RELEASE_CAMERA, CAMERA_RELEASE_DELAY);
            }
            return;
        }

        FocusManager.TempBracketingStates tempState =
                mFocusManager.getTempBracketingState();
        mJpegPictureCallbackTime = System.currentTimeMillis();
        // If postview callback has arrived, the captured image is displayed
        // in postview callback. If not, the captured image is displayed in
        // raw picture callback.
        if (mPostViewPictureCallbackTime != 0) {
            mShutterToPictureDisplayedTime =
                    mPostViewPictureCallbackTime - mShutterCallbackTime;
            mPictureDisplayedToJpegCallbackTime =
                    mJpegPictureCallbackTime - mPostViewPictureCallbackTime;
        } else {
            mShutterToPictureDisplayedTime =
                    mRawPictureCallbackTime - mShutterCallbackTime;
            mPictureDisplayedToJpegCallbackTime =
                    mJpegPictureCallbackTime - mRawPictureCallbackTime;
        }
        Log.v(TAG, "mPictureDisplayedToJpegCallbackTime = "
                + mPictureDisplayedToJpegCallbackTime + "ms");

        if (!mIsImageCaptureIntent) {
            enableCameraControls(true);

            if ((tempState != FocusManager.TempBracketingStates.RUNNING) &&
                    !mCaptureMode.equals(mExposureBracketing) &&
                    !mCaptureMode.equals(mZoomBracketing) &&
                    !mBurstRunning == true) {
                // We want to show the taken picture for a while, so we wait
                // for at least 0.5 second before restarting the preview.
                long delay = 500 - mPictureDisplayedToJpegCallbackTime;
                if (delay < 0) {
                    startPreview(true);
                    startFaceDetection();
                } else {
                    mHandler.sendEmptyMessageDelayed(RESTART_PREVIEW, delay);
                }
            }
        }

        if (!mIsImageCaptureIntent) {
            Size s = mParameters.getPictureSize();
            mImageSaver.addImage(jpegData, mLocation, s.width, s.height);
        } else {
            mJpegImageData = jpegData;
            if (!mQuickCapture) {
                showPostCaptureAlert();
            } else {
                doAttach();
            }
        }

        // Check this in advance of each shot so we don't add to shutter
        // latency. It's true that someone else could write to the SD card in
        // the mean time and fill it, but that could have happened between the
        // shutter press and saving the JPEG too.
        checkStorage();

        if (!mHandler.hasMessages(RESTART_PREVIEW)) {
            long now = System.currentTimeMillis();
            mJpegCallbackFinishTime = now - mJpegPictureCallbackTime;
            Log.v(TAG, "mJpegCallbackFinishTime = "
                    + mJpegCallbackFinishTime + "ms");
            mJpegPictureCallbackTime = 0;
        }

        if (mCaptureMode.equals(mExposureBracketing)) {
            mBurstImages--;
            if (mBurstImages == 0) {
                mHandler.sendEmptyMessageDelayed(RESTART_PREVIEW, 0);
            }
        }

        // reset burst in case of exposure bracketing
        if (mCaptureMode.equals(mExposureBracketing) && mBurstImages == 0) {
            mBurstImages = EXPOSURE_BRACKETING_COUNT;
            mParameters.set(PARM_BURST, mBurstImages);
            mCameraDevice.setParameters(mParameters);
        }

        if (mCaptureMode.equals(mZoomBracketing)) {
            mBurstImages--;
            if (mBurstImages == 0) {
                mHandler.sendEmptyMessageDelayed(RESTART_PREVIEW, 0);
            }
        }

        // reset burst in case of zoom bracketing
        if (mCaptureMode.equals(mZoomBracketing) && mBurstImages == 0) {
            mBurstImages = ZOOM_BRACKETING_COUNT;
            mParameters.set(PARM_BURST, mBurstImages);
            mCameraDevice.setParameters(mParameters);
        }

        if (tempState == FocusManager.TempBracketingStates.RUNNING) {
            mBurstImages--;
            if (mBurstImages == 0) {
                mHandler.sendEmptyMessageDelayed(RESTART_PREVIEW, 0);
                mTempBracketingEnabled = true;
                stopTemporalBracketing();
            }
        }

        if (mBurstRunning) {
            mBurstImages--;
            if (mBurstImages == 0) {
                resetBurst();
                mBurstRunning = false;
                mHandler.sendEmptyMessageDelayed(RESTART_PREVIEW, 0);
            }
        }
    }
}

So the two are related as interface and implementation: JpegPictureCallback implements PictureCallback, so an instance of it can be assigned to a PictureCallback reference (the reverse assignment would not compile); this is basic object-oriented knowledge. The class provides its own implementation of onPictureTaken, and that is precisely the method invoked from the handler above. Comparing this onPictureTaken with the preview path, the biggest difference is the saving step: takePicture ultimately persists the captured image, via mImageSaver.addImage for a normal capture or mJpegImageData for a capture intent.

That concludes the takePicture flow. The next step is to go down a level: what happens between the HAL and the driver, and what the driver itself does.