Android Volley Source Code Walkthrough
2016-05-27 11:04
Overview
This article studies Volley's workflow from the perspective of its source code. How to use Volley was already covered in my previous article, "Android Volley 通信框架应用解析" (an applied guide to the Volley networking framework).
Volley contains quite a few classes, so to keep the structure clear I will follow the steps you actually take when using Volley in a project. The first step is to create a RequestQueue object:
```java
RequestQueue requestQueue = Volley.newRequestQueue(context);
```
Stepping into that method, the code is as follows:
```java
/**
 * Creates a default instance of the worker pool and calls {@link RequestQueue#start()} on it.
 *
 * @param context A {@link Context} to use for creating the cache dir.
 * @return A started {@link RequestQueue} instance.
 */
public static RequestQueue newRequestQueue(Context context) {
    return newRequestQueue(context, null);
}
```

This method is a one-liner that delegates to an overload. Continuing:

```java
/**
 * Creates a default instance of the worker pool and calls {@link RequestQueue#start()} on it.
 *
 * @param context A {@link Context} to use for creating the cache dir.
 * @param stack An {@link HttpStack} to use for the network, or null for default.
 * @return A started {@link RequestQueue} instance.
 */
public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
    File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);

    String userAgent = "volley/0";
    try {
        String packageName = context.getPackageName();
        PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
        userAgent = packageName + "/" + info.versionCode;
    } catch (NameNotFoundException e) {
    }

    if (stack == null) {
        if (Build.VERSION.SDK_INT >= 9) {
            stack = new HurlStack();
        } else {
            // Prior to Gingerbread, HttpUrlConnection was unreliable.
            // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
            stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
        }
    }

    Network network = new BasicNetwork(stack);

    RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
    queue.start();

    return queue;
}
```
In the code above, when stack is null there is a branch on the current SDK version. On API level 9 (Android 2.3) and above, a HurlStack is created, which accesses the network via HttpURLConnection; below that, an HttpClientStack is created, which uses HttpClient instead. A Network object (a BasicNetwork) is then constructed around the stack and is responsible for executing network requests. Finally, a RequestQueue is created with that network object, its start() method is called, and the queue is returned.
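The version-based selection can be sketched in isolation. This is a minimal illustration of the dispatch-by-API-level pattern, not Volley's actual classes; the SDK level is passed in as a parameter so the sketch runs off-device.

```java
// Sketch of the version check in newRequestQueue(): pick an HttpStack
// implementation by API level. The classes here are illustrative stand-ins.
public class StackSelectionSketch {
    interface HttpStack {}
    static class HurlStack implements HttpStack {}        // HttpURLConnection-based
    static class HttpClientStack implements HttpStack {}  // Apache HttpClient-based

    static HttpStack chooseStack(int sdkInt) {
        // Gingerbread (API 9) fixed earlier HttpURLConnection bugs,
        // so it is preferred from there on.
        return sdkInt >= 9 ? new HurlStack() : new HttpClientStack();
    }

    public static void main(String[] args) {
        System.out.println(chooseStack(19).getClass().getSimpleName()); // HurlStack
        System.out.println(chooseStack(8).getClass().getSimpleName());  // HttpClientStack
    }
}
```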
Let's step into start() and see what it does:
```java
/**
 * Starts the dispatchers in this queue.
 */
public void start() {
    stop();  // Make sure any currently running dispatchers are stopped.
    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();

    // Create network dispatchers (and corresponding threads) up to the pool size.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}
```
This method starts the queue's dispatchers. It first creates mCacheDispatcher, a subclass of Thread, and calls start() on it; as the name suggests, this thread handles requests that may be served from cache. The for loop then creates and starts NetworkDispatcher threads up to the pool size (four by default), which dispatch network requests. So after a RequestQueue is initialized, a total of five threads are running in the background, constantly waiting for requests to arrive.
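The dispatcher model above can be sketched without any Android dependencies. This is a minimal illustration of the pattern (a BlockingQueue drained by a fixed pool of threads), not Volley's own code; all names here are illustrative.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of what RequestQueue.start() wires up: worker threads that loop
// forever, blocking on a shared queue until a request arrives -- like the
// four NetworkDispatchers draining mNetworkQueue.
public class DispatcherSketch {
    static List<String> handleAll(List<String> requests, int poolSize)
            throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>(requests);
        List<String> handled = new CopyOnWriteArrayList<>();
        Thread[] dispatchers = new Thread[poolSize];
        for (int i = 0; i < poolSize; i++) {
            dispatchers[i] = new Thread(() -> {
                while (true) {                  // dispatchers loop forever...
                    try {
                        handled.add("handled " + queue.take()); // ...blocking for work
                    } catch (InterruptedException e) {
                        return;                 // quit() interrupts the thread in Volley
                    }
                }
            });
            dispatchers[i].start();
        }
        while (handled.size() < requests.size()) Thread.sleep(10);
        for (Thread t : dispatchers) t.interrupt();
        return handled;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(handleAll(List.of("req-1", "req-2"), 4));
    }
}
```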
Once the RequestQueue has been created, you construct a Request object and simply call requestQueue.add(request); this kicks off a network request.
Now let's step into the add method:
```java
/**
 * Adds a Request to the dispatch queue.
 * @param request The request to service
 * @return The passed-in request
 */
public Request add(Request request) {
    // Tag the request as belonging to this queue and add it to the set of current requests.
    request.setRequestQueue(this);
    synchronized (mCurrentRequests) {
        mCurrentRequests.add(request);
    }

    // Process requests in the order they are added.
    request.setSequence(getSequenceNumber());
    request.addMarker("add-to-queue");

    // If the request is uncacheable, skip the cache queue and go straight to the network.
    if (!request.shouldCache()) {
        mNetworkQueue.add(request);
        return request;
    }

    // Insert request into stage if there's already a request with the same cache key in flight.
    synchronized (mWaitingRequests) {
        String cacheKey = request.getCacheKey();
        if (mWaitingRequests.containsKey(cacheKey)) {
            // There is already a request in flight. Queue up.
            Queue<Request> stagedRequests = mWaitingRequests.get(cacheKey);
            if (stagedRequests == null) {
                stagedRequests = new LinkedList<Request>();
            }
            stagedRequests.add(request);
            mWaitingRequests.put(cacheKey, stagedRequests);
            if (VolleyLog.DEBUG) {
                VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
            }
        } else {
            // Insert 'null' queue for this cacheKey, indicating there is now a request in
            // flight.
            mWaitingRequests.put(cacheKey, null);
            mCacheQueue.add(request);
        }
        return request;
    }
}
```
After a request is passed to add(), setRequestQueue(this) attaches the current RequestQueue to the request, and the request is added to the Set mCurrentRequests. The if statement then checks whether the request should be cached; this defaults to true and can be changed with setShouldCache(). If caching is disabled, the request goes straight into the network queue and is returned immediately. Otherwise, the method checks mWaitingRequests for an in-flight request with the same cacheKey (by default the URL): if one exists, the new request is staged behind it; if not, the key is marked as in flight and the request is added to the cache queue mCacheQueue.
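The mWaitingRequests staging idea can be sketched on its own. This is an illustrative reduction (requests collapsed to their cache keys), not Volley's actual implementation: the first request for a key is dispatched, and duplicates are parked until the in-flight one finishes.

```java
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;
import java.util.Queue;

// Sketch of in-flight request de-duplication keyed by cacheKey,
// mirroring the mWaitingRequests map in RequestQueue.add().
public class StagingSketch {
    private final Map<String, Queue<String>> waiting = new HashMap<>();

    /** Returns true if the request should be dispatched now, false if parked. */
    public synchronized boolean addRequest(String cacheKey) {
        if (waiting.containsKey(cacheKey)) {
            // An identical request is already in flight; stage this one.
            Queue<String> staged = waiting.get(cacheKey);
            if (staged == null) staged = new LinkedList<>();
            staged.add(cacheKey);
            waiting.put(cacheKey, staged);
            return false;
        }
        waiting.put(cacheKey, null); // 'null' marks the key as in flight
        return true;
    }

    public static void main(String[] args) {
        StagingSketch q = new StagingSketch();
        System.out.println(q.addRequest("http://example.com/a")); // true
        System.out.println(q.addRequest("http://example.com/a")); // false
    }
}
```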
Next we can look directly at CacheDispatcher's run() method:
```java
/**
 * Provides a thread for performing cache triage on a queue of requests.
 *
 * Requests added to the specified cache queue are resolved from cache.
 * Any deliverable response is posted back to the caller via a
 * {@link ResponseDelivery}. Cache misses and responses that require
 * refresh are enqueued on the specified network queue for processing
 * by a {@link NetworkDispatcher}.
 */
public class CacheDispatcher extends Thread {

    private static final boolean DEBUG = VolleyLog.DEBUG;

    /** The queue of requests coming in for triage. */
    private final BlockingQueue<Request<?>> mCacheQueue;

    /** The queue of requests going out to the network. */
    private final BlockingQueue<Request<?>> mNetworkQueue;

    /** The cache to read from. */
    private final Cache mCache;

    /** For posting responses. */
    private final ResponseDelivery mDelivery;

    /** Used for telling us to die. */
    private volatile boolean mQuit = false;

    /**
     * Creates a new cache triage dispatcher thread. You must call {@link #start()}
     * in order to begin processing.
     *
     * @param cacheQueue Queue of incoming requests for triage
     * @param networkQueue Queue to post requests that require network to
     * @param cache Cache interface to use for resolution
     * @param delivery Delivery interface to use for posting responses
     */
    public CacheDispatcher(
            BlockingQueue<Request<?>> cacheQueue, BlockingQueue<Request<?>> networkQueue,
            Cache cache, ResponseDelivery delivery) {
        mCacheQueue = cacheQueue;
        mNetworkQueue = networkQueue;
        mCache = cache;
        mDelivery = delivery;
    }

    /**
     * Forces this dispatcher to quit immediately. If any requests are still in
     * the queue, they are not guaranteed to be processed.
     */
    public void quit() {
        mQuit = true;
        interrupt();
    }

    @Override
    public void run() {
        if (DEBUG) VolleyLog.v("start new dispatcher");
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);

        // Make a blocking call to initialize the cache.
        mCache.initialize();

        while (true) {
            try {
                // Get a request from the cache triage queue, blocking until
                // at least one is available.
                final Request<?> request = mCacheQueue.take();
                request.addMarker("cache-queue-take");

                // If the request has been canceled, don't bother dispatching it.
                if (request.isCanceled()) {
                    request.finish("cache-discard-canceled");
                    continue;
                }

                // Attempt to retrieve this item from cache.
                Cache.Entry entry = mCache.get(request.getCacheKey());
                if (entry == null) {
                    request.addMarker("cache-miss");
                    // Cache miss; send off to the network dispatcher.
                    mNetworkQueue.put(request);
                    continue;
                }

                // If it is completely expired, just send it to the network.
                if (entry.isExpired()) {
                    request.addMarker("cache-hit-expired");
                    request.setCacheEntry(entry);
                    mNetworkQueue.put(request);
                    continue;
                }

                // We have a cache hit; parse its data for delivery back to the request.
                request.addMarker("cache-hit");
                Response<?> response = request.parseNetworkResponse(
                        new NetworkResponse(entry.data, entry.responseHeaders));
                request.addMarker("cache-hit-parsed");

                if (!entry.refreshNeeded()) {
                    // Completely unexpired cache hit. Just deliver the response.
                    mDelivery.postResponse(request, response);
                } else {
                    // Soft-expired cache hit. We can deliver the cached response,
                    // but we need to also send the request to the network for
                    // refreshing.
                    request.addMarker("cache-hit-refresh-needed");
                    request.setCacheEntry(entry);

                    // Mark the response as intermediate.
                    response.intermediate = true;

                    // Post the intermediate response back to the user and have
                    // the delivery then forward the request along to the network.
                    mDelivery.postResponse(request, response, new Runnable() {
                        @Override
                        public void run() {
                            try {
                                mNetworkQueue.put(request);
                            } catch (InterruptedException e) {
                                // Not much we can do about this.
                            }
                        }
                    });
                }
            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }
        }
    }
}
```
In run() we can see a while (true) infinite loop that takes requests from the cache queue mCacheQueue. Each request is first checked for cancellation, then looked up in the cache. If there is no cached entry (or it has fully expired), the request is re-queued onto the network queue mNetworkQueue. On a cache hit, the important parseNetworkResponse() method parses the cached data into a Response; mDelivery.postResponse() then hands the result back to the request, which finally delivers it through its deliverResponse() method.
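The triage decision above (miss, hard expiry, soft expiry, clean hit) can be sketched as a pure function. The Entry fields here mimic the ttl/softTtl semantics of Volley's Cache.Entry, but the class itself and the result strings are illustrative.

```java
// Sketch of the four-way cache triage performed in CacheDispatcher.run().
public class CacheTriageSketch {
    static class Entry {
        long ttl;      // hard expiry timestamp, ms (like Cache.Entry.ttl)
        long softTtl;  // soft expiry timestamp, ms (like Cache.Entry.softTtl)
        boolean isExpired(long now)     { return ttl < now; }
        boolean refreshNeeded(long now) { return softTtl < now; }
    }

    static String triage(Entry entry, long now) {
        if (entry == null) return "cache-miss -> network";
        if (entry.isExpired(now)) return "cache-hit-expired -> network";
        if (entry.refreshNeeded(now)) return "deliver + refresh on network";
        return "deliver from cache";
    }

    public static void main(String[] args) {
        Entry e = new Entry();
        e.ttl = 2000;
        e.softTtl = 1000;
        System.out.println(triage(null, 500)); // cache-miss -> network
        System.out.println(triage(e, 500));    // deliver from cache
        System.out.println(triage(e, 1500));   // deliver + refresh on network
        System.out.println(triage(e, 2500));   // cache-hit-expired -> network
    }
}
```

Note the soft-expiry branch: the stale response is delivered immediately (marked intermediate), and the same request is then forwarded to the network for a refresh.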
Let's continue with the NetworkDispatcher class:
```java
/**
 * Provides a thread for performing network dispatch from a queue of requests.
 *
 * Requests added to the specified queue are processed from the network via a
 * specified {@link Network} interface. Responses are committed to cache, if
 * eligible, using a specified {@link Cache} interface. Valid responses and
 * errors are posted back to the caller via a {@link ResponseDelivery}.
 */
public class NetworkDispatcher extends Thread {
    /** The queue of requests to service. */
    private final BlockingQueue<Request<?>> mQueue;
    /** The network interface for processing requests. */
    private final Network mNetwork;
    /** The cache to write to. */
    private final Cache mCache;
    /** For posting responses and errors. */
    private final ResponseDelivery mDelivery;
    /** Used for telling us to die. */
    private volatile boolean mQuit = false;

    /**
     * Creates a new network dispatcher thread. You must call {@link #start()}
     * in order to begin processing.
     *
     * @param queue Queue of incoming requests for triage
     * @param network Network interface to use for performing requests
     * @param cache Cache interface to use for writing responses to cache
     * @param delivery Delivery interface to use for posting responses
     */
    public NetworkDispatcher(BlockingQueue<Request<?>> queue,
            Network network, Cache cache, ResponseDelivery delivery) {
        mQueue = queue;
        mNetwork = network;
        mCache = cache;
        mDelivery = delivery;
    }

    /**
     * Forces this dispatcher to quit immediately. If any requests are still in
     * the queue, they are not guaranteed to be processed.
     */
    public void quit() {
        mQuit = true;
        interrupt();
    }

    @TargetApi(Build.VERSION_CODES.ICE_CREAM_SANDWICH)
    private void addTrafficStatsTag(Request<?> request) {
        // Tag the request (if API >= 14)
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.ICE_CREAM_SANDWICH) {
            TrafficStats.setThreadStatsTag(request.getTrafficStatsTag());
        }
    }

    @Override
    public void run() {
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
        while (true) {
            long startTimeMs = SystemClock.elapsedRealtime();
            Request<?> request;
            try {
                // Take a request from the queue.
                request = mQueue.take();
            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }

            try {
                request.addMarker("network-queue-take");

                // If the request was cancelled already, do not perform the
                // network request.
                if (request.isCanceled()) {
                    request.finish("network-discard-cancelled");
                    continue;
                }

                addTrafficStatsTag(request);

                // Perform the network request.
                NetworkResponse networkResponse = mNetwork.performRequest(request);
                request.addMarker("network-http-complete");

                // If the server returned 304 AND we delivered a response already,
                // we're done -- don't deliver a second identical response.
                if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                    request.finish("not-modified");
                    continue;
                }

                // Parse the response here on the worker thread.
                Response<?> response = request.parseNetworkResponse(networkResponse);
                request.addMarker("network-parse-complete");

                // Write to cache if applicable.
                // TODO: Only update cache metadata instead of entire record for 304s.
                if (request.shouldCache() && response.cacheEntry != null) {
                    mCache.put(request.getCacheKey(), response.cacheEntry);
                    request.addMarker("network-cache-written");
                }

                // Post the response back.
                request.markDelivered();
                mDelivery.postResponse(request, response);
            } catch (VolleyError volleyError) {
                volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                parseAndDeliverNetworkError(request, volleyError);
            } catch (Exception e) {
                VolleyLog.e(e, "Unhandled exception %s", e.toString());
                VolleyError volleyError = new VolleyError(e);
                volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                mDelivery.postError(request, volleyError);
            }
        }
    }

    private void parseAndDeliverNetworkError(Request<?> request, VolleyError error) {
        error = request.parseNetworkError(error);
        mDelivery.postError(request, error);
    }
}
```
Again, run() executes an infinite loop, so this thread keeps running. It takes a request from the queue, checks whether it has been canceled, then calls mNetwork.performRequest(request) to execute the HTTP request; mNetwork is a BasicNetwork instance created during initialization. Once a NetworkResponse comes back, request.parseNetworkResponse(networkResponse) parses the result, and finally ExecutorDelivery's postResponse() is called to deliver the parsed data as a callback.
Here is that code:
```java
@Override
public void postResponse(Request<?> request, Response<?> response) {
    postResponse(request, response, null);
}

@Override
public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
    request.markDelivered();
    request.addMarker("post-response");
    mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
}
```
The key code in ResponseDeliveryRunnable:
```java
@Override
public void run() {
    // If this request has canceled, finish it and don't deliver.
    if (mRequest.isCanceled()) {
        mRequest.finish("canceled-at-delivery");
        return;
    }

    // Deliver a normal response or error, depending.
    if (mResponse.isSuccess()) {
        mRequest.deliverResponse(mResponse.result);
    } else {
        mRequest.deliverError(mResponse.error);
    }

    // If this is an intermediate response, add a marker, otherwise we're done
    // and the request can be finished.
    if (mResponse.intermediate) {
        mRequest.addMarker("intermediate-response");
    } else {
        mRequest.finish("done");
    }

    // If we have been provided a post-delivery runnable, run it.
    if (mRunnable != null) {
        mRunnable.run();
    }
}
```
Here we finally see the familiar mRequest.deliverResponse(mResponse.result) call, which hands the parsed result data back to the caller.
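The delivery mechanism can be sketched on a plain JVM. In Volley, mResponsePoster is an Executor whose execute() posts the Runnable to the main thread via new Handler(Looper.getMainLooper()); here a same-thread Executor stands in for that handler, and the class and method names are illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executor;

// Sketch of ExecutorDelivery's core idea: wrap the response delivery in a
// Runnable and hand it to an Executor that forwards it to the UI thread.
public class DeliverySketch {
    static List<String> delivered = new ArrayList<>();

    static void postResponse(String result) {
        // Stand-in for the Handler-backed Executor used on Android.
        Executor responsePoster = Runnable::run;
        responsePoster.execute(() -> delivered.add("deliverResponse: " + result));
    }

    public static void main(String[] args) {
        postResponse("parsed-response");
        System.out.println(delivered); // [deliverResponse: parsed-response]
    }
}
```

This indirection is why request callbacks run on the main thread even though parsing happens on a dispatcher thread.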
That is Volley's request-handling flow in a nutshell: on the main thread, RequestQueue.add() enqueues a network request. The request first goes to the cache queue; if a usable cached result is found, it is read and parsed directly, then delivered back to the main thread. If no cached result is found, the request is moved to the network queue, where the HTTP request is sent, the response is parsed and written to cache, and the result is delivered back to the main thread.
OK, that concludes our tour of the Volley source code.