Volley Source Code Analysis

2018-08-06  luweicheng24
Volley is a widely used, lightweight networking library for Android. This article digs into how Volley is put together from a source-code perspective, assuming you are already comfortable using the library. A typical request looks like this:
        RequestQueue queue = Volley.newRequestQueue(this); // 1. create the global request queue
        StringRequest request = new StringRequest(Request.Method.GET, "https://blog.csdn.net/guolin_blog/article/details/17482095/", new Response.Listener<String>() {
            @Override
            public void onResponse(String response) {
                Log.d(TAG, "onResponse: " + response);
            }
        }, new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                Log.d(TAG, "onErrorResponse: " + error);
            }
        }); // 2. create a Request
        queue.add(request); // 3. add it to the request queue
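The queue is meant to be created once and shared across the app. A common pattern is to keep it in an application-scoped holder; here is a minimal sketch (the class name VolleyHolder is made up for illustration, not part of Volley):

    import android.content.Context;
    import com.android.volley.RequestQueue;
    import com.android.volley.toolbox.Volley;

    // Hypothetical helper (not part of Volley): keeps a single app-wide RequestQueue.
    public class VolleyHolder {
        private static RequestQueue sQueue;

        private VolleyHolder() {}

        public static synchronized RequestQueue get(Context context) {
            if (sQueue == null) {
                // Use the application context so the queue does not hold on to an Activity.
                sQueue = Volley.newRequestQueue(context.getApplicationContext());
            }
            return sQueue;
        }
    }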

A Volley network request really boils down to three steps: create the global RequestQueue, build a Request, and add it to the queue. Let's walk through the source behind the first and last of these.

  1. Creating the global request-queue manager, RequestQueue
public static RequestQueue newRequestQueue(Context context, HttpStack stack, int maxDiskCacheBytes) {
        // Create the directory used for the disk cache: data/data/<package>/cache/volley
        File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);
        String userAgent = "volley/0";  // User-Agent identifying the caller
        try {
            String packageName = context.getPackageName();
            PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
            userAgent = packageName + "/" + info.versionCode;
        } catch (NameNotFoundException e) {
        }
        // HttpStack is the object that ultimately executes an HTTP request. The Android
        // version is checked here: API 9 and above use HttpURLConnection (HurlStack),
        // older versions fall back to HttpClient (HttpClientStack); the differences
        // between the two are beyond the scope of this article.
        if (stack == null) {
            if (Build.VERSION.SDK_INT >= 9) {
                stack = new HurlStack();
            } else {
                // Prior to Gingerbread, HttpUrlConnection was unreliable.
                // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
                stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
            }
        }
        // Create a Network object. BasicNetwork wraps the stack and turns its raw
        // response into Volley's own NetworkResponse.
        Network network = new BasicNetwork(stack);

        // Create the RequestQueue, wiring in the network object and the disk cache.
        RequestQueue queue;
        if (maxDiskCacheBytes <= -1)
        {
            // No maximum size specified: use the default disk cache size.
            queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
        }
        else
        {
            // A disk cache size was specified: use the caller-supplied maximum.
            queue = new RequestQueue(new DiskBasedCache(cacheDir, maxDiskCacheBytes), network);
        }

        queue.start(); // start the dispatcher threads

        return queue;
    }
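A quick note on the two collaborators created above: HttpStack executes the raw HTTP exchange, while Network (here BasicNetwork) converts the stack's raw response into Volley's NetworkResponse. Their interfaces look roughly like this in the older Volley sources this article is based on (abridged; exact signatures vary by version):

    // Executes the HTTP request itself; implemented by HurlStack and HttpClientStack.
    public interface HttpStack {
        HttpResponse performRequest(Request<?> request, Map<String, String> additionalHeaders)
                throws IOException, AuthFailureError;
    }

    // Turns the stack's raw response into a NetworkResponse; implemented by BasicNetwork.
    public interface Network {
        NetworkResponse performRequest(Request<?> request) throws VolleyError;
    }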

queue.start() is the switch that sets request processing in motion:

  /**
     * Starts the dispatchers in this queue.
     */
    public void start() {
        stop();  // Make sure any currently running dispatchers are stopped.
        // Create the cache dispatcher and start it.
        mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
        mCacheDispatcher.start();

        // Create network dispatchers (and corresponding threads) up to the pool size.
        for (int i = 0; i < mDispatchers.length; i++) {
            NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                    mCache, mDelivery);
            mDispatchers[i] = networkDispatcher;
            networkDispatcher.start();
        }
    }

This method first stops any dispatchers that are already running, then creates one CacheDispatcher and, by default, four NetworkDispatchers. Both dispatchers are subclasses of Thread, so once start() is called their run() methods begin executing. The number four comes from the RequestQueue constructors, sketched below; after that we will come back to the stop() call at the top of the method.
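Where the four NetworkDispatchers and mDelivery come from: they are set up in the RequestQueue constructors. The excerpt below is abridged from the Volley source (the constant and field names are real, but the snippet is simplified and may differ slightly between versions):

    private static final int DEFAULT_NETWORK_THREAD_POOL_SIZE = 4;

    public RequestQueue(Cache cache, Network network) {
        this(cache, network, DEFAULT_NETWORK_THREAD_POOL_SIZE);
    }

    public RequestQueue(Cache cache, Network network, int threadPoolSize) {
        // Responses are delivered through a Handler bound to the main looper,
        // which is why listener callbacks run on the UI thread.
        this(cache, network, threadPoolSize,
                new ExecutorDelivery(new Handler(Looper.getMainLooper())));
    }

    public RequestQueue(Cache cache, Network network, int threadPoolSize,
            ResponseDelivery delivery) {
        mCache = cache;
        mNetwork = network;
        mDispatchers = new NetworkDispatcher[threadPoolSize];
        mDelivery = delivery;
    }

Now, back to the stop() call at the top of start():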

/**
     * Stops the cache and network dispatchers.
     */
    public void stop() {
        if (mCacheDispatcher != null) {
            mCacheDispatcher.quit();
        }
        for (int i = 0; i < mDispatchers.length; i++) {
            if (mDispatchers[i] != null) {
                mDispatchers[i].quit();
            }
        }
    }

It simply walks over the cache dispatcher and the network dispatchers and calls quit() on each:

 public void quit() { // set the quit flag for this thread, then interrupt it; a blocked take() throws InterruptedException
        mQuit = true;
        interrupt();
    }

At this point the HttpStack that executes requests, the Network that parses responses, one CacheDispatcher, and four NetworkDispatchers are all in place, and those five threads are already running. The next step is to add the StringRequest we built to the RequestQueue created at the beginning.

  2. queue.add(request): adding a request to the queue kicks off the actual request logic:
public <T> Request<T> add(Request<T> request) {
        // Tag the request as belonging to this queue and add it to the set of current requests.
        request.setRequestQueue(this); // associate the request with this global RequestQueue
        synchronized (mCurrentRequests) {
            // mCurrentRequests (a Set<Request>) tracks every request handed to the queue,
            // so that a single request -- or all of them -- can later be cancelled.
            mCurrentRequests.add(request);
        }

        // Process requests in the order they are added.
        request.setSequence(getSequenceNumber()); // assign the request's sequence number
        request.addMarker("add-to-queue");

        // If the request is uncacheable, skip the cache queue and go straight to the network.
        if (!request.shouldCache()) {
            mNetworkQueue.add(request);
            return request;
        }

        // Insert request into stage if there's already a request with the same cache key in flight.
        synchronized (mWaitingRequests) {
            String cacheKey = request.getCacheKey(); // the cache key for this request (method + URL)
            if (mWaitingRequests.containsKey(cacheKey)) {
                // An identical request is already in flight: park this one in the waiting map.
                // There is already a request in flight. Queue up.
                Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
                if (stagedRequests == null) {
                    stagedRequests = new LinkedList<Request<?>>();
                }
                stagedRequests.add(request);
                mWaitingRequests.put(cacheKey, stagedRequests);
                if (VolleyLog.DEBUG) {
                    VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
                }
            } else {
                // Insert 'null' queue for this cacheKey, indicating there is now a request in
                // flight. The null entry is a placeholder marking that a cacheable request
                // with this key is currently being processed.
                mWaitingRequests.put(cacheKey, null);
                mCacheQueue.add(request); // hand the request to the cache queue, processed by CacheDispatcher
            }
            return request;
        }
    }
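As the comment on mCurrentRequests hints, this set is what makes cancellation possible. A typical usage pattern is to tag requests and cancel them in bulk (the tag string below is only an illustrative example):

    request.setTag("MainActivity"); // tag the request before adding it
    queue.add(request);

    // Later, e.g. in the Activity's onStop():
    queue.cancelAll("MainActivity"); // cancels every in-flight request carrying this tag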

Next, let's see how the two dispatcher threads actually process requests. First, CacheDispatcher.run():

    @Override
    public void run() {
        if (DEBUG) VolleyLog.v("start new dispatcher");
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND); // run at background priority
        // Make a blocking call to initialize the cache.
        mCache.initialize(); // initialize the disk cache (a blocking call)

        Request<?> request;
        while (true) { // loop forever, pulling requests off the cache queue
            // release previous request object to avoid leaking request object when mQueue is drained.
            request = null;
            try {
                // Take a request from the queue.
                request = mCacheQueue.take(); // block until a request is available; quit() interrupts this
            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }
            try {
                request.addMarker("cache-queue-take");

                // If the request has been canceled, don't bother dispatching it.
                if (request.isCanceled()) { // skip requests that have already been cancelled
                    request.finish("cache-discard-canceled");
                    continue;
                }

                // Attempt to retrieve this item from the cache.
                Cache.Entry entry = mCache.get(request.getCacheKey());
                if (entry == null) { // cache miss: push the request onto the network queue
                    request.addMarker("cache-miss");
                    // Cache miss; send off to the network dispatcher.
                    mNetworkQueue.put(request);
                    continue;
                }

                // If it is completely expired, just send it to the network.
                if (entry.isExpired()) { // the cached entry has fully expired: attach it and go to the network
                    request.addMarker("cache-hit-expired");
                    request.setCacheEntry(entry);
                    mNetworkQueue.put(request);
                    continue;
                }

                // We have a cache hit; parse its data for delivery back to the request.
                request.addMarker("cache-hit"); // 缓存命中 从缓存中获取该缓存实体解析成Response  
                Response<?> response = request.parseNetworkResponse(
                        new NetworkResponse(entry.data, entry.responseHeaders));
                request.addMarker("cache-hit-parsed");

                if (!entry.refreshNeeded()) { // no refresh needed: deliver the parsed response via mDelivery (by default onto the main thread)
                    // Completely unexpired cache hit. Just deliver the response.
                    mDelivery.postResponse(request, response);
                } else {
                    // Soft-expired cache hit. We can deliver the cached response,
                    // but we need to also send the request to the network for
                    // refreshing.
                    request.addMarker("cache-hit-refresh-needed");
                    request.setCacheEntry(entry);

                    // Mark the response as intermediate.
                    response.intermediate = true;

                    // Post the intermediate response back to the user and have
                    // the delivery then forward the request along to the network.
                    final Request<?> finalRequest = request;
                    // Deliver the cached response first, then re-submit the request to the network queue.
                    mDelivery.postResponse(request, response, new Runnable() { 
                        @Override
                        public void run() {
                            try {
                                mNetworkQueue.put(finalRequest);
                            } catch (InterruptedException e) {
                                // Not much we can do about this.
                            }
                        }
                    });
                }
            } catch (Exception e) {
                VolleyLog.e(e, "Unhandled exception %s", e.toString());
            }
        }
    }
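Before moving on: mDelivery, used by both dispatchers, is by default an ExecutorDelivery built around a Handler on the main looper, so postResponse()/postError() simply post the callback onto the UI thread. An abridged sketch (simplified from the Volley source; only the relevant parts are shown):

    public class ExecutorDelivery implements ResponseDelivery {
        private final Executor mResponsePoster;

        public ExecutorDelivery(final Handler handler) {
            // Every delivery becomes a post() onto the handler's thread --
            // the main thread for a queue created via Volley.newRequestQueue().
            mResponsePoster = new Executor() {
                @Override
                public void execute(Runnable command) {
                    handler.post(command);
                }
            };
        }

        @Override
        public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
            request.markDelivered();
            request.addMarker("post-response");
            // ResponseDeliveryRunnable invokes request.deliverResponse()/deliverError()
            // and then runs the optional Runnable (used above for the soft-expired refresh).
            mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
        }

        // postResponse(request, response) and postError(...) follow the same pattern (omitted).
    }

NetworkDispatcher.run() has the same loop structure, but performs the actual HTTP request through mNetwork.performRequest():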

    @Override
    public void run() {
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND); // run at background priority
        Request<?> request;
        while (true) { // loop forever, pulling requests off the network queue
            long startTimeMs = SystemClock.elapsedRealtime();
            // release previous request object to avoid leaking request object when mQueue is drained.
            request = null;
            try {
                // Take a request from the queue.
                request = mQueue.take();
            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }

            try {
                request.addMarker("network-queue-take");

                // If the request was cancelled already, do not perform the
                // network request.
                if (request.isCanceled()) {
                    request.finish("network-discard-cancelled");
                    continue;
                }

                addTrafficStatsTag(request);

                // Perform the actual network request via the Network/HttpStack pair.
                NetworkResponse networkResponse = mNetwork.performRequest(request);
                request.addMarker("network-http-complete");

                // If the server returned 304 AND we delivered a response already,
                // we're done -- don't deliver a second identical response.
                if (networkResponse.notModified && request.hasHadResponseDelivered()) { // 304 and a response was already delivered
                    request.finish("not-modified"); // finish() removes the request from the RequestQueue
                    continue;
                }

                // Parse the response here on the worker thread.
                Response<?> response = request.parseNetworkResponse(networkResponse);
                request.addMarker("network-parse-complete");

                // Write to cache if applicable.
                // TODO: Only update cache metadata instead of entire record for 304s.          
                if (request.shouldCache() && response.cacheEntry != null) { // write the response to the disk cache
                    mCache.put(request.getCacheKey(), response.cacheEntry);
                    request.addMarker("network-cache-written");
                }

                // Post the response back.
                request.markDelivered();
                mDelivery.postResponse(request, response);
            } catch (VolleyError volleyError) {
                volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                parseAndDeliverNetworkError(request, volleyError);
            } catch (Exception e) {
                VolleyLog.e(e, "Unhandled exception %s", e.toString());
                VolleyError volleyError = new VolleyError(e);
                volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                mDelivery.postError(request, volleyError);
            }
        }
    }
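Both dispatchers end a request's life cycle by calling request.finish(...), which forwards to RequestQueue.finish(...). That is also where the requests parked in mWaitingRequests during add() get released: once the first request has primed the cache, the staged duplicates are moved onto the cache queue. An abridged sketch of that method (simplified from the Volley source, so details may differ by version):

    <T> void finish(Request<T> request) {
        // Remove from the set of requests currently being processed.
        synchronized (mCurrentRequests) {
            mCurrentRequests.remove(request);
        }
        if (request.shouldCache()) {
            synchronized (mWaitingRequests) {
                String cacheKey = request.getCacheKey();
                Queue<Request<?>> waitingRequests = mWaitingRequests.remove(cacheKey);
                if (waitingRequests != null) {
                    // The duplicates staged in add() can now be answered from cache,
                    // so they all go to the cache queue rather than back to the network.
                    mCacheQueue.addAll(waitingRequests);
                }
            }
        }
    }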

Because several threads consume these requests concurrently, RequestQueue holds two thread-safe queues:

 /** The cache triage queue. */
    private final PriorityBlockingQueue<Request<?>> mCacheQueue =
        new PriorityBlockingQueue<Request<?>>();

    /** The queue of requests that are actually going out to the network. */
    private final PriorityBlockingQueue<Request<?>> mNetworkQueue =
        new PriorityBlockingQueue<Request<?>>();

PriorityBlockingQueue is a blocking queue implementing the BlockingQueue interface. Internally it uses a (non-fair) ReentrantLock so that multiple dispatcher threads can safely take Requests from it, and it orders its elements by priority rather than strict FIFO.
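The priority ordering works because Request implements Comparable: requests are sorted first by their Priority and then by the sequence number assigned in add(), which gives FIFO behavior within the same priority. Abridged from Request<T> (comments mine):

    @Override
    public int compareTo(Request<T> other) {
        Priority left = this.getPriority();
        Priority right = other.getPriority();
        // Higher-priority requests sort toward the head of the queue;
        // equal priorities fall back to FIFO order by sequence number.
        return left == right
                ? this.mSequence - other.mSequence
                : right.ordinal() - left.ordinal();
    }

So even though the dispatchers simply take() from the head of the queue, higher-priority requests are served first.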
