volley(4)

2016-07-10 · 反复横跳的龙套

3.6 CacheDispatcher & Cache

From Volley's workflow diagram we already know that handling a request in Volley falls into two parts: first the requested data is looked up in the cache, and only if that lookup fails is a network request made. The previous section introduced NetworkDispatcher, which is responsible for Volley's network requests. As its name suggests, CacheDispatcher works in a similar way, except that it is responsible for Volley's caching. This section walks through CacheDispatcher's workflow.

3.6.1 CacheDispatcher

CacheDispatcher's workflow is similar to NetworkDispatcher's, so by analogy with NetworkDispatcher it can be broken down into the following steps:

  1. Take a Request from the CacheQueue (a BlockingQueue) that holds pending requests
  2. Ask the Cache object for the cached data corresponding to the Request, a Cache.Entry
  3. If the Entry is null, the response for this Request is not in the Cache, so the Request is added to the NetworkQueue to be handled by a NetworkDispatcher
  4. If the Entry has expired, the Request is likewise added to the NetworkQueue
  5. If the Entry is usable, a NetworkResponse is built from the Entry, and Request.parseNetworkResponse converts that NetworkResponse into a Response object (the same as in NetworkDispatcher)
  6. The Response is delivered back to the main thread via the ResponseDelivery (again the same as in NetworkDispatcher)

As you can see, the last two steps of CacheDispatcher are identical to NetworkDispatcher's; only the way the NetworkResponse is obtained differs. NetworkDispatcher gets its NetworkResponse by executing the Request over the network via Network, while CacheDispatcher builds it from the Request's cached entry retrieved through Cache. The flow chart below shows this.

(Figure: CacheDispatcher workflow)

The flow chart has almost the same shape as NetworkDispatcher's, so the source code that follows is straightforward.

public class CacheDispatcher extends Thread {

    private static final boolean DEBUG = VolleyLog.DEBUG;

    /** The queue of requests coming in for triage. */
    private final BlockingQueue<Request> mCacheQueue;

    /** The queue of requests going out to the network. */
    private final BlockingQueue<Request> mNetworkQueue;

    /** The cache to read from. */
    private final Cache mCache;

    /** For posting responses. */
    private final ResponseDelivery mDelivery;

    /** Used for telling us to die. */
    private volatile boolean mQuit = false;

    /**
     * Creates a new cache triage dispatcher thread.  You must call {@link #start()}
     * in order to begin processing.
     *
     * @param cacheQueue Queue of incoming requests for triage
     * @param networkQueue Queue to post requests that require network to
     * @param cache Cache interface to use for resolution
     * @param delivery Delivery interface to use for posting responses
     */
    public CacheDispatcher(
            BlockingQueue<Request> cacheQueue, BlockingQueue<Request> networkQueue,
            Cache cache, ResponseDelivery delivery) {
        mCacheQueue = cacheQueue;
        mNetworkQueue = networkQueue;
        mCache = cache;
        mDelivery = delivery;
    }

    /**
     * Forces this dispatcher to quit immediately.  If any requests are still in
     * the queue, they are not guaranteed to be processed.
     */
    public void quit() {
        mQuit = true;
        interrupt();
    }

    @Override
    public void run() {
        if (DEBUG) VolleyLog.v("start new dispatcher");
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);

        // Make a blocking call to initialize the cache.
        mCache.initialize();

        while (true) {
            try {
                // Get a request from the cache triage queue, blocking until
                // at least one is available.
                final Request request = mCacheQueue.take();
                request.addMarker("cache-queue-take");

                // If the request has been canceled, don't bother dispatching it.
                if (request.isCanceled()) {
                    request.finish("cache-discard-canceled");
                    continue;
                }

                // Attempt to retrieve this item from cache.
                Cache.Entry entry = mCache.get(request.getCacheKey());
                if (entry == null) {
                    request.addMarker("cache-miss");
                    // Cache miss; send off to the network dispatcher.
                    mNetworkQueue.put(request);
                    continue;
                }

                // If it is completely expired, just send it to the network.
                if (entry.isExpired()) {
                    request.addMarker("cache-hit-expired");
                    request.setCacheEntry(entry);
                    mNetworkQueue.put(request);
                    continue;
                }

                // We have a cache hit; parse its data for delivery back to the request.
                request.addMarker("cache-hit");
                Response<?> response = request.parseNetworkResponse(
                        new NetworkResponse(entry.data, entry.responseHeaders));
                request.addMarker("cache-hit-parsed");

                if (!entry.refreshNeeded()) {
                    // Completely unexpired cache hit. Just deliver the response.
                    mDelivery.postResponse(request, response);
                } else {
                    // Soft-expired cache hit. We can deliver the cached response,
                    // but we need to also send the request to the network for
                    // refreshing.
                    request.addMarker("cache-hit-refresh-needed");
                    request.setCacheEntry(entry);

                    // Mark the response as intermediate.
                    response.intermediate = true;

                    // Post the intermediate response back to the user and have
                    // the delivery then forward the request along to the network.
                    mDelivery.postResponse(request, response, new Runnable() {
                        @Override
                        public void run() {
                            try {
                                mNetworkQueue.put(request);
                            } catch (InterruptedException e) {
                                // Not much we can do about this.
                            }
                        }
                    });
                }

            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }
        }
    }
}

As with NetworkDispatcher, there are two main points to take away from the CacheDispatcher source:

  1. The fields and the constructor. These members are essential to CacheDispatcher's work and are easy to remember once you know the workflow: take a Request from the CacheQueue, fetch the corresponding data through the Cache; if it is null or expired, add the Request to the NetworkQueue so it is fetched over the network, otherwise deliver the data to the main thread via the ResponseDelivery
  2. CacheDispatcher is itself a Thread, and all the work happens in run(). Combining that with the workflow above, the key lines can be extracted as shown below
// 1. Take a Request from the queue
final Request request = mCacheQueue.take();
// 2. Ask the Cache for the request's cached data
Cache.Entry entry = mCache.get(request.getCacheKey());
// 3. If the data is null or expired, move the request onto the NetworkQueue
if (entry == null) {
    mNetworkQueue.put(request);
    continue;
}
if (entry.isExpired()) {
    request.setCacheEntry(entry);
    mNetworkQueue.put(request);
    continue;
}
// 4. The Request's parse method turns the NetworkResponse into a Response object
Response<?> response = request.parseNetworkResponse(
        new NetworkResponse(entry.data, entry.responseHeaders));
// 5. The ResponseDelivery posts the response back to the main thread
mDelivery.postResponse(request, response);

3.6.2 Cache

From the analysis above we know that request handling in Volley splits into two flows: one pulls data from the cache and is implemented by CacheDispatcher, the other fetches data over the network and is implemented by NetworkDispatcher. In CacheDispatcher's workflow the caching itself is provided by the Cache object, so in this section we study Volley's Cache interface and its implementation class, in the same way we studied the Network class used by NetworkDispatcher.

Cache is just an interface and its source is simple; the part worth attention is the nested Entry class, which holds the cached data and its metadata.

/**
 * An interface for a cache keyed by a String with a byte array as data.
 */
public interface Cache {

    public Entry get(String key);
    public void put(String key, Entry entry);
    public void initialize();
    public void invalidate(String key, boolean fullExpire);
    public void remove(String key);
    public void clear();

    /**
     * Data and metadata for an entry returned by the cache.
     */
    public static class Entry {
        /** The data returned from cache. */
        public byte[] data;

        /** ETag for cache coherency. */
        public String etag;

        /** Date of this response as reported by the server. */
        public long serverDate;

        /** TTL for this record. */
        public long ttl;

        /** Soft TTL for this record. */
        public long softTtl;

        /** Immutable response headers as received from server; must be non-null. */
        public Map<String, String> responseHeaders = Collections.emptyMap();

        /** True if the entry is expired. */
        public boolean isExpired() {
            return this.ttl < System.currentTimeMillis();
        }

        /** True if a refresh is needed from the original data source. */
        public boolean refreshNeeded() {
            return this.softTtl < System.currentTimeMillis();
        }
    }

}


As the code shows, Cache simply exposes basic put/get/remove-style operations, nothing hard to follow. Next let's look at Volley's implementation class, DiskBasedCache.
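To get a feel for how little the interface demands, here is a minimal sketch of an implementation that caches nothing (Volley's toolbox ships a similar NoCache class); the class name here is just illustrative:

import com.android.volley.Cache;

public class NoOpCache implements Cache {
    @Override public Entry get(String key) { return null; }              // always a cache miss
    @Override public void put(String key, Entry entry) { }               // discard the data
    @Override public void initialize() { }
    @Override public void invalidate(String key, boolean fullExpire) { }
    @Override public void remove(String key) { }
    @Override public void clear() { }
}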

Before reading the DiskBasedCache source, recall how caches are usually implemented on Android. Caching generally comes in two flavors: memory caching, typically built on LruCache, and disk caching, typically built on DiskLruCache. Under the hood LruCache is backed by a LinkedHashMap with accessOrder set to true, while DiskLruCache implements its cache through file reads and writes.
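As a quick reminder of why accessOrder matters, here is a small self-contained sketch showing how a LinkedHashMap with accessOrder set to true keeps the least recently used key at the head of its iteration order:

import java.util.LinkedHashMap;
import java.util.Map;

public class LruOrderDemo {
    public static void main(String[] args) {
        // accessOrder = true: iteration order becomes least-recently-accessed first
        Map<String, String> map = new LinkedHashMap<String, String>(16, 0.75f, true);
        map.put("a", "1");
        map.put("b", "2");
        map.put("c", "3");
        map.get("a");                      // touching "a" moves it to the tail
        System.out.println(map.keySet());  // prints [b, c, a] -> "b" is now the LRU entry
    }
}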

So it's a fair guess that DiskBasedCache uses the same two techniques. Let's look at the key parts of its code.

DiskBasedCache

/**
 * Cache implementation that caches files directly onto the hard disk in the specified
 * directory. The default disk usage size is 5MB, but is configurable.
 */
public class DiskBasedCache implements Cache {

    /** Map of the Key, CacheHeader pairs */
    private final Map<String, CacheHeader> mEntries =
            new LinkedHashMap<String, CacheHeader>(16, .75f, true);

    /** Total amount of space currently used by the cache in bytes. */
    private long mTotalSize = 0;

    /** The root directory to use for the cache. */
    private final File mRootDirectory;

    /** The maximum size of the cache in bytes. */
    private final int mMaxCacheSizeInBytes;

    /** Default maximum disk usage in bytes. */
    private static final int DEFAULT_DISK_USAGE_BYTES = 5 * 1024 * 1024;

    /** High water mark percentage for the cache */
    private static final float HYSTERESIS_FACTOR = 0.9f;

    /** Magic number for current version of cache file format. */
    private static final int CACHE_MAGIC = 0x20120504;

    /**
     * Constructs an instance of the DiskBasedCache at the specified directory.
     * @param rootDirectory The root directory of the cache.
     * @param maxCacheSizeInBytes The maximum size of the cache in bytes.
     */
    public DiskBasedCache(File rootDirectory, int maxCacheSizeInBytes) {
        mRootDirectory = rootDirectory;
        mMaxCacheSizeInBytes = maxCacheSizeInBytes;
    }

    /**
     * Constructs an instance of the DiskBasedCache at the specified directory using
     * the default maximum cache size of 5MB.
     * @param rootDirectory The root directory of the cache.
     */
    public DiskBasedCache(File rootDirectory) {
        this(rootDirectory, DEFAULT_DISK_USAGE_BYTES);
    }


    @Override
    public synchronized Entry get(String key) {
        CacheHeader entry = mEntries.get(key);
        // if the entry does not exist, return.
        if (entry == null) {
            return null;
        }

        File file = getFileForKey(key);
        CountingInputStream cis = null;
        try {
            cis = new CountingInputStream(new FileInputStream(file));
            CacheHeader.readHeader(cis); // eat header
            byte[] data = streamToBytes(cis, (int) (file.length() - cis.bytesRead));
            return entry.toCacheEntry(data);
        } catch (IOException e) {
            VolleyLog.d("%s: %s", file.getAbsolutePath(), e.toString());
            remove(key);
            return null;
        } finally {
            if (cis != null) {
                try {
                    cis.close();
                } catch (IOException ioe) {
                    return null;
                }
            }
        }
    }


    @Override
    public synchronized void put(String key, Entry entry) {
        pruneIfNeeded(entry.data.length);
        File file = getFileForKey(key);
        try {
            FileOutputStream fos = new FileOutputStream(file);
            CacheHeader e = new CacheHeader(key, entry);
            e.writeHeader(fos);
            fos.write(entry.data);
            fos.close();
            putEntry(key, e);
            return;
        } catch (IOException e) {
        }
        boolean deleted = file.delete();
        if (!deleted) {
            VolleyLog.d("Could not clean up file %s", file.getAbsolutePath());
        }
    }

    @Override
    public synchronized void remove(String key) {
        boolean deleted = getFileForKey(key).delete();
        removeEntry(key);
        if (!deleted) {
            VolleyLog.d("Could not delete cache entry for key=%s, filename=%s",
                    key, getFilenameForKey(key));
        }
    }

    public File getFileForKey(String key) {
        return new File(mRootDirectory, getFilenameForKey(key));
    }

    /**
     * Checks whether the cache would exceed its size threshold, and if so evicts entries.
     */
    private void pruneIfNeeded(int neededSpace) {
        if ((mTotalSize + neededSpace) < mMaxCacheSizeInBytes) {
            return;
        }
        if (VolleyLog.DEBUG) {
            VolleyLog.v("Pruning old cache entries.");
        }

        long before = mTotalSize;
        int prunedFiles = 0;
        long startTime = SystemClock.elapsedRealtime();

        Iterator<Map.Entry<String, CacheHeader>> iterator = mEntries.entrySet().iterator();
        while (iterator.hasNext()) {
            Map.Entry<String, CacheHeader> entry = iterator.next();
            CacheHeader e = entry.getValue();
            boolean deleted = getFileForKey(e.key).delete();
            if (deleted) {
                mTotalSize -= e.size;
            } else {
               VolleyLog.d("Could not delete cache entry for key=%s, filename=%s",
                       e.key, getFilenameForKey(e.key));
            }
            iterator.remove();
            prunedFiles++;

            if ((mTotalSize + neededSpace) < mMaxCacheSizeInBytes * HYSTERESIS_FACTOR) {
                break;
            }
        }

        if (VolleyLog.DEBUG) {
            VolleyLog.v("pruned %d files, %d bytes, %d ms",
                    prunedFiles, (mTotalSize - before), SystemClock.elapsedRealtime() - startTime);
        }
    }

    private void putEntry(String key, CacheHeader entry) {
        if (!mEntries.containsKey(key)) {
            mTotalSize += entry.size;
        } else {
            CacheHeader oldEntry = mEntries.get(key);
            mTotalSize += (entry.size - oldEntry.size);
        }
        mEntries.put(key, entry);
    }

    /**
     * Removes the entry identified by 'key' from the cache.
     */
    private void removeEntry(String key) {
        CacheHeader entry = mEntries.get(key);
        if (entry != null) {
            mTotalSize -= entry.size;
            mEntries.remove(key);
        }
    }


    ……

}


As the source shows, DiskBasedCache doesn't simply reuse LruCache or DiskLruCache; instead it combines the two ideas: a LinkedHashMap keeps the header/key information in memory as an index, while the actual data is stored in files on disk. The key methods are explained below.

  1. put: First pruneIfNeeded is called to keep the disk cache under its size threshold, then the file for the key is obtained, entry.data is written out to it, and putEntry(key, e) stores the key's header information in the LinkedHashMap, i.e. in the memory cache. Although the LinkedHashMap does not hold the actual data, it acts as an index: when get is later called for a key, the map is checked first, and only if the key is present (meaning the data exists on disk) does the code go to disk for the actual bytes. Because the LinkedHashMap lives in memory, checking it first is far cheaper than probing the disk.
  2. get: As described under put, the first step is mEntries.get(key), which checks whether the in-memory index has header information for the key; only if it does is the data read from disk, which keeps lookups efficient.
  3. pruneIfNeeded: Called from put to keep the disk cache under its size threshold. As its source shows, entries are evicted by walking the LinkedHashMap's iterator, which exploits the map's LRU behaviour: with accessOrder set to true, the least recently used entries sit at the head of the map, so they are deleted first.

In short, the key to DiskBasedCache is a LinkedHashMap kept in memory as an index over the cache files on disk; the actual caching happens on disk, and having the index avoids hitting the disk for every lookup, which greatly improves efficiency.
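To make the put/get flow above concrete, here is a minimal sketch of driving DiskBasedCache on its own, outside a RequestQueue; it assumes a Context named context is available, and the directory name, key and TTL values are made up for illustration:

File cacheDir = new File(context.getCacheDir(), "volley-demo");  // hypothetical directory name
Cache cache = new DiskBasedCache(cacheDir);
cache.initialize();                                   // blocking scan of any existing cache files

Cache.Entry entry = new Cache.Entry();
entry.data = "hello".getBytes();
entry.ttl = System.currentTimeMillis() + 60 * 1000;   // hard expiry: one minute from now
entry.softTtl = entry.ttl;                            // no soft-refresh window
cache.put("demo-key", entry);                         // header goes into the in-memory index, data into a file

Cache.Entry back = cache.get("demo-key");             // index lookup first, then file read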

3.7 RequestQueue

Looking at Volley's workflow, the analysis of NetworkDispatcher and CacheDispatcher actually covers all of Volley's real work, but we have not yet seen when these two dispatchers are started. That is the subject of this section, RequestQueue. Recalling how Volley is used, you will also see that RequestQueue is the entry point to the whole framework. First, a reminder of how Volley is used.

How Volley is used

// 1. Create the RequestQueue
RequestQueue queue = Volley.newRequestQueue(getApplicationContext());

// 2. Create a Request and handle the callbacks in its listeners, updating the UI there
StringRequest stringRequest = new StringRequest(
    "http://10.132.25.104/JsonData/data1.php", 
    new Response.Listener<String>() {
    @Override
    public void onResponse(String response) {
        // TODO Auto-generated method stub
        textView.setText(response);
    }
}, new Response.ErrorListener() {

    @Override
    public void onErrorResponse(VolleyError error) {
        Log.e(TAG,error.getMessage());        
    }
}); 

// 3. Add the Request to the RequestQueue
queue.add(stringRequest);

As you can see, the parts of RequestQueue that matter most are the constructor and the add method. Let's look at how the RequestQueue source implements them, again keeping only the important parts.

public class RequestQueue {

    private AtomicInteger mSequenceGenerator = new AtomicInteger();

    private final Map<String, Queue<Request>> mWaitingRequests =
            new HashMap<String, Queue<Request>>();
    private final Set<Request> mCurrentRequests = new HashSet<Request>();


    private final PriorityBlockingQueue<Request> mCacheQueue =
        new PriorityBlockingQueue<Request>();
    private final PriorityBlockingQueue<Request> mNetworkQueue =
        new PriorityBlockingQueue<Request>();

    private static final int DEFAULT_NETWORK_THREAD_POOL_SIZE = 4;

    private final Cache mCache;
    private final Network mNetwork;

    private final ResponseDelivery mDelivery;

    private NetworkDispatcher[] mDispatchers;
    private CacheDispatcher mCacheDispatcher;



    public RequestQueue(Cache cache, Network network, int threadPoolSize,
            ResponseDelivery delivery) {
        mCache = cache;
        mNetwork = network;
        mDispatchers = new NetworkDispatcher[threadPoolSize];
        mDelivery = delivery;
    }

    public RequestQueue(Cache cache, Network network, int threadPoolSize) {
        this(cache, network, threadPoolSize,
                new ExecutorDelivery(new Handler(Looper.getMainLooper())));
    }

    public RequestQueue(Cache cache, Network network) {
        this(cache, network, DEFAULT_NETWORK_THREAD_POOL_SIZE);
    }



    public void start() {
        stop();  // Make sure any currently running dispatchers are stopped.
        // Create the cache dispatcher and start it.
        mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
        mCacheDispatcher.start();

        // Create network dispatchers (and corresponding threads) up to the pool size.
        for (int i = 0; i < mDispatchers.length; i++) {
            NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                    mCache, mDelivery);
            mDispatchers[i] = networkDispatcher;
            networkDispatcher.start();
        }
    }


    public void stop() {
        if (mCacheDispatcher != null) {
            mCacheDispatcher.quit();
        }
        for (int i = 0; i < mDispatchers.length; i++) {
            if (mDispatchers[i] != null) {
                mDispatchers[i].quit();
            }
        }
    }

    public Request add(Request request) {
        // Tag the request as belonging to this queue and add it to the set of current requests.
        request.setRequestQueue(this);
        synchronized (mCurrentRequests) {
            mCurrentRequests.add(request);
        }

        // Process requests in the order they are added.
        request.setSequence(getSequenceNumber());
        request.addMarker("add-to-queue");

        // If the request is uncacheable, skip the cache queue and go straight to the network.
        if (!request.shouldCache()) {
            mNetworkQueue.add(request);
            return request;
        }

        // Insert request into stage if there's already a request with the same cache key in flight.
        synchronized (mWaitingRequests) {
            String cacheKey = request.getCacheKey();
            if (mWaitingRequests.containsKey(cacheKey)) {
                // There is already a request in flight. Queue up.
                Queue<Request> stagedRequests = mWaitingRequests.get(cacheKey);
                if (stagedRequests == null) {
                    stagedRequests = new LinkedList<Request>();
                }
                stagedRequests.add(request);
                mWaitingRequests.put(cacheKey, stagedRequests);
                if (VolleyLog.DEBUG) {
                    VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
                }
            } else {
                // Insert 'null' queue for this cacheKey, indicating there is now a request in
                // flight.
                mWaitingRequests.put(cacheKey, null);
                mCacheQueue.add(request);
            }
            return request;
        }
    }

    ……
}

There are three key points in the source:

  1. The fields and the constructors. RequestQueue's members are essentially the two dispatcher classes, NetworkDispatcher and CacheDispatcher, plus everything their constructors need: the NetworkQueue, CacheQueue, Network, Cache and ResponseDelivery. Note that this is also where the main-thread Handler is passed in, so you could say the main configuration of the Volley framework happens here.
    Looking at the simplest RequestQueue constructor, you can see that a Cache and a Network are the minimum needed to create a RequestQueue. That is easy to understand: Volley's main working classes are the two dispatchers, and their core collaborators are exactly Cache and Network.
  2. The start() method. This is RequestQueue's entry point: it creates and starts Volley's two kinds of worker, one CacheDispatcher and several NetworkDispatchers, after which each of them loops, taking Requests from its own queue (CacheQueue or NetworkQueue) and processing them.
  3. The add(Request request) method. As said above, the dispatchers take Requests from their own queues, so where do those Requests come from? The answer is RequestQueue's add method: as its source shows, it first checks whether the added Request should be cached; if not, the Request goes straight onto the NetworkQueue, otherwise it is added to the CacheQueue.

Note that the NetworkDispatcher member of RequestQueue is an array, i.e. one RequestQueue has several NetworkDispatchers working at the same time. Each NetworkDispatcher is a Thread, so the NetworkDispatcher array is effectively a thread pool, with a default size of 4. So don't assume that the absence of a ThreadPoolExecutor means there is no thread pool.
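If the default pool size doesn't suit you, the constructors shown above can be called directly instead of going through Volley.newRequestQueue; a minimal sketch, assuming a Context named context is available, might look like this:

File cacheDir = new File(context.getCacheDir(), "volley");
Network network = new BasicNetwork(new HurlStack());
// Third argument = number of NetworkDispatcher threads (the default is 4)
RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network, 2);
queue.start();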

That is the whole RequestQueue workflow, and it is quite simple: start the two kinds of dispatcher, then route each Request added via add() to the appropriate queue.

3.8 Volley

The previous sections have covered Volley's entire workflow; all that's left is the very first entry point, the Volley class itself. Its job is to configure the parameters the framework needs before it starts working, such as the cache directory and the HttpStack engine used for network requests. The work is simple, so let's go straight to the Volley class source to see what it does.

public class Volley {

    /** Default on-disk cache directory. */
    private static final String DEFAULT_CACHE_DIR = "volley";

    /**
     * Creates a default instance of the worker pool and calls {@link RequestQueue#start()} on it.
     *
     * @param context A {@link Context} to use for creating the cache dir.
     * @param stack An {@link HttpStack} to use for the network, or null for default.
     * @return A started {@link RequestQueue} instance.
     */
    public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
        File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);

        String userAgent = "volley/0";
        try {
            String packageName = context.getPackageName();
            PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
            userAgent = packageName + "/" + info.versionCode;
        } catch (NameNotFoundException e) {
        }

        if (stack == null) {
            if (Build.VERSION.SDK_INT >= 9) {
                stack = new HurlStack();
            } else {
                // Prior to Gingerbread, HttpUrlConnection was unreliable.
                // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
                stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
            }
        }

        Network network = new BasicNetwork(stack);

        RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
        queue.start();

        return queue;
    }

    public static RequestQueue newRequestQueue(Context context) {
        return newRequestQueue(context, null);
    }
}

As you can see, Volley has only the newRequestQueue methods, whose main job is to build a RequestQueue and start it. From the RequestQueue section we know that building a RequestQueue mostly means building a Cache and a Network: building the Cache is mainly about configuring the cache directory, and building the Network is about choosing the HttpStack engine. In short, Volley's job is to configure the cache directory and the HttpStack, build the RequestQueue, and start it.

4 Volley Summary

Finally, now that the main working classes of the Volley framework have been covered, its overall workflow should be very clear.

(Figure: Volley workflow)

Volley's flow boils down to three steps; here they are again, mapped onto the actual classes discussed above:

  1. Volley.newRequestQueue() creates the RequestQueue, and RequestQueue.start() configures and starts one CacheDispatcher, the Cache Thread responsible for caching, together with four NetworkDispatchers, the Network Threads responsible for network work, i.e. the thread pool.
    Requests are then added via RequestQueue.add(Request r). It first checks whether the Request should be cached: if not, the Request is added to the NetworkQueue for a NetworkDispatcher to execute, otherwise it is added to the CacheQueue to be handled by the CacheDispatcher.
  2. CacheDispatcher's flow: it keeps taking Requests from the CacheQueue; for each one it first asks the Cache for the corresponding response data; if the data is null or expired, the Request is moved onto the NetworkQueue; if the data is usable, request.parseNetworkResponse wraps it in a Response object, and finally ResponseDelivery.postResponse sends the Response back to the main thread.
  3. As seen above, when CacheDispatcher cannot handle a Request it moves it onto the NetworkQueue for a NetworkDispatcher. NetworkDispatcher's flow: take a Request from the NetworkQueue (put there by CacheDispatcher, or directly by add() for uncacheable requests), execute it through Network to obtain a NetworkResponse, convert that to a Response via request.parseNetworkResponse, write the data into the Cache if needed (the same Cache used by CacheDispatcher), and finally deliver the Response back to the main thread through the ResponseDelivery.

That is how Volley works. To finish, a few small details worth noting:

  1. Threads: one cache thread and four network threads
  2. Cache size: 5 MB by default
  3. Typical use case: very frequent, small-payload network requests

Relationship with OkHttp: Volley is a framework that wraps request dispatching, loading, caching, threading and callbacks, while OkHttp provides the transport layer, i.e. the actual loading, and does so with more capability. OkHttp can therefore replace Volley's transport layer (for example an OkHttpStack in place of HurlStack), which gives Volley a stronger loading engine; that is what "Volley + OkHttp" means.
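As a sketch of what such an OkHttpStack might look like (assuming OkHttp 2.x together with its okhttp-urlconnection module, which provides OkUrlFactory; the class name OkHttpStack is ours, not part of either library):

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

import com.android.volley.toolbox.HurlStack;
import com.squareup.okhttp.OkHttpClient;
import com.squareup.okhttp.OkUrlFactory;

public class OkHttpStack extends HurlStack {
    private final OkUrlFactory mFactory = new OkUrlFactory(new OkHttpClient());

    @Override
    protected HttpURLConnection createConnection(URL url) throws IOException {
        return mFactory.open(url);   // let OkHttp handle the actual connection
    }
}

It would then be handed to Volley when building the queue, e.g. Volley.newRequestQueue(context, new OkHttpStack()).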

Things to note:

  1. Since Android 6.0 the Apache HTTP packages have been removed from the SDK, and Volley uses Apache classes, so when using Volley on 6.0 and later you need to add the Apache HTTP library manually
  2. When using a third-party networking library such as Volley or OkHttp, whatever actually handles the requests, wrap it in a layer of your own instead of calling the library's methods directly everywhere. Then switching networking libraries later does not force sweeping changes: the main program hardly changes, and only your wrapper class needs to be swapped out (see the sketch below)
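A minimal sketch of that wrapping idea; the class and method names here are hypothetical, and the wrapper simply owns the RequestQueue so that replacing Volley later means touching only this one file:

import android.content.Context;

import com.android.volley.Request;
import com.android.volley.RequestQueue;
import com.android.volley.toolbox.Volley;

public class HttpManager {
    private static RequestQueue sQueue;

    // Call once, e.g. from Application.onCreate()
    public static void init(Context context) {
        sQueue = Volley.newRequestQueue(context.getApplicationContext());
    }

    // The rest of the app only ever calls this method
    public static void add(Request request) {
        sQueue.add(request);
    }
}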