DiskBasedCache in Depth
First, let's look at the class-level documentation:
/**
* Cache implementation that caches files directly onto the hard disk in the specified directory.
* The default disk usage size is 5MB, but is configurable.
*
* <p>This cache supports the {@link Entry#allResponseHeaders} headers field.
*/
It is an implementation of the Cache interface. Its job is to cache files directly on disk in a specified directory. The default maximum cache size is 5MB, but that is configurable.
Since the class implements the Cache interface, let's look at Cache first. (Why? Implementing an interface means the class has to fulfil that interface's contract, so the two are necessarily closely related.)
Cache
First, its documentation:
/** An interface for a cache keyed by a String with a byte array as data. */
It is an interface for a cache keyed by a String, with a byte array as the cached data.
import java.util.Collections;
import java.util.List;
import java.util.Map;
/** An interface for a cache keyed by a String with a byte array as data. */
public interface Cache {
/**
* Retrieves an entry from the cache.
* @param key Cache key
* @return An {@link Entry} or null in the event of a cache miss
*/
Entry get(String key);
/**
* Adds or replaces an entry to the cache.
* @param key Cache key
* @param entry Data to store and metadata for cache coherency, TTL, etc.
*/
void put(String key, Entry entry);
/**
* Performs any potentially long-running actions needed to initialize the cache; will be called
* from a worker thread.
*/
void initialize();
/**
* Invalidates an entry in the cache.
* @param key Cache key
* @param fullExpire True to fully expire the entry, false to soft expire
*/
void invalidate(String key, boolean fullExpire);
/**
* Removes an entry from the cache.
* @param key Cache key
*/
void remove(String key);
/** Empties the cache. */
void clear();
/** Data and metadata for an entry returned by the cache. */
class Entry {
/** The data returned from cache. */
public byte[] data;
/** ETag for cache coherency. */
public String etag;
/** Date of this response as reported by the server. */
public long serverDate;
/** The last modified date for the requested object. */
public long lastModified;
/** TTL for this record. */
public long ttl;
/** Soft TTL for this record. */
public long softTtl;
/**
* Response headers as received from server; must be non-null. Should not be mutated
* directly.
*
* <p>Note that if the server returns two headers with the same (case-insensitive) name,
* this map will only contain the one of them. {@link #allResponseHeaders} may contain all
* headers if the {@link Cache} implementation supports it.
*/
public Map<String, String> responseHeaders = Collections.emptyMap();
/**
* All response headers. May be null depending on the {@link Cache} implementation. Should
* not be mutated directly.
*/
public List<Header> allResponseHeaders;
/** True if the entry is expired. */
public boolean isExpired() {
return this.ttl < System.currentTimeMillis();
}
/** True if a refresh is needed from the original data source. */
public boolean refreshNeeded() {
return this.softTtl < System.currentTimeMillis();
}
}
}
The comments make this fairly easy to follow: the interface provides methods for initializing the cache, putting, getting, invalidating and removing an Entry, and clearing the whole cache. The inner class Entry deserves a closer look: it holds the cached content (a byte array), the ETag used for cache coherency, the response date reported by the server, the last-modified date of the requested object, the record's TTL and soft TTL, and the response headers. It also provides one method for checking whether the entry has expired and another for checking whether it needs to be refreshed.
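To make the two TTL fields concrete, here is a minimal sketch (the data bytes and time offsets are made up for illustration) of an entry whose soft TTL has passed while its hard TTL has not:
import com.android.volley.Cache;
public class EntryTtlDemo {
    public static void main(String[] args) {
        Cache.Entry entry = new Cache.Entry();
        entry.data = new byte[] {1, 2, 3};      // hypothetical cached bytes
        long now = System.currentTimeMillis();
        entry.softTtl = now - 1_000;            // soft TTL already in the past
        entry.ttl = now + 5 * 60 * 1_000;       // hard TTL five minutes ahead
        // Not expired yet, so the cached bytes may still be served...
        System.out.println("isExpired = " + entry.isExpired());         // false
        // ...but a refresh from the original source is recommended.
        System.out.println("refreshNeeded = " + entry.refreshNeeded()); // true
    }
}
This is the distinction Volley's cache dispatcher relies on: a soft-expired hit can be delivered to the caller right away while the request is also sent to the network to refresh it.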
The inner class CountingInputStream
When a class has an inner class, that inner class is usually closely tied to the outer class's functionality, so it is worth analysing the inner class before the outer one; it makes the rest of the code easier to follow. Going by the order in the source file we ought to start with CacheHeader, but CacheHeader uses CountingInputStream, so let's look at CountingInputStream first. It has no documentation comment, presumably because it is simple, and it is short enough to read directly.
@VisibleForTesting
static class CountingInputStream extends FilterInputStream {
private final long length;
private long bytesRead;
CountingInputStream(InputStream in, long length) {
super(in);
this.length = length;
}
@Override
public int read() throws IOException {
int result = super.read();
if (result != -1) {
bytesRead++;
}
return result;
}
@Override
public int read(byte[] buffer, int offset, int count) throws IOException {
int result = super.read(buffer, offset, count);
if (result != -1) {
bytesRead += result;
}
return result;
}
@VisibleForTesting
long bytesRead() {
return bytesRead;
}
long bytesRemaining() {
return length - bytesRead;
}
}
It extends FilterInputStream; if you are not familiar with that class, it is worth reading up on it before continuing.
The class has only one constructor. As an aside, this is the decorator pattern described in Thinking in Java.
private final long length;
CountingInputStream(InputStream in, long length) {
super(in);
this.length = length;
}
Quite simple: besides passing the stream to the superclass, it just stores a length. What that length is for is not obvious yet; we will find out shortly.
It overrides both read() methods, which read from the wrapped InputStream. The second one takes three parameters: buffer is the array that receives the bytes, offset is the position in the buffer at which to start writing, and count is the maximum number of bytes to read; it returns how many bytes were actually read. In both overrides, whenever the read does not hit the end of the stream (result != -1), the internal counter bytesRead is incremented: by one in the single-byte read(), and by the number of bytes returned in the array-based read().
The remaining two methods are:
- bytesRead() returns how many bytes have been read so far.
@VisibleForTesting long bytesRead() { return bytesRead; }
- bytesRemaining() returns how many bytes are still left to read. Here we see what the constructor's length is for: it is the expected total length, used to compute how much remains.
long bytesRemaining() { return length - bytesRead; }
That wraps up CountingInputStream: a FilterInputStream that knows the expected total length and counts the bytes read, so callers can ask how much has been consumed and how much remains. The short sketch below shows it in action; after that we move on to the other inner class.
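A minimal usage sketch follows. CountingInputStream is package-private, so assume this lives in the same package (com.android.volley.toolbox), as Volley's own tests do; the payload is arbitrary:
package com.android.volley.toolbox;
import java.io.ByteArrayInputStream;
import java.io.IOException;
public class CountingDemo {
    public static void main(String[] args) throws IOException {
        byte[] payload = new byte[10];
        // Wrap a plain stream and tell the counter the expected total length.
        DiskBasedCache.CountingInputStream cis =
                new DiskBasedCache.CountingInputStream(
                        new ByteArrayInputStream(payload), payload.length);
        byte[] buffer = new byte[4];
        cis.read(buffer, 0, buffer.length);       // consumes 4 of the 10 bytes
        System.out.println(cis.bytesRead());      // 4
        System.out.println(cis.bytesRemaining()); // 6
        cis.close();
    }
}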
The inner class CacheHeader
The class's description reads:
/** Handles holding onto the cache headers for an entry. */
In other words, it holds on to the cache headers for an entry; as we will see, it is also what writes them to disk and reads them back. But what exactly is a Header?
Header
This Header is simply an HTTP header. As a data structure it looks like this:
import android.text.TextUtils;
/** An HTTP header. */
public final class Header {
private final String mName;
private final String mValue;
public Header(String name, String value) {
mName = name;
mValue = value;
}
public final String getName() {
return mName;
}
public final String getValue() {
return mValue;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
Header header = (Header) o;
return TextUtils.equals(mName, header.mName) && TextUtils.equals(mValue, header.mValue);
}
@Override
public int hashCode() {
int result = mName.hashCode();
result = 31 * result + mValue.hashCode();
return result;
}
@Override
public String toString() {
return "Header[name=" + mName + ",value=" + mValue + "]";
}
}
So it is essentially a name/value pair, much like a map entry, except the two parts are called name and value here. Simple enough: the constructor just takes the name and the value. Now, back to CacheHeader.
CacheHeader has several important fields: size (the size of the data identified by this CacheHeader; not serialized to disk), key (the key identifying the cache entry), etag (used for cache coherency; if that is not clear yet, it will become so later), serverDate (the response date reported by the server), lastModified (the last-modified date of the requested object), ttl (the record's TTL), softTtl (the record's soft TTL), and allResponseHeaders (all of the response headers). Its structure closely mirrors Entry's.
// VisibleForTesting
static class CacheHeader {
/** The size of the data identified by this CacheHeader. (This is not serialized to disk.) */
long size;
/** The key that identifies the cache entry. */
final String key;
/** ETag for cache coherence. */
final String etag;
/** Date of this response as reported by the server. */
final long serverDate;
/** The last modified date for the requested object. */
final long lastModified;
/** TTL for this record. */
final long ttl;
/** Soft TTL for this record. */
final long softTtl;
/** Headers from the response resulting in this cache entry. */
final List<Header> allResponseHeaders;
private CacheHeader(
String key,
String etag,
long serverDate,
long lastModified,
long ttl,
long softTtl,
List<Header> allResponseHeaders) {
this.key = key;
this.etag = ("".equals(etag)) ? null : etag;
this.serverDate = serverDate;
this.lastModified = lastModified;
this.ttl = ttl;
this.softTtl = softTtl;
this.allResponseHeaders = allResponseHeaders;
}
/**
* Instantiates a new CacheHeader object.
*
* @param key The key that identifies the cache entry
* @param entry The cache entry.
*/
CacheHeader(String key, Entry entry) {
this(
key,
entry.etag,
entry.serverDate,
entry.lastModified,
entry.ttl,
entry.softTtl,
getAllResponseHeaders(entry));
size = entry.data.length;
}
private static List<Header> getAllResponseHeaders(Entry entry) {
// If the entry contains all the response headers, use that field directly.
if (entry.allResponseHeaders != null) {
return entry.allResponseHeaders;
}
// Legacy fallback - copy headers from the map.
return HttpHeaderParser.toAllHeaderList(entry.responseHeaders);
}
/**
* Reads the header from a CountingInputStream and returns a CacheHeader object.
*
* @param is The InputStream to read from.
* @throws IOException if fails to read header
*/
static CacheHeader readHeader(CountingInputStream is) throws IOException {
int magic = readInt(is);
if (magic != CACHE_MAGIC) {
// don't bother deleting, it'll get pruned eventually
throw new IOException();
}
String key = readString(is);
String etag = readString(is);
long serverDate = readLong(is);
long lastModified = readLong(is);
long ttl = readLong(is);
long softTtl = readLong(is);
List<Header> allResponseHeaders = readHeaderList(is);
return new CacheHeader(
key, etag, serverDate, lastModified, ttl, softTtl, allResponseHeaders);
}
/** Creates a cache entry for the specified data. */
Entry toCacheEntry(byte[] data) {
Entry e = new Entry();
e.data = data;
e.etag = etag;
e.serverDate = serverDate;
e.lastModified = lastModified;
e.ttl = ttl;
e.softTtl = softTtl;
e.responseHeaders = HttpHeaderParser.toHeaderMap(allResponseHeaders);
e.allResponseHeaders = Collections.unmodifiableList(allResponseHeaders);
return e;
}
/** Writes the contents of this CacheHeader to the specified OutputStream. */
boolean writeHeader(OutputStream os) {
try {
writeInt(os, CACHE_MAGIC);
writeString(os, key);
writeString(os, etag == null ? "" : etag);
writeLong(os, serverDate);
writeLong(os, lastModified);
writeLong(os, ttl);
writeLong(os, softTtl);
writeHeaderList(allResponseHeaders, os);
os.flush();
return true;
} catch (IOException e) {
VolleyLog.d("%s", e.toString());
return false;
}
}
}
Looking at the constructors, the one that takes a key and an Entry delegates to the other, private constructor. CacheHeader also relies on several static helper methods defined in DiskBasedCache, so let's take a quick look at those first.
/*
* Homebrewed simple serialization system used for reading and writing cache
* headers on disk. Once upon a time, this used the standard Java
* Object{Input,Output}Stream, but the default implementation relies heavily
* on reflection (even for standard types) and generates a ton of garbage.
*
* TODO: Replace by standard DataInput and DataOutput in next cache version.
*/
/**
* Simple wrapper around {@link InputStream#read()} that throws EOFException instead of
* returning -1.
*/
private static int read(InputStream is) throws IOException {
int b = is.read();
if (b == -1) {
throw new EOFException();
}
return b;
}
static void writeInt(OutputStream os, int n) throws IOException {
os.write((n >> 0) & 0xff);
os.write((n >> 8) & 0xff);
os.write((n >> 16) & 0xff);
os.write((n >> 24) & 0xff);
}
static int readInt(InputStream is) throws IOException {
int n = 0;
n |= (read(is) << 0);
n |= (read(is) << 8);
n |= (read(is) << 16);
n |= (read(is) << 24);
return n;
}
static void writeLong(OutputStream os, long n) throws IOException {
os.write((byte) (n >>> 0));
os.write((byte) (n >>> 8));
os.write((byte) (n >>> 16));
os.write((byte) (n >>> 24));
os.write((byte) (n >>> 32));
os.write((byte) (n >>> 40));
os.write((byte) (n >>> 48));
os.write((byte) (n >>> 56));
}
static long readLong(InputStream is) throws IOException {
long n = 0;
n |= ((read(is) & 0xFFL) << 0);
n |= ((read(is) & 0xFFL) << 8);
n |= ((read(is) & 0xFFL) << 16);
n |= ((read(is) & 0xFFL) << 24);
n |= ((read(is) & 0xFFL) << 32);
n |= ((read(is) & 0xFFL) << 40);
n |= ((read(is) & 0xFFL) << 48);
n |= ((read(is) & 0xFFL) << 56);
return n;
}
static void writeString(OutputStream os, String s) throws IOException {
byte[] b = s.getBytes("UTF-8");
writeLong(os, b.length);
os.write(b, 0, b.length);
}
static String readString(CountingInputStream cis) throws IOException {
long n = readLong(cis);
byte[] b = streamToBytes(cis, n);
return new String(b, "UTF-8");
}
static void writeHeaderList(List<Header> headers, OutputStream os) throws IOException {
if (headers != null) {
writeInt(os, headers.size());
for (Header header : headers) {
writeString(os, header.getName());
writeString(os, header.getValue());
}
} else {
writeInt(os, 0);
}
}
static List<Header> readHeaderList(CountingInputStream cis) throws IOException {
int size = readInt(cis);
if (size < 0) {
throw new IOException("readHeaderList size=" + size);
}
List<Header> result =
(size == 0) ? Collections.<Header>emptyList() : new ArrayList<Header>();
for (int i = 0; i < size; i++) {
String name = readString(cis).intern();
String value = readString(cis).intern();
result.add(new Header(name, value));
}
return result;
}
/**
* Reads length bytes from CountingInputStream into byte array.
*
* @param cis input stream
* @param length number of bytes to read
* @throws IOException if fails to read all bytes
*/
// VisibleForTesting
static byte[] streamToBytes(CountingInputStream cis, long length) throws IOException {
long maxLength = cis.bytesRemaining();
// Length cannot be negative or greater than bytes remaining, and must not overflow int.
if (length < 0 || length > maxLength || (int) length != length) {
throw new IOException("streamToBytes length=" + length + ", maxLength=" + maxLength);
}
byte[] bytes = new byte[(int) length];
new DataInputStream(cis).readFully(bytes);
return bytes;
}
These methods form a small hand-rolled serialization system: the write* methods encode ints, longs, strings and header lists into the output stream byte by byte (lowest byte first), and the read* methods decode them back when loading from disk. The comment explains why Volley rolled its own instead of using ObjectInputStream/ObjectOutputStream: standard Java serialization relies heavily on reflection, even for standard types, and generates a lot of garbage. So Volley built its own wheel. The helpers are simple enough that I won't walk through them line by line.
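As a quick sanity check of this little wire format, here is a sketch of a round trip through the helpers using in-memory streams (assumed to live in the same package as DiskBasedCache, since the helpers are package-private; the values are arbitrary):
package com.android.volley.toolbox;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
public class WireFormatDemo {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        DiskBasedCache.writeInt(out, 42);                 // 4 bytes, lowest byte first
        DiskBasedCache.writeLong(out, 1234567890123L);    // 8 bytes, lowest byte first
        DiskBasedCache.writeString(out, "Cache-Control"); // length as a long, then UTF-8 bytes
        byte[] bytes = out.toByteArray();
        DiskBasedCache.CountingInputStream in =
                new DiskBasedCache.CountingInputStream(
                        new ByteArrayInputStream(bytes), bytes.length);
        System.out.println(DiskBasedCache.readInt(in));    // 42
        System.out.println(DiskBasedCache.readLong(in));   // 1234567890123
        System.out.println(DiskBasedCache.readString(in)); // Cache-Control
    }
}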
CacheHeader itself then exposes a handful of methods:
private static List<Header> getAllResponseHeaders(Entry entry);
/**
* Reads the header from a CountingInputStream and returns a CacheHeader object.
*
* @param is The InputStream to read from.
* @throws IOException if fails to read header
*/
static CacheHeader readHeader(CountingInputStream is) throws IOException;
/** Creates a cache entry for the specified data. */
Entry toCacheEntry(byte[] data);
/** Writes the contents of this CacheHeader to the specified OutputStream. */
boolean writeHeader(OutputStream os);
The comments make it clear: these handle serialization and deserialization between an Entry and its on-disk CacheHeader representation.
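Putting those pieces together, here is a sketch (again assuming same-package access; the key, ETag and data are arbitrary) of a CacheHeader making the same round trip that put() and get() push it through on disk:
package com.android.volley.toolbox;
import com.android.volley.Cache;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
public class HeaderRoundTripDemo {
    public static void main(String[] args) throws IOException {
        // Build an Entry the way the network layer would.
        Cache.Entry entry = new Cache.Entry();
        entry.data = new byte[] {1, 2, 3};
        entry.etag = "\"abc\"";
        entry.ttl = System.currentTimeMillis() + 60_000;
        // Serialize the header portion, exactly as put() does before appending entry.data.
        DiskBasedCache.CacheHeader header = new DiskBasedCache.CacheHeader("someKey", entry);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        header.writeHeader(out);
        // Deserialize it back, as initialize() and get() do.
        byte[] bytes = out.toByteArray();
        DiskBasedCache.CountingInputStream in =
                new DiskBasedCache.CountingInputStream(
                        new ByteArrayInputStream(bytes), bytes.length);
        DiskBasedCache.CacheHeader readBack = DiskBasedCache.CacheHeader.readHeader(in);
        System.out.println(readBack.key);  // someKey
        System.out.println(readBack.etag); // "abc"
    }
}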
The DiskBasedCache constructors
With the interface and the inner classes covered, let's look at the constructors.
There are two.
......
/** The root directory to use for the cache. */
private final File mRootDirectory;
/** The maximum size of the cache in bytes. */
private final int mMaxCacheSizeInBytes;
/** Default maximum disk usage in bytes. */
private static final int DEFAULT_DISK_USAGE_BYTES = 5 * 1024 * 1024;
/**
* Constructs an instance of the DiskBasedCache at the specified directory.
*
* @param rootDirectory The root directory of the cache.
* @param maxCacheSizeInBytes The maximum size of the cache in bytes.
*/
public DiskBasedCache(File rootDirectory, int maxCacheSizeInBytes) {
mRootDirectory = rootDirectory;
mMaxCacheSizeInBytes = maxCacheSizeInBytes;
}
/**
* Constructs an instance of the DiskBasedCache at the specified directory using the default
* maximum cache size of 5MB.
*
* @param rootDirectory The root directory of the cache.
*/
public DiskBasedCache(File rootDirectory) {
this(rootDirectory, DEFAULT_DISK_USAGE_BYTES);
}
......
The difference between the two is that the first lets you specify the maximum cache size in bytes, while the second falls back to the 5MB default. In both, rootDirectory is the directory in which every cached file will be written. The constructors only assign these fields; the rest of the state is set up elsewhere, in initialize().
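For context, this is roughly how a DiskBasedCache gets wired into a RequestQueue by hand (a sketch; Volley.newRequestQueue() normally does this for you, and the subdirectory name and 10MB limit here are arbitrary choices):
import android.content.Context;
import com.android.volley.RequestQueue;
import com.android.volley.toolbox.BasicNetwork;
import com.android.volley.toolbox.DiskBasedCache;
import com.android.volley.toolbox.HurlStack;
import java.io.File;
public class QueueFactory {
    public static RequestQueue create(Context context) {
        // Cache files go under <app cache dir>/volley, capped at 10MB instead of the 5MB default.
        File cacheDir = new File(context.getCacheDir(), "volley");
        DiskBasedCache cache = new DiskBasedCache(cacheDir, 10 * 1024 * 1024);
        RequestQueue queue = new RequestQueue(cache, new BasicNetwork(new HurlStack()));
        queue.start(); // the cache thread will call cache.initialize()
        return queue;
    }
}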
The initialize() method
Now for the initialization method defined by the Cache interface, initialize(). Its documentation says: initializes the DiskBasedCache by scanning all files currently in the specified root directory, creating the root directory if necessary. Note that the method is declared synchronized, so it cannot run concurrently with the other synchronized methods on the same instance.
/**
* Initializes the DiskBasedCache by scanning for all files currently in the specified root
* directory. Creates the root directory if necessary.
*/
@Override
public synchronized void initialize() {
// If the root directory does not exist, try to create it; log an error if creation fails. Either way there is nothing to scan yet, so return.
if (!mRootDirectory.exists()) {
if (!mRootDirectory.mkdirs()) {
VolleyLog.e("Unable to create cache dir %s", mRootDirectory.getAbsolutePath());
}
return;
}
// List the files in the root directory
File[] files = mRootDirectory.listFiles();
if (files == null) {
return;
}
// Walk the files and add an index entry for each cache file to the in-memory map
for (File file : files) {
try {
// Length of the cache file on disk
long entrySize = file.length();
// Wrap the file in a CountingInputStream: the first argument is a BufferedInputStream over the FileInputStream returned by createInputStream(), the second is the file size.
CountingInputStream cis =
new CountingInputStream(
new BufferedInputStream(createInputStream(file)), entrySize);
try {
// Deserialize the CacheHeader from the stream.
CacheHeader entry = CacheHeader.readHeader(cis);
// NOTE: When this entry was put, its size was recorded as data.length, but
// when the entry is initialized below, its size is recorded as file.length()
entry.size = entrySize;
// Store the entry in mEntries
putEntry(entry.key, entry);
} finally {
// Any IOException thrown here is handled by the below catch block by design.
//noinspection ThrowFromFinallyBlock
cis.close();
}
} catch (IOException e) {
//noinspection ResultOfMethodCallIgnored
file.delete();
}
}
}
initialize() first checks whether the root directory exists; if it does not, it tries to create it, logs an error if that fails, and returns. Otherwise it lists the directory and builds an in-memory index of every cache file in mEntries: for each file it takes the file length, wraps the file in a CountingInputStream, deserializes the CacheHeader from it, records the file length as the entry's size, and calls putEntry() to store the header in mEntries. If reading a file fails, that file is simply deleted. That is the whole initialization.
A few helpers are called along the way. Here is their source:
// VisibleForTesting
InputStream createInputStream(File file) throws FileNotFoundException {
return new FileInputStream(file);
}
/**
* Puts the entry with the specified key into the cache.
*
* @param key The key to identify the entry by.
* @param entry The entry to cache.
*/
private void putEntry(String key, CacheHeader entry) {
if (!mEntries.containsKey(key)) {
mTotalSize += entry.size;
} else {
CacheHeader oldEntry = mEntries.get(key);
mTotalSize += (entry.size - oldEntry.size);
}
mEntries.put(key, entry);
}
/** Removes the entry identified by 'key' from the cache. */
private void removeEntry(String key) {
CacheHeader removed = mEntries.remove(key);
if (removed != null) {
mTotalSize -= removed.size;
}
}
None of this is difficult: putEntry() and removeEntry() keep the in-memory index mEntries and the running total mTotalSize consistent, so I won't go through them in detail.
Next comes the most important part of DiskBasedCache: storing and retrieving an Entry by key.
The put() method
As the name suggests, put() caches an Entry that a network request has produced. Here is the source:
/** Puts the entry with the specified key into the cache. */
@Override
public synchronized void put(String key, Entry entry) {
pruneIfNeeded(entry.data.length);
File file = getFileForKey(key);
try {
BufferedOutputStream fos = new BufferedOutputStream(createOutputStream(file));
CacheHeader e = new CacheHeader(key, entry);
boolean success = e.writeHeader(fos);
if (!success) {
fos.close();
VolleyLog.d("Failed to write header for %s", file.getAbsolutePath());
throw new IOException();
}
fos.write(entry.data);
fos.close();
putEntry(key, e);
return;
} catch (IOException e) {
}
boolean deleted = file.delete();
if (!deleted) {
VolleyLog.d("Could not clean up file %s", file.getAbsolutePath());
}
}
/**
* Prunes the cache to fit the amount of bytes specified.
*
* @param neededSpace The amount of bytes we are trying to fit into the cache.
*/
private void pruneIfNeeded(int neededSpace) {
if ((mTotalSize + neededSpace) < mMaxCacheSizeInBytes) {
return;
}
if (VolleyLog.DEBUG) {
VolleyLog.v("Pruning old cache entries.");
}
long before = mTotalSize;
int prunedFiles = 0;
long startTime = SystemClock.elapsedRealtime();
Iterator<Map.Entry<String, CacheHeader>> iterator = mEntries.entrySet().iterator();
while (iterator.hasNext()) {
Map.Entry<String, CacheHeader> entry = iterator.next();
CacheHeader e = entry.getValue();
boolean deleted = getFileForKey(e.key).delete();
if (deleted) {
mTotalSize -= e.size;
} else {
VolleyLog.d(
"Could not delete cache entry for key=%s, filename=%s",
e.key, getFilenameForKey(e.key));
}
iterator.remove();
prunedFiles++;
if ((mTotalSize + neededSpace) < mMaxCacheSizeInBytes * HYSTERESIS_FACTOR) {
break;
}
}
if (VolleyLog.DEBUG) {
VolleyLog.v(
"pruned %d files, %d bytes, %d ms",
prunedFiles, (mTotalSize - before), SystemClock.elapsedRealtime() - startTime);
}
}
// VisibleForTesting
OutputStream createOutputStream(File file) throws FileNotFoundException {
return new FileOutputStream(file);
}
/**
* Creates a pseudo-unique filename for the specified cache key.
*
* @param key The key to generate a file name for.
* @return A pseudo-unique filename.
*/
private String getFilenameForKey(String key) {
int firstHalfLength = key.length() / 2;
String localFilename = String.valueOf(key.substring(0, firstHalfLength).hashCode());
localFilename += String.valueOf(key.substring(firstHalfLength).hashCode());
return localFilename;
}
/** Returns a file object for the given cache key. */
public File getFileForKey(String key) {
return new File(mRootDirectory, getFilenameForKey(key));
}
Since the cache is a key/value store, the method takes two parameters: the key and the entry.
Before writing anything, put() checks whether the cache would exceed its size limit; that is pruneIfNeeded()'s job. Its single argument is the number of bytes about to be stored, here entry.data.length. The eviction strategy is interesting: it does not consult expiry times at all. It simply iterates over mEntries from the front, deleting one cached file at a time, and keeps going until the projected size falls below HYSTERESIS_FACTOR times the maximum, which leaves some headroom before the next prune. Since mEntries is (in Volley's source) a LinkedHashMap, the entries at the front are the oldest or least recently used ones, so this amounts to a rough LRU eviction; it still feels rather blunt. A worked example of the arithmetic follows below.
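A quick worked example of the threshold arithmetic, assuming the default 5MB limit and HYSTERESIS_FACTOR = 0.9 (its value in Volley's source; the current and incoming sizes are made up):
public class PruneMathDemo {
    public static void main(String[] args) {
        int maxCacheSizeInBytes = 5 * 1024 * 1024; // 5,242,880 bytes, the default limit
        float hysteresisFactor = 0.9f;             // assumed value of HYSTERESIS_FACTOR
        long totalSize = 4_900_000;                // bytes currently indexed in mEntries
        int neededSpace = 400_000;                 // size of the entry about to be written
        // pruneIfNeeded() only evicts when the new entry would push the cache over the limit.
        boolean mustPrune = (totalSize + neededSpace) >= maxCacheSizeInBytes;
        System.out.println(mustPrune); // true: 5,300,000 >= 5,242,880
        // It then deletes entries until totalSize + neededSpace drops below 90% of the limit,
        // i.e. until totalSize is below 4,718,592 - 400,000 = 4,318,592 bytes.
        long target = (long) (maxCacheSizeInBytes * hysteresisFactor) - neededSpace;
        System.out.println("keep deleting until mTotalSize < " + target);
    }
}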
Once there is room, put() calls getFileForKey() to map the key to a File, opens a BufferedOutputStream on that file, builds a CacheHeader from the key and the entry, writes the header into the stream with writeHeader(), and then writes entry.data after it. That completes the write to disk. Finally, putEntry() records the entry in the in-memory index.
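The file name itself comes from getFilenameForKey(), which hashes the two halves of the key separately and concatenates the results, reducing the chance of collisions between long, similar URLs. A small illustration of the same scheme (the URL is arbitrary):
public class FilenameDemo {
    public static void main(String[] args) {
        String key = "https://example.com/api/users?page=1"; // an arbitrary cache key
        int firstHalfLength = key.length() / 2;
        // Same scheme as getFilenameForKey(): hash each half and concatenate.
        String filename = String.valueOf(key.substring(0, firstHalfLength).hashCode())
                + String.valueOf(key.substring(firstHalfLength).hashCode());
        // The cached response for this key lives at <rootDirectory>/<filename>.
        System.out.println(filename);
    }
}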
That is put(). Now let's look at get().
The get() method
Again, straight to the source:
/** Returns the cache entry with the specified key if it exists, null otherwise. */
@Override
public synchronized Entry get(String key) {
CacheHeader entry = mEntries.get(key);
// if the entry does not exist, return.
if (entry == null) {
return null;
}
File file = getFileForKey(key);
try {
CountingInputStream cis =
new CountingInputStream(
new BufferedInputStream(createInputStream(file)), file.length());
try {
CacheHeader entryOnDisk = CacheHeader.readHeader(cis);
if (!TextUtils.equals(key, entryOnDisk.key)) {
// File was shared by two keys and now holds data for a different entry!
VolleyLog.d(
"%s: key=%s, found=%s", file.getAbsolutePath(), key, entryOnDisk.key);
// Remove key whose contents on disk have been replaced.
removeEntry(key);
return null;
}
byte[] data = streamToBytes(cis, cis.bytesRemaining());
return entry.toCacheEntry(data);
} finally {
// Any IOException thrown here is handled by the below catch block by design.
//noinspection ThrowFromFinallyBlock
cis.close();
}
} catch (IOException e) {
VolleyLog.d("%s: %s", file.getAbsolutePath(), e.toString());
remove(key);
return null;
}
}
get() first looks up the key's CacheHeader in the in-memory map; a miss returns null right away. Otherwise it opens the file for that key and reads the header that is actually stored on disk, then checks that the on-disk key matches the requested one (two different keys can map to the same file name). If they match, the remaining bytes of the stream are read as the entry data, and a new Entry is built from that data plus the in-memory header and returned. Any IOException along the way causes the entry to be removed and null to be returned.
Besides put() and get(), the class implements the remaining interface methods. They are not difficult, so here is the code for you to work through yourself.
/**
* Invalidates an entry in the cache.
*
* @param key Cache key
* @param fullExpire True to fully expire the entry, false to soft expire
*/
@Override
public synchronized void invalidate(String key, boolean fullExpire) {
Entry entry = get(key);
if (entry != null) {
entry.softTtl = 0;
if (fullExpire) {
entry.ttl = 0;
}
put(key, entry);
}
}
/** Removes the specified key from the cache if it exists. */
@Override
public synchronized void remove(String key) {
boolean deleted = getFileForKey(key).delete();
removeEntry(key);
if (!deleted) {
VolleyLog.d(
"Could not delete cache entry for key=%s, filename=%s",
key, getFilenameForKey(key));
}
}
/** Removes the entry identified by 'key' from the cache. */
private void removeEntry(String key) {
CacheHeader removed = mEntries.remove(key);
if (removed != null) {
mTotalSize -= removed.size;
}
}
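To wrap up, here is a sketch of the whole life cycle through the public API (run it somewhere the Android logging classes are available, e.g. an instrumentation or Robolectric test; the directory, key and data are arbitrary):
import com.android.volley.Cache;
import com.android.volley.toolbox.DiskBasedCache;
import java.io.File;
public class DiskCacheLifecycleDemo {
    public static void demo(File cacheDir) {
        DiskBasedCache cache = new DiskBasedCache(cacheDir, 1024 * 1024);
        cache.initialize(); // scan any existing files into the in-memory index
        // put(): write header + data to disk and index the entry in memory.
        Cache.Entry entry = new Cache.Entry();
        entry.data = "hello".getBytes();
        entry.ttl = System.currentTimeMillis() + 60_000;
        cache.put("https://example.com/greeting", entry);
        // get(): the header comes from the in-memory index, the data is re-read from the file.
        Cache.Entry fromDisk = cache.get("https://example.com/greeting");
        System.out.println(new String(fromDisk.data)); // hello
        // invalidate() rewrites the entry with zeroed TTLs; remove() deletes file and index entry.
        cache.invalidate("https://example.com/greeting", true);
        cache.remove("https://example.com/greeting");
    }
}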