[Cache] Google Guava

2018-05-14  程序员驿站

Background

Caches are everywhere in computing; it is hard to build software today without them. CPUs have caches, MySQL has caches, and so on. In this post we look at the cache that Guava provides.

Guava Cache is a convenient, easy-to-use local (in-process) cache implementation from the Google Guava toolkit. It evicts entries with an LRU-style policy and supports several expiration strategies.

Guava Cache is a local cache: the data lives in JVM memory. We often use a ConcurrentHashMap as an in-memory cache, and although it is thread-safe, on its own it cannot:

- evict entries once a size or weight limit is reached;
- expire entries after a fixed time since write or access;
- notify a listener when an entry is removed;
- collect hit/miss statistics.

In general, Guava Cache is a good fit when:

- you are willing to trade some memory for speed;
- you expect keys to be queried more than once;
- the cached data fits in RAM (it is a single-JVM cache, not a distributed one).

Prerequisites

LRU

LRU (Least Recently Used) evicts entries based on how recently they were accessed. The core idea is that data accessed recently is likely to be accessed again soon, so the entry that has gone unused the longest is discarded first.
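As a mental model only (Guava's real implementation is a segmented, ConcurrentHashMap-like structure with access queues, not this class), a tiny LRU cache can be sketched on top of the JDK's LinkedHashMap:

import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU sketch: with accessOrder = true, LinkedHashMap moves an entry to the
// tail on every access, and removeEldestEntry lets us evict the head (the least
// recently used entry) once the map grows past maxEntries.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    LruCache(int maxEntries) {
        super(16, 0.75f, true);   // true = iterate in access order, not insertion order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
    }
}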



Weak references

A weak reference does not prevent the garbage collector from reclaiming its referent: once no strong references to an object remain, it can be collected even though weak references to it still exist. Guava Cache can hold keys or values through weak references (weakKeys() / weakValues()), so entries whose keys or values have been collected are lazily cleaned out of the cache.
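A minimal, self-contained illustration of the JDK's WeakReference (garbage collection is not deterministic, so the exact moment get() starts returning null can vary):

import java.lang.ref.WeakReference;

public class WeakRefDemo {
    public static void main(String[] args) {
        Object strong = new Object();
        WeakReference<Object> weak = new WeakReference<>(strong);

        System.out.println(weak.get() != null);  // true: a strong reference still exists

        strong = null;   // drop the only strong reference
        System.gc();     // only a hint; the JVM decides when to actually collect

        // After a collection the referent is typically gone and get() returns null.
        System.out.println(weak.get());
    }
}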

Components

Cache: a semi-persistent mapping from keys to values; entries are added manually (via put or get(key, valueLoader)) and stay in the cache until evicted or invalidated.

LoadingCache: a sub-interface of Cache. Unlike a plain Cache, when a key that is not yet cached is read from a LoadingCache, the cache can load the value automatically (through its CacheLoader) and store it.

AbstractCache: a skeletal implementation of the Cache interface; to implement a cache you only need to extend this class and provide implementations of put and getIfPresent.
AbstractLoadingCache: the analogous skeletal implementation of LoadingCache.

CacheBuilder: builds cache instances using the builder pattern.
CacheBuilderSpec: lets the configuration be parsed from a string, a comma-separated list of key=value pairs where each key corresponds to a CacheBuilder method (see the example after this list).

CacheLoader: a LoadingCache is built with an attached CacheLoader; writing your own usually just means implementing V load(K key) throws Exception.
CacheStats: statistics about cache performance; instances of this class are immutable.

RemovalListener: notified when an entry is removed from the cache.
RemovalListeners: factory methods for common removal listeners, such as wrapping a listener so that it is notified asynchronously on an executor.

ForwardingCache: a decorator that forwards all of its method calls to another cache.
ForwardingLoadingCache: the equivalent decorator for LoadingCache.
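As a quick illustration of the configuration-string route mentioned above, a spec string can either be parsed explicitly or handed straight to CacheBuilder.from (the keys shown, maximumSize and expireAfterWrite, are standard spec keys):

// Build a cache from a configuration string instead of chained builder calls;
// each comma-separated key corresponds to a CacheBuilder method.
CacheBuilderSpec spec = CacheBuilderSpec.parse("maximumSize=100,expireAfterWrite=10m");
Cache<String, String> cacheFromSpec = CacheBuilder.from(spec).build();

// Equivalent shorthand that parses the string internally:
Cache<String, String> sameCache = CacheBuilder.from("maximumSize=100,expireAfterWrite=10m").build();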

Getting started

Add the Guava dependency


<dependency>
  <groupId>com.google.guava</groupId>
  <artifactId>guava</artifactId>
  <version>25.0-jre</version>
</dependency>

Usage

The simplest example

// build() with a CacheLoader returns a LoadingCache, which loads missing values on demand.
final LoadingCache<String, String> cache = CacheBuilder.newBuilder()
        .build(new CacheLoader<String, String>() {
            @Override
            public String load(String key) throws Exception {
                return key.intern();
            }
        });
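A few ways to read and write the cache built above (getUnchecked is used here because this loader cannot realistically fail; prefer get, which declares ExecutionException, when the loader can throw checked exceptions):

// getUnchecked() loads the missing value via the CacheLoader, caches it and returns it.
String loaded = cache.getUnchecked("hello");

// put() and getIfPresent() bypass the loader entirely.
cache.put("k", "v");
String hit  = cache.getIfPresent("k");        // "v"
String miss = cache.getIfPresent("absent");   // null: getIfPresent never triggers a load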

Size-based eviction

CacheLoader<String, String> loader = new CacheLoader<String, String>() {
    @Override
    public String load(String key) {
        return key.toUpperCase();
    }
};
// Hold at most 3 entries; beyond that, the entries used least recently are evicted.
LoadingCache<String, String> cache = CacheBuilder.newBuilder()
        .maximumSize(3)
        .build(loader);
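Loading a fourth key pushes the cache past maximumSize(3), so an entry near the least-recently-used end is discarded (eviction is approximate, so treat the exact victim as indicative rather than guaranteed):

cache.getUnchecked("a");
cache.getUnchecked("b");
cache.getUnchecked("c");
cache.getUnchecked("d");                      // over the limit of 3, one entry is evicted

System.out.println(cache.size());             // 3
System.out.println(cache.getIfPresent("a"));  // typically null: "a" was used least recently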

Weight-based eviction

// Each entry's weight is the length of its value.
Weigher<String, String> weighByLength = new Weigher<String, String>() {
    @Override
    public int weigh(String key, String value) {
        return value.length();
    }
};

CacheLoader<String, String> loader = new CacheLoader<String, String>() {
    @Override
    public String load(String key) {
        return key.toUpperCase();
    }
};
// Evict entries once the combined weight of the cache exceeds 3.
LoadingCache<String, String> cache = CacheBuilder.newBuilder()
        .maximumWeight(3)
        .weigher(weighByLength)
        .build(loader);
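A rough feel for the effect (weights are computed once, when an entry is created, and eviction kicks in when the total exceeds maximumWeight(3)):

cache.getUnchecked("ab");    // value "AB", weight 2
cache.getUnchecked("cde");   // value "CDE", weight 3; the total now exceeds 3,
                             // so entries are evicted until the cache fits again

System.out.println(cache.size());   // 1: only one of the two entries fits under the weight limit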

Time-based expiration

CacheLoader<String, String> loader = new CacheLoader<String, String>() {
    @Override
    public String load(String key) {
        return key.toUpperCase();
    }
};
// Each entry expires 2 milliseconds after it was written
// (expireAfterAccess is the read-based counterpart).
LoadingCache<String, String> cache = CacheBuilder.newBuilder()
        .expireAfterWrite(2, TimeUnit.MILLISECONDS)
        .build(loader);
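A rough demonstration of the expiry (this fragment sleeps, so it belongs in a method that declares or handles InterruptedException; 2 ms is unrealistically short for a real cache and is only used here to make the effect visible):

cache.getUnchecked("java");                      // loads and caches "JAVA"
Thread.sleep(10);                                // wait past the 2 ms expireAfterWrite window

System.out.println(cache.getIfPresent("java"));  // null: the entry has expired
System.out.println(cache.getUnchecked("java"));  // "JAVA" again, reloaded by the CacheLoader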

Weak-referenced keys

Note: with weakKeys(), a key that has no strong references elsewhere can be reclaimed by the garbage collector, and its cache entry is then cleaned up.

CacheLoader<String, String> loader = new CacheLoader<String, String>() {
    @Override
    public String load(String key) {
        return key.toUpperCase();
    }
};
// Keys are held through weak references; note that this also switches key
// comparison to identity (==) instead of equals().
LoadingCache<String, String> cache = CacheBuilder.newBuilder()
        .weakKeys()
        .build(loader);
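Because garbage collection is non-deterministic, a demonstration can only hint at the behavior; the sketch below also avoids string literals, since those are strongly referenced from the constant pool and would never be collected:

String key = new String("orderId-1");         // not a pooled literal, so it can be collected
cache.put(key, "some value");
System.out.println(cache.getIfPresent(key));  // "some value" (lookup must use the same object)

key = null;          // drop the last strong reference to the key
System.gc();         // only a hint; collection is not guaranteed to happen immediately
cache.cleanUp();     // ask the cache to purge entries whose keys have been collected

System.out.println(cache.size());             // usually 0 once the key has been reclaimed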

Refreshing the cache

Refreshing differs from expiration: refreshAfterWrite(duration) does not drop an entry. Once the entry becomes eligible, the next read triggers a reload via the CacheLoader, and the old value keeps being served until the new one is ready. refresh(key) forces a reload of a single key.

CacheLoader<String, String> loader = new CacheLoader<String, String>() {
    @Override
    public String load(String key) {
        return key.toUpperCase();
    }
};
// Entries become eligible for refresh one minute after they were written;
// the reload happens lazily, on the next access after that point.
LoadingCache<String, String> cache = CacheBuilder.newBuilder()
        .refreshAfterWrite(1, TimeUnit.MINUTES)
        .build(loader);

cache.refresh("key");   // or force an immediate reload of a single key
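By default the reload runs synchronously on the thread that triggered it; CacheLoader.asyncReloading wraps a loader so that refreshes run on an executor instead (the thread pool below is just an illustrative choice):

ExecutorService pool = Executors.newFixedThreadPool(2);

// Refreshes now run on the pool, so readers are not blocked while a value reloads.
LoadingCache<String, String> asyncCache = CacheBuilder.newBuilder()
        .refreshAfterWrite(1, TimeUnit.MINUTES)
        .build(CacheLoader.asyncReloading(loader, pool));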

Removal notification

RemovalListener<String, String> listener = new RemovalListener<String, String>() {
    @Override
    public void onRemoval(RemovalNotification<String, String> n) {
        if (n.wasEvicted()) {
            // The entry was evicted automatically (SIZE, EXPIRED, COLLECTED, ...)
            // rather than removed explicitly; log the reason.
            String cause = n.getCause().name();
            System.out.println(n.getKey() + " evicted, cause: " + cause);
        }
    }
};
CacheLoader<String, String> loader = new CacheLoader<String, String>() {
    @Override
    public String load(String key) {
        return key.toUpperCase();
    }
};
LoadingCache<String, String> cache = CacheBuilder.newBuilder()
        .removalListener(listener)
        .build(loader);
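Removal listeners are invoked synchronously during routine cache maintenance, so a slow listener can hold up cache operations; RemovalListeners.asynchronous wraps a listener so notifications are delivered on an executor (again, the executor is illustrative):

ExecutorService pool = Executors.newSingleThreadExecutor();

LoadingCache<String, String> asyncNotifyingCache = CacheBuilder.newBuilder()
        .removalListener(RemovalListeners.asynchronous(listener, pool))
        .build(loader);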


A look at the source

First, let's see which interfaces are involved.

Cache is the top-level interface; let's look at what it can do.

public interface Cache<K, V> {

  /**
   * Returns the value associated with {@code key} in this cache, or {@code null} if there is no
   * cached value for {@code key}.
   *
   * @since 11.0
   */
  @Nullable
  V getIfPresent(Object key);

  /**
   * Returns the value associated with {@code key} in this cache, obtaining that value from
   * {@code valueLoader} if necessary. No observable state associated with this cache is modified
   * until loading completes. This method provides a simple substitute for the conventional
   * "if cached, return; otherwise create, cache and return" pattern.
   *
   * <p><b>Warning:</b> as with {@link CacheLoader#load}, {@code valueLoader} <b>must not</b> return
   * {@code null}; it may either return a non-null value or throw an exception.
   *
   * @throws ExecutionException if a checked exception was thrown while loading the value
   * @throws UncheckedExecutionException if an unchecked exception was thrown while loading the
   *     value
   * @throws ExecutionError if an error was thrown while loading the value
   *
   * @since 11.0
   */
  V get(K key, Callable<? extends V> valueLoader) throws ExecutionException;

  /**
   * Returns a map of the values associated with {@code keys} in this cache. The returned map will
   * only contain entries which are already present in the cache.
   *
   * @since 11.0
   */
  ImmutableMap<K, V> getAllPresent(Iterable<?> keys);

  /**
   * Associates {@code value} with {@code key} in this cache. If the cache previously contained a
   * value associated with {@code key}, the old value is replaced by {@code value}.
   *
   * <p>Prefer {@link #get(Object, Callable)} when using the conventional "if cached, return;
   * otherwise create, cache and return" pattern.
   *
   * @since 11.0
   */
  void put(K key, V value);

  /**
   * Copies all of the mappings from the specified map to the cache. The effect of this call is
   * equivalent to that of calling {@code put(k, v)} on this map once for each mapping from key
   * {@code k} to value {@code v} in the specified map. The behavior of this operation is undefined
   * if the specified map is modified while the operation is in progress.
   *
   * @since 12.0
   */
  void putAll(Map<? extends K,? extends V> m);

  /**
   * Discards any cached value for key {@code key}.
   */
  void invalidate(Object key);

  /**
   * Discards any cached values for keys {@code keys}.
   *
   * @since 11.0
   */
  void invalidateAll(Iterable<?> keys);

  /**
   * Discards all entries in the cache.
   */
  void invalidateAll();

  /**
   * Returns the approximate number of entries in this cache.
   */
  long size();

  /**
   * Returns a current snapshot of this cache's cumulative statistics. All stats are initialized
   * to zero, and are monotonically increasing over the lifetime of the cache.
   *
   */
  CacheStats stats();

  /**
   * Returns a view of the entries stored in this cache as a thread-safe map. Modifications made to
   * the map directly affect the cache.
   *
   * <p>Iterators from the returned map are at least <i>weakly consistent</i>: they are safe for
   * concurrent use, but if the cache is modified (including by eviction) after the iterator is
   * created, it is undefined which of the changes (if any) will be reflected in that iterator.
   */
  ConcurrentMap<K, V> asMap();

  /**
   * Performs any pending maintenance operations needed by the cache. Exactly which activities are
   * performed -- if any -- is implementation-dependent.
   */
  void cleanUp();
}
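The get(key, valueLoader) method deserves a call-out: it replaces the usual "check, compute, put" sequence with a single call, and concurrent callers asking for the same missing key wait for one load instead of each computing it. A small sketch (expensiveLookup is a hypothetical helper, and get declares the checked ExecutionException):

Cache<String, String> cache = CacheBuilder.newBuilder()
        .maximumSize(100)
        .build();

// The Callable runs only on a cache miss; its result is stored for later lookups.
String value = cache.get("user:42", new Callable<String>() {
    @Override
    public String call() throws Exception {
        return expensiveLookup("user:42");   // hypothetical expensive computation
    }
});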

LoadingCache extends Cache; here is what it adds.

public interface LoadingCache<K, V> extends Cache<K, V>, Function<K, V> {

  /**
   * Returns the value associated with {@code key} in this cache, first loading that value if
   * necessary. No observable state associated with this cache is modified until loading completes.
   *
   * <p>If another call to {@link #get} or {@link #getUnchecked} is currently loading the value for
   * {@code key}, simply waits for that thread to finish and returns its loaded value. Note that
   * multiple threads can concurrently load values for distinct keys.
   *
   * <p>Caches loaded by a {@link CacheLoader} will call {@link CacheLoader#load} to load new values
   * into the cache. Newly loaded values are added to the cache using
   * {@code Cache.asMap().putIfAbsent} after loading has completed; if another value was associated
   * with {@code key} while the new value was loading then a removal notification will be sent for
   * the new value.
   *
   * <p>If the cache loader associated with this cache is known not to throw checked
   * exceptions, then prefer {@link #getUnchecked} over this method.
   *
   * @throws ExecutionException if a checked exception was thrown while loading the value. ({@code
   *     ExecutionException} is thrown <a
   *     href="https://github.com/google/guava/wiki/CachesExplained#interruption">even if
   *     computation was interrupted by an {@code InterruptedException}</a>.)
   * @throws UncheckedExecutionException if an unchecked exception was thrown while loading the
   *     value
   * @throws ExecutionError if an error was thrown while loading the value
   */
  V get(K key) throws ExecutionException;

  /**
   * Returns the value associated with {@code key} in this cache, first loading that value if
   * necessary. No observable state associated with this cache is modified until loading
   * completes. Unlike {@link #get}, this method does not throw a checked exception, and thus should
   * only be used in situations where checked exceptions are not thrown by the cache loader.
   *
   * <p>If another call to {@link #get} or {@link #getUnchecked} is currently loading the value for
   * {@code key}, simply waits for that thread to finish and returns its loaded value. Note that
   * multiple threads can concurrently load values for distinct keys.
   *
   * <p>Caches loaded by a {@link CacheLoader} will call {@link CacheLoader#load} to load new values
   * into the cache. Newly loaded values are added to the cache using
   * {@code Cache.asMap().putIfAbsent} after loading has completed; if another value was associated
   * with {@code key} while the new value was loading then a removal notification will be sent for
   * the new value.
   *
   * <p><b>Warning:</b> this method silently converts checked exceptions to unchecked exceptions,
   * and should not be used with cache loaders which throw checked exceptions. In such cases use
   * {@link #get} instead.
   *
   * @throws UncheckedExecutionException if an exception was thrown while loading the value. (As
   *     explained in the last paragraph above, this should be an unchecked exception only.)
   * @throws ExecutionError if an error was thrown while loading the value
   */
  V getUnchecked(K key);

  /**
   * Returns a map of the values associated with {@code keys}, creating or retrieving those values
   * if necessary. The returned map contains entries that were already cached, combined with newly
   * loaded entries; it will never contain null keys or values.
   *
   * <p>Caches loaded by a {@link CacheLoader} will issue a single request to
   * {@link CacheLoader#loadAll} for all keys which are not already present in the cache. All
   * entries returned by {@link CacheLoader#loadAll} will be stored in the cache, over-writing
   * any previously cached values. This method will throw an exception if
   * {@link CacheLoader#loadAll} returns {@code null}, returns a map containing null keys or values,
   * or fails to return an entry for each requested key.
   *
   * <p>Note that duplicate elements in {@code keys}, as determined by {@link Object#equals}, will
   * be ignored.
   *
   * @throws ExecutionException if a checked exception was thrown while loading the value. ({@code
   *     ExecutionException} is thrown <a
   *     href="https://github.com/google/guava/wiki/CachesExplained#interruption">even if
   *     computation was interrupted by an {@code InterruptedException}</a>.)
   * @throws UncheckedExecutionException if an unchecked exception was thrown while loading the
   *     values
   * @throws ExecutionError if an error was thrown while loading the values
   * @since 11.0
   */
  ImmutableMap<K, V> getAll(Iterable<? extends K> keys) throws ExecutionException;

  /**
   * @deprecated Provided to satisfy the {@code Function} interface; use {@link #get} or
   *     {@link #getUnchecked} instead.
   * @throws UncheckedExecutionException if an exception was thrown while loading the value. (As
   *     described in the documentation for {@link #getUnchecked}, {@code LoadingCache} should be
   *     used as a {@code Function} only with cache loaders that throw only unchecked exceptions.)
   */
  @Deprecated
  @Override
  V apply(K key);

  /**
   * Loads a new value for key {@code key}, possibly asynchronously. While the new value is loading
   * the previous value (if any) will continue to be returned by {@code get(key)} unless it is
   * evicted. If the new value is loaded successfully it will replace the previous value in the
   * cache; if an exception is thrown while refreshing the previous value will remain, <i>and the
   * exception will be logged (using {@link java.util.logging.Logger}) and swallowed</i>.
   *
   * <p>Caches loaded by a {@link CacheLoader} will call {@link CacheLoader#reload} if the
   * cache currently contains a value for {@code key}, and {@link CacheLoader#load} otherwise.
   * Loading is asynchronous only if {@link CacheLoader#reload} was overridden with an
   * asynchronous implementation.
   *
   * <p>Returns without doing anything if another thread is currently loading the value for
   * {@code key}. If the cache loader associated with this cache performs refresh asynchronously
   * then this method may return before refresh completes.
   *
   * @since 11.0
   */
  void refresh(K key);

  /**
   * {@inheritDoc}
   *
   * <p><b>Note that although the view <i>is</i> modifiable, no method on the returned map will ever
   * cause entries to be automatically loaded.</b>
   */
  @Override
  ConcurrentMap<K, V> asMap();
}
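A quick tour of the loading entry points, assuming the upper-casing loader used throughout this post (getAll delegates to CacheLoader.loadAll when it is overridden and otherwise falls back to load per missing key; get and getAll declare the checked ExecutionException):

LoadingCache<String, String> cache = CacheBuilder.newBuilder()
        .build(new CacheLoader<String, String>() {
            @Override
            public String load(String key) {
                return key.toUpperCase();
            }
        });

String one = cache.getUnchecked("a");                        // loads and returns "A"

// Loads whatever is missing and returns an immutable snapshot of all requested keys.
ImmutableMap<String, String> many = cache.getAll(Arrays.asList("a", "b", "c"));

cache.refresh("a");                                          // reload "a" in place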

With a few questions in mind (how is the cache created, what data structure backs it, and how are expiration and removal listeners implemented?), the next step is to work through the cache implementation itself.

References

http://www.baeldung.com/guava-cache
https://github.com/google/guava
