
Systems Design: A C++11 Implementation of an LRU Cache

2017-06-07  wuzhiguo

Goal

Design and implement an LRU cache whose get and put operations both run in O(1) time.

Example Problem

LeetCode 146. LRU Cache
Design and implement a data structure for Least Recently Used (LRU) cache. It should support the following operations: get and put

get(key)
put(key, value)

Follow up: Could you do both operations in O(1) time complexity?

Example:

LRUCache cache = new LRUCache( 2 /* capacity */ );

cache.put(1, 1);
cache.put(2, 2);
cache.get(1);       // returns 1
cache.put(3, 3);    // evicts key 2
cache.get(2);       // returns -1 (not found)
cache.put(4, 4);    // evicts key 1
cache.get(1);       // returns -1 (not found)
cache.get(3);       // returns 3
cache.get(4);       // returns 4

Basic Concepts

A cache has two defining characteristics: it is fast to access, and its capacity is limited.

The Underlying Idea

An LRU cache is a cache whose eviction policy is LRU: the entry that has gone unused for the longest time is discarded.
An eviction policy answers this question: when the cache is full and new data needs to be added, which old entry should be deleted to make room?
LRU is short for Least Recently Used. The algorithm relies on the principle of locality: the entry that has not been accessed for the longest time is the least likely to be needed again, so it is the one that gets evicted.

Implementation

To achieve O(1) for both operations, the implementation pairs a doubly linked list (std::list), which keeps entries ordered from most to least recently used, with a map from each key to its node's position in the list: the list gives O(1) reordering and eviction, and the map gives O(1) lookup.

Interface

int get(int key);

void put(int key, int value);

Code Details

#include <list>
#include <unordered_map>

using namespace std;

class LRUCache {
private:
    typedef int key_t;
    typedef int value_t;
    struct Node_t {
        key_t key;
        value_t value;
    };
    typedef list<Node_t> cacheList_t;
    // unordered_map gives O(1) average lookup; std::map would make
    // every get/put O(log n)
    typedef unordered_map<key_t, cacheList_t::iterator> map_t;

    int m_capacity;
    cacheList_t m_cacheList;  // front = most recently used, back = least
    map_t m_mp;               // key -> node position in m_cacheList

public:
    LRUCache(int capacity) : m_capacity(capacity) {}

    int get(int key) {
        auto it = m_mp.find(key);
        // not cached
        if (it == m_mp.end()) return -1;
        // cached: move the node to the list head to mark it
        // as most recently used
        auto list_it = it->second;
        Node_t node = {key, list_it->value};
        m_cacheList.erase(list_it);
        m_cacheList.push_front(node);
        m_mp[key] = m_cacheList.begin();
        return m_cacheList.begin()->value;
    }

    void put(int key, int value) {
        auto it = m_mp.find(key);
        // cached: delete the old node, then insert the updated one
        // at the list head
        if (it != m_mp.end()) {
            Node_t node = {key, value};
            m_cacheList.erase(it->second);
            m_cacheList.push_front(node);
            m_mp[key] = m_cacheList.begin();
        }
        // not cached
        else {
            // cache is full: evict the least recently used node
            // at the tail
            if ((int)m_cacheList.size() == m_capacity) {
                m_mp.erase(m_cacheList.back().key);
                m_cacheList.pop_back();
            }
            Node_t node = {key, value};
            m_cacheList.push_front(node);
            m_mp[key] = m_cacheList.begin();
        }
    }
};

/**
 * Your LRUCache object will be instantiated and called as such:
 * LRUCache obj(capacity);
 * int param_1 = obj.get(key);
 * obj.put(key, value);
 */

Possible Optimizations
