2019-05-20~23: evict + PP + Markov

Read two papers (retention time && inter-request time) about the process of evicting files.
In the process of evicting files, choosing which file to discard is the key decision. In other words, the optimization objective is to maximize the hit rate (a small numeric sketch follows this list):
* Hit rate = P(request interval < residual lifetime of the content)
* Request interval ~ Exp(e_parameter * zipf_parameter(file)), i.e., exponentially distributed with a rate scaled by the file's Zipf popularity
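
A minimal sketch of this hit-rate formula, assuming (my own reading, not the paper's notation) that `e_parameter` is an overall request rate and `zipf_parameter(file)` is the file's normalized Zipf weight; all names and values below are illustrative:

```python
import math

def zipf_popularity(n_files, alpha):
    # Zipf weights p_i proportional to 1/i^alpha, normalized to sum to 1.
    weights = [1.0 / (i ** alpha) for i in range(1, n_files + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def overall_hit_rate(n_files, alpha, total_rate, lifetime):
    # File i's inter-request time ~ Exp(total_rate * p_i), so its hit rate
    # is P(interval < lifetime) = 1 - exp(-rate * lifetime); requests arrive
    # in proportion to popularity, hence the p_i weighting in the sum.
    p = zipf_popularity(n_files, alpha)
    return sum(pi * (1.0 - math.exp(-total_rate * pi * lifetime)) for pi in p)

print(f"hit rate: {overall_hit_rate(1000, 0.8, 10.0, 5.0):.3f}")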

Optimizing TTL Caches under Heavy-Tailed Demand && Proactive Retention Aware Caching:
- objective: maximize the hit rate; minimize storage cost + download cost (a TTL-sweep sketch follows this list)
- decision variable: content storing time / TTL
- request process: request inter-arrival times follow a heavy-tailed distribution (contrast with a PP)
- request intervals follow an exponential distribution (the per-file rate parameters are uniform && sum to 1)
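
A small sketch of this objective under assumptions I am adding myself: a reset-on-request TTL cache, Pareto (heavy-tailed) inter-request times, storage occupancy taken as the renewal-reward ratio E[min(X, TTL)] / E[X], and invented cost weights:

```python
import math

def pareto_cdf(t, shape, scale):
    # Heavy-tailed Pareto inter-request time: P(X < t).
    return 0.0 if t <= scale else 1.0 - (scale / t) ** shape

def mean_truncated(t, shape, scale):
    # E[min(X, t)] = integral of the tail P(X > x) from 0 to t.
    if t <= scale:
        return t
    tail = (scale ** shape) * (t ** (1 - shape) - scale ** (1 - shape)) / (1 - shape)
    return scale + tail

def objective(ttl, shape, scale, c_store, c_miss):
    # Hit rate, minus storage cost (fraction of time the file is cached),
    # minus download cost (miss probability). Weights are illustrative only.
    hit = pareto_cdf(ttl, shape, scale)
    occupancy = mean_truncated(ttl, shape, scale) / (shape * scale / (shape - 1))
    return hit - c_store * occupancy - c_miss * (1.0 - hit)

best = max((objective(t, 1.5, 1.0, 0.4, 0.5), t) for t in [1, 2, 5, 10, 20, 50])
print(f"best objective {best[0]:.3f} at TTL = {best[1]}")
```

With these made-up weights the trade-off has an interior optimum: a longer TTL raises the hit rate but also the storage time, so sweeping TTL exposes the balance point.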
Reflections

After staring at these for ages, I finally damn well understood them.
Problem I:
Does simulating the request process actually need to be online, or does it need to be modeled as a stochastic process?

Solution: the question is really whether the cache is online or static (indirectly static, i.e., operate on the requests within one time window, and the window can be set large enough); see the sketch below.
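
A minimal sketch of that time-window view, with made-up per-file rates: the whole window's trace is drawn in one shot as a merged Poisson process, so nothing has to be simulated online:

```python
import random

def poisson_trace(rates, window, seed=0):
    # Draw each file's Exp(rate) inter-arrivals until the window ends,
    # then merge all files into one time-ordered request trace.
    rng = random.Random(seed)
    trace = []
    for file_id, lam in enumerate(rates):
        t = rng.expovariate(lam)
        while t < window:
            trace.append((t, file_id))
            t += rng.expovariate(lam)
    trace.sort()
    return trace

trace = poisson_trace(rates=[2.0, 1.0, 0.5], window=10.0)
print(len(trace), "requests in the window")
```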

Problem II:
With static caching, a set of files is stored in the cache, and the cache content does not change on a cache hit or miss. In other words, the caching strategy uses the request information of the last period to predict the requests of the next period. Specifically:
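A minimal sketch with a hypothetical trace of file ids: count requests in the previous window, pin the top-`capacity` files, and measure the hit rate over the next window:

```python
from collections import Counter

def static_cache_hit_rate(prev_window, next_window, capacity):
    # Cache the `capacity` most-requested files of the previous window,
    # keep the cache fixed, and count hits over the next window.
    cached = {f for f, _ in Counter(prev_window).most_common(capacity)}
    hits = sum(1 for f in next_window if f in cached)
    return hits / len(next_window) if next_window else 0.0

prev = [0, 0, 1, 2, 0, 1, 3, 0]   # hypothetical file-id trace, last period
nxt  = [0, 1, 1, 0, 4, 2, 0, 1]   # next period
print(f"hit rate: {static_cache_hit_rate(prev, nxt, capacity=2):.2f}")
```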

Problems && Abstraction
User request arrival intensity --> channel/delay? Besides the CP affecting delivery,
bandwidth also affects delivery.
What counts as the "last period", i.e., how often should the cached set be updated?

Bandwidth affects delivery:

Approach: reference ideas
Method 1: congestion-sensitive ('Mostafa'): be sensitive to the traffic load on the link to the back-end server and on the links to the cache nodes (reaching the serving cache takes a multi-hop path, and path selection is influenced by traffic congestion and node stability).
Method 2: a distributed algorithm, DR (Jian Li): use information from the preceding multi-hop nodes on the reverse path (delay, stability, popularity, CP) to decide whether the current node should cache the content; a toy scoring sketch follows.
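
Neither paper's actual rule is reproduced here; as a toy illustration of a Method 2-style decision, the weights, normalization, and threshold below are all invented:

```python
def should_cache(hop_delays, node_stability, popularity, threshold=0.5):
    # Toy reverse-path score: the longer the upstream fetch path, the more
    # stable this node, and the more popular the content, the more it pays
    # to cache here. All weights are illustrative, not from the papers.
    upstream_delay = sum(hop_delays)                 # cost of re-fetching
    score = (0.4 * min(upstream_delay / 100.0, 1.0)  # normalized delay
             + 0.3 * node_stability
             + 0.3 * popularity)
    return score >= threshold

print(should_cache(hop_delays=[12.0, 30.0, 25.0], node_stability=0.9, popularity=0.6))
```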

A node receives different numbers of requests in different time units; when the request volume is high, traffic congestion builds up within the caching node's coverage area. The request strategy can then divert to other nodes, avoiding excessive response delay caused by congestion (a small redirection sketch follows). --> congestion?
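
A toy sketch of that diversion idea, with hypothetical node names, loads, and a capacity threshold:

```python
def pick_serving_node(loads, capacity, preferred):
    # Serve from the preferred (nearest) cache node unless it is congested;
    # otherwise divert to the least-loaded node, trading extra hops for
    # lower queueing delay.
    if loads[preferred] < capacity:
        return preferred
    return min(loads, key=loads.get)

loads = {"edge-A": 95, "edge-B": 40, "edge-C": 60}
print(pick_serving_node(loads, capacity=80, preferred="edge-A"))  # -> edge-B
```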
