OkHttp Internals Explained
2022-01-14 · 馒Care
When making a request with OkHttp you only need to touch, at a minimum, OkHttpClient, Request, Call and Response, but internally the framework performs a great deal of work.
Most of that logic lives in the interceptors, but before a request reaches the interceptor chain it first depends on the dispatcher to schedule the request task.
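As a quick, minimal sketch of those four types in use (OkHttp 4.x Kotlin syntax; the URL is just a placeholder), an asynchronous request looks roughly like this:

import java.io.IOException
import okhttp3.Call
import okhttp3.Callback
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.Response

fun main() {
    val client = OkHttpClient()
    val request = Request.Builder().url("https://example.com/").build()

    // enqueue() hands the Call to the dispatcher, which schedules it on its thread pool.
    client.newCall(request).enqueue(object : Callback {
        override fun onFailure(call: Call, e: IOException) {
            e.printStackTrace()
        }

        override fun onResponse(call: Call, response: Response) {
            response.use { println("HTTP ${it.code}") }
        }
    })
}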
Dispatcher: maintains the internal queues and thread pool, and takes care of scheduling requests.
Interceptors: carry out the entire request process.
===== Dispatcher: asynchronous request workflow
/**
* The maximum number of requests for each host to execute concurrently. This limits requests by
* the URL's host name. Note that concurrent requests to a single IP address may still exceed this
* limit: multiple hostnames may share an IP address or be routed through the same HTTP proxy.
*
* If more than [maxRequestsPerHost] requests are in flight when this is invoked, those requests
* will remain in flight.
*
* WebSocket connections to hosts **do not** count against this limit.
*/
@get:Synchronized var maxRequestsPerHost = 5
  set(maxRequestsPerHost) {
    require(maxRequestsPerHost >= 1) { "max < 1: $maxRequestsPerHost" }
    synchronized(this) {
      field = maxRequestsPerHost
    }
    promoteAndExecute()
  }
/**
* The maximum number of requests to execute concurrently. Above this requests queue in memory,
* waiting for the running calls to complete.
*
* If more than [maxRequests] requests are in flight when this is invoked, those requests will
* remain in flight.
*/
@get:Synchronized var maxRequests = 64
  set(maxRequests) {
    require(maxRequests >= 1) { "max < 1: $maxRequests" }
    synchronized(this) {
      field = maxRequests
    }
    promoteAndExecute()
  }
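For reference, both limits can be adjusted on a client's dispatcher before enqueueing calls (OkHttp 4.x property syntax; the numbers below are arbitrary example values):

import okhttp3.OkHttpClient

fun main() {
    val client = OkHttpClient()
    // Raise the overall and per-host concurrency caps from their defaults (64 / 5).
    client.dispatcher.maxRequests = 128
    client.dispatcher.maxRequestsPerHost = 10
}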
Question 1: How do the OkHttp dispatcher and the interceptors work?
Question 2: Why does the dispatcher use ArrayDeque for its queues?
/**
* Resizable-array implementation of the {@link Deque} interface. Array
* deques have no capacity restrictions; they grow as necessary to support
* usage. They are not thread-safe; in the absence of external
* synchronization, they do not support concurrent access by multiple threads.
* Null elements are prohibited. This class is likely to be faster than
* {@link Stack} when used as a stack, and faster than {@link LinkedList}
* when used as a queue.
*
* <p>Most {@code ArrayDeque} operations run in amortized constant time.
* Exceptions include
* {@link #remove(Object) remove},
* {@link #removeFirstOccurrence removeFirstOccurrence},
* {@link #removeLastOccurrence removeLastOccurrence},
* {@link #contains contains},
* {@link #iterator iterator.remove()},
* and the bulk operations, all of which run in linear time.
*
* <p>The iterators returned by this class's {@link #iterator() iterator}
* method are <em>fail-fast</em>: If the deque is modified at any time after
* the iterator is created, in any way except through the iterator's own
* {@code remove} method, the iterator will generally throw a {@link
* ConcurrentModificationException}. Thus, in the face of concurrent
* modification, the iterator fails quickly and cleanly, rather than risking
* arbitrary, non-deterministic behavior at an undetermined time in the
* future.
*
* <p>Note that the fail-fast behavior of an iterator cannot be guaranteed
* as it is, generally speaking, impossible to make any hard guarantees in the
* presence of unsynchronized concurrent modification. Fail-fast iterators
* throw {@code ConcurrentModificationException} on a best-effort basis.
* Therefore, it would be wrong to write a program that depended on this
* exception for its correctness: <i>the fail-fast behavior of iterators
* should be used only to detect bugs.</i>
*
* <p>This class and its iterator implement all of the
* <em>optional</em> methods of the {@link Collection} and {@link
* Iterator} interfaces.
*
* @author Josh Bloch and Doug Lea
* @since 1.6
* @param <E> the type of elements held in this deque
*/
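This is roughly why the dispatcher's ready/running queues are ArrayDeques: calls are promoted in FIFO order, and add and iterator-based removal are cheap. Below is a self-contained sketch of that promotion pattern; the names (PendingCall, readyCalls, runningCalls, MAX_REQUESTS) are illustrative only, not OkHttp's actual code.

import java.util.ArrayDeque

data class PendingCall(val host: String)

val readyCalls = ArrayDeque<PendingCall>()    // calls waiting to run, in FIFO order
val runningCalls = ArrayDeque<PendingCall>()  // calls currently executing
const val MAX_REQUESTS = 64                   // stand-in for the dispatcher's maxRequests

fun promote() {
    val iter = readyCalls.iterator()
    while (iter.hasNext()) {
        val call = iter.next()
        if (runningCalls.size >= MAX_REQUESTS) break  // overall limit reached, stop promoting
        iter.remove()             // remove from the ready queue through the iterator
        runningCalls.add(call)    // addLast: amortized O(1)
        // the real dispatcher would now hand the call to its executor
    }
}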
Question 3: Why use a thread pool?
1. Core pool size: the pool always keeps this many threads alive.
2. Maximum pool size: the maximum number of threads that may execute at the same time.
3. Keep-alive time: how long a thread above the core size may stay idle before it is reclaimed.
Question 4: Why does OkHttp use SynchronousQueue for its thread pool instead of a bounded queue such as ArrayBlockingQueue or LinkedBlockingQueue?
1. Compare the characteristics of SynchronousQueue with those of a bounded queue.
2. SynchronousQueue has no capacity: every submitted task is handed straight to a thread, so the pool keeps creating threads when none are idle, which gives better concurrency.
3. A bounded queue has a capacity; if the capacity were set to 1, for example, submissions could end up blocked, or a task submitted later could start executing before one that arrived earlier (the later task is handed to a newly created thread while the earlier one is still waiting in the queue).
The relevant part of the source is the executor that the Dispatcher constructs.
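Roughly, that executor is a ThreadPoolExecutor with no core threads, an effectively unbounded maximum, a 60-second keep-alive and a SynchronousQueue. A sketch of an equivalent construction (equivalent in spirit, not the literal OkHttp source):

import java.util.concurrent.SynchronousQueue
import java.util.concurrent.ThreadPoolExecutor
import java.util.concurrent.TimeUnit

// No idle core threads, an effectively unbounded thread count, and a zero-capacity
// hand-off queue, so every enqueued call is handed straight to a fresh or recycled
// thread instead of waiting in a work queue.
val dispatcherLikeExecutor = ThreadPoolExecutor(
    0,                        // corePoolSize: keep no threads around when idle
    Int.MAX_VALUE,            // maximumPoolSize: effectively unbounded
    60L, TimeUnit.SECONDS,    // idle threads are reclaimed after 60 seconds
    SynchronousQueue<Runnable>()
)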
Question 5: How is a synchronous request executed?
1. RealCall -> execute() -> client.dispatcher().executed(call): the dispatcher adds the call to runningSyncCalls -> getResponseWithInterceptorChain() runs the request through the interceptor chain on the calling thread -> finally, finished(Deque<T> calls, T call, boolean promoteCalls) removes the completed call from the queue.
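A minimal user-facing example of a synchronous call (OkHttp 4.x; the URL is a placeholder), with the internal steps summarized in the comments:

import okhttp3.OkHttpClient
import okhttp3.Request

fun main() {
    val client = OkHttpClient()
    val request = Request.Builder().url("https://example.com/").build()

    // execute() blocks the calling thread: internally the dispatcher records the call in
    // runningSyncCalls, getResponseWithInterceptorChain() produces the Response, and a
    // finally block calls dispatcher.finished() to remove the call from the queue.
    client.newCall(request).execute().use { response ->
        println("HTTP ${response.code}")
    }
}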
Question 6: OkHttp's connection pool caches connections so that the cost of repeatedly setting up and tearing down connections (TCP/TLS handshakes and teardowns) does not drag down performance.
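For illustration, the pool can be configured when building the client; the numbers below are just example values, not recommended settings:

import java.util.concurrent.TimeUnit
import okhttp3.ConnectionPool
import okhttp3.OkHttpClient

fun main() {
    // Keep up to 5 idle connections alive for 5 minutes so that follow-up requests to the
    // same host can reuse an existing TCP (and TLS) connection instead of re-handshaking.
    val client = OkHttpClient.Builder()
        .connectionPool(ConnectionPool(5, 5L, TimeUnit.MINUTES))
        .build()
}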
[Figure: the complete OkHttp request flow]
===== Interceptors (chain-of-responsibility pattern, a behavioral design pattern)
The pattern builds a chain of receiver objects for a request; as the request is handled, each handler performs its own filtering, each with its own responsibility.
The handlers on the chain are responsible for processing the request; the client only needs to send the request into the chain and does not have to care about the processing details or how the request is passed along, so the chain of responsibility decouples the sender of a request from its handlers.
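In OkHttp this shows up directly in the Interceptor interface: each interceptor does its own piece of work and forwards the request via chain.proceed(). A minimal sketch with a hypothetical logging interceptor (OkHttp 4.x syntax):

import okhttp3.Interceptor
import okhttp3.OkHttpClient
import okhttp3.Response

// Each interceptor handles one concern and passes the request to the next link in the
// chain by calling chain.proceed(); the Response then flows back up through the chain.
class LoggingInterceptor : Interceptor {
    override fun intercept(chain: Interceptor.Chain): Response {
        val request = chain.request()
        println("--> ${request.method} ${request.url}")
        val response = chain.proceed(request)
        println("<-- ${response.code} ${request.url}")
        return response
    }
}

val client = OkHttpClient.Builder()
    .addInterceptor(LoggingInterceptor())   // application interceptor, runs first in the chain
    .build()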