Android Networking, Part 3: A Close Reading of OkHttp

2019-05-08  Bzaigege
Preface: I started this networking series to share what I have learned about networking; the articles are personal study notes. Some of the material is drawn from around the web; if anything is inappropriate, please contact me privately.

Android Networking, Part 1: HTTP Fundamentals
Android Networking, Part 2: HTTPS Fundamentals
Android Networking, Part 3: A Close Reading of OkHttp
Android Networking, Part 4: A Close Reading of Retrofit

Reading the OkHttp library through the lens of HTTP fundamentals

OkHttp is arguably the most popular networking framework on Android today, and Google has adopted it inside the Android platform as the underlying HTTP client. This article tries to explain how OkHttp is implemented from the perspective of HTTP fundamentals.
OkHttp source code: https://github.com/square/okhttp

A quick look at how OkHttp is used

The official site has introductory GET and POST examples worth reading: https://square.github.io/okhttp/

OkHttpClient client = new OkHttpClient();
String run(String url) throws IOException {
  Request request = new Request.Builder()
      .url(url)
      .build();

  try (Response response = client.newCall(request).execute()) {
    return response.body().string();
  }
}
public static final MediaType JSON
    = MediaType.get("application/json; charset=utf-8");

OkHttpClient client = new OkHttpClient();

String post(String url, String json) throws IOException {
  RequestBody body = RequestBody.create(JSON, json);
  Request request = new Request.Builder()
      .url(url)
      .post(body)
      .build();
  try (Response response = client.newCall(request).execute()) {
    return response.body().string();
  }
}

To sum up briefly: the basic usage is straightforward, but keeping the HTTP fundamentals from the earlier articles in mind, note that GET and POST differ in how request parameters are assembled; a short illustration follows this paragraph.
For more usage notes, see this write-up: Android OkHttp3简介和使用详解
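As a rough illustration of that difference (the endpoint URL and parameter names below are made up), a GET carries its parameters in the URL query string, while a POST carries them in the request body:

HttpUrl url = HttpUrl.parse("https://api.example.com/search").newBuilder()
    .addQueryParameter("keyword", "okhttp")   // GET: parameters are appended to the query string
    .build();
Request getRequest = new Request.Builder().url(url).build();

RequestBody formBody = new FormBody.Builder()
    .add("keyword", "okhttp")                 // POST: parameters travel in the request body
    .build();
Request postRequest = new Request.Builder()
    .url("https://api.example.com/search")
    .post(formBody)
    .build();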

Part 1: The OkHttp request flow

The examples above show that OkHttpClient and Request only configure and manage the request; the actual work starts at client.newCall(request).execute().
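The snippets above only showed the synchronous execute(); since the analysis below also covers the asynchronous path, here is a minimal sketch of enqueue() (the URL is a placeholder, and both callbacks run on a dispatcher thread, not the main thread):

client.newCall(new Request.Builder().url("https://example.com").build())
    .enqueue(new Callback() {
      @Override public void onFailure(Call call, IOException e) {
        e.printStackTrace(); // invoked on a background thread when the request fails
      }

      @Override public void onResponse(Call call, Response response) throws IOException {
        try (ResponseBody body = response.body()) {
          System.out.println(body.string()); // also invoked on a background thread
        }
      }
    });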

(1) client.newCall(request) creates a Call, but the object actually instantiated is RealCall, the concrete implementation of Call
public class OkHttpClient implements Cloneable, Call.Factory, WebSocket.Factory {
    .....
    @Override 
    public Call newCall(Request request) {
      return RealCall.newRealCall(this, request, false /* for web socket */);
    }
    .....
}
final class RealCall implements Call {
    ......
    static RealCall newRealCall(OkHttpClient client, Request originalRequest, boolean forWebSocket) {
      // Safely publish the Call instance to the EventListener.
      RealCall call = new RealCall(client, originalRequest, forWebSocket); // create the RealCall object
      call.eventListener = client.eventListenerFactory().create(call); // create the event listener for this call
      return call;
    }
    
     private RealCall(OkHttpClient client, Request originalRequest, boolean forWebSocket) {
        this.client = client;
        this.originalRequest = originalRequest; // the original request
        this.forWebSocket = forWebSocket; // false here: this is not a WebSocket call
        this.retryAndFollowUpInterceptor = new RetryAndFollowUpInterceptor(client, forWebSocket);
        this.timeout = new AsyncTimeout() {
          @Override 
          protected void timedOut() { // cancel the call when the overall call timeout fires
            cancel();
          }
        };
        this.timeout.timeout(client.callTimeoutMillis(), MILLISECONDS);
    }   
    ......
}

Two details here are worth noting: newRealCall() also attaches an EventListener that observes the whole request, and the constructor installs an AsyncTimeout that simply cancels the call once the overall call timeout (client.callTimeoutMillis()) expires.

(2) Sending the request: asynchronously via RealCall.enqueue(Callback), or synchronously via RealCall.execute()
@Override 
public void enqueue(Callback responseCallback) {
    synchronized (this) {
        // each call may only be executed once
        if (executed) throw new IllegalStateException("Already Executed");
        executed = true;
    }
    captureCallStackTrace();
    eventListener.callStart(this); // notify the listener that the call has started
    client.dispatcher().enqueue(new RealCall.AsyncCall(responseCallback)); // hand the call to the dispatcher's async queue
}

@Override 
public Response execute() throws IOException {
    synchronized (this) {
        // each call may only be executed once
        if (executed) throw new IllegalStateException("Already Executed");
        executed = true;
    }
    captureCallStackTrace();
    timeout.enter(); // start the call timeout
    eventListener.callStart(this); // notify the listener that the call has started
    try {
        client.dispatcher().executed(this);  // record the call in the dispatcher's running-sync queue
        Response result = getResponseWithInterceptorChain(); // run the interceptor chain and get the response
        if (result == null) throw new IOException("Canceled");
        return result;
    } catch (IOException e) {
        e = timeoutExit(e);
        eventListener.callFailed(this, e); // notify the listener of the failure
        throw e;
    } finally {
        client.dispatcher().finished(this); // remove from the queue and promote the next call
    }
}

The two methods start out almost identically: both register the call with client.dispatcher(). The difference is that enqueue() wraps the callback in a new RealCall.AsyncCall(responseCallback). What does that do? AsyncCall extends NamedRunnable, so the asynchronous request ends up running on a worker thread. Here is the relevant source:

public abstract class NamedRunnable implements Runnable {
   ......
  @Override 
    public final void run() {
    String oldName = Thread.currentThread().getName();
    Thread.currentThread().setName(name);
    try {
      // template method: subclasses put the actual work in execute()
      execute();
    } finally {
      Thread.currentThread().setName(oldName);
    }
  }

  protected abstract void execute();
  ......
}


final class AsyncCall extends NamedRunnable {
    ......
    @Override 
    protected void execute() {
        boolean signalledCallback = false;  // flag that prevents the callback from being invoked twice if an exception is thrown
        timeout.enter();
        try {
            Response response = getResponseWithInterceptorChain(); // run the interceptor chain and get the response
            if (retryAndFollowUpInterceptor.isCanceled()) {
                signalledCallback = true;
                responseCallback.onFailure(RealCall.this, new IOException("Canceled"));
            } else {
                signalledCallback = true;
                responseCallback.onResponse(RealCall.this, response);
            }
        } catch (IOException e) {
            e = timeoutExit(e);
            if (signalledCallback) {
                // Do not signal the callback twice!
                Platform.get().log(INFO, "Callback failure for " + toLoggableString(), e);
            } else {
                eventListener.callFailed(RealCall.this, e); // notify the listener of the failure
                responseCallback.onFailure(RealCall.this, e);
            }
        } finally {
            client.dispatcher().finished(this); // remove from the queue and promote the next call
        }
    }
    ...
}
(3) Dispatcher: the request scheduler

In the code we just walked through, every call registers with the dispatcher when it starts (client.dispatcher().executed() or enqueue()) and reports back when it ends (client.dispatcher().finished()). So what role does this dispatcher() actually play?

public final class Dispatcher {
    private int maxRequests = 64; // maximum number of concurrently running requests
    private int maxRequestsPerHost = 5; // maximum number of concurrently running requests per host
    private @Nullable Runnable idleCallback;
    
    /** Executes calls. Created lazily. */
    private @Nullable ExecutorService executorService; // thread pool
    
    /** Ready async calls in the order they'll be run. */
    private final Deque<AsyncCall> readyAsyncCalls = new ArrayDeque<>(); // async calls waiting to run
    
    /** Running asynchronous calls. Includes canceled calls that haven't finished yet. */
    private final Deque<AsyncCall> runningAsyncCalls = new ArrayDeque<>(); // async calls currently running
    
    /** Running synchronous calls. Includes canceled calls that haven't finished yet. */
    private final Deque<RealCall> runningSyncCalls = new ArrayDeque<>(); // sync calls currently running
    
    /** Lazily creates a thread pool: 0 core threads, an effectively unbounded maximum (Integer.MAX_VALUE),
        and a 60-second keep-alive, so idle threads are reclaimed after 60 seconds. */
    public synchronized ExecutorService executorService() {
        if (executorService == null) {
          executorService = new ThreadPoolExecutor(0, Integer.MAX_VALUE, 60, TimeUnit.SECONDS,
              new SynchronousQueue<Runnable>(), Util.threadFactory("OkHttp Dispatcher", false));
        }
        return executorService;
    }
}  

First, what does the Dispatcher do for a synchronous request? Nothing more than tracking the call in a queue.

/** Used by {@code Call#execute} to signal it is in-flight. */
synchronized void executed(RealCall call) {
    runningSyncCalls.add(call); // add to the running-sync queue
}

/** Used by {@code Call#execute} to signal completion. */
void finished(RealCall call) {
    finished(runningSyncCalls, call); // remove from the queue
}

Now, what does the Dispatcher do for an asynchronous request?

void enqueue(AsyncCall call) {
    synchronized (this) {
        readyAsyncCalls.add(call); // add to the ready queue
    }
    promoteAndExecute();  // promote eligible calls and run them
}

private boolean promoteAndExecute() {
    assert (!Thread.holdsLock(this));

    List<AsyncCall> executableCalls = new ArrayList<>(); // calls that are eligible to run
    boolean isRunning;
    synchronized (this) {
        // walk the calls that haven't started yet
        for (Iterator<AsyncCall> i = readyAsyncCalls.iterator(); i.hasNext(); ) {
            AsyncCall asyncCall = i.next();
            // capacity checks: at most 64 running calls overall and at most 5 per host
            if (runningAsyncCalls.size() >= maxRequests) break; // Max capacity.
            if (runningCallsForHost(asyncCall) >= maxRequestsPerHost) continue; // Host max capacity.

            // move eligible calls from the ready queue to the running queue
            i.remove();
            executableCalls.add(asyncCall); 
            runningAsyncCalls.add(asyncCall);
        }
        isRunning = runningCallsCount() > 0;
    }

    // run the promoted calls
    for (int i = 0, size = executableCalls.size(); i < size; i++) {
        AsyncCall asyncCall = executableCalls.get(i);
        asyncCall.executeOn(executorService());  // run on the thread pool
    }

    return isRunning;
}

final class AsyncCall extends NamedRunnable {
    ......
    void executeOn(ExecutorService executorService) {
        assert (!Thread.holdsLock(client.dispatcher()));
        boolean success = false;
        try {
            executorService.execute(this); // submit to the thread pool
            success = true;
        } catch (RejectedExecutionException e) {
            InterruptedIOException ioException = new InterruptedIOException("executor rejected");
            ioException.initCause(e);
            eventListener.callFailed(RealCall.this, ioException);
            responseCallback.onFailure(RealCall.this, ioException);
        } finally {
            if (!success) { // if submission failed, remove the call from the running queue
                client.dispatcher().finished(this); // This call is no longer running!
            }
        }
    }
    ...
}

/** Used by {@code AsyncCall#run} to signal completion. */
void finished(AsyncCall call) {
    finished(runningAsyncCalls, call);  // remove from the queue
}

Finally, what does finished() actually do?

private <T> void finished(Deque<T> calls, T call) {
    Runnable idleCallback;
    synchronized (this) { 
        // remove the finished call from its queue
        if (!calls.remove(call)) throw new AssertionError("Call wasn't in-flight!");
        idleCallback = this.idleCallback;
    }

    // promote and run the next waiting calls
    boolean isRunning = promoteAndExecute();

    if (!isRunning && idleCallback != null) {
        idleCallback.run();
    }
}

In short, the Dispatcher is the request scheduler: more precisely, its main job is managing the threads used by asynchronous requests and keeping track of all in-flight calls.
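If the default limits (64 overall, 5 per host) do not suit a particular app, they can be adjusted on the client's dispatcher; a small sketch (the numbers are arbitrary):

OkHttpClient client = new OkHttpClient();
client.dispatcher().setMaxRequests(32);        // overall cap on concurrently running async calls
client.dispatcher().setMaxRequestsPerHost(8);  // cap per host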

(4) Getting the response: getResponseWithInterceptorChain()

Actually, with getResponseWithInterceptorChain() returning the response, the overall request flow is fully covered.
But wait: nothing so far has touched the HTTP mechanics this article promised. Was that just talk? Part 2 is where they come in.

Part 2: Where OkHttp meets the HTTP protocol

(1) OkHttpClient: the grab bag of configuration
public class OkHttpClient implements Cloneable, Call.Factory, WebSocket.Factory {
    ......
    final Dispatcher dispatcher;  // request scheduler
    final @Nullable Proxy proxy;  // proxy
    final List<Protocol> protocols; // supported protocols
    final List<ConnectionSpec> connectionSpecs; // connection specs: TLS versions and cipher suites used for HTTPS
    final List<Interceptor> interceptors; // application interceptors, i.e. the user-defined ones
    final List<Interceptor> networkInterceptors; 
    final EventListener.Factory eventListenerFactory; // factory for listeners that observe the whole lifecycle of a call
    final ProxySelector proxySelector;
    final CookieJar cookieJar;  // cookie handling: stores cookies sent by the server
    final @Nullable Cache cache; // response cache, backed by DiskLruCache
    final @Nullable InternalCache internalCache; // internal cache interface used together with Cache
    final SocketFactory socketFactory; // factory for plain sockets
    final SSLSocketFactory sslSocketFactory; // factory for TLS sockets
    final CertificateChainCleaner certificateChainCleaner; // trims the certificate chain down to the certificates that matter
    final HostnameVerifier hostnameVerifier; // hostname verifier
    final CertificatePinner certificatePinner; // certificate pinner, often used to validate self-signed certificates
    final Authenticator proxyAuthenticator;
    final Authenticator authenticator; // authenticator: e.g. reacts to a 401 by supplying new credentials
    final ConnectionPool connectionPool; // connection pool
    final Dns dns; // DNS resolution
    final boolean followSslRedirects; // whether redirects that switch between http and https are followed
    final boolean followRedirects; // whether redirects are followed at all
    final boolean retryOnConnectionFailure; // whether to retry when a connection fails
    final int callTimeout; // timeout for the entire call
    final int connectTimeout; // connect timeout
    final int readTimeout; // read timeout
    final int writeTimeout; // write timeout
    final int pingInterval; // WebSocket (and HTTP/2) related: pings are sent at this interval to keep the connection alive

    public OkHttpClient() {
        this(new Builder());
    }

    ......

    public Builder() {
      dispatcher = new Dispatcher(); // a default dispatcher is created
      protocols = DEFAULT_PROTOCOLS;  // defaults to HTTP/2 and HTTP/1.1
      connectionSpecs = DEFAULT_CONNECTION_SPECS; // default TLS versions and cipher suites (plus cleartext)
      eventListenerFactory = EventListener.factory(EventListener.NONE); // no-op listener by default
      proxySelector = ProxySelector.getDefault();
      if (proxySelector == null) {
        proxySelector = new NullProxySelector();
      }
      cookieJar = CookieJar.NO_COOKIES;  // no-op by default; supply your own implementation to persist cookies
      socketFactory = SocketFactory.getDefault();
      hostnameVerifier = OkHostnameVerifier.INSTANCE; // default implementation verifies the hostname against the certificate
      certificatePinner = CertificatePinner.DEFAULT; // no pins by default
      proxyAuthenticator = Authenticator.NONE; // none by default
      authenticator = Authenticator.NONE; // none by default
      connectionPool = new ConnectionPool(); // by default keeps up to 5 idle connections alive for 5 minutes
      dns = Dns.SYSTEM; // system DNS by default
      followSslRedirects = true;  // follow redirects between http and https by default
      followRedirects = true; // follow redirects by default
      retryOnConnectionFailure = true; // retry on connection failure by default
      callTimeout = 0;
      connectTimeout = 10_000; // 10 seconds by default
      readTimeout = 10_000; // 10 seconds by default
      writeTimeout = 10_000; // 10 seconds by default
      pingInterval = 0;
    }
    ......
}

Even this quick pass over OkHttpClient shows how much is configurable and how many concepts are involved. Combined with the fundamentals covered earlier, you can see that OkHttp itself implements thread scheduling, DNS resolution, caching, certificate validation, socket connections and more, and that each of these responsibilities is carried out by a different interceptor.
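To tie these fields back to everyday usage, here is a hedged configuration sketch; the values are arbitrary and context is assumed to be an Android Context:

OkHttpClient client = new OkHttpClient.Builder()
    .connectTimeout(15, TimeUnit.SECONDS)
    .readTimeout(20, TimeUnit.SECONDS)
    .writeTimeout(20, TimeUnit.SECONDS)
    .retryOnConnectionFailure(true)
    .followRedirects(true)
    .cache(new Cache(new File(context.getCacheDir(), "http_cache"), 10L * 1024 * 1024)) // 10 MB disk cache
    .build();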

(2) OkHttp interceptors (the heart of the library)

Looking back at the earlier analysis, we stopped as soon as getResponseWithInterceptorChain() returned the response. What actually happens inside it?

Response getResponseWithInterceptorChain() throws IOException {
    // Build a full stack of interceptors.
    List<Interceptor> interceptors = new ArrayList<>(); // build the list of interceptors
    interceptors.addAll(client.interceptors());  // 1. application (user-defined) interceptors
    interceptors.add(retryAndFollowUpInterceptor); // 2. retries and redirects
    interceptors.add(new BridgeInterceptor(client.cookieJar())); // 3. bridges application requests to network requests (headers, cookies, gzip)
    interceptors.add(new CacheInterceptor(client.internalCache()));// 4. cache handling
    interceptors.add(new ConnectInterceptor(client));// 5. picks or establishes the connection for the request
    if (!forWebSocket) { // network interceptors (skipped for WebSocket calls)
        interceptors.addAll(client.networkInterceptors());
    }
    // 6. writes the request to the server and reads/parses the response
    interceptors.add(new CallServerInterceptor(forWebSocket));

    // wire the interceptors into a chain
    Interceptor.Chain chain = new RealInterceptorChain(interceptors, null, null, null, 0,
            originalRequest, this, eventListener, client.connectTimeoutMillis(),
            client.readTimeoutMillis(), client.writeTimeoutMillis());

    // the chain starts executing at the first interceptor; its result is the response we return
    return chain.proceed(originalRequest);
}

With those annotations the method looks simple: build the interceptor list, wire it into a chain, kick the chain off. Understanding it is less simple, though, and so is the core logic inside each interceptor, which the following sections walk through.

Overall design of the interceptor chain

The basic shape of an interceptor

public interface Interceptor {
    // the interceptor's core method
    // pre-processing can be done here before calling proceed()
    // and post-processing after it returns
    Response intercept(Chain chain) throws IOException;

    // the chain object
    interface Chain {
        .....
        // "proceed" literally means carry on: it invokes the next interceptor in the chain
        Response proceed(Request request) throws IOException;
        ......
    }
}

Setting aside what each interceptor actually does, let's first understand how the chain structure itself is implemented.

public final class RealInterceptorChain implements Interceptor.Chain {

    ......
    // the real chain object; the constructor just stores the state handed to it
    public RealInterceptorChain(List<Interceptor> interceptors, StreamAllocation streamAllocation,
                                HttpCodec httpCodec, RealConnection connection, int index, Request request, Call call,
                                EventListener eventListener, int connectTimeout, int readTimeout, int writeTimeout) {
        this.interceptors = interceptors;
        this.connection = connection;
        this.streamAllocation = streamAllocation;
        this.httpCodec = httpCodec;
        this.index = index;
        this.request = request;
        this.call = call;
        this.eventListener = eventListener;
        this.connectTimeout = connectTimeout;
        this.readTimeout = readTimeout;
        this.writeTimeout = writeTimeout;
    }

    ......

    @Override public Response proceed(Request request) throws IOException {
        return proceed(request, streamAllocation, httpCodec, connection);
    }

    public Response proceed(Request request, StreamAllocation streamAllocation, HttpCodec httpCodec,
                            RealConnection connection) throws IOException {
        ......

        // Call the next interceptor in the chain.
        // Create a new chain object. Conceptually it is still the same interceptor list;
        // only its starting point has moved forward by one. index + 1 is the key detail
        // (getResponseWithInterceptorChain() started the chain at index 0). In short:
        // 1. build a "next" chain whose starting index points at the following interceptor
        // 2. take the current interceptor and run it, handing it that next chain
        RealInterceptorChain next = new RealInterceptorChain(interceptors, streamAllocation, httpCodec,
                connection, index + 1, request, call, eventListener, connectTimeout, readTimeout,
                writeTimeout);
        Interceptor interceptor = interceptors.get(index);
        Response response = interceptor.intercept(next); // each interceptor holds the chain, drives the next interceptor through it, and returns the result

        ......

        return response;
    }
}

Now let's look at the general shape of a concrete interceptor

public class CustomInterceptor implements Interceptor {
    
    @Override
    public Response intercept(Chain chain) throws IOException {

        // pre-processing: transform the incoming request
        Request newRequest = oldRequestToNew(chain.request()); // oldRequestToNew() is a placeholder for whatever pre-processing you need

        // drive the next interceptor and get its result
        Response oldResponse = chain.proceed(newRequest);

        // post-processing: transform the response before returning it
        Response newResponse = oldResponseToNew(oldResponse);

        return newResponse;
    }
}

To sum up: OkHttp defines a series of interceptors that process the Request step by step until the request is actually sent and data comes back, then process the Response step by step on the way back out; the result of the last interception is the Response delivered to the caller.
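A custom interceptor like the one above is registered on the builder. addInterceptor() installs it as an application interceptor (slot 1 in getResponseWithInterceptorChain(), before RetryAndFollowUpInterceptor), while addNetworkInterceptor() installs it as a network interceptor (just before CallServerInterceptor):

OkHttpClient client = new OkHttpClient.Builder()
    .addInterceptor(new CustomInterceptor())        // application interceptor: sees each call exactly once
    .addNetworkInterceptor(new CustomInterceptor()) // network interceptor: sees every network round trip, including redirects
    .build();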


That covers the overall chain structure. The rest of this section takes the interceptors OkHttp registers by default one at a time and looks at what each of them does.

RetryAndFollowUpInterceptor in detail: retries and redirects
public final class RetryAndFollowUpInterceptor implements Interceptor {
    ......
    @Override 
    public Response intercept(Chain chain) throws IOException {
        // pull what we need out of the chain
        Request request = chain.request();
        RealInterceptorChain realChain = (RealInterceptorChain) chain;
        Call call = realChain.call();
        EventListener eventListener = realChain.eventListener();

        StreamAllocation streamAllocation = new StreamAllocation(client.connectionPool(),
                createAddress(request.url()), call, eventListener, callStackTrace);
        this.streamAllocation = streamAllocation;

        int followUpCount = 0; // number of follow-ups (retries/redirects) so far
        Response priorResponse = null;
        while (true) { // loop until we return a response or throw
            if (canceled) {
                streamAllocation.release();
                throw new IOException("Canceled");
            }

            Response response;
            boolean releaseConnection = true; // flag: release resources if we end up throwing
            try {
                response = realChain.proceed(request, streamAllocation, null, null); // drive the rest of the chain
                releaseConnection = false;
            } catch (RouteException e) {
                // The attempt to connect via a route failed. The request will not have been sent.
                // a route failed; try to recover, otherwise rethrow
                if (!recover(e.getLastConnectException(), streamAllocation, false, request)) {
                    throw e.getFirstConnectException();
                }
                releaseConnection = false;
                continue; // retry
            } catch (IOException e) {
                // An attempt to communicate with a server failed. The request may have been sent.
                boolean requestSendStarted = !(e instanceof ConnectionShutdownException);
                if (!recover(e, streamAllocation, requestSendStarted, request)) throw e;
                releaseConnection = false;
                continue; // retry
            } finally {
                // We're throwing an unchecked exception. Release any resources.
                // release resources
                if (releaseConnection) {
                    streamAllocation.streamFailed(null);
                    streamAllocation.release();
                }
            }

            // Attach the prior response if it exists. Such responses never have a body.
            if (priorResponse != null) { // if there was a previous attempt, attach its (body-stripped) response
                response = response.newBuilder()
                        .priorResponse(priorResponse.newBuilder()
                                .body(null)
                                .build())
                        .build();
            }

            Request followUp;
            try {
                followUp = followUpRequest(response, streamAllocation.route());  // build a follow-up request based on the response code (redirects, auth challenges, ...)
            } catch (IOException e) {
                streamAllocation.release();
                throw e;
            }

            // followUp is null when no further request is needed:
            // a normal 200 is returned as-is, and so are failures that a retry would not fix
            if (followUp == null) { 
                streamAllocation.release();
                return response;
            }

            closeQuietly(response.body());

            if (++followUpCount > MAX_FOLLOW_UPS) { // at most 20 follow-ups (retries + redirects)
                streamAllocation.release();
                throw new ProtocolException("Too many follow-up requests: " + followUpCount);
            }
              
            // the rest is error handling around the follow-up
            if (followUp.body() instanceof UnrepeatableRequestBody) {
                streamAllocation.release();
                throw new HttpRetryException("Cannot retry streamed HTTP body", response.code());
            }

            if (!sameConnection(response, followUp.url())) {
                streamAllocation.release();
                streamAllocation = new StreamAllocation(client.connectionPool(),
                        createAddress(followUp.url()), call, eventListener, callStackTrace);
                this.streamAllocation = streamAllocation;
            } else if (streamAllocation.codec() != null) {
                throw new IllegalStateException("Closing the body of " + response
                        + " didn't close its backing stream. Bad interceptor?");
            }

            request = followUp; // loop again with the follow-up request
            priorResponse = response; // remember this attempt's response
        }
    }
    ......
}

To sum up: this interceptor wraps the rest of the chain in a loop. When a route or I/O failure looks recoverable it retries; when the response calls for a follow-up (a redirect, an authentication challenge, and so on) it builds a new request via followUpRequest() and goes around again, giving up after MAX_FOLLOW_UPS (20) attempts; once no follow-up is needed, it returns the response.
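The behaviour described above is what the retryOnConnectionFailure / followRedirects / followSslRedirects switches on the builder control; if you would rather surface connection failures and 3xx responses yourself, a small sketch of turning them off:

OkHttpClient client = new OkHttpClient.Builder()
    .retryOnConnectionFailure(false) // don't silently retry failed connections
    .followRedirects(false)          // return 3xx responses instead of following them
    .followSslRedirects(false)       // don't follow redirects that switch between http and https
    .build();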

BridgeInterceptor in detail: filling in request headers and processing response headers
public final class BridgeInterceptor implements Interceptor {
    ......
    @Override 
    public Response intercept(Chain chain) throws IOException {
        // pull the request from the chain
        Request userRequest = chain.request();
        Request.Builder requestBuilder = userRequest.newBuilder();

        // derive body-related headers from the RequestBody
        RequestBody body = userRequest.body();
        if (body != null) {
            MediaType contentType = body.contentType();
            if (contentType != null) {
                requestBuilder.header("Content-Type", contentType.toString());
            }

            long contentLength = body.contentLength();
            if (contentLength != -1) {
                requestBuilder.header("Content-Length", Long.toString(contentLength));
                requestBuilder.removeHeader("Transfer-Encoding");
            } else {
                requestBuilder.header("Transfer-Encoding", "chunked");
                requestBuilder.removeHeader("Content-Length");
            }
        }

        if (userRequest.header("Host") == null) {
            requestBuilder.header("Host", hostHeader(userRequest.url(), false));
        }

        if (userRequest.header("Connection") == null) {
            requestBuilder.header("Connection", "Keep-Alive"); //默认开启Keep-Alive模式
        }

        // If we add an "Accept-Encoding: gzip" header field we're responsible for also decompressing
        // the transfer stream.
        // advertise gzip by default (which means we must decompress the response ourselves)
        boolean transparentGzip = false;
        if (userRequest.header("Accept-Encoding") == null && userRequest.header("Range") == null) {
            transparentGzip = true;
            requestBuilder.header("Accept-Encoding", "gzip");
        }

        // attach cookies from the CookieJar
        List<Cookie> cookies = cookieJar.loadForRequest(userRequest.url());
        if (!cookies.isEmpty()) {
            requestBuilder.header("Cookie", cookieHeader(cookies));
        }

        if (userRequest.header("User-Agent") == null) {
            requestBuilder.header("User-Agent", Version.userAgent());
        }

        // drive the rest of the chain
        Response networkResponse = chain.proceed(requestBuilder.build());

        // hand response cookies back to the CookieJar
        HttpHeaders.receiveHeaders(cookieJar, userRequest.url(), networkResponse.headers());

        Response.Builder responseBuilder = networkResponse.newBuilder()
                .request(userRequest);
    
        // if we added gzip ourselves, transparently decompress the body and strip the related headers
        if (transparentGzip
                && "gzip".equalsIgnoreCase(networkResponse.header("Content-Encoding"))
                && HttpHeaders.hasBody(networkResponse)) {
            GzipSource responseBody = new GzipSource(networkResponse.body().source());
            Headers strippedHeaders = networkResponse.headers().newBuilder()
                    .removeAll("Content-Encoding")
                    .removeAll("Content-Length")
                    .build();
            responseBuilder.headers(strippedHeaders);
            String contentType = networkResponse.header("Content-Type");
            responseBuilder.body(new RealResponseBody(contentType, -1L, Okio.buffer(responseBody)));
        }

        return responseBuilder.build();
    }
    ......
}

To sum up: on the way in, this interceptor turns the application's request into a proper network request, filling in Content-Type, Content-Length or Transfer-Encoding, Host, Connection: Keep-Alive, Accept-Encoding: gzip, Cookie and User-Agent headers as needed; on the way out, it hands response cookies to the CookieJar and, if it added gzip itself, transparently decompresses the body and strips the Content-Encoding and Content-Length headers.
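Because the default CookieJar is CookieJar.NO_COOKIES, the cookie handling above is a no-op unless you supply your own implementation. A minimal in-memory sketch (not persistent, purely for illustration):

OkHttpClient client = new OkHttpClient.Builder()
    .cookieJar(new CookieJar() {
      private final Map<String, List<Cookie>> store = new ConcurrentHashMap<>();

      @Override public void saveFromResponse(HttpUrl url, List<Cookie> cookies) {
        store.put(url.host(), cookies); // BridgeInterceptor calls this with the cookies from the response
      }

      @Override public List<Cookie> loadForRequest(HttpUrl url) {
        List<Cookie> cookies = store.get(url.host()); // and this one when building the request headers
        return cookies != null ? cookies : Collections.<Cookie>emptyList();
      }
    })
    .build();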

CacheInterceptor in detail: the caching logic
public final class CacheInterceptor implements Interceptor {
    ......
    @Override
    public Response intercept(Chain chain) throws IOException {
        // look up a cached response for this request
        Response cacheCandidate = cache != null
                ? cache.get(chain.request())
                : null;
        
        long now = System.currentTimeMillis(); // current time

        // the cache strategy decides whether to answer from the cache, the network, or both
        CacheStrategy strategy = new CacheStrategy.Factory(now, chain.request(), cacheCandidate).get();
        Request networkRequest = strategy.networkRequest;
        Response cacheResponse = strategy.cacheResponse;

        if (cache != null) {
            cache.trackResponse(strategy); // cache statistics
        }

        // cacheResponse == null means the strategy rejected the cached candidate
        if (cacheCandidate != null && cacheResponse == null) {
            closeQuietly(cacheCandidate.body()); // The cache candidate wasn't applicable. Close it.
        }

        // If we're forbidden from using the network and the cache is insufficient, fail.
        // both are null: the network is forbidden (only-if-cached) and nothing usable is cached, so return 504 Unsatisfiable Request
        if (networkRequest == null && cacheResponse == null) {
            return new Response.Builder()
                    .request(chain.request())
                    .protocol(Protocol.HTTP_1_1)
                    .code(504)
                    .message("Unsatisfiable Request (only-if-cached)")
                    .body(Util.EMPTY_RESPONSE)
                    .sentRequestAtMillis(-1L)
                    .receivedResponseAtMillis(System.currentTimeMillis())
                    .build();
        }

        // If we don't need the network, we're done.
        // no network needed: return the cached response directly
        if (networkRequest == null) {
            return cacheResponse.newBuilder()
                    .cacheResponse(stripBody(cacheResponse))
                    .build();
        }

        Response networkResponse = null;
        try {
            networkResponse = chain.proceed(networkRequest); // drive the rest of the chain
        } finally {
            // If we're crashing on I/O or otherwise, don't leak the cache body.
            if (networkResponse == null && cacheCandidate != null) {
                closeQuietly(cacheCandidate.body());
            }
        }

        // If we have a cache response too, then we're doing a conditional get.
        if (cacheResponse != null) {
            if (networkResponse.code() == HTTP_NOT_MODIFIED) { // 304 Not Modified: merge headers and refresh the cache entry
                Response response = cacheResponse.newBuilder()
                        .headers(combine(cacheResponse.headers(), networkResponse.headers()))
                        .sentRequestAtMillis(networkResponse.sentRequestAtMillis())
                        .receivedResponseAtMillis(networkResponse.receivedResponseAtMillis())
                        .cacheResponse(stripBody(cacheResponse))
                        .networkResponse(stripBody(networkResponse))
                        .build();
                networkResponse.body().close();

                // Update the cache after combining headers but before stripping the
                // Content-Encoding header (as performed by initContentStream()).
                cache.trackConditionalCacheHit();
                cache.update(cacheResponse, response); // refresh the cached entry
                return response;
            } else {
                closeQuietly(cacheResponse.body());
            }
        }

        // build the final response, keeping references to both the cached and network responses
        Response response = networkResponse.newBuilder()
                .cacheResponse(stripBody(cacheResponse))
                .networkResponse(stripBody(networkResponse))
                .build();

        // if a cache is configured
        if (cache != null) {
            // the response has a body && the strategy says it may be cached
            if (HttpHeaders.hasBody(response) && CacheStrategy.isCacheable(response, networkRequest)) {
                // Offer this request to the cache.
                CacheRequest cacheRequest = cache.put(response);
                return cacheWritingResponse(cacheRequest, response);
            }

            // invalidate cached entries for cache-invalidating methods (POST, PUT, DELETE, ...)
            if (HttpMethod.invalidatesCache(networkRequest.method())) {
                try {
                    cache.remove(networkRequest);
                } catch (IOException ignored) {
                    // The cache cannot be written.
                }
            }
        }

        return response;
    }
    ......
}

To sum up:
1. Pre-processing: look up the cached candidate (if a cache is configured), let CacheStrategy decide, then branch on the two values it returns, networkRequest and cacheResponse:

  • if both are null (the network is forbidden, e.g. only-if-cached, and there is no usable cache), return a 504 "Unsatisfiable Request";
  • if networkRequest is null but cacheResponse is not, return the cached response directly;
  • otherwise, go to the network.

2. Post-processing: once the network response comes back, branch again on whether cacheResponse exists:

  • if the status code is HTTP_NOT_MODIFIED (304), the cached entry is still valid, so merge the headers and refresh the cache;
  • otherwise, write the new response to the cache if the strategy allows it, and invalidate cached entries for cache-invalidating request methods.

3. Note the CacheStrategy class, which is the heart of the decision logic. (The cache itself reads and writes through DiskLruCache.)

public final class CacheStrategy {
    .......
    public CacheStrategy get() {
        CacheStrategy candidate = getCandidate();

        if (candidate.networkRequest != null && request.cacheControl().onlyIfCached()) {
            // We're forbidden from using the network and the cache is insufficient.
            return new CacheStrategy(null, null);
        }

        return candidate;
    }

    /** Returns a strategy to use assuming the request can use the network. */
    private CacheStrategy getCandidate() {
        // No cached response.
        if (cacheResponse == null) { // nothing cached: go to the network
            return new CacheStrategy(request, null);
        }

        // Drop the cached response if it's missing a required handshake.
        if (request.isHttps() && cacheResponse.handshake() == null) { // https request but the cached response has no handshake: go to the network
            return new CacheStrategy(request, null);
        }

        // If this response shouldn't have been stored, it should never be used
        // as a response source. This check should be redundant as long as the
        // persistence store is well-behaved and the rules are constant.
        if (!isCacheable(cacheResponse, request)) { // the response should never have been stored: go to the network
            return new CacheStrategy(request, null);
        }

        CacheControl requestCaching = request.cacheControl();
        // the request says no-cache, or it already carries If-Modified-Since / If-None-Match,
        // which means the caller wants the server to validate the cached copy
        if (requestCaching.noCache() || hasConditions(request)) { 
            return new CacheStrategy(request, null);
        }

        CacheControl responseCaching = cacheResponse.cacheControl();

        // compute the relevant ages and freshness lifetimes
        long ageMillis = cacheResponseAge();
        long freshMillis = computeFreshnessLifetime();

        if (requestCaching.maxAgeSeconds() != -1) {
            freshMillis = Math.min(freshMillis, SECONDS.toMillis(requestCaching.maxAgeSeconds()));
        }

        long minFreshMillis = 0;
        if (requestCaching.minFreshSeconds() != -1) {
            minFreshMillis = SECONDS.toMillis(requestCaching.minFreshSeconds());
        }

        long maxStaleMillis = 0;
        if (!responseCaching.mustRevalidate() && requestCaching.maxStaleSeconds() != -1) {
            maxStaleMillis = SECONDS.toMillis(requestCaching.maxStaleSeconds());
        }

        // the cached response is usable: it is not marked no-cache and ageMillis + minFreshMillis < freshMillis + maxStaleMillis
        if (!responseCaching.noCache() && ageMillis + minFreshMillis < freshMillis + maxStaleMillis) {
            Response.Builder builder = cacheResponse.newBuilder();
            if (ageMillis + minFreshMillis >= freshMillis) {
                builder.addHeader("Warning", "110 HttpURLConnection \"Response is stale\"");
            }
            long oneDayMillis = 24 * 60 * 60 * 1000L;
            if (ageMillis > oneDayMillis && isFreshnessLifetimeHeuristic()) {
                builder.addHeader("Warning", "113 HttpURLConnection \"Heuristic expiration\"");
            }
            // serve the cached copy; if it is stale or heuristically expired,
            // a Warning header is added, but it is still considered usable
            return new CacheStrategy(null, builder.build());
        }

        // Find a condition to add to the request. If the condition is satisfied, the response body
        // will not be transmitted.
        // otherwise build a conditional request from whichever of etag / lastModified / servedDate is available
        String conditionName;
        String conditionValue;
        if (etag != null) {
            conditionName = "If-None-Match";
            conditionValue = etag;
        } else if (lastModified != null) {
            conditionName = "If-Modified-Since";
            conditionValue = lastModifiedString;
        } else if (servedDate != null) {
            conditionName = "If-Modified-Since";
            conditionValue = servedDateString;
        } else {
            return new CacheStrategy(request, null); // No condition! Make a regular request.
        }

        Headers.Builder conditionalRequestHeaders = request.headers().newBuilder();
        Internal.instance.addLenient(conditionalRequestHeaders, conditionName, conditionValue);

        // add the If-None-Match / If-Modified-Since condition to the request
        Request conditionalRequest = request.newBuilder()
                .headers(conditionalRequestHeaders.build())
                .build();
        return new CacheStrategy(conditionalRequest, cacheResponse);
    }
    ......
}

To sum up: the cache strategy combines whether a cached response exists, request header directives such as no-cache, If-Modified-Since and If-None-Match, and the age/freshness calculations to decide whether the cached copy can be used.
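In practice this strategy only kicks in once a Cache is configured on the client, and individual requests can steer it with Cache-Control. A hedged sketch (cacheDir stands for an app-specific directory):

OkHttpClient client = new OkHttpClient.Builder()
    .cache(new Cache(new File(cacheDir, "okhttp_cache"), 20L * 1024 * 1024)) // 20 MB disk cache
    .build();

// Answer only from the cache; with nothing cached this hits the 504 "Unsatisfiable Request" branch above
Request fromCache = new Request.Builder()
    .url("https://example.com/data")
    .cacheControl(CacheControl.FORCE_CACHE)
    .build();

// Skip the cache entirely, equivalent to sending Cache-Control: no-cache
Request fromNetwork = new Request.Builder()
    .url("https://example.com/data")
    .cacheControl(CacheControl.FORCE_NETWORK)
    .build();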

Before covering the last two interceptors, a short detour through a few protocol concepts will make them easier to follow.

The keep-alive mechanism

Before persistent connections, sending a single request normally meant a three-way handshake to establish a TCP connection, the data exchange, and then a four-way handshake to release the connection. With any non-trivial amount of traffic, repeatedly creating and tearing down connections hurts efficiency and adds system overhead. HTTP/1.1 addresses this with the Keep-Alive mechanism: when an HTTP exchange finishes, the TCP connection is not released immediately but kept alive for a while, so a subsequent request can reuse it and skip the teardown/re-establishment cost, reducing latency. In HTTP/1.0, Keep-Alive is off by default and has to be enabled with a "Connection: Keep-Alive" header; in HTTP/1.1 it is on by default and is turned off with "Connection: close".
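In OkHttp, connection reuse is what the ConnectionPool provides on top of this mechanism. How many idle connections are kept and for how long is configurable; a small sketch (the defaults are 5 idle connections and a 5-minute keep-alive):

OkHttpClient client = new OkHttpClient.Builder()
    .connectionPool(new ConnectionPool(10, 2, TimeUnit.MINUTES)) // keep up to 10 idle connections alive for 2 minutes
    .build();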

Pipelining

As an extra note, HTTP/1.1 also introduced pipelining: within a single TCP connection the client may send several requests back to back, while the server still responds to them in order. Say the client needs two resources. Previously it would send request A on the connection, wait for the response, and only then send request B. With pipelining, the client can send A and B together, but the server still answers A first and then B.

To contrast the two, the figures below illustrate keep-alive and pipelining:

Without keep-alive, every request opens and then closes its own TCP connection; with keep-alive, the connection is opened and closed only once.


(Figure: requests with and without Keep-Alive)

With pipelining, several requests can be sent in one go instead of strictly one after another.


(Figure: pipelined requests on a single connection)
In short: Keep-Alive keeps a TCP connection open so that each HTTP request no longer pays the connection setup and teardown cost, but each request must still wait for the previous response; pipelining removes the strict one-request-then-one-response pattern (with the implicit precondition that the TCP connection is already established).

Pipelining has many practical limitations of its own, though, and is generally disabled by default. See: HTTP管线化 (HTTP pipelining).

Multiplexing

From the above, HTTP/1.1 mostly solved the connection problem; the data itself is still sent and answered strictly in order. Can that part be optimized too?
An example: if the client wants to send the words Hello and World to the server, it can only send Hello first and then World. It cannot interleave them, otherwise the server might receive something like HWeolrllod (the bytes of the two words interleaved, though each word's own order is preserved) and have no idea what it is looking at.

Now imagine a scenario: on a single TCP connection the client sends one request whose data does not arrive in order. Can the server still reconstruct it? Yes: as long as each piece of data carries a sequence tag, the server just reassembles the pieces in tag order, and the same trick works in the other direction.

Building on that, imagine one more scenario: on a single TCP connection the client sends several requests at once, and their data is interleaved and out of order. Can the server still sort it out? Again yes: give each request a label, say request 1 is tagged A and its data packets are a_1, a_2, ..., a_n; the server then uses the labels to know which request each packet belongs to and answers each one accordingly.

This is exactly the idea behind HTTP/2: it introduces binary frames and streams, where frames carry the ordering and identification information. On top of that, multiplexing becomes possible, and with it concurrent requests over a single connection.
For a detailed write-up, see: HTTP 2.0 原理详细分析
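In OkHttp, HTTP/2 is negotiated automatically over TLS (via ALPN) when the server supports it; the protocol list can also be restricted explicitly. A small sketch (this list matches the default, and HTTP_1_1 must remain in it):

OkHttpClient client = new OkHttpClient.Builder()
    .protocols(Arrays.asList(Protocol.HTTP_2, Protocol.HTTP_1_1))
    .build();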

ConnectInterceptor in detail: establishing the connection (the lowest and most critical layer)
public final class ConnectInterceptor implements Interceptor {
    ......
    @Override 
    public Response intercept(Chain chain) throws IOException {
        // pull what we need from the chain
        RealInterceptorChain realChain = (RealInterceptorChain) chain;
        Request request = realChain.request();
        StreamAllocation streamAllocation = realChain.streamAllocation();

        // We need the network to satisfy this request. Possibly for validating a conditional GET.
        boolean doExtensiveHealthChecks = !request.method().equals("GET");
        // obtain the two key objects: an HttpCodec and a RealConnection
        HttpCodec httpCodec = streamAllocation.newStream(client, chain, doExtensiveHealthChecks);
        RealConnection connection = streamAllocation.connection();

        return realChain.proceed(request, streamAllocation, httpCodec, connection); // hand them to the next interceptor
    }
}

To sum up: this interceptor asks the StreamAllocation for a usable connection (reusing one from the connection pool when possible, otherwise opening a new one and performing the TCP/TLS handshake), wraps it in an HttpCodec that knows how to encode requests and decode responses for the negotiated HTTP version, and hands both down the chain.

Now let's go back and look at streamAllocation.newStream()

public HttpCodec newStream(
        OkHttpClient client, Interceptor.Chain chain, boolean doExtensiveHealthChecks) {
    ......
    try {
        RealConnection resultConnection = findHealthyConnection(connectTimeout, readTimeout,
                writeTimeout, pingIntervalMillis, connectionRetryEnabled, doExtensiveHealthChecks);
        HttpCodec resultCodec = resultConnection.newCodec(client, chain, this);
        ......
    } catch (IOException e) {
        throw new RouteException(e);
    }
}

And the core of the connection-finding process

private RealConnection findHealthyConnection(int connectTimeout, int readTimeout,
                                             int writeTimeout, int pingIntervalMillis, boolean connectionRetryEnabled,
                                             boolean doExtensiveHealthChecks) throws IOException {
    while (true) { // keep looking until a usable connection is found
        RealConnection candidate = findConnection(connectTimeout, readTimeout, writeTimeout,
                pingIntervalMillis, connectionRetryEnabled);

        // If this is a brand new connection, we can skip the extensive health checks.
        synchronized (connectionPool) {
            if (candidate.successCount == 0) { // a brand-new connection can be returned right away
                return candidate;
            }
        }

        // Do a (potentially slow) check to confirm that the pooled connection is still good. If it
        // isn't, take it out of the pool and start again.
        if (!candidate.isHealthy(doExtensiveHealthChecks)) {
            noNewStreams(); // the pooled connection is unhealthy: drop it from the pool and keep looking
            continue;
        }

        return candidate;
    }
}

private RealConnection findConnection(int connectTimeout, int readTimeout, int writeTimeout,
                                      int pingIntervalMillis, boolean connectionRetryEnabled) throws IOException {
    boolean foundPooledConnection = false;
    RealConnection result = null;
    Route selectedRoute = null;
    Connection releasedConnection;
    Socket toClose;
    synchronized (connectionPool) {
        if (released) throw new IllegalStateException("released");
        if (codec != null) throw new IllegalStateException("codec != null");
        if (canceled) throw new IOException("Canceled");

        // Attempt to use an already-allocated connection. We need to be careful here because our
        // already-allocated connection may have been restricted from creating new streams.
        // releaseIfNoNewStreams() checks whether the current connection may host a new stream and releases it if not;
        // if this.connection is still non-null afterwards, it can be reused
        releasedConnection = this.connection;
        toClose = releaseIfNoNewStreams();  
        if (this.connection != null) { 
            // We had an already-allocated connection and it's good.
            result = this.connection;
            releasedConnection = null;
        }
        if (!reportedAcquired) {
            // If the connection was never reported acquired, don't report it as released!
            releasedConnection = null;
        }

        // try to get a connection from the pool; the last argument is null because no route is known yet
        if (result == null) {
            // Attempt to get a connection from the pool.
            Internal.instance.get(connectionPool, address, this, null);
            if (connection != null) {
                foundPooledConnection = true;
                result = connection;
            } else {
                selectedRoute = route;
            }
        }
    }
    // act on what we found above
    closeQuietly(toClose);

    if (releasedConnection != null) {
        eventListener.connectionReleased(call, releasedConnection);
    }
    if (foundPooledConnection) {
        eventListener.connectionAcquired(call, result);
    }
    if (result != null) {
        // If we found an already-allocated or pooled connection, we're done.
        return result;
    }

    // If we need a route selection, make one. This is a blocking operation.
    boolean newRouteSelection = false;
    if (selectedRoute == null && (routeSelection == null || !routeSelection.hasNext())) {
        newRouteSelection = true;
        routeSelection = routeSelector.next();
    }

    synchronized (connectionPool) {
        if (canceled) throw new IOException("Canceled");

        if (newRouteSelection) {
            // Now that we have a set of IP addresses, make another attempt at getting a connection from
            // the pool. This could match due to connection coalescing.
            List<Route> routes = routeSelection.getAll();
            for (int i = 0, size = routes.size(); i < size; i++) {
                Route route = routes.get(i);
                Internal.instance.get(connectionPool, address, this, route); // with routes known, try the pool again (connection coalescing)
                if (connection != null) {
                    foundPooledConnection = true;
                    result = connection;
                    this.route = route;
                    break;
                }
            }
        }

        if (!foundPooledConnection) {
            if (selectedRoute == null) {
                selectedRoute = routeSelection.next();
            }

            // Create a connection and assign it to this allocation immediately. This makes it possible
            // for an asynchronous cancel() to interrupt the handshake we're about to do.
            route = selectedRoute;
            refusedStreamCount = 0;
            result = new RealConnection(connectionPool, selectedRoute); // only when nothing reusable is found do we create a new connection
            acquire(result, false);
        }
    }

    // If we found a pooled connection on the 2nd time around, we're done.
    if (foundPooledConnection) {
        eventListener.connectionAcquired(call, result);
        return result;
    }

    // Do TCP + TLS handshakes. This is a blocking operation.
    // connect the newly created connection:
    // wrap the socket and perform the TCP + TLS handshakes
    result.connect(connectTimeout, readTimeout, writeTimeout, pingIntervalMillis,
            connectionRetryEnabled, call, eventListener); 
    routeDatabase().connected(result.route()); 

    Socket socket = null;
    synchronized (connectionPool) {
        reportedAcquired = true;

        // Pool the connection.
        Internal.instance.put(connectionPool, result); // put the new connection into the pool

        // If another multiplexed connection to the same address was created concurrently, then
        // release this connection and acquire that one.
        // if this is an HTTP/2 (multiplexed) connection and another connection to the same address
        // was created concurrently, deduplicate: keep one and release the extra one
        if (result.isMultiplexed()) {
            socket = Internal.instance.deduplicate(connectionPool, address, this);
            result = connection;
        }
    }
    closeQuietly(socket);

    eventListener.connectionAcquired(call, result);
    return result;
}

To sum up: findConnection() first tries to reuse the connection already allocated to this call, then looks in the connection pool (initially without route information, then once more after routes are resolved), and only creates a new RealConnection when nothing can be reused. A new connection performs the TCP + TLS handshake and is put into the pool; if it turns out to be an HTTP/2 connection, duplicate connections to the same address are coalesced into one.

Next, how the HttpCodec is created

public HttpCodec newCodec(OkHttpClient client, Interceptor.Chain chain,
                          StreamAllocation streamAllocation) throws SocketException {
    if (http2Connection != null) {
        return new Http2Codec(client, chain, streamAllocation, http2Connection); // HTTP/2: create an Http2Codec
    } else {
        socket.setSoTimeout(chain.readTimeoutMillis());
        source.timeout().timeout(chain.readTimeoutMillis(), MILLISECONDS);
        sink.timeout().timeout(chain.writeTimeoutMillis(), MILLISECONDS);
        return new Http1Codec(client, streamAllocation, source, sink); // HTTP/1.x: create an Http1Codec
    }
}

To sum up: HttpCodec simply gets a different implementation per HTTP version. And what is the difference between the two objects? Exactly what the multiplexing section described: the data format. HTTP/1.x still exchanges text-based messages, while HTTP/2 is built on binary frames.

CallServerInterceptor in detail: sends the actual request to the server and reads its response
public final class CallServerInterceptor implements Interceptor {
@Override 
    public Response intercept(Chain chain) throws IOException {
        // pull what we need from the chain
        RealInterceptorChain realChain = (RealInterceptorChain) chain;
        HttpCodec httpCodec = realChain.httpStream();
        StreamAllocation streamAllocation = realChain.streamAllocation();
        RealConnection connection = (RealConnection) realChain.connection();
        Request request = realChain.request();

        long sentRequestMillis = System.currentTimeMillis();

        realChain.eventListener().requestHeadersStart(realChain.call());
        httpCodec.writeRequestHeaders(request);  // write the request headers (the format depends on the HTTP version)
        realChain.eventListener().requestHeadersEnd(realChain.call(), request);

        Response.Builder responseBuilder = null;
        // if the method permits a request body and the request actually has one
        if (HttpMethod.permitsRequestBody(request.method()) && request.body() != null) {
            // If there's a "Expect: 100-continue" header on the request, wait for a "HTTP/1.1 100
            // Continue" response before transmitting the request body. If we don't get that, return
            // what we did get (such as a 4xx response) without ever transmitting the request body.
            // if the request carries "Expect: 100-continue", wait for the server to say whether it
            // will accept the body; if it refuses, it answers with a real (non-100) response instead
            if ("100-continue".equalsIgnoreCase(request.header("Expect"))) {
                httpCodec.flushRequest();
                realChain.eventListener().responseHeadersStart(realChain.call());
                // read the server's answer; a non-null builder means the body was rejected
                responseBuilder = httpCodec.readResponseHeaders(true);
            }

            // responseBuilder == null means there was no Expect header, or the server accepted it: write the body
            if (responseBuilder == null) {
                // Write the request body if the "Expect: 100-continue" expectation was met.
                realChain.eventListener().requestBodyStart(realChain.call());
                long contentLength = request.body().contentLength();
                CountingSink requestBodyOut =
                        new CountingSink(httpCodec.createRequestBody(request, contentLength));
                BufferedSink bufferedRequestBody = Okio.buffer(requestBodyOut);

                request.body().writeTo(bufferedRequestBody);
                bufferedRequestBody.close();
                realChain.eventListener()
                        .requestBodyEnd(realChain.call(), requestBodyOut.successfulCount);
            } else if (!connection.isMultiplexed()) {
                // If the "Expect: 100-continue" expectation wasn't met, prevent the HTTP/1 connection
                // from being reused. Otherwise we're still obligated to transmit the request body to
                // leave the connection in a consistent state.
                // the server rejected the body and this is HTTP/1: don't reuse this connection
                streamAllocation.noNewStreams();
            }
        }

        httpCodec.finishRequest(); // flush and finish writing the request

        // read the response headers via the HttpCodec
        if (responseBuilder == null) {
            realChain.eventListener().responseHeadersStart(realChain.call());
            responseBuilder = httpCodec.readResponseHeaders(false);
        }

        // build the Response from the header builder
        Response response = responseBuilder
                .request(request)
                .handshake(streamAllocation.connection().handshake())
                .sentRequestAtMillis(sentRequestMillis)
                .receivedResponseAtMillis(System.currentTimeMillis())
                .build();

        int code = response.code(); // the status code
        if (code == 100) { // the server sent 100 Continue even though we did not ask for it: read the real response
            // server sent a 100-continue even though we did not request one.
            // try again to read the actual response
            responseBuilder = httpCodec.readResponseHeaders(false);

            response = responseBuilder
                    .request(request)
                    .handshake(streamAllocation.connection().handshake())
                    .sentRequestAtMillis(sentRequestMillis)
                    .receivedResponseAtMillis(System.currentTimeMillis())
                    .build();

            code = response.code();
        }

        realChain.eventListener()
                .responseHeadersEnd(realChain.call(), response);

        // handle the WebSocket upgrade (101) case
        if (forWebSocket && code == 101) {
            // Connection is upgrading, but we need to ensure interceptors see a non-null response body.
            response = response.newBuilder()
                    .body(Util.EMPTY_RESPONSE)
                    .build();
        } else {
            response = response.newBuilder()
                    .body(httpCodec.openResponseBody(response))
                    .build();
        }

        // if either side asked for Connection: close, stop reusing this connection
        if ("close".equalsIgnoreCase(response.request().header("Connection"))
                || "close".equalsIgnoreCase(response.header("Connection"))) {
            streamAllocation.noNewStreams();
        }

        // 204/205 responses must not carry a body; if Content-Length says otherwise, fail
        if ((code == 204 || code == 205) && response.body().contentLength() > 0) {
            throw new ProtocolException(
                    "HTTP " + code + " had non-zero Content-Length: " + response.body().contentLength());
        }

        return response;
    }
}

To sum up: writing the request and reading the response are both done through the HttpCodec, and the HttpCodec is in turn built on Okio, which ultimately writes to and reads from the underlying Socket.
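To make the "Okio over a Socket" point concrete, here is a simplified sketch (not OkHttp's actual code) of what Http1Codec's sink and source essentially boil down to; example.com is just a placeholder host:

// What RealConnection hands to Http1Codec, in spirit: a socket wrapped with Okio
static String rawHttpGet() throws IOException {
  Socket socket = new Socket("example.com", 80);
  BufferedSink sink = Okio.buffer(Okio.sink(socket));       // request bytes are written here
  BufferedSource source = Okio.buffer(Okio.source(socket)); // response bytes are read from here

  sink.writeUtf8("GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n");
  sink.flush();
  String statusLine = source.readUtf8Line(); // e.g. "HTTP/1.1 200 OK"
  socket.close();
  return statusLine;
}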

Finally, a diagram to summarize the complete OkHttp request flow:


(Figure: the complete OkHttp request flow)
Afterword: this article has analyzed OkHttp as a whole from the source code. The chain-of-responsibility design of the interceptors is very elegant, the framework builds the entire request pipeline up from the lowest layer step by step, and it adapts cleanly to the different HTTP versions; here's hoping HTTP/2 adoption keeps growing.
