"Netty in Action" Reading Notes

2018-04-23  AlstonWilliams

Chapter 2

  1. SimpleChannelInboundHandler vs ChannelInboundHandler:
    In the client, when channelRead0() completes, you have the incoming message and you're done with it. When the method returns, SimpleChannelInboundHandler takes care of releasing the memory reference to the ByteBuf that holds the message. But ChannelInboundHandler doesn't release the message at that point.
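A minimal sketch of the contrast (handler names are my own; the first follows the shape of the book's echo-client handler):

```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.util.ReferenceCountUtil;

// With SimpleChannelInboundHandler, the message is released for you
// when channelRead0() returns.
class EchoClientHandler extends SimpleChannelInboundHandler<ByteBuf> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) {
        System.out.println("Received: " + msg.readableBytes() + " bytes");
        // no release needed; the base class releases msg afterwards
    }
}

// With a plain ChannelInboundHandlerAdapter you own the reference and must
// release it yourself (unless you pass it on with ctx.fireChannelRead()).
class ManualReleaseHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        try {
            ByteBuf buf = (ByteBuf) msg;
            System.out.println("Received: " + buf.readableBytes() + " bytes");
        } finally {
            ReferenceCountUtil.release(msg); // our responsibility here
        }
    }
}
```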

Chapter 3

  1. Netty's core components:
    Channel - Sockets
    EventLoop - Control flow, multithreading, concurrency
    ChannelFuture - Asynchronous notification
  2. Channel's implementation:
  1. EventLoop defines Netty's core abstraction for handling events that occur during the lifetime of a connection.
  2. The relationship between Channel, EventLoop, Thread, and EventLoopGroup is:
  1. Because all I/O operations in Netty are asynchronous, we need a way to determine the result at a later time. So Netty provides ChannelFuture, whose addListener() method registers a ChannelFutureListener to be notified when an operation has completed.
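A small sketch of the listener pattern (class and method names are my own):

```java
import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;
import io.netty.util.CharsetUtil;

public class WriteListenerExample {
    /** Write asynchronously and react to the outcome via a listener, without blocking. */
    public static ChannelFuture writeWithListener(Channel channel) {
        ChannelFuture future = channel.writeAndFlush(
                Unpooled.copiedBuffer("Hello", CharsetUtil.UTF_8));
        future.addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture f) {
                if (f.isSuccess()) {
                    System.out.println("Write succeeded");
                } else {
                    f.cause().printStackTrace();
                }
            }
        });
        return future;
    }
}
```

The listener runs when the operation completes, whichever thread performed it, so no call to sync() or await() is needed.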
  2. ChannelHandler serves as the container for all application logic that applies to handling inbound and outbound data. This is possible because ChannelHandler methods are triggered by network events.
  3. ChannelPipeline provides a container for a chain of ChannelHandlers and defines an API for propagating the flow of inbound and outbound events along the chain. When a Channel is created, it is automatically assigned its own ChannelPipeline.
  4. ChannelHandlers are installed in the ChannelPipeline as follows:
  1. If a message or any other inbound event is read, it starts from the head of the pipeline and is passed to the first ChannelInboundHandler. Outbound data, by contrast, flows from the tail through the chain of ChannelOutboundHandlers until it reaches the head.
  2. There are two ways of sending messages in Netty. You can write directly to the Channel or write to a ChannelHandlerContext object associated with a ChannelHandler. The former approach causes the message to start from the tail of the ChannelPipeline; the latter causes it to start from the next handler in the ChannelPipeline.
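The difference can be observed with Netty's EmbeddedChannel test transport (class name and tracing scheme are my own):

```java
import java.util.ArrayList;
import java.util.List;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelOutboundHandlerAdapter;
import io.netty.channel.ChannelPromise;
import io.netty.channel.embedded.EmbeddedChannel;

public class WriteStartingPoint {
    /** Returns the order in which outbound handlers saw the message. */
    public static List<String> trace(boolean writeToContextOfB) {
        List<String> seen = new ArrayList<>();
        EmbeddedChannel channel = new EmbeddedChannel();
        channel.pipeline().addLast("A", tracer("A", seen)); // closest to head
        channel.pipeline().addLast("B", tracer("B", seen)); // closest to tail
        if (writeToContextOfB) {
            // Starts at the next outbound handler after B: only A runs.
            channel.pipeline().context("B").writeAndFlush("msg");
        } else {
            // Starts at the tail of the pipeline: B runs, then A.
            channel.writeAndFlush("msg");
        }
        channel.finish();
        return seen;
    }

    static ChannelOutboundHandlerAdapter tracer(String name, List<String> seen) {
        return new ChannelOutboundHandlerAdapter() {
            @Override
            public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
                seen.add(name);
                ctx.write(msg, promise);
            }
        };
    }

    public static void main(String[] args) {
        System.out.println(trace(false)); // [B, A]
        System.out.println(trace(true));  // [A]
    }
}
```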
  3. Adapters you'll call most often when creating your custom handlers:
  1. Why does bootstrapping a client require only a single EventLoopGroup, while a ServerBootstrap requires two (which can be the same instance)?
    A server needs two distinct sets of Channels. The first set will contain a single ServerChannel representing the server's own listening socket, bound to a local port. The second set will contain all the Channels that have been created to handle incoming client connections - one for each connection the server has accepted.
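A sketch of the two-group server bootstrap (class name and group sizes are my own; port 0 picks any free port):

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.Channel;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class TwoGroupServer {
    /** Binds the listening socket, then closes it; returns whether the bind succeeded. */
    public static boolean bindAndClose() {
        EventLoopGroup bossGroup = new NioEventLoopGroup(1); // the one ServerChannel (accepts)
        EventLoopGroup workerGroup = new NioEventLoopGroup();  // all accepted Channels
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, workerGroup)
             .channel(NioServerSocketChannel.class)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 protected void initChannel(SocketChannel ch) {
                     // per-connection handlers would be added here
                 }
             });
            Channel serverChannel = b.bind(0).syncUninterruptibly().channel();
            boolean active = serverChannel.isActive();
            serverChannel.close().syncUninterruptibly();
            return active;
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }

    public static void main(String[] args) {
        System.out.println(bindAndClose());
    }
}
```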

Chapter 4

  1. The implementation of compareTo() in AbstractChannel throws an Error if two distinct Channel instances return the same hash code.
  2. Typical uses for ChannelHandlers include:
  1. Netty's Channel implementations are thread-safe, so you can store a reference to a Channel and use it whenever you need to write something to the remote peer, even when many threads are in use.
  2. Netty-provided transports:

Chapter 5

  1. Netty's API for data handling is exposed through two components - abstract class ByteBuf and interface ByteBufHolder.
    These are some of the advantages of the ByteBuf API:
  1. How ByteBuf works:
    ByteBuf maintains two distinct indices: one for reading and one for writing. When you read from ByteBuf, its readerIndex is incremented by the number of bytes read. Similarly, when you write to ByteBuf, its writerIndex is incremented.
  2. ByteBuf methods whose names begin with read or write advance the corresponding index, whereas operations that begin with set or get do not. The latter methods operate on an absolute index that's passed as an argument to the method.
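The index behavior in a few lines (class name is my own):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public class IndexDemo {
    public static void main(String[] args) {
        ByteBuf buf = Unpooled.buffer();
        buf.writeInt(42);                 // writerIndex: 0 -> 4
        System.out.println(buf.readerIndex() + "/" + buf.writerIndex()); // 0/4

        int peeked = buf.getInt(0);       // absolute index: neither index moves
        System.out.println(buf.readerIndex() + "/" + buf.writerIndex()); // 0/4

        int read = buf.readInt();         // readerIndex: 0 -> 4
        System.out.println(buf.readerIndex() + "/" + buf.writerIndex()); // 4/4
        buf.release();
    }
}
```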
  3. ByteBuf usage pattern:
  1. The JDK's InputStream defines the methods mark(int readlimit) and reset(). These mark the current position in the stream and reset the stream to that position, respectively; readlimit caps how many bytes may be read before the mark becomes invalid.
    Similarly, you can set and reposition the readerIndex and writerIndex of a ByteBuf by calling markReaderIndex(), markWriterIndex(), resetReaderIndex(), and resetWriterIndex(). These are similar to the InputStream calls, except that there's no readlimit to specify when the mark becomes invalid.
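For example (class name is my own):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.util.CharsetUtil;

public class MarkResetDemo {
    public static void main(String[] args) {
        ByteBuf buf = Unpooled.copiedBuffer("Netty", CharsetUtil.UTF_8);
        buf.markReaderIndex();            // remember readerIndex 0
        byte first = buf.readByte();      // readerIndex -> 1
        buf.resetReaderIndex();           // back to the mark; no readlimit involved
        System.out.println((char) buf.readByte()); // N (read again)
        buf.release();
    }
}
```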
  2. A derived buffer provides a view of a ByteBuf that represents its contents in a specified way. Such views are created by the following methods:
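The key point about derived buffers such as slice(): they share the underlying memory, unlike copy(). A sketch (class name is my own):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.util.CharsetUtil;

public class SliceDemo {
    public static void main(String[] args) {
        ByteBuf buf = Unpooled.copiedBuffer("Netty in Action", CharsetUtil.UTF_8);
        ByteBuf sliced = buf.slice(0, 5);             // a view, not a copy
        buf.setByte(0, (byte) 'J');                   // mutate the original...
        System.out.println((char) sliced.getByte(0)); // J - the slice sees it

        ByteBuf copied = buf.copy(0, 5);              // copy() allocates fresh memory
        buf.setByte(0, (byte) 'X');
        System.out.println((char) copied.getByte(0)); // J - the copy does not
        buf.release();
        copied.release();
    }
}
```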
  1. ByteBufHolder is a good choice if you want to implement a message object that stores its payload in a ByteBuf.
  2. You can obtain a reference to a ByteBufAllocator either from a Channel or through the ChannelHandlerContext that is bound to a ChannelHandler. The following listing illustrates both of these methods.
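A minimal sketch of both approaches (class and method names are my own):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;

public class AllocatorAccess {
    // From a Channel...
    public static ByteBuf fromChannel(Channel channel) {
        ByteBufAllocator allocator = channel.alloc();
        return allocator.buffer();
    }

    // ...or from the ChannelHandlerContext bound to a ChannelHandler.
    public static ByteBuf fromContext(ChannelHandlerContext ctx) {
        ByteBufAllocator allocator = ctx.alloc();
        return allocator.buffer();
    }
}
```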
  3. Netty provides two implementations of ByteBufAllocator: PooledByteBufAllocator and UnpooledByteBufAllocator. The former pools ByteBuf instances to improve performance and minimize memory fragmentation, using an efficient approach to memory allocation known as jemalloc that has been adopted by a number of modern OSes. The latter doesn't pool ByteBuf instances and returns a new instance every time it is called.

Chapter 6

  1. Channel lifecycle states:
  1. ChannelHandler lifecycle methods:
  1. ChannelHandler's subinterface:
  1. ChannelInboundHandler methods:
  1. ChannelOutboundHandler:
  1. To assist you in diagnosing potential problems, Netty provides ResourceLeakDetector, which will sample about 1% of your application's buffer allocations to check for memory leaks. The overhead involved is very small.
  2. Leak-detection levels:
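The levels in Netty 4.1 are DISABLED, SIMPLE (the sampling default), ADVANCED, and PARANOID; a sketch of selecting one programmatically (class name is my own):

```java
import io.netty.util.ResourceLeakDetector;
import io.netty.util.ResourceLeakDetector.Level;

public class LeakLevel {
    public static void main(String[] args) {
        // PARANOID checks every allocation instead of sampling ~1% of them.
        ResourceLeakDetector.setLevel(Level.PARANOID);
        System.out.println(ResourceLeakDetector.getLevel()); // PARANOID
        // The level can also be set via the io.netty.leakDetection.level system property.
    }
}
```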
  1. Every new Channel that's created is assigned a new ChannelPipeline. This association is permanent; the Channel can neither attach another ChannelPipeline nor detach the current one. This is a fixed operation in Netty's component lifecycle and requires no action on the part of the developer.
  2. The ChannelHandlerContext associated with a ChannelHandler never changes, so it's safe to cache a reference to it.
    ChannelHandlerContext methods involve a shorter event flow than the identically named methods available on other classes. This should be exploited where possible to provide maximum performance.
  3. Use @Sharable only if you're certain that your ChannelHandler is thread-safe.
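A sketch of a handler that genuinely is safe to share, because its only state is thread-safe (class name is my own):

```java
import java.util.concurrent.atomic.AtomicLong;
import io.netty.channel.ChannelHandler.Sharable;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// One instance can be added to many pipelines; without @Sharable,
// adding it to a second pipeline would throw.
@Sharable
public class MessageCounter extends ChannelInboundHandlerAdapter {
    private final AtomicLong count = new AtomicLong();

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        count.incrementAndGet(); // thread-safe across EventLoops
        ctx.fireChannelRead(msg);
    }

    public long count() {
        return count.get();
    }
}
```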
  4. Because the exception will continue to flow in the inbound direction, the ChannelInboundHandler that implements the preceding logic is usually placed last in the ChannelPipeline. This ensures that all inbound exceptions are always handled, wherever in the ChannelPipeline they may occur.
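A sketch of such a last-in-pipeline handler (class name and the log-and-close reaction are my own):

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// Added last in the ChannelPipeline so it sees every inbound exception
// that earlier handlers did not consume.
public class CatchAllExceptionHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close(); // typical reaction: log, then close the Channel
    }
}
```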

Chapter 7

  1. I/O operations in Netty3:
    The threading model used in previous releases guaranteed only that inbound events would be executed in the so-called I/O thread. All outbound events were handled by the calling thread, which might be the I/O thread or any other. This seemed a good idea at first but was found to be problematic because of the need for careful synchronization of outbound events in ChannelHandlers. In short, it wasn't possible to guarantee that multiple threads wouldn't try to access an outbound event at the same time. This could happen, for example, if you fired simultaneous downstream events for the same Channel by calling Channel.write() in different threads.
    The threading model adopted in Netty4 resolves these problems by handling everything that occurs in a given EventLoop in the same thread. This provides a simpler execution architecture and eliminates the need for synchronization in the ChannelHandlers.
  2. The EventLoops that service I/O and events for Channels are contained in an EventLoopGroup. The manner in which EventLoops are created and assigned varies according to the transport implementation.

Chapter 8

  1. The difference between handler() and childHandler() is that the former adds a handler that's processed by the accepting ServerChannel, whereas childHandler() adds a handler that's processed by an accepted Channel, which represents a socket bound to a remote peer.
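A sketch of where each call attaches (class name is my own; LoggingHandler on the ServerChannel is just an illustration):

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.logging.LogLevel;
import io.netty.handler.logging.LoggingHandler;

public class HandlerVsChildHandler {
    public static void configure(ServerBootstrap b, EventLoopGroup group) {
        b.group(group)
         .channel(NioServerSocketChannel.class)
         // handler(): runs on the ServerChannel itself (bind, accept events)
         .handler(new LoggingHandler(LogLevel.INFO))
         // childHandler(): runs on each accepted connection's Channel
         .childHandler(new ChannelInitializer<SocketChannel>() {
             @Override
             protected void initChannel(SocketChannel ch) {
                 // per-connection pipeline setup goes here
             }
         });
    }
}
```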
  2. Reuse EventLoops wherever possible to reduce the cost of thread creation.

Chapter 11

  1. Provided ChannelHandlers and codec:
  1. ChunkedInput implementations:
  1. To use your own ChunkedInput implementation, install a ChunkedWriteHandler in the pipeline. Use ChunkedWriteHandler to write large data without risking OutOfMemoryErrors.
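A sketch of a ChunkedWriteHandler breaking a stream into bounded chunks, observed through Netty's EmbeddedChannel test transport (class and method names are my own):

```java
import java.io.ByteArrayInputStream;
import io.netty.buffer.ByteBuf;
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.handler.stream.ChunkedStream;
import io.netty.handler.stream.ChunkedWriteHandler;

public class ChunkedWriteDemo {
    /** Writes data through a ChunkedWriteHandler in chunks of at most chunkSize bytes. */
    public static int writeChunked(byte[] data, int chunkSize) {
        EmbeddedChannel channel = new EmbeddedChannel(new ChunkedWriteHandler());
        // Only one chunk is in memory at a time, so large inputs stay bounded.
        channel.writeAndFlush(new ChunkedStream(new ByteArrayInputStream(data), chunkSize));
        int total = 0;
        ByteBuf chunk;
        while ((chunk = channel.readOutbound()) != null) {
            total += chunk.readableBytes(); // each chunk is at most chunkSize bytes
            chunk.release();
        }
        channel.finish();
        return total;
    }

    public static void main(String[] args) {
        System.out.println(writeChunked(new byte[10], 4)); // 10
    }
}
```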
  2. JDK serialization codecs:
  1. JBoss Marshalling. If you are free to make use of external dependencies, JBoss Marshalling is ideal: it's up to 3 times faster than JDK serialization and more compact.
  2. JBoss marshalling codecs:
  1. Protobuf codec: