
1.1 Spark RDD Storage Mechanism and Properties

2019-07-05  不羁之后_

The RDD storage mechanism is defined by five core properties:

1. A list of partitions

  /**
   * Implemented by subclasses to return the set of partitions in this RDD. This method will only
   * be called once, so it is safe to implement a time-consuming computation in it.
   *
   * The partitions in this array must satisfy the following property:
   *   `rdd.partitions.zipWithIndex.forall { case (partition, index) => partition.index == index }`
   */
  protected def getPartitions: Array[Partition]
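
The partition list built by getPartitions is visible from user code as rdd.partitions. A minimal sketch, assuming a local SparkContext (the object name PartitionsDemo is illustrative):

  import org.apache.spark.{SparkConf, SparkContext}

  object PartitionsDemo {
    def main(args: Array[String]): Unit = {
      val sc = new SparkContext(
        new SparkConf().setMaster("local[2]").setAppName("partitions"))
      // parallelize with an explicit partition count of 4
      val rdd = sc.parallelize(1 to 100, numSlices = 4)
      // rdd.partitions exposes the Array[Partition] from getPartitions
      println(rdd.partitions.length)                      // 4
      // each partition carries its index, so partition.index == index holds
      println(rdd.partitions.map(_.index).mkString(","))  // 0,1,2,3
      sc.stop()
    }
  }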

2. A list of dependencies on other RDDs (lineage)

  /**
   * Implemented by subclasses to return how this RDD depends on parent RDDs. This method will only
   * be called once, so it is safe to implement a time-consuming computation in it.
   */
  protected def getDependencies: Seq[Dependency[_]] = deps
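
The dependency list is what records lineage: each transformation produces an RDD that points back at its parents. A sketch, assuming sc is an existing SparkContext:

  val rdd     = sc.parallelize(1 to 10, 2)
  val mapped  = rdd.map(_ * 2)            // narrow (one-to-one) dependency on rdd
  val pairs   = mapped.map(x => (x % 3, x))
  val grouped = pairs.groupByKey()        // shuffle dependency on pairs

  // dependencies exposes the Seq[Dependency[_]] from getDependencies
  println(mapped.dependencies.head.rdd == rdd)  // true
  // toDebugString prints the whole lineage chain, shuffle boundaries included
  println(grouped.toDebugString)

Because lineage is retained, a lost partition can be recomputed from its parents instead of being replicated.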

3. A function to compute each partition

  /**
   * :: DeveloperApi ::
   * Implemented by subclasses to compute a given partition.
   */
  @DeveloperApi
  def compute(split: Partition, context: TaskContext): Iterator[T]
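
compute is internal to each RDD subclass, but mapPartitions exposes the same per-partition Iterator model to user code. A sketch, assuming sc is an existing SparkContext:

  // one iterator per partition, one result per partition
  val sums = sc.parallelize(1 to 8, numSlices = 2)
    .mapPartitions(iter => Iterator(iter.sum))
  println(sums.collect().mkString(","))   // 10,26

Each task calls compute on its partition and consumes the resulting iterator lazily, so chained narrow transformations pipeline without materializing intermediate collections.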

4. A partitioner for key-value RDDs

  /** Optionally overridden by subclasses to specify how they are partitioned. */
  @transient val partitioner: Option[Partitioner] = None
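
Only key-value RDDs carry a partitioner, and only after an operation that assigns one. A sketch, assuming sc is an existing SparkContext:

  import org.apache.spark.HashPartitioner

  val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))
  println(pairs.partitioner)                         // None
  // partitionBy hash-distributes keys across 4 partitions
  val byKey = pairs.partitionBy(new HashPartitioner(4))
  println(byKey.partitioner.isDefined)               // true

A known partitioner lets later operations such as joins on the same key avoid a second shuffle.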

5. A list of preferred locations for each partition

  /**
   * Optionally overridden by subclasses to specify placement preferences.
   * Optional: takes a Partition (split) and returns the preferred node
   * locations for it, e.g. the locations of an HDFS block.
   */
  protected def getPreferredLocations(split: Partition): Seq[String] = Nil
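
Placement preferences can be seen directly with SparkContext.makeRDD, which accepts explicit locations per element group (the hostnames below are hypothetical). A sketch, assuming sc is an existing SparkContext:

  // each element group is paired with its preferred hosts
  val located = sc.makeRDD(Seq(
    (1, Seq("host-a")),
    (2, Seq("host-b"))
  ))
  // the scheduler consults these locations when placing tasks,
  // trying to move computation to the data rather than data to computation
  println(located.preferredLocations(located.partitions(0)))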
