
Event Classification and Implementation in Hadoop 2.6.0


Preface

To be honest, before reading the Hadoop YARN source code, my own use of Java enums was fairly primitive. The way YARN implements events is well worth learning from for its readability, maintainability, and extensibility.

Concept

Before diving into the source code, let's look at how YARN defines an event. Take a job-start event as an example. Many people would define it as a constant in a class file, like this:


class Constants {
  public static final String JOB_START_EVENT = "jobStart";
}

Or simply use an enum, like this:


enum Enums {
  JOB_START_EVENT("jobStart");

  private String name;

  private Enums(String name) {
    this.name = name;
  }
}

Later, when a job-end event is added, the code becomes:


class Constants {
  public static final String JOB_START_EVENT = "jobStart";
  public static final String JOB_END_EVENT = "jobEnd";
}

Or:


enum Enums {
  JOB_START_EVENT("jobStart"),
  JOB_END_EVENT("jobEnd");

  private String name;

  private Enums(String name) {
    this.name = name;
  }
}

Real systems are rarely that simple. Soon the notion of a task is introduced, bringing task-start and task-end events. As the business grows, ever more concepts are added, like this:


class Constants {
  public static final String JOB_START_EVENT = "jobStart";
  public static final String JOB_END_EVENT = "jobEnd";
  public static final String TASK_START_EVENT = "taskStart";
  public static final String TASK_END_EVENT = "taskEnd";
  // constants for various other concepts
}

Or:


enum Enums {
  JOB_START_EVENT("jobStart"),
  JOB_END_EVENT("jobEnd"),
  // enum constants for various other concepts
  TASK_START_EVENT("taskStart"),
  TASK_END_EVENT("taskEnd");

  private String name;

  private Enums(String name) {
    this.name = name;
  }
}

As more and more constant values pile up, you will find this approach increasingly hard to maintain: all kinds of concepts end up jumbled together. You might say you would never be that careless, that you would split the job, task, and other constants apart, putting each business concept in its own file, like this:


class JobConstants {
  public static final String JOB_START_EVENT = "jobStart";
  public static final String JOB_END_EVENT = "jobEnd";
}



class TaskConstants {
  public static final String TASK_START_EVENT = "taskStart";
  public static final String TASK_END_EVENT = "taskEnd";
}


Or:


enum JobEnums {
  JOB_START_EVENT("jobStart"),
  JOB_END_EVENT("jobEnd");

  private String name;

  private JobEnums(String name) {
    this.name = name;
  }
}



enum TaskEnums {
  TASK_START_EVENT("taskStart"),
  TASK_END_EVENT("taskEnd");

  private String name;

  private TaskEnums(String name) {
    this.name = name;
  }
}


Now the requirements change again: besides the name attribute, every enum value also needs a code attribute. If you had chosen plain constants, a refactoring is now unavoidable. If you chose enums, your initial decision was a wise one, and you can extend them like this:


enum JobEnums {
  JOB_START_EVENT(10, "jobStart"),
  JOB_END_EVENT(20, "jobEnd");

  private int code;
  private String name;

  private JobEnums(int code, String name) {
    this.code = code;
    this.name = name;
  }
}



enum TaskEnums {
  TASK_START_EVENT(110, "taskStart"),
  TASK_END_EVENT(120, "taskEnd");

  private int code;
  private String name;

  private TaskEnums(int code, String name) {
    this.code = code;
    this.name = name;
  }
}


The sad part is that you have to repeat nearly identical code in every enum. You may think this is only a little extra work; your keyboard skills are quick and elegant, and a few fast copy-and-paste operations will finish the job. But the nightmare is far from over. A new requirement makes the two enums diverge: JobEnums gains a description attribute, while TaskEnums gains a timestamp field. On top of that, both must now provide a hashCode() method for hashing. After these additions, the code becomes:


enum JobEnums {
  JOB_START_EVENT(10, "jobStart", "job start description"),
  JOB_END_EVENT(20, "jobEnd", "job end description");

  private int code;
  private String name;
  private String description;

  private JobEnums(int code, String name, String description) {
    this.code = code;
    this.name = name;
    this.description = description;
  }

  @Override
  public int hashCode() {
    return this.name.hashCode() + this.description.hashCode();
  }
}


enum TaskEnums {
  TASK_START_EVENT(110, "taskStart", 1460977775087L),
  TASK_END_EVENT(120, "taskEnd", 1460977775088L);

  private int code;
  private String name;
  private long timestamp;

  private TaskEnums(int code, String name, long timestamp) {
    this.code = code;
    this.name = name;
    this.timestamp = timestamp;
  }

  @Override
  public int hashCode() {
    return this.name.hashCode();
  }
}

As the business keeps evolving, you end up maintaining more and more enum types that differ only slightly from one another, yet still duplicate a great deal of code. To address these readability, maintainability, reusability, and extensibility problems of enums and constants, Hadoop defines an event as:

event = event class + event type

For example, a job-start event = a job event + a job event type.
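
Before looking at Hadoop's actual code, here is a minimal sketch of that idea (all names below are made up for illustration): the fields and methods shared by every event live in one generic event class, while each business concept only contributes a slim enum of type values.

// A simplified, hypothetical sketch of "event = event class + event type".
// The generic parameter TYPE carries the per-concept enum.
class SimpleEvent<TYPE extends Enum<TYPE>> {

  private final TYPE type;
  private final long timestamp;

  SimpleEvent(TYPE type, long timestamp) {
    this.type = type;
    this.timestamp = timestamp;
  }

  TYPE getType() { return type; }
  long getTimestamp() { return timestamp; }
}

// Each concept now only declares its type values; no duplicated fields or constructors.
enum SimpleJobEventType { JOB_START, JOB_END }
enum SimpleTaskEventType { TASK_START, TASK_END }

// Usage: the event class is shared, the type enum identifies the concept.
// SimpleEvent<SimpleJobEventType> e =
//     new SimpleEvent<>(SimpleJobEventType.JOB_START, System.currentTimeMillis());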

Events and Event Types

Hadoop 2.6.0 contains a wide variety of events; the most common include ContainerEvent, ApplicationEvent, JobEvent, RMAppEvent, RMAppAttemptEvent, TaskEvent, and TaskAttemptEvent. To solve the readability, maintainability, reusability, and extensibility problems described above, Hadoop abstracts events as follows:


/**
 * Interface defining events api.
 *
 */
@Public
@Evolving
public interface Event<TYPE extends Enum<TYPE>> {

  TYPE getType();
  long getTimestamp();
  String toString();
}


The interface shows that every concrete event carries an event type (the generic parameter TYPE, which must be an enum), a timestamp, and a toString() method.

All events share a base implementation, AbstractEvent:


/**
 * Parent class of all the events. All events extend this class.
 */
@Public
@Evolving
public abstract class AbstractEvent<TYPE extends Enum<TYPE>>
    implements Event<TYPE> {

  private final TYPE type;
  private final long timestamp;

  // use this if you DON'T care about the timestamp
  public AbstractEvent(TYPE type) {
    this.type = type;
    // We're not generating a real timestamp here.  It's too expensive.
    timestamp = -1L;
  }

  // use this if you care about the timestamp
  public AbstractEvent(TYPE type, long timestamp) {
    this.type = type;
    this.timestamp = timestamp;
  }

  @Override
  public long getTimestamp() {
    return timestamp;
  }

  @Override
  public TYPE getType() {
    return type;
  }

  @Override
  public String toString() {
    return "EventType: " + getType();
  }
}
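
A quick note on the two constructors (the subclass below is my own illustration, not Hadoop code): an event that wants a real timestamp passes one in explicitly, typically System.currentTimeMillis(); otherwise the timestamp stays at -1 and no clock call is made, which is exactly the cost the comment in the one-argument constructor is avoiding.

// Hypothetical subclass, assuming the AbstractEvent shown above is on the classpath.
class TimedEvent<TYPE extends Enum<TYPE>> extends AbstractEvent<TYPE> {

  TimedEvent(TYPE type) {
    // Opt in to a real timestamp; the one-argument constructor would leave it at -1.
    super(type, System.currentTimeMillis());
  }
}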

JobEvent represents a job event. Its implementation is:


/**
 * This class encapsulates job related events.
 *
 */
public class JobEvent extends AbstractEvent<JobEventType> {

  private JobId jobID;

  public JobEvent(JobId jobID, JobEventType type) {
    super(type);
    this.jobID = jobID;
  }

  public JobId getJobId() {
    return jobID;
  }
}

TaskEvent represents a task event. Its implementation is:


/**
 * this class encapsulates task related events.
 *
 */
public class TaskEvent extends AbstractEvent<TaskEventType> {

  private TaskId taskID;

  public TaskEvent(TaskId taskID, TaskEventType type) {
    super(type);
    this.taskID = taskID;
  }

  public TaskId getTaskID() {
    return taskID;
  }
}

For task events, the event type attribute (the generic parameter TYPE) is TaskEventType, implemented as follows:


/**
 * Event types handled by Task.
 */
public enum TaskEventType {

  //Producer:Client, Job
  T_KILL,

  //Producer:Job
  T_SCHEDULE,
  T_RECOVER,

  //Producer:Speculator
  T_ADD_SPEC_ATTEMPT,

  //Producer:TaskAttempt
  T_ATTEMPT_LAUNCHED,
  T_ATTEMPT_COMMIT_PENDING,
  T_ATTEMPT_FAILED,
  T_ATTEMPT_SUCCEEDED,
  T_ATTEMPT_KILLED
}


JobEventType is similar and will not be repeated here.
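
To see the pieces working together, here is a rough usage sketch. It is illustrative only: taskId and eventHandler are assumed to come from the surrounding component, and in Hadoop the event would normally be routed through YARN's dispatcher to whichever EventHandler is registered for task events.

// Illustrative only: build a task event and hand it to an event handler.
// eventHandler is assumed to be an org.apache.hadoop.yarn.event.EventHandler<TaskEvent>.
TaskEvent scheduleEvent = new TaskEvent(taskId, TaskEventType.T_SCHEDULE);
eventHandler.handle(scheduleEvent);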

This design decouples the enums from the ways individual events differ (in their fields and methods). It greatly improves readability and maintainability while still allowing the code for shared logic to be reused.
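
As a closing illustration of that extensibility (the subclass below is hypothetical, not part of Hadoop): the description versus timestamp divergence that plagued JobEnums and TaskEnums earlier now lives in a small event subclass, while the type enum stays a plain list of values and AbstractEvent keeps the shared fields and methods.

// Hypothetical example: extra data needed by only some events goes into an
// event subclass; nothing in JobEventType or AbstractEvent has to change.
public class JobDescriptionEvent extends JobEvent {

  private final String description;

  public JobDescriptionEvent(JobId jobID, JobEventType type, String description) {
    super(jobID, type);
    this.description = description;
  }

  public String getDescription() {
    return description;
  }
}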