Hadoop
2016-12-18 21:08
176 views
What Is Apache Hadoop?
The Apache™ Hadoop® project develops open-source software for reliable, scalable, distributed computing. The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, thus delivering a highly available service on top of a cluster of computers, each of which may be prone to failure.
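This application-layer failure handling can be illustrated abstractly: instead of assuming a task always succeeds, the framework detects a failed task attempt and reschedules it. A toy sketch in Python (the function names and retry policy here are illustrative, not Hadoop's actual scheduler logic):

```python
def run_with_retries(task, max_attempts=3):
    """Run a task, retrying on failure, the way a Hadoop-style scheduler
    re-runs a failed task attempt (normally on a different node)."""
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except Exception as exc:
            if attempt == max_attempts:
                raise  # give up after the configured number of attempts
            # In a real cluster, the reschedule would pick another node.
            print(f"attempt {attempt} failed ({exc}); rescheduling")

# A flaky task that simulates one node failure, then succeeds.
state = {"calls": 0}
def flaky_task():
    state["calls"] += 1
    if state["calls"] < 2:
        raise RuntimeError("simulated node failure")
    return "done"

result = run_with_retries(flaky_task)
```

The point of the sketch is that availability comes from retrying work, not from assuming reliable hardware.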
The project includes these modules:
Hadoop Common: The common utilities that support the other Hadoop modules.
Hadoop Distributed File System (HDFS™): A distributed file system that provides high-throughput access to application data.
Hadoop YARN: A framework for job scheduling and cluster resource management.
Hadoop MapReduce: A YARN-based system for parallel processing of large data sets.
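To make the MapReduce programming model concrete, here is a minimal word-count sketch of the map, shuffle, and reduce phases in plain Python (no Hadoop dependency; the phase functions are illustrative, not part of the Hadoop API):

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every input record."""
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle_phase(pairs):
    """Shuffle: group all emitted values by key, as the framework
    does between the map and reduce phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: combine the grouped values; here, sum the counts."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["hadoop stores data", "hadoop processes data"]
counts = reduce_phase(shuffle_phase(map_phase(docs)))
```

In real Hadoop, the map and reduce functions run in parallel across the cluster and the shuffle moves data between nodes; the data flow, however, is the same as in this single-process sketch.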
Other Hadoop-related projects at Apache include:
Ambari™: A web-based tool for provisioning, managing, and monitoring Apache Hadoop clusters, with support for Hadoop HDFS, Hadoop MapReduce, Hive, HCatalog, HBase, ZooKeeper, Oozie, Pig, and Sqoop. Ambari also provides a dashboard for viewing cluster health (such as heat maps) and the ability to view MapReduce, Pig, and Hive applications visually, along with features to diagnose their performance characteristics in a user-friendly manner.
Avro™: A data serialization system.
Cassandra™: A scalable multi-master database with no single points of
failure.
Chukwa™: A data collection system for managing large distributed
systems.
HBase™: A scalable, distributed database that supports structured data storage
for large tables.
Hive™: A data warehouse infrastructure that provides data summarization and
ad hoc querying.
Mahout™: A scalable machine learning and data mining library.
Pig™: A high-level data-flow language and execution framework for parallel
computation.
Spark™: A fast and general compute engine for Hadoop data. Spark
provides a simple and expressive programming model that supports a wide range of applications, including ETL, machine learning, stream processing, and graph computation.
Tez™: A generalized data-flow programming framework, built on Hadoop YARN, which provides a powerful and flexible engine to execute an arbitrary DAG of tasks to process data for both batch and interactive use cases. Tez is being adopted by Hive™, Pig™, and other frameworks in the Hadoop ecosystem, and also by other commercial software (e.g. ETL tools), to replace Hadoop™ MapReduce as the underlying execution engine.
ZooKeeper™: A high-performance coordination service for distributed applications.