Saturday, 6 April 2019

Hadoop : Part - 4


Speculative Execution

Instead of identifying and fixing slow-running tasks, Hadoop detects when a task is running slower than expected and launches another, equivalent task as a backup; whichever copy finishes first is used and the other is killed. This backup mechanism is called Speculative Execution.
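Speculative execution is enabled by default and can be toggled separately for map and reduce tasks. A minimal configuration sketch for mapred-site.xml (property names as in Hadoop 2.x and later; values are illustrative):

<property>
  <name>mapreduce.map.speculative</name>
  <value>true</value>   <!-- launch backup copies of slow map tasks -->
</property>
<property>
  <name>mapreduce.reduce.speculative</name>
  <value>true</value>   <!-- launch backup copies of slow reduce tasks -->
</property>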

Heartbeat in HDFS

A heartbeat is a periodic signal a DataNode sends to the NameNode to indicate that it is alive and functioning. If the NameNode stops receiving heartbeats from a DataNode, it marks that node as dead and re-replicates its blocks on other nodes.
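The heartbeat interval is configurable in hdfs-site.xml. A brief sketch (dfs.heartbeat.interval is the standard property; 3 seconds is the default):

<property>
  <name>dfs.heartbeat.interval</name>
  <value>3</value>   <!-- seconds between DataNode heartbeats (default) -->
</property>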

Hadoop archives

Hadoop Archives (HAR) offer an effective way to deal with the small-files problem: HAR is an archiving facility that packs many small files into HDFS blocks efficiently, reducing the amount of metadata the NameNode must keep in memory.
hadoop archive -archiveName myhar.har /input/location /output/location
Once a .har file is created, you can list its contents and you will see that it is made up of index files and part files. Part files are simply the original files concatenated together into one big file; index files are lookup files used to locate the individual small files inside the part files.
hadoop fs -ls /output/location/myhar.har
/output/location/myhar.har/_index
/output/location/myhar.har/_masterindex
/output/location/myhar.har/part-000000
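To read the archived files themselves (rather than the archive's internal layout), use the har:// scheme. A brief usage sketch, reusing the paths from the example above (somefile.txt is a hypothetical file name):

hadoop fs -ls har:///output/location/myhar.har                 # lists the original file names
hadoop fs -cat har:///output/location/myhar.har/somefile.txt   # reads one archived file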

Reason for setting HDFS blocksize as 128MB

The block size is the smallest unit of data that a file system can store. Smaller blocks mean more blocks per file, hence more metadata in NameNode memory and more lookups to locate a file's blocks. HDFS is meant to handle large files; with a 128 MB block size a large file maps to far fewer blocks, so the number of requests goes down, greatly reducing overhead and the load on the NameNode.
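The block size can be changed cluster-wide in hdfs-site.xml, or overridden per file at write time. A sketch (dfs.blocksize is the standard property; the 256 MB override below is illustrative):

<property>
  <name>dfs.blocksize</name>
  <value>134217728</value>   <!-- 128 MB, the default; suffixed values like 128m also work -->
</property>

hadoop fs -D dfs.blocksize=268435456 -put localfile /input/location   # per-file 256 MB override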

Data Locality in Hadoop

Data locality refers to moving the computation close to the node where the actual data resides, instead of moving large volumes of data to the computation. This minimizes network congestion and increases the overall throughput of the system.
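The scheduler gets the location information it needs from the NameNode's block map, and you can inspect the same information yourself with the standard fsck tool (path reused from the earlier examples):

hdfs fsck /input/location -files -blocks -locations   # shows which DataNodes host each block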

Safemode in Hadoop

Safemode in Apache Hadoop is a maintenance state of the NameNode during which the NameNode does not allow any modifications to the file system. In Safemode, the HDFS cluster is read-only and does not replicate or delete blocks.
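Safemode can be inspected and controlled with the standard dfsadmin tool:

hdfs dfsadmin -safemode get     # report whether safemode is on or off
hdfs dfsadmin -safemode enter   # manually put the NameNode into safemode
hdfs dfsadmin -safemode leave   # force the NameNode out of safemode
hdfs dfsadmin -safemode wait    # block until the NameNode leaves safemode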

Single Point of Failure

In Hadoop 1.0, the NameNode is a single point of failure (SPOF): if the NameNode fails, all clients are unable to read or write files.
Hadoop 2.0 overcomes this SPOF with NameNode High Availability, which supports an Active/Standby pair of NameNodes. If the active NameNode fails, the standby NameNode takes over all of its responsibilities.
Some deployments require a higher degree of fault tolerance, so Hadoop 3.0 extends this feature by allowing the user to run more than one standby NameNode.
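A minimal HA configuration sketch for hdfs-site.xml (the property names are standard; the nameservice ID, NameNode IDs, and hostnames below are hypothetical, and a real setup also needs shared edits storage and failover settings not shown here):

<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2,nn3</value>   <!-- three NameNodes: Hadoop 3.0+ allows multiple standbys -->
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>nn1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>nn2.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn3</name>
  <value>nn3.example.com:8020</value>
</property>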



Strive for excellence and success will follow you.

