10 - Hadoop Administration

10.1 WEB UI

Hadoop has a web interface from which you can administer the entire Hadoop ecosystem. Through this interface you can effectively monitor task execution. Here are example web links:

 http://localhost:50030/jobtracker.jsp

 http://localhost:50060/tasktracker.jsp

The URL, including the port number, can vary depending on your cluster setup and server settings. The links above are from a single-node standalone cluster, hence the server name is given as “localhost”. Hadoop publishes web interfaces that display the status of the cluster, and these interfaces are hosted on the master node of the cluster. To view them remotely, you can use SSH to create a tunnel to the master node and then configure a SOCKS proxy so that your browser can reach the websites hosted on the master node through the SSH tunnel.
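For example, the tunnel and SOCKS proxy can be opened with a single SSH command; the host name and key file below are hypothetical placeholders, so adjust them to your own cluster:

% ssh -i ~/mykey.pem -N -D 8157 hadoop-user@master-node

Here -D 8157 starts a local SOCKS proxy on port 8157 and -N keeps the session open without running a remote command. Pointing the browser’s SOCKS proxy setting at localhost:8157 then makes the master node’s web UIs (such as the JobTracker page above) reachable.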

10.2 Administer Map Reduce

Administering MapReduce involves managing the entire map and reduce cycle. There are various configuration options that we can set; they are controlled through the mapred-site.xml file, which holds the MapReduce tuning configuration for cluster deployments. Some commonly tuned properties are listed below, followed by a sample mapred-site.xml.

  • mapred.reduce.parallel.copies – The maximum number of parallel copies the reduce step runs to fetch map outputs. Its default value is 5
  • mapred.map.child.java.opts – This is for passing Java options into the map JVM. Its default value is -Xmx200M
  • mapred.reduce.child.java.opts – This is for passing Java options into the reduce JVM. Its default value is -Xmx200M
  • io.sort.mb – The memory limit, in MB, used while sorting map output. Its default value is 100
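A minimal sketch of how these properties might be set in mapred-site.xml is shown below; the values are illustrative assumptions, not recommendations, and should be tuned for your own cluster:

<?xml version="1.0"?>
<configuration>
  <!-- number of parallel copier threads used by each reduce task -->
  <property>
    <name>mapred.reduce.parallel.copies</name>
    <value>10</value>
  </property>
  <!-- JVM options for the map and reduce child processes -->
  <property>
    <name>mapred.map.child.java.opts</name>
    <value>-Xmx512m</value>
  </property>
  <property>
    <name>mapred.reduce.child.java.opts</name>
    <value>-Xmx512m</value>
  </property>
  <!-- memory, in MB, for the map-side sort buffer -->
  <property>
    <name>io.sort.mb</name>
    <value>200</value>
  </property>
</configuration>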

10.3 Administer Name Node

The basic function of the NameNode is to manage files across the distributed DataNodes. Hadoop provides a utility, fsck, for checking the health of files in HDFS. It looks for blocks that are missing from all DataNodes, as well as under- or over-replicated blocks. Let us see an example where we check the whole filesystem of a small cluster; we use fsck to achieve this –

% hadoop fsck /

Status: <<will show the status>>
Total size: <<size in bytes>> B
Total dirs: <<number of directories>>
Total files: <<total number of files>>
Total blocks (validated): <<number of blocks>>
Minimally replicated blocks: <<count of minimally replicated blocks>>
Over-replicated blocks: <<count of over-replicated blocks>>
Under-replicated blocks: <<count of under-replicated blocks>>
Mis-replicated blocks: <<percentage of mis-replicated blocks>>
Default replication factor: <<value for default replication factor>>
Average block replication: <<value for average block replication>>
Corrupt blocks: <<number of corrupt blocks>>
Missing replicas: <<percentage of missing replicas>>
Number of data-nodes: <<total number of data nodes>>
Number of racks: <<count of racks>>

fsck recursively checks the filesystem namespace, starting from the given path (here the root, /), and prints a dot for every file it checks. To check a file, fsck retrieves the metadata for the file’s blocks and looks for inconsistencies or problems. All of this information is retrieved from the NameNode; fsck does not need to contact the DataNodes to obtain the block information. Here is some information about blocks and the conditions associated with them:

Over Replicated Block

If a block’s replica count exceeds the target replication for the file it belongs to, it is known as an over-replicated block. HDFS automatically deletes the excess replicas.

Under Replicated Block

If a block’s replica count falls below the target replication for the file it belongs to, it is known as an under-replicated block. HDFS automatically creates new replicas of under-replicated blocks until they meet the target replication. To get information about the blocks being replicated, use the command ‘hadoop dfsadmin -metasave’.

Misreplicated Block

If a block does not satisfy the block replica placement policy, it is known as a mis-replicated block. For example, with a replication factor of three in a multi-rack cluster, if all three replicas of a block are on the same rack, the block is mis-replicated, since the replicas should be spread across at least two racks. HDFS will automatically re-replicate mis-replicated blocks so that they meet the rack placement policy.

Corrupt Block

These are blocks whose replicas are all corrupt. Blocks with at least one non-corrupt replica are not reported as corrupt; the NameNode copies the non-corrupt replica until the target replication is met.

Missing Replica

These are blocks with no replicas anywhere in the cluster. Corrupt or missing blocks mean that data has been lost, which is a real concern. By default, fsck leaves files with missing or corrupt blocks alone, but you can perform one of the following actions (example commands follow the list):

  • Use the -move option to move the affected files to the lost+found directory in HDFS
  • Use the -delete option to delete the affected files; note that files cannot be recovered after being deleted
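A minimal sketch of these two invocations is shown below; the target path / is just an example, and the -delete form is destructive, so it should only be run once you are sure the data cannot be recovered:

% hadoop fsck / -move     # relocate files with corrupt or missing blocks to /lost+found
% hadoop fsck / -delete   # permanently delete files with corrupt or missing blocks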

10.4 Administer Data Node

Administering DataNodes involves managing the entire set of commodity computers. Note that every DataNode runs a block scanner that periodically verifies all the blocks stored on that DataNode. This ensures that bad blocks are detected and fixed before they are read by clients. The DataBlockScanner maintains a list of blocks to verify and scans them one after another for checksum errors. Blocks are verified every three weeks to guard against disk errors that develop over time. This period is controlled by the ‘dfs.datanode.scan.period.hours’ property, whose default value is 504 hours. Corrupt blocks are reported to the NameNode so that it can fix them. If you want the verification report for a DataNode, you can visit that DataNode’s web interface at http://datanode:50075/blockScannerReport. Below is an example of the report:

Total Blocks                 : 18241
Verified in last hour        : 58
Verified in last day         : 1423
Verified in last week        : 6754
Verified in last four weeks  : 18067
Verified in SCAN_PERIOD      : 19136
Not yet verified             : 1032
Verified since restart       : 31814
Scans since restart          : 5654
Scan errors since restart    : 0
Transient scan errors        : 0
Current scan rate limit Kbps : 1024
Progress this period         : 102%
Time left in cur period      : 51.17%
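The scan period mentioned above can be changed in hdfs-site.xml; the value below (one week instead of the 504-hour default) is only an illustrative assumption:

<property>
  <name>dfs.datanode.scan.period.hours</name>
  <value>168</value>  <!-- verify every block at least once a week -->
</property>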

 

Over time the distribution of blocks across DataNodes can become unbalanced. An unbalanced cluster can hurt data locality for MapReduce and puts greater pressure on the highly utilized DataNodes. Hadoop has a balancer program, a Hadoop daemon, which redistributes blocks by moving them from over-utilized DataNodes to under-utilized DataNodes. This process continues until the cluster is considered balanced, that is, until the utilization of every DataNode is within a threshold of the average utilization of the cluster.
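A sketch of running the balancer from the command line is shown below; the 5% threshold is an illustrative choice, and a tighter threshold makes the cluster more evenly balanced but takes longer to reach:

% hadoop balancer -threshold 5    # balance until every DataNode is within 5% of average utilization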

10.5 Administer Job Tracker

The JobTracker daemon is the link between our application and the Hadoop system. Administering the JobTracker means managing the process by which the JobTracker oversees the overall working of the TaskTrackers. Here are some functions of the JobTracker (a few day-to-day administrative commands follow the list) –

  1. After we submit a job to the Hadoop cluster, the JobTracker determines the execution plan by determining which files need to be processed. It assigns nodes to the different tasks and monitors all running tasks
  2. If a task fails, the JobTracker relaunches the task automatically, most probably on a different node
  3. The JobTracker is a master process and there is only one JobTracker daemon per Hadoop cluster
  4. Once a client asks the JobTracker to initiate a data processing job, the JobTracker divides the work and assigns different map and reduce tasks to each TaskTracker in the system
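Day-to-day JobTracker administration is mostly done through the job client; the sketch below shows a few commands commonly used to inspect and control running jobs (the job ID is a hypothetical placeholder):

% hadoop job -list                                  # list jobs currently known to the JobTracker
% hadoop job -status job_201301010000_0001          # show completion and counters for one job
% hadoop job -kill job_201301010000_0001            # kill a misbehaving job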

10.6 Administer Task Tracker

Administering TaskTrackers involves managing the tasks being executed on every DataNode. The TaskTracker manages the execution of individual tasks on each DataNode. Here are some of the functions of the TaskTracker –

  1. Every TaskTracker is responsible for running the individual tasks assigned to it by the JobTracker
  2. Although there is a single TaskTracker per DataNode, each TaskTracker can spawn multiple JVMs to handle many map or reduce tasks in parallel (the number of such task slots is configurable; see the sketch after this list)
  3. The TaskTracker continuously communicates with the JobTracker by sending heartbeat messages
  4. When the JobTracker does not receive a heartbeat message from a TaskTracker, it assumes the TaskTracker has crashed and resubmits its tasks to other nodes in the cluster
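The number of simultaneous map and reduce tasks each TaskTracker runs is set per node in mapred-site.xml; the values below are illustrative assumptions and should match the node’s cores and memory:

<!-- maximum number of map task slots on this TaskTracker -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>4</value>
</property>
<!-- maximum number of reduce task slots on this TaskTracker -->
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>2</value>
</property>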

10.7 Remove Node

As we know, the Hadoop system is designed to handle failures without loss of data. With a replication factor of three, the blocks that contain the data are spread over three different DataNodes (either on the same rack or on different racks). Data is lost only if you shut down all of those DataNodes simultaneously; however, that is not the way to shut down or remove DataNodes. The way to decommission DataNodes is to first inform the NameNode of the DataNodes that you want to remove from the cluster. This makes sure that the NameNode re-replicates the blocks stored on the DataNodes being removed before they disappear. If you shut down a TaskTracker that is executing tasks, the JobTracker will note the failure and reschedule the tasks on other TaskTrackers. The entire decommissioning process is controlled by an exclude file, whose details are as follows (a worked example appears after the list):

  • Its location is set in the dfs.hosts.exclude property for HDFS
  • Its location is set in the mapred.hosts.exclude property for MapReduce
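A minimal decommissioning sequence might look like the sketch below; the file path and hostname are hypothetical, and the two exclude properties must already point at the exclude file in hdfs-site.xml and mapred-site.xml before the refresh commands are run:

% echo "datanode3.example.com" >> /etc/hadoop/conf/excludes   # add the node to the (hypothetical) exclude file
% hadoop dfsadmin -refreshNodes                               # tell the NameNode to start decommissioning
% hadoop mradmin -refreshNodes                                # tell the JobTracker to stop scheduling tasks there

The NameNode then reports the node as “Decommission in progress” in the web UI, and only when it shows “Decommissioned” is it safe to shut the machine down.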

10.8 Assign Node

To grow a Hadoop cluster, we may need to add DataNodes to it; this is the commissioning (or node assignment) process. Although adding a DataNode to a Hadoop cluster is a simple step, it carries a security risk: allowing an arbitrary machine to connect to the NameNode is risky, because that machine may not be authorized to act as a DataNode. Since it is not a real DataNode, you have no control over it, and it may stop working at any point in time, which can result in unexpected data loss. DataNodes that are allowed to connect to the NameNode are specified in a file whose name is set in the ‘dfs.hosts’ property. This file resides on the NameNode’s local filesystem and has one entry for each DataNode. In the same way, the TaskTrackers that may connect to the JobTracker are specified in a file whose name is set in the ‘mapred.hosts’ property.
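A minimal sketch of pointing the cluster at include files is shown below; the file path is hypothetical, and the same hostnames are usually listed in both files:

<!-- hdfs-site.xml: DataNodes allowed to connect to the NameNode -->
<property>
  <name>dfs.hosts</name>
  <value>/etc/hadoop/conf/includes</value>
</property>

<!-- mapred-site.xml: TaskTrackers allowed to connect to the JobTracker -->
<property>
  <name>mapred.hosts</name>
  <value>/etc/hadoop/conf/includes</value>
</property>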

Here is the process to add a new node to the Hadoop cluster:

  • First add the network address of the new machine to the include file
  • Update the NameNode with the new set of permitted DataNodes using the command

% hadoop dfsadmin -refreshNodes

  • Update the JobTracker with the new set of permitted TaskTrackers using the command

% hadoop mradmin -refreshNodes

  • Update the slaves file with the new node so that it is included in future operations performed by the Hadoop control scripts
  • Start the newly added DataNode and TaskTracker
  • Check that the newly added DataNode and TaskTracker appear in the web UI

10.9 Scheduling and Debugging Job

Scheduling

Job scheduling means that a job has to wait until its turn comes for execution. In a shared cluster environment many resources are shared among various users, which in turn requires a better scheduler. Production jobs should finish effectively and in a timely manner, while at the same time users who submit smaller ad hoc queries should get their results back in a reasonable time.

We can set a job’s priority for execution using the mapred.job.priority property, or with the setJobPriority() method. The values it can take are listed below, followed by an example of changing a job’s priority:

  • VERY_HIGH
  • HIGH
  • NORMAL
  • LOW
  • VERY_LOW
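A brief sketch of setting the priority, either at submission time or for a job that is already running, is shown below; the jar, driver class, and job ID are hypothetical placeholders, and the -D form assumes the driver uses ToolRunner so that generic options are honored:

% hadoop jar myjob.jar MyDriver -D mapred.job.priority=HIGH input output   # set priority at submission
% hadoop job -set-priority job_201301010000_0002 VERY_HIGH                 # change priority of a running job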

We have a choice of schedulers for MapReduce in Hadoop:

  1. Fair Scheduler

The purpose of the Fair Scheduler is to give every user a fair share of the available cluster capacity over time.

  • If a single job is running – it gets all of the cluster
  • With multiple jobs submitted – free task slots are given to the jobs in such a way that each user gets a fair share of the cluster
  2. Capacity Scheduler

In the Capacity Scheduler, the cluster is made up of a number of queues, which may be hierarchical in the sense that a queue may be the child of another queue. Each queue has an allocated capacity. Within each queue, jobs are scheduled using First In First Out (FIFO) scheduling with priorities.
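The scheduler is chosen through the mapred.jobtracker.taskScheduler property in mapred-site.xml; the sketch below selects the Fair Scheduler, assuming its jar is already on the JobTracker’s classpath:

<property>
  <name>mapred.jobtracker.taskScheduler</name>
  <value>org.apache.hadoop.mapred.FairScheduler</value>
</property>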

Debugging

Debugging can be done in various ways and in various parts of the Hadoop ecosystem. If we focus on logs in particular for analyzing the system, the following types of log are available:

System Daemon Logs

Each Hadoop daemon produces a logfile using log4j, written in the directory defined by the HADOOP_LOG_DIR environment variable. These logs are mainly used by administrators.

HDFS Audit Logs

A log of all HDFS requests. It is written to the NameNode’s log and is configurable. It is mainly used by administrators.
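Audit logging is turned on through log4j; a minimal sketch of the relevant line in the NameNode’s log4j.properties is shown below (the audit logger is set to WARN, i.e. effectively disabled, by default):

log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=INFO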

MapReduce Job History Logs

A log of the events that take place while a job runs. It is saved centrally on the JobTracker. It is mainly used by users.

MapReduce Task Logs

Each TaskTracker child process produces a logfile using log4j, called syslog, written in the userlogs subdirectory. It is mainly used by users.
