Thursday, 29 January 2015

Configuration of Hadoop Cluster 2

I'll write this one up later. Until then, click here.

Monday, 24 November 2014

Hadoop Cluster Installation

There are two types of Hadoop cluster installation, if I am not wrong:
1. Through CLI mode.
2. GUI mode, using the Cloudera Manager package.

Basic steps for both types of installation:
A base machine sized to your requirements.
Linux OS: any flavor (Red Hat or Ubuntu is good).
All firewalls disabled.
DNS resolution.

Passwordless Login
For passwordless login you can click here.

Now the next step is to disable the firewall.

#iptables -F
#/etc/init.d/iptables stop
#chkconfig iptables off

In the same way, you need to disable ip6tables as well.
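The same three steps for ip6tables can be sketched as below (this assumes a Red Hat style system with the ip6tables init script, like the iptables example above):

```
#ip6tables -F
#/etc/init.d/ip6tables stop
#chkconfig ip6tables off
```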

2. Disable SELinux
    For SELinux you need to edit the file mentioned below:
    vim /etc/sysconfig/selinux
     SELINUX=enforcing  (change enforcing to disabled)
:wq (save & exit)

3. Disable the firewall
Un-comment the default entry and save the setting.

If you do not have a DNS server, then make sure your hosts are able to ping each other by both IP and hostname.

1. For that, you need to make an entry for every node (e.g. Hadoop159 and Hadoop160) into:
     # vim /etc/hosts
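As a sketch, the /etc/hosts entries on each node might look like the following (these IP addresses are hypothetical; substitute your nodes' real ones):

```
192.168.1.159   Hadoop159
192.168.1.160   Hadoop160
```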

Once local DNS resolution is done, you need to add the hadoop user:
useradd hadoop
passwd hadoop
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

Create the folder mentioned below:
mkdir /hadoop
Change its owner with the following command:
chown -R hadoop /hadoop

Once you complete these steps, let's start with the Hadoop cluster installation.

Download the latest version of Java and install it. You need to install Java on the master node as well as the client nodes.
#wget --no-check-certificate
 rpm -ivh jdk-7u1-linux-x86.rpm

Once you are done with the Java installation, you need to install the Cloudera repo. You can install it through yum or rpm, whichever you prefer.

Download the Cloudera repo.

Through yum:
 yum --nogpgcheck localinstall cdh3-repository-1.0-1.noarch.rpm

This action needs to be performed on all servers, master as well as client nodes.

Now we need to install the Hadoop packages.

On the master node, install the NameNode and JobTracker packages:
       # yum -y install hadoop-0.20-namenode hadoop-0.20-jobtracker

On the slave nodes, install the DataNode and TaskTracker packages:
#yum -y install hadoop-0.20-datanode hadoop-0.20-tasktracker
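Once the packages are in place, the daemons can be brought up with their init scripts (a sketch; the service names below assume the CDH3 hadoop-0.20 packages installed above):

```
# /etc/init.d/hadoop-0.20-namenode start      (on the master)
# /etc/init.d/hadoop-0.20-jobtracker start    (on the master)
# /etc/init.d/hadoop-0.20-datanode start      (on each slave)
# /etc/init.d/hadoop-0.20-tasktracker start   (on each slave)
```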

That is the basic cluster installation. In the next blog we will do the cluster configuration. Hoping this is helpful for someone :)

Monday, 30 September 2013

How to Add SWAP file

Linux RAM is composed of chunks of memory called pages. To free up pages of RAM, a “Linux swap” can occur and a page of memory is copied from the RAM to preconfigured space on the hard disk. Linux swaps allow a system to harness more memory than was originally physically available.
However, swapping does have disadvantages. Because hard disks are much slower than RAM, virtual private server performance may slow down considerably. Additionally, swap thrashing can begin to take place if the system gets swamped by too many pages being swapped in and out.
Check for Swap Space
Before we proceed to set up a swap file, we need to check if any swap files have been enabled on the VPS by looking at the summary of swap usage.
swapon -s
An empty list would confirm that no swap is enabled. On this server a swap partition is already active, but there is no swap file yet:
Filename    Type       Size     Used  Priority
/dev/sda1   partition  7834620  7092  -1
Check the File System :
Now that we know there is no swap file on the server, we can check how much space we have with the df command. The swap file will take 512 MB; since we are only using about 8% of /dev/sda5, we can proceed.
root@sadeek:/# df
Filesystem     1K-blocks      Used  Available Use% Mounted on
/dev/sda5      236082692  16924872  207339736   8% /
udev             1854004         4    1854000   1% /dev
tmpfs             758308       824     757484   1% /run
none                5120         0       5120   1% /run/lock
none             1895768        80    1895688   1% /run/shm
Create and Enable the Swap File :-

Now it’s time to create the swap file itself using the dd command:
dd if=/dev/zero of=/swapfile bs=1024 count=512k
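The file's size comes from bs times count: 1024 bytes x 512k (524288) blocks = 512 MiB. The same arithmetic can be seen on a small scale with a throwaway file (a 1 MiB demo so it runs quickly; /tmp/swap_demo is just an illustrative name):

```shell
# 1024-byte blocks x 1024 blocks = 1 MiB
dd if=/dev/zero of=/tmp/swap_demo bs=1024 count=1024 2>/dev/null
stat -c %s /tmp/swap_demo   # prints 1048576
rm -f /tmp/swap_demo
```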

“of=/swapfile” designates the file’s name. In this case the name is swapfile.

Subsequently we are going to prepare the swap file by creating a linux swap area:
mkswap /swapfile

The results display:

Setting up swapspace version 1, size = 524284 KiB
no label, UUID=9932827f-0478-4819-b428-8debb9c43c71

Finish up by activating the swap file:
swapon /swapfile

You will then be able to see the new swap file when you view the swap summary.

root@sadeek:/# swapon -s
Filename    Type       Size     Used  Priority
/dev/sda1   partition  7834620  7092  -1
/swapfile   file       524284   0     -2

This file will last on the virtual private server until the machine reboots. You can ensure that the swap is permanent by adding it to the fstab file.

Open up the file:

vi /etc/fstab
Paste in the following line:
/swapfile none swap sw 0 0

To prevent the file from being readable by other users, you should set the correct permissions on the swap file:

chown root:root /swapfile
chmod 600 /swapfile
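To double-check, you can also read /proc/swaps, the kernel's own list of active swap areas (a sketch; sizes and priorities will differ on your server):

```shell
# /swapfile should now appear here alongside any swap partitions
cat /proc/swaps
```

The free -m command shows the same totals in MiB on its Swap row.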

Thursday, 14 February 2013

Log rotation in Linux/Unix

Log files are among the most valuable tools available for Linux system security: they provide the administrator with an up-to-date record of events taking place on the system. The logrotate utility may also be used to back up log files, so copies may be used to establish patterns for system use.

The logrotate program is a log file manager. It is used to regularly cycle (or rotate) log files by removing the oldest ones from your system and creating new log files. It may be used to rotate based on the age of the file or the file's size, and usually runs automatically through the cron utility. The logrotate program may also be used to compress log files and to e-mail log files to users when they are rotated.

Configuration and state files:
/var/lib/logrotate/status (on Red Hat systems, /var/lib/logrotate.status)   >> the state file, updated on every run with the most recent rotation time of each log.

root@puppet:~/sadeek/big# ls -l /var/lib/logrotate/status
-rw-r--r-- 1 root root 2030 Feb 23 07:55 /var/lib/logrotate/status
root@puppet:/etc/logrotate.d# ls -l /var/lib/logrotate/status
-rw-r--r-- 1 root root 2030 Feb 24 05:57 /var/lib/logrotate/status

/etc/logrotate.conf     >> the default configuration file for log rotation; per-application files live under /etc/logrotate.d.

Below is an example log-rotation configuration: create a file and make an entry like the one below. It will rotate the log weekly and keep 7 old copies.
 vi samba
/var/log/samba/log.smbd {
        weekly
        rotate 7
        postrotate
                reload smbd 2>/dev/null
        endscript
}

For each different application you need to create a separate file with its own configuration. We can execute a script after rotation with the help of postrotate; you need to mention the script location between postrotate and endscript.

This job is executed by cron.daily; you can check the entry under /etc/cron.daily, where the logrotate script runs the utility once a day.
 vi test
/root/sadeek/logs/logs.log {
    rotate 2
    olddir /root/sadeek/big
#    postrotate
#        /bin/sh /root/sadeek/
#    endscript
}
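A new configuration can be checked without actually rotating anything by running logrotate in debug mode (a sketch; the path assumes the test file above was saved under /etc/logrotate.d):

```
# logrotate -d /etc/logrotate.d/test
```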

Friday, 2 November 2012

Difference between Soft Mount & Hard Mount

The directory /nfs should be created on your node/server. An NFS share can be mounted as a "soft mount" or as a "hard mount"; these mount options define how the NFS client should handle an NFS server crash or failure. We will see the difference between a hard mount and a soft mount.

Soft mount:- Suppose you have mounted the NFS share using a soft mount. When a program requests a file, the NFS daemon will try to retrieve the data from the NFS server. If it doesn't get any response from the NFS server, due to some failure or crash on the server, then the NFS client reports an error to the process on the client machine that requested the file access. The advantage is fast responsiveness: it doesn't wait for the NFS server to respond. The main disadvantage of this method is data corruption or loss of data, so this is not the recommended option to use.

 [root@sadeek ~]# showmount -e
Export list for
/nfs *
Soft mounting (temporary mounting):-
[root@sadeek ~]# mount  /mnt/

Hard mounting:- If you have mounted the NFS share with a hard mount, the client will repeatedly try to connect to the server. Once the server is back online, the program will continue to execute undisturbed from the state it was in during the crash. We can use the mount option "intr", which allows NFS requests to be interrupted if the server goes down or cannot be reached.

Hard mounting (permanent mounting):-
[root@sadeek~]# mount  -o rw,hard,intr  /mnt
[root@sadeek ~]# vim /etc/fstab
       /mnt                         nfs    defaults    0 0
We can mount every entry in fstab, and so check the NFS share, with the help of the command below:
[root@sushee ~]# mount -a
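Putting the two modes side by side, the full mount commands might look like the following sketch (nfsserver and the /nfs export are placeholders; use your own server name and export path):

```
# soft mount: fails fast, but risks data loss (nfsserver:/nfs is a placeholder)
mount -o soft nfsserver:/nfs /mnt
# hard mount with intr: retries until the server returns, but can be interrupted
mount -o rw,hard,intr nfsserver:/nfs /mnt
```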