Send Email using PHP

Download PHPMailer from


$ git clone




$mail = new PHPMailer(); // create a new object

$mail->IsSMTP(); // enable SMTP

$mail->SMTPDebug = 1; // debugging: 0 = off, 1 = client messages, 2 = client and server messages

$mail->SMTPAuth = true; // authentication enabled

$mail->SMTPSecure = 'ssl'; // secure transfer enabled; REQUIRED for Gmail

$mail->Host = "";

$mail->Port = 465; // or 587

$mail->Username = "";

$mail->Password = "password";

$mail->SetFrom(""); // sender address

$mail->AddAddress(""); // recipient address

$mail->Subject = "Test";

$mail->Body = "hello";

if(!$mail->Send()) {
    echo "Mailer Error: " . $mail->ErrorInfo;
} else {
    echo "Message has been sent";
}




Hadoop pseudo distributed mode

I am just going through the steps to set up a Hadoop server in pseudo-distributed mode.

I assume that you have already downloaded the Hadoop tarball, untarred the package, and moved it to /usr/local/hadoop.

Make sure you have already set up the Hadoop environment. If you missed it, check out

Once the Hadoop environment is ready, follow the steps below.

$ sudo chown -R hduser:hadoop /usr/local/hadoop

$ vi $HADOOP_HOME/etc/hadoop/core-site.xml

Change the following contents of the file,


[Screenshot: core-site.xml configuration]
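The screenshot is not reproduced here, so as a rough guide only: a minimal core-site.xml for pseudo-distributed mode typically sets the temporary directory and the default filesystem URI. The values below are assumptions inferred from this post (/app/hadoop/tmp is created in the next step, and port 54310 matches the HDFS URI used later), not a copy of the screenshot:

```xml
<configuration>
  <!-- Base for Hadoop's temporary files; must exist and be owned by hduser -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
  </property>
  <!-- Default filesystem URI; the port matches the hdfs:// URIs used below -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
  </property>
</configuration>
```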


$ sudo mkdir -p /app/hadoop/tmp

$ sudo chown hduser:hadoop /app/hadoop/tmp


Make sure you have an entry for your IP address in your /etc/hosts file.

[Screenshot: /etc/hosts entry]
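As an illustration (the IP address and hostname here are placeholders, not the values from the screenshot), the entry looks something like:

```
192.168.1.10    your-hostname
```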


Edit the hdfs-site.xml file to change the below values

$ vi $HADOOP_HOME/etc/hadoop/hdfs-site.xml



[Screenshot: hdfs-site.xml configuration]
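Again the screenshot is not reproduced, so as a sketch: a typical pseudo-distributed hdfs-site.xml sets the replication factor to 1 and points the NameNode and DataNode at local directories. The paths below are assumptions that match the /usr/local/hadoop_store directories created a few steps later:

```xml
<configuration>
  <!-- Single node, so keep only one copy of each block -->
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <!-- These paths match the directories created below -->
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop_store/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop_store/hdfs/datanode</value>
  </property>
</configuration>
```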


Add these lines to the end of your .bashrc file (remember that you are doing all of this as the hduser user).

export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
export PATH=$PATH:$JAVA_HOME/bin
alias jps='/usr/lib/jvm/java-7-openjdk-amd64/bin/jps'
export HADOOP_HOME=/usr/local/hadoop/hadoop-2.6.0
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
export HIVE_HOME=/usr/local/hadoop/hadoop-2.6.0/hive-0.9.0-bin
export PATH=$PATH:$HIVE_HOME/bin




$ source ~/.bashrc

$ sudo mkdir -p /usr/local/hadoop_store/hdfs/namenode

$ sudo mkdir -p /usr/local/hadoop_store/hdfs/datanode

$ sudo chown -R hduser:hadoop /usr/local/hadoop_store

Now format the hadoop filesystem,

$ hadoop namenode -format

Upon successful formatting you should see something like below, at the end.

16/01/07 18:49:02 INFO common.Storage: Storage directory /usr/local/hadoop_store/hdfs/namenode has been successfully formatted.
16/01/07 18:49:02 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
16/01/07 18:49:02 INFO util.ExitUtil: Exiting with status 0


It's all set now; we may start the HDFS and YARN daemons.

$ start-dfs.sh
$ start-yarn.sh

Enter hduser's password when prompted.


We can recheck the running Java processes with the jps command.

$ jps
28823 SecondaryNameNode
29195 Jps
28957 ResourceManager
28485 NameNode
28639 DataNode


Create directory in HDFS

$ hadoop fs -mkdir -p /user/hduser

You should be able to see the directory contents by using the ls command,

$ hadoop fs -ls


$ hadoop fs -ls hdfs://yourIPaddress:54310/user

Found 1 items
drwxr-xr-x   - hduser supergroup          0 2016-01-07 18:51 hdfs://yourIPaddress:54310/user/hduser



Problems and solutions


WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable

$ export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"


No such file or directory upon ls

$ hadoop fs -mkdir -p /user/hduser


org.apache.hadoop.ipc.RemoteException: File /user/hduser/README.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.

Solution 1:

$ sudo rm -rf /tmp/*

Solution 2:

$ sudo rm -r /app/hadoop/tmp
$ sudo mkdir -p /app/hadoop/tmp
$ sudo chown hduser:hadoop /app/hadoop/tmp
$ sudo chmod 750 /app/hadoop/tmp

After restarting HDFS, the DataNode should be visible in the jps output.



Hadoop on Ubuntu (14.04)

We will go through the required steps for setting up a single-node Hadoop cluster backed by the Hadoop Distributed File System, running on Ubuntu (14.04) Linux. HDFS provides high-throughput access to application data and is suitable for applications that have large data sets.


  1. Create a dedicated user for hadoop
  2. Java should be installed
  3. Setup ssh and generate key
  4. Set environment variables
  5. Configure Java alternatives
  6. Download Hadoop
  7. Setup and configure Hadoop environment
  8. Verify and run Hadoop


Create a user for Hadoop

$ sudo addgroup hadoop
$ sudo adduser --ingroup hadoop hduser


Install Java

Java is the main prerequisite for Hadoop. First of all, you should verify the existence of java in your system using the command "java -version".

$ java -version

If Java is working as expected, you should see something similar to,

java version "1.7.0_79"
OpenJDK Runtime Environment (IcedTea 2.5.6) (7u79-2.5.6-0ubuntu1.14.04.1)
OpenJDK 64-Bit Server VM (build 24.79-b02, mixed mode)


Setup ssh and generate key

The following commands generate a key pair using SSH, append the public key to authorized_keys, and give the owner read and write permissions on the authorized_keys file.
$ su - hduser
$ ssh-keygen -t rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys


Set environment variables

For setting up PATH and JAVA_HOME variables, add the following commands to ~/.bashrc file.
export JAVA_HOME=/usr/local/jdk1.7.0_79
export PATH=$PATH:$JAVA_HOME/bin

Apply all the changes
$ source ~/.bashrc


Configure Java alternatives

# update-alternatives --install /usr/bin/java java /usr/local/java/bin/java 2
# update-alternatives --install /usr/bin/javac javac /usr/local/java/bin/javac 2
# update-alternatives --install /usr/bin/jar jar /usr/local/java/bin/jar 2
# update-alternatives --set java /usr/local/java/bin/java
# update-alternatives --set javac /usr/local/java/bin/javac
# update-alternatives --set jar /usr/local/java/bin/jar


Download Hadoop

Download Hadoop from

$ su
password:
# cd /usr/local
# wget hadoop-2.4.1.tar.gz
# tar xzf hadoop-2.4.1.tar.gz
# mkdir hadoop
# mv hadoop-2.4.1/* hadoop/
# exit


Setup and configure Hadoop environment

Append the following line to the ~/.bashrc file.

export HADOOP_HOME=/usr/local/hadoop

Make sure Hadoop is working fine,

$ hadoop version
Hadoop 2.6.0
Subversion -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1
Compiled by jenkins on 2014-11-13T21:10Z
Compiled with protoc 2.5.0
From source with checksum 18e43357c8f927c0695f1e9522859d6a
This command was run using /usr/local/hadoop/share/hadoop/common/hadoop-common-2.6.0.jar



Install php mbstring

I was trying to document my PHP web server code. I came across a nice open-source documentation tool for PHP which required php-mbstring for encoding support.

php-mbstring is not available by default in the yum repo.

$ sudo yum-config-manager --enable rhui-REGION-rhel-server-extras rhui-REGION-rhel-server-optional

$ sudo yum install php-mbstring

$ sudo service httpd restart

Install chrome in Fedora 21

Change to super user,

$ su -

Create the google-chrome repo:

# cat << EOF > /etc/yum.repos.d/googlechrome.repo
[google-chrome]
name=google-chrome - \$basearch
baseurl=http://dl.google.com/linux/chrome/rpm/stable/\$basearch
enabled=1
gpgcheck=1
gpgkey=https://dl.google.com/linux/linux_signing_key.pub
EOF

Install google-chrome:

# yum install google-chrome-stable

check writing speed on a mount

I wanted to compare the write speed of my ext3-partitioned mount and a tmpfs mount in Fedora 20. So yeah.. the dd command did come in pretty handy..

Here's what I did,

$ dd if=/dev/zero of=/tmp/t bs=4k count=10000
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 0.0130912 s, 3.1 GB/s


$ dd if=/dev/zero of=/home/prabhugs/t bs=4k count=10000
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 0.0602942 s, 679 MB/s


So there goes the difference: 3.1 GB/s on tmpfs versus 679 MB/s on ext3.
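One caveat worth noting: without a flush, dd mostly measures the page cache rather than the disk, which inflates the ext3 figure. A rough sketch of a more disk-honest measurement (the path /tmp/ddtest is just an example) is to add conv=fdatasync, which makes dd sync the data to the device before reporting its rate:

```shell
# Write 40 MB and force the data to the device before dd prints its rate;
# without conv=fdatasync the figure largely reflects the page cache.
dd if=/dev/zero of=/tmp/ddtest bs=4k count=10000 conv=fdatasync

# Clean up the test file afterwards.
rm /tmp/ddtest
```

On a tmpfs mount the sync is essentially free, so the gap between the two mounts narrows to the real storage difference.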