Send Email using PHP

Download PHPMailer from https://github.com/PHPMailer/PHPMailer.git

 

$ git clone https://github.com/PHPMailer/PHPMailer.git

 

<?php
require_once("PHPMailer/PHPMailerAutoload.php");

$mail = new PHPMailer();       // create a new object
$mail->IsSMTP();               // enable SMTP
$mail->SMTPDebug = 1;          // debugging: 1 = client messages, 2 = client and server messages
$mail->SMTPAuth = true;        // authentication enabled
$mail->SMTPSecure = 'ssl';     // secure transfer enabled; REQUIRED for Gmail
$mail->Host = "smtp.gmail.com";
$mail->Port = 465;             // or 587
$mail->IsHTML(true);
$mail->Username = "username@gmail.com";
$mail->Password = "password";
$mail->SetFrom("username@gmail.com");
$mail->Subject = "Test";
$mail->Body = "hello";
$mail->AddAddress("username@gmail.com");

if (!$mail->Send()) {
    echo "Mailer Error: " . $mail->ErrorInfo;
} else {
    echo "Message has been sent";
}
?>

Hadoop pseudo distributed mode

I am just going through the steps to set up a Hadoop server in pseudo-distributed mode.

I assume that you have already downloaded the Hadoop tar, untarred the package, and moved it to /usr/local/hadoop

Make sure you have already set up the Hadoop environment. If you missed it, check out https://prabhugs.wordpress.com/2016/01/13/hadoop-on-ubuntu-14-04

Once the Hadoop environment is ready, follow the steps below.

$ sudo chown -R hduser:hadoop /usr/local/hadoop

$ vi $HADOOP_HOME/etc/hadoop/core-site.xml

Change the following contents of the file,

<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://yourIPaddress:54310</value>
</property>

 

$ sudo mkdir -p /app/hadoop/tmp

$ sudo chown hduser:hadoop /app/hadoop/tmp

 

Make sure you have an entry for your IP address in your /etc/hosts file.
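For example, the entry might look like the line below (the address and hostname here are hypothetical; use your machine's own):

```
192.168.1.10    hadoop-node
```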


 

Edit the hdfs-site.xml file to change the below values

$ vi $HADOOP_HOME/etc/hadoop/hdfs-site.xml

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/usr/local/hadoop_store/hdfs/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/usr/local/hadoop_store/hdfs/datanode</value>
</property>


 

Add these lines to the end of your .bashrc file (remember, you are doing all of this as the hduser user).

#HADOOP VARIABLES START
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
export PATH=$PATH:$JAVA_HOME/bin
alias jps='/usr/lib/jvm/java-7-openjdk-amd64/bin/jps'
export HADOOP_HOME=/usr/local/hadoop/hadoop-2.6.0
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
#HADOOP VARIABLES END
export HIVE_HOME=/usr/local/hadoop/hadoop-2.6.0/hive-0.9.0-bin
export PATH=$PATH:$HIVE_HOME/bin
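To sanity-check that the variables took effect, you can source the file and echo them. The sketch below writes the two key lines to a scratch file in /tmp (paths taken from this post) so your real ~/.bashrc is left alone:

```shell
# Write the key Hadoop variables to a scratch file and source it (sketch; paths from this post).
cat > /tmp/hadoop_env.sh <<'EOF'
export HADOOP_HOME=/usr/local/hadoop/hadoop-2.6.0
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
EOF
. /tmp/hadoop_env.sh
echo "HADOOP_HOME=$HADOOP_HOME"
# Confirm the bin directory made it onto PATH.
case ":$PATH:" in
  *":$HADOOP_HOME/bin:"*) echo "bin is on PATH" ;;
esac
```

After sourcing the real ~/.bashrc, running hadoop version is the end-to-end check.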

 


 

$ source ~/.bashrc

$ sudo mkdir -p /usr/local/hadoop_store/hdfs/namenode

$ sudo mkdir -p /usr/local/hadoop_store/hdfs/datanode

$ sudo chown -R hduser:hadoop /usr/local/hadoop_store

Now format the hadoop filesystem,

$ hadoop namenode -format

Upon successful formatting, you should see something like the following at the end.

16/01/07 18:49:02 INFO common.Storage: Storage directory /usr/local/hadoop_store/hdfs/namenode has been successfully formatted.
16/01/07 18:49:02 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
16/01/07 18:49:02 INFO util.ExitUtil: Exiting with status 0

 

Everything is set now; we can start the HDFS and YARN daemons.

$ start-dfs.sh

$ start-yarn.sh

Enter hduser's password when prompted.

 

We can verify the running Java processes with the jps command.

$ jps
28823 SecondaryNameNode
29195 Jps
28957 ResourceManager
28485 NameNode
28639 DataNode

 

Create directory in HDFS

$ hadoop fs -mkdir -p /user/hduser

You should be able to see the directory contents by using the ls command,

$ hadoop fs -ls

or

$ hadoop fs -ls hdfs://yourIPaddress:54310/user

Found 1 items
drwxr-xr-x   - hduser supergroup          0 2016-01-07 18:51 hdfs://yourIPaddress:54310/user/hduser

 

 

Problems and solutions

 

WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable

$ export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"
$ export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native

 

No such file or directory upon ls

$ hadoop fs -mkdir -p /user/hduser

 

org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hduser/README.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.

Solution 1:

$ stop-dfs.sh
$ stop-yarn.sh
$ sudo rm -rf /tmp/*
$ start-dfs.sh
$ start-yarn.sh

Solution 2:

$ sudo rm -r /app/hadoop/tmp
$ sudo mkdir -p /app/hadoop/tmp
$ sudo chown hduser:hadoop /app/hadoop/tmp
$ sudo chmod 750 /app/hadoop/tmp
$ start-dfs.sh

After running the jps command, DataNode should now appear in the list.

 

 

Hadoop on Ubuntu (14.04)

We will go through the steps required to set up a single-node Hadoop cluster backed by the Hadoop Distributed File System (HDFS), running on Ubuntu 14.04 Linux. HDFS provides high-throughput access to application data and is suitable for applications that have large data sets.

Steps:

  1. Create a dedicated user for hadoop
  2. Java should be installed
  3. Setup ssh and generate key
  4. Set environment variables
  5. Configure Java alternatives
  6. Download Hadoop
  7. Setup and configure Hadoop environment
  8. Verify and run Hadoop

 

Create a user for Hadoop


$ sudo addgroup hadoop
$ sudo adduser --ingroup hadoop hduser

(adduser will prompt you to set hduser's password.)

 

Install Java

Java is the main prerequisite for Hadoop. First of all, verify that Java is present on your system with the "java -version" command.

$ java -version

If Java is working as expected, you should see something similar to,

java version "1.7.0_79"
OpenJDK Runtime Environment (IcedTea 2.5.6) (7u79-2.5.6-0ubuntu1.14.04.1)
OpenJDK 64-Bit Server VM (build 24.79-b02, mixed mode)

 

Setup ssh and generate key

The following commands generate a key pair using SSH, copy the public key from id_rsa.pub to authorized_keys, and give the owner read and write permissions on the authorized_keys file.
$ su - hduser
$ ssh-keygen -t rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys
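The 0600 mode matters: with StrictModes (the default), sshd ignores an authorized_keys file that anyone other than the owner can write to. A quick sketch on a throwaway file shows what the resulting mode looks like (stat -c is GNU coreutils, so Linux-specific):

```shell
# Create a throwaway file, restrict it to owner read/write, and inspect the octal mode.
f=$(mktemp)
chmod 0600 "$f"
stat -c '%a' "$f"    # octal permission bits
rm -f "$f"
```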

 

Set environment variables

To set up the PATH and JAVA_HOME variables, add the following lines to the ~/.bashrc file.
export JAVA_HOME=/usr/local/jdk1.7.0_79
export PATH=$PATH:$JAVA_HOME/bin

Apply all the changes
$ source ~/.bashrc

 

Configure Java alternatives

# update-alternatives --install /usr/bin/java java /usr/local/java/bin/java 2
# update-alternatives --install /usr/bin/javac javac /usr/local/java/bin/javac 2
# update-alternatives --install /usr/bin/jar jar /usr/local/java/bin/jar 2
# update-alternatives --set java /usr/local/java/bin/java
# update-alternatives --set javac /usr/local/java/bin/javac
# update-alternatives --set jar /usr/local/java/bin/jar

 

Download Hadoop

Download Hadoop from http://hadoop.apache.org/releases.html
$ su
Password:
# cd /usr/local
# wget http://apache.claz.org/hadoop/common/hadoop-2.4.1/hadoop-2.4.1.tar.gz
# tar xzf hadoop-2.4.1.tar.gz
# mkdir hadoop
# mv hadoop-2.4.1/* hadoop/
# exit

 

Setup and configure Hadoop environment

Append the following line to the ~/.bashrc file.

export HADOOP_HOME=/usr/local/hadoop

Make sure Hadoop is working fine,

$ hadoop version
Hadoop 2.6.0
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1
Compiled by jenkins on 2014-11-13T21:10Z
Compiled with protoc 2.5.0 From source with checksum 18e43357c8f927c0695f1e9522859d6a
This command was run using /usr/local/hadoop/share/hadoop/common/hadoop-common-2.6.0.jar

 

 

Install php mbstring

I was trying to document my PHP webserver code. I came across a nice open-source documentation tool for PHP, which required php-mbstring for multibyte string encoding.

php-mbstring is not available in the default yum repositories, so enable the extras and optional repositories first.

$ sudo yum-config-manager --enable rhui-REGION-rhel-server-extras rhui-REGION-rhel-server-optional

$ sudo yum install php-mbstring

$ sudo service httpd restart

Install chrome in Fedora 21

Change to super user,
su -
Create the google-chrome repo:
cat << EOF > /etc/yum.repos.d/googlechrome.repo
[googlechrome]
name=googlechrome \$basearch
baseurl=http://dl.google.com/linux/chrome/rpm/stable/\$basearch
enabled=1
gpgcheck=1
gpgkey=https://dl-ssl.google.com/linux/linux_signing_key.pub
EOF
Install google-chrome
# yum install google-chrome-stable
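A side note on the heredoc above: $basearch is written as \$basearch so that the shell passes it through literally, leaving the variable for yum to expand at install time. A minimal sketch of the difference, using a scratch file in /tmp:

```shell
# With an unquoted heredoc delimiter, the shell expands variables unless they are escaped.
basearch=x86_64
cat > /tmp/heredoc_demo.txt <<EOF
expanded=$basearch
literal=\$basearch
EOF
cat /tmp/heredoc_demo.txt
```

The first line comes out as expanded=x86_64, while the second stays as the literal text $basearch, which is what yum needs to see.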

check writing speed on a mount

I wanted to compare the write speed of my ext3-partitioned mount and a tmpfs mount in Fedora 20, and the dd command came in pretty handy.

Here’s what I did,

$ dd if=/dev/zero of=/tmp/t bs=4k count=10000
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 0.0130912 s, 3.1 GB/s

 

$ dd if=/dev/zero of=/home/prabhugs/t bs=4k count=10000
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 0.0602942 s, 679 MB/s

 

So there goes the difference: 3.1 GB/s on tmpfs (/tmp) versus 679 MB/s on ext3 (/home).
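The figures dd prints are just bytes divided by elapsed seconds, in decimal megabytes. As a sanity check, this awk one-liner reproduces the 679 MB/s result from the ext3 run above:

```shell
# Throughput = bytes / seconds; dd reports decimal units (1 MB = 1,000,000 bytes).
awk 'BEGIN { printf "%.0f MB/s\n", 40960000 / 0.0602942 / 1000000 }'
```

Note that at these small sizes the writes mostly land in the page cache; adding oflag=direct or conv=fdatasync to dd gives a number closer to the device's real write speed.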