Friday, November 11, 2011

Hadoop: Apache Pig


Apache Pig is a platform for analyzing large data sets that consists of a high-level language for expressing data analysis programs, coupled with infrastructure for evaluating these programs. The salient property of Pig programs is that their structure is amenable to substantial parallelization, which in turn enables them to handle very large data sets.
At the present time, Pig's infrastructure layer consists of a compiler that produces sequences of Map-Reduce programs, for which large-scale parallel implementations already exist (e.g., the Hadoop subproject). Pig's language layer currently consists of a textual language called Pig Latin, which has the following key properties:
  • Ease of programming. It is trivial to achieve parallel execution of simple, "embarrassingly parallel" data analysis tasks. Complex tasks composed of multiple interrelated data transformations are explicitly encoded as data flow sequences, making them easy to write, understand, and maintain.
  • Optimization opportunities. The way in which tasks are encoded permits the system to optimize their execution automatically, allowing the user to focus on semantics rather than efficiency.
  • Extensibility. Users can create their own functions to do special-purpose processing.

Installation Method

Requirements

Unix and Windows users need the following:
  1. Hadoop 0.20.2 - http://hadoop.apache.org/common/releases.html
  2. Java 1.6 - http://java.sun.com/javase/downloads/index.jsp (set JAVA_HOME to the root of your Java installation)
  3. Ant 1.7 - http://ant.apache.org/ (optional, for builds)
  4. JUnit 4.5 - http://junit.sourceforge.net/ (optional, for unit tests)
Windows users need to install Cygwin and the Perl package: http://www.cygwin.com/

Beginning Pig


Download Pig

To get a Pig distribution, download a recent stable release from one of the Apache Download Mirrors (see Pig Releases).
Unpack the downloaded Pig distribution. The Pig script is located in the bin directory (/pig-n.n.n/bin/pig).
Add /pig-n.n.n/bin to your path. Use export (bash,sh,ksh) or setenv (tcsh,csh). For example:
$ export PATH=/<my-path-to-pig>/pig-n.n.n/bin:$PATH
Try the following command, to get a list of Pig commands:
$ pig -help
Try the following command, to start the Grunt shell:
$ pig 

Run Modes

Pig has two run modes or exectypes:
  • Local Mode - To run Pig in local mode, you need access to a single machine.
  • Mapreduce Mode - To run Pig in mapreduce mode, you need access to a Hadoop cluster and HDFS installation. Pig will automatically allocate and deallocate a 15-node cluster.
You can run the Grunt shell, Pig scripts, or embedded programs using either mode.

Grunt Shell

Use Pig's interactive shell, Grunt, to enter Pig commands manually. See the Sample Code for instructions about the passwd file used in the example.
You can also run or execute script files from the Grunt shell. See the run and exec commands.
Local Mode
$ pig -x local
Mapreduce Mode
$ pig
or
$ pig -x mapreduce
For either mode, the Grunt shell is invoked and you can enter commands at the prompt. The results are displayed to your terminal screen (if DUMP is used) or to a file (if STORE is used).
grunt> A = load 'passwd' using PigStorage(':'); 
grunt> B = foreach A generate $0 as id; 
grunt> dump B; 
grunt> store B into 'id.out'; 
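The data flow in the Grunt session above is simple enough to mimic outside Pig. The following Python sketch (a hypothetical equivalent, not part of the Pig distribution) shows what PigStorage(':') and `generate $0 as id` actually do to each passwd line:

```python
# Hypothetical Python sketch of the Grunt session above:
# PigStorage(':') splits each line on ':', and "foreach A generate $0 as id"
# keeps only the first field (the user ID in /etc/passwd).

def load_pigstorage(lines, delimiter=':'):
    """Mimic: A = load 'passwd' using PigStorage(':');"""
    return [line.rstrip('\n').split(delimiter) for line in lines]

def project_first_field(records):
    """Mimic: B = foreach A generate $0 as id;"""
    return [record[0] for record in records]

passwd_lines = [
    "root:x:0:0:root:/root:/bin/bash",
    "daemon:x:1:1:daemon:/usr/sbin:/bin/sh",
]
A = load_pigstorage(passwd_lines)
B = project_first_field(A)
print(B)  # DUMP B -> ['root', 'daemon']
```

The same two-step structure (load with a field delimiter, then project a column) is what the id.pig script and the embedded Java programs below express in Pig Latin.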

Script Files

Use script files to run Pig commands as batch jobs. See the Sample Code for instructions about the passwd file and the script file (id.pig) used in the example.
Local Mode
$ pig -x local id.pig
Mapreduce Mode
$ pig id.pig
or
$ pig -x mapreduce id.pig
For either mode, the Pig Latin statements are executed and the results are displayed to your terminal screen (if DUMP is used) or to a file (if STORE is used).

Advanced Pig


Build Pig

To build pig, do the following:
  1. Check out the Pig code from SVN: svn co http://svn.apache.org/repos/asf/hadoop/pig/trunk.
  2. Build the code from the top directory: ant. If the build is successful, you should see the pig.jar created in that directory.
  3. Validate your pig.jar by running a unit test: ant test

Environment Variables and Properties

The Pig environment variables are described in the Pig script file, located in the /pig-n.n.n/bin directory.
The Pig properties file, pig.properties, is located in the /pig-n.n.n/conf directory. You can specify an alternate location using the PIG_CONF_DIR environment variable.

Run Modes

See Run Modes.

Embedded Programs

Use the embedded option to embed Pig commands in a host language and run the program. See the Sample Code for instructions about the passwd file and java files (idlocal.java, idmapreduce.java) used in the examples.
Local Mode
From your current working directory, compile the program:
$ javac -cp pig.jar idlocal.java
Note: idlocal.class is written to your current working directory. Include “.” in the class path when you run the program.
From your current working directory, run the program:
Unix:   $ java -cp pig.jar:. idlocal
Cygwin: $ java -cp '.;pig.jar' idlocal
To view the results, check the output file, id.out.
Mapreduce Mode
Point $HADOOPDIR to the directory that contains the hadoop-site.xml file. Example:
$ export HADOOPDIR=/yourHADOOPsite/conf 
From your current working directory, compile the program:
$ javac -cp pig.jar idmapreduce.java
Note: idmapreduce.class is written to your current working directory. Include “.” in the class path when you run the program.
From your current working directory, run the program:
Unix:   $ java -cp pig.jar:.:$HADOOPDIR idmapreduce
Cygwin: $ java -cp '.;pig.jar;$HADOOPDIR' idmapreduce
To view the results, check the idout directory on your Hadoop system.

Sample Code

The sample code is based on Pig Latin statements that extract all user IDs from the /etc/passwd file.
Copy the /etc/passwd file to your local working directory.
id.pig
For the Grunt Shell and script files.
A = load 'passwd' using PigStorage(':'); 
B = foreach A generate $0 as id;
dump B; 
store B into 'id.out';
idlocal.java
For embedded programs.
import java.io.IOException;
import org.apache.pig.PigServer;

public class idlocal {
    public static void main(String[] args) {
        try {
            PigServer pigServer = new PigServer("local");
            runIdQuery(pigServer, "passwd");
        } catch (Exception e) {
            e.printStackTrace(); // report failures instead of swallowing them
        }
    }

    public static void runIdQuery(PigServer pigServer, String inputFile) throws IOException {
        pigServer.registerQuery("A = load '" + inputFile + "' using PigStorage(':');");
        pigServer.registerQuery("B = foreach A generate $0 as id;");
        pigServer.store("B", "id.out");
    }
}
idmapreduce.java
For embedded programs.
import java.io.IOException;
import org.apache.pig.PigServer;

public class idmapreduce {
    public static void main(String[] args) {
        try {
            PigServer pigServer = new PigServer("mapreduce");
            runIdQuery(pigServer, "passwd");
        } catch (Exception e) {
            e.printStackTrace(); // report failures instead of swallowing them
        }
    }

    public static void runIdQuery(PigServer pigServer, String inputFile) throws IOException {
        pigServer.registerQuery("A = load '" + inputFile + "' using PigStorage(':');");
        pigServer.registerQuery("B = foreach A generate $0 as id;");
        pigServer.store("B", "idout");
    }
}

Pig Tutorial

Overview

The Pig tutorial shows you how to run two Pig scripts in local mode and mapreduce mode.
  • Local Mode: To run the scripts in local mode, no Hadoop or HDFS installation is required. All files are installed and run from your local host and file system.
  • Mapreduce Mode: To run the scripts in mapreduce mode, you need access to a Hadoop cluster and HDFS installation.
The Pig tutorial file (tutorial/pigtutorial.tar.gz in the Pig distribution) includes the Pig JAR file (pig.jar) and the tutorial files (tutorial.jar, Pig scripts, log files). These files work with Hadoop 0.20.2 and include everything you need to run the Pig scripts.
To get started, follow these basic steps:
  1. Install Java
  2. Install Pig
  3. Run the Pig scripts - in Local or Hadoop mode

Java Installation

Make sure your run-time environment includes the following:
  • Java 1.6 or higher (preferably from Sun)
  • The JAVA_HOME environment variable is set to the root of your Java installation.

Pig Installation

To install Pig, do the following:
  1. Download the Pig tutorial file to your local directory.
  2. Unzip the Pig tutorial file (the files are stored in a newly created directory, pigtmp).
    $ tar -xzf pigtutorial.tar.gz
    
  3. Move to the pigtmp directory.
  4. Review the contents of the Pig tutorial file.
  5. Copy the pig.jar file to the appropriate directory on your system. For example: /home/me/pig.
  6. Create an environment variable, PIGDIR, and point it to your directory; for example, export PIGDIR=/home/me/pig (bash, sh) or setenv PIGDIR /home/me/pig (tcsh, csh).

Running the Pig Scripts in Local Mode

To run the Pig scripts in local mode, do the following:
  1. Set the maximum memory for Java, if necessary. For example:
    java -Xmx256m -cp pig.jar org.apache.pig.Main -x local script1-local.pig
    java -Xmx256m -cp pig.jar org.apache.pig.Main -x local script2-local.pig
    
  2. Move to the pigtmp directory.
  3. Review Pig Script 1 and Pig Script 2.
  4. Execute the following command (using either script1-local.pig or script2-local.pig).
    $ java -cp $PIGDIR/pig.jar org.apache.pig.Main -x local script1-local.pig
    
  5. Review the result files (the part-r-00000 file in the script's output directory).
    The output may contain a few Hadoop warnings which can be ignored:
    2010-04-08 12:55:33,642 [main] INFO  org.apache.hadoop.metrics.jvm.JvmMetrics 
    - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
    

Running the Pig Scripts in Mapreduce Mode

To run the Pig scripts in mapreduce mode, do the following:
  1. Move to the pigtmp directory.
  2. Review Pig Script 1 and Pig Script 2.
  3. Copy the excite.log.bz2 file from the pigtmp directory to the HDFS directory.
    $ hadoop fs -copyFromLocal excite.log.bz2 .
    
  4. Set the HADOOP_CONF_DIR environment variable to the location of your core-site.xml, hdfs-site.xml and mapred-site.xml files.
  5. Execute the following command (using either script1-hadoop.pig or script2-hadoop.pig):
    $ java -cp $PIGDIR/pig.jar:$HADOOP_CONF_DIR  org.apache.pig.Main script1-hadoop.pig
    
  6. Review the result files, located in the script1-hadoop-results or script2-hadoop-results HDFS directory:
    $ hadoop fs -ls script1-hadoop-results
    $ hadoop fs -cat 'script1-hadoop-results/*' | less
    

Pig Tutorial File

The contents of the Pig tutorial file (pigtutorial.tar.gz) are described here.
File                Description
pig.jar             Pig JAR file
tutorial.jar        User-defined functions (UDFs) and Java classes
script1-local.pig   Pig Script 1, Query Phrase Popularity (local mode)
script1-hadoop.pig  Pig Script 1, Query Phrase Popularity (Hadoop cluster)
script2-local.pig   Pig Script 2, Temporal Query Phrase Popularity (local mode)
script2-hadoop.pig  Pig Script 2, Temporal Query Phrase Popularity (Hadoop cluster)
excite-small.log    Log file, Excite search engine (local mode)
excite.log.bz2      Log file, Excite search engine (Hadoop cluster)
The user-defined functions (UDFs) are described here.
UDF             Description
ExtractHour     Extracts the hour from the record.
NGramGenerator  Composes n-grams from the set of words.
NonURLDetector  Removes the record if the query field is empty or a URL.
ScoreGenerator  Calculates a "popularity" score for the n-gram.
ToLower         Changes the query field to lowercase.
TutorialUtil    Divides the query string into a set of words.
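To make the record-level work of these UDFs concrete, here is a hypothetical Python sketch of two of them, ExtractHour and NGramGenerator. The real implementations are Java classes inside tutorial.jar; the `max_n=2` limit below is an assumption for illustration, not the actual UDF parameter:

```python
# Hypothetical Python equivalents of two tutorial UDFs (the real ones are
# Java classes in tutorial.jar; max_n=2 is an illustrative assumption).

def extract_hour(timestamp):
    """ExtractHour: pull HH out of a YYMMDDHHMMSS timestamp."""
    return timestamp[6:8]

def ngram_generator(query, max_n=2):
    """NGramGenerator: compose word n-grams (here, up to 2 words) from a query."""
    words = query.split()
    ngrams = set()
    for n in range(1, max_n + 1):
        for i in range(len(words) - n + 1):
            ngrams.add(' '.join(words[i:i + n]))
    return sorted(ngrams)

print(extract_hour('970916072134'))   # -> '07'
print(ngram_generator('free music'))  # -> ['free', 'free music', 'music']
```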

Pig Script 1: Query Phrase Popularity

The Query Phrase Popularity script (script1-local.pig or script1-hadoop.pig) processes a search query log file from the Excite search engine and finds search phrases that occur with particularly high frequency during certain times of the day.
The script is shown here:
  • Register the tutorial JAR file so that the included UDFs can be called in the script.
REGISTER ./tutorial.jar; 
  • Use the PigStorage function to load the excite log file (excite.log or excite-small.log) into the "raw" bag as an array of records with the fields user, time, and query.
raw = LOAD 'excite.log' USING PigStorage('\t') AS (user, time, query);
  • Call the NonURLDetector UDF to remove records if the query field is empty or a URL.
clean1 = FILTER raw BY org.apache.pig.tutorial.NonURLDetector(query);
  • Call the ToLower UDF to change the query field to lowercase.
clean2 = FOREACH clean1 GENERATE user, time, org.apache.pig.tutorial.ToLower(query) as query;
  • Because the log file only contains queries for a single day, we are only interested in the hour. The excite query log timestamp format is YYMMDDHHMMSS. Call the ExtractHour UDF to extract the hour (HH) from the time field.
houred = FOREACH clean2 GENERATE user, org.apache.pig.tutorial.ExtractHour(time) as hour, query;
  • Call the NGramGenerator UDF to compose the n-grams of the query.
ngramed1 = FOREACH houred GENERATE user, hour, flatten(org.apache.pig.tutorial.NGramGenerator(query)) as ngram;
  • Use the DISTINCT operator to get the unique n-grams for all records.
ngramed2 = DISTINCT ngramed1;
  • Use the GROUP operator to group records by n-gram and hour.
hour_frequency1 = GROUP ngramed2 BY (ngram, hour);
  • Use the COUNT function to get the count (occurrences) of each n-gram.
hour_frequency2 = FOREACH hour_frequency1 GENERATE flatten($0), COUNT($1) as count;
  • Use the GROUP operator to group records by n-gram only. Each group now corresponds to a distinct n-gram and has the count for each hour.
uniq_frequency1 = GROUP hour_frequency2 BY group::ngram;
  • For each group, identify the hour in which this n-gram is used with a particularly high frequency. Call the ScoreGenerator UDF to calculate a "popularity" score for the n-gram.
uniq_frequency2 = FOREACH uniq_frequency1 GENERATE flatten($0), flatten(org.apache.pig.tutorial.ScoreGenerator($1));
  • Use the FOREACH-GENERATE operator to assign names to the fields.
uniq_frequency3 = FOREACH uniq_frequency2 GENERATE $1 as hour, $0 as ngram, $2 as score, $3 as count, $4 as mean;
  • Use the FILTER operator to remove all records with a score less than or equal to 2.0.
filtered_uniq_frequency = FILTER uniq_frequency3 BY score > 2.0;
  • Use the ORDER operator to sort the remaining records by hour and score.
ordered_uniq_frequency = ORDER filtered_uniq_frequency BY (hour, score);
  • Use the PigStorage function to store the results. The output file contains a list of n-grams with the following fields: hour, ngram, score, count, and mean.
STORE ordered_uniq_frequency INTO '/tmp/tutorial-results' USING PigStorage(); 
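The heart of the script is the GROUP/COUNT/score sequence. The toy Python simulation below shows the idea on a handful of (hour, ngram) records; the actual ScoreGenerator formula lives in tutorial.jar, so the z-score-style measure here (count minus the per-ngram mean, divided by the standard deviation) is an assumption that matches the "particularly high frequency" intent:

```python
from collections import Counter
from statistics import mean, pstdev

# Toy simulation of hour_frequency1/2 and the scoring steps above.
# The score formula is an assumed z-score; the real one is ScoreGenerator
# in tutorial.jar.

records = [  # (hour, ngram) pairs after the DISTINCT step
    ('00', 'music'), ('00', 'news'), ('07', 'music'),
    ('12', 'music'), ('12', 'music'),  # same (hour, ngram) from different users
]

# hour_frequency2: count occurrences of each (ngram, hour) pair
hour_frequency2 = Counter((ngram, hour) for hour, ngram in records)

# uniq_frequency1: regroup by ngram, collecting per-hour counts
by_ngram = {}
for (ngram, hour), count in hour_frequency2.items():
    by_ngram.setdefault(ngram, []).append((hour, count))

def scores(hour_counts):
    """Score each hour's count against the ngram's mean (assumed formula)."""
    counts = [c for _, c in hour_counts]
    m, sd = mean(counts), pstdev(counts)
    return [(hour, (c - m) / sd if sd else 0.0) for hour, c in hour_counts]

for ngram, hour_counts in by_ngram.items():
    print(ngram, scores(hour_counts))
```

An n-gram whose count in one hour sits far above its mean across all hours gets a high score, which is exactly what the FILTER on score > 2.0 then selects.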

Pig Script 2: Temporal Query Phrase Popularity

The Temporal Query Phrase Popularity script (script2-local.pig or script2-hadoop.pig) processes a search query log file from the Excite search engine and compares the frequency of occurrence of search phrases across two time periods separated by twelve hours.
The script is shown here:
  • Register the tutorial JAR file so that the user-defined functions (UDFs) can be called in the script.
REGISTER ./tutorial.jar;
  • Use the PigStorage function to load the excite log file (excite.log or excite-small.log) into the "raw" bag as an array of records with the fields user, time, and query.
raw = LOAD 'excite.log' USING PigStorage('\t') AS (user, time, query);
  • Call the NonURLDetector UDF to remove records if the query field is empty or a URL.
clean1 = FILTER raw BY org.apache.pig.tutorial.NonURLDetector(query);
  • Call the ToLower UDF to change the query field to lowercase.
clean2 = FOREACH clean1 GENERATE user, time, org.apache.pig.tutorial.ToLower(query) as query;
  • Because the log file only contains queries for a single day, we are only interested in the hour. The excite query log timestamp format is YYMMDDHHMMSS. Call the ExtractHour UDF to extract the hour from the time field.
houred = FOREACH clean2 GENERATE user, org.apache.pig.tutorial.ExtractHour(time) as hour, query;
  • Call the NGramGenerator UDF to compose the n-grams of the query.
ngramed1 = FOREACH houred GENERATE user, hour, flatten(org.apache.pig.tutorial.NGramGenerator(query)) as ngram;
  • Use the DISTINCT operator to get the unique n-grams for all records.
ngramed2 = DISTINCT ngramed1;
  • Use the GROUP operator to group the records by n-gram and hour.
hour_frequency1 = GROUP ngramed2 BY (ngram, hour);
  • Use the COUNT function to get the count (occurrences) of each n-gram.
hour_frequency2 = FOREACH hour_frequency1 GENERATE flatten($0), COUNT($1) as count;
  • Use the FOREACH-GENERATE operator to assign names to the fields.
hour_frequency3 = FOREACH hour_frequency2 GENERATE $0 as ngram, $1 as hour, $2 as count;
  • Use the FILTER operator to get the n-grams for hour '00'.
hour00 = FILTER hour_frequency2 BY hour eq '00';
  • Use the FILTER operator to get the n-grams for hour '12'.
hour12 = FILTER hour_frequency3 BY hour eq '12';
  • Use the JOIN operator to get the n-grams that appear in both hours.
same = JOIN hour00 BY $0, hour12 BY $0;
  • Use the FOREACH-GENERATE operator to record their frequency.
same1 = FOREACH same GENERATE hour_frequency2::hour00::group::ngram as ngram, $2 as count00, $5 as count12;
  • Use the PigStorage function to store the results. The output file contains a list of n-grams with the following fields: ngram, count00, and count12.
STORE same1 INTO '/tmp/tutorial-join-results' USING PigStorage();
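The JOIN step is the novel part of this second script. A toy Python sketch of it (hypothetical data, not from the tutorial logs) shows how records from the two hourly filters are matched on their n-gram key, keeping both counts:

```python
# Toy sketch of: same = JOIN hour00 BY $0, hour12 BY $0;
# followed by the FOREACH that keeps ngram and the two counts.
# The data below is made up for illustration.

hour00 = [('music', '00', 4), ('news', '00', 2)]   # (ngram, hour, count)
hour12 = [('music', '12', 9), ('sports', '12', 5)]

# Pair up records that share an ngram; unmatched ngrams drop out,
# just as in Pig's default (inner) JOIN.
same1 = [
    (n00, c00, c12)
    for n00, _, c00 in hour00
    for n12, _, c12 in hour12
    if n00 == n12
]
print(same1)  # -> [('music', 4, 9)]
```

Only 'music' appears at both hours, so only it survives the join; comparing count00 with count12 then reveals how its popularity shifted across the twelve-hour gap.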
