Introduction
The Pig tutorial shows you how to run two Pig scripts in Local mode and Hadoop mode.
- Local Mode: To run the scripts in local mode, no Hadoop or HDFS installation is required. All files are installed and run from your local host and file system.
- Hadoop Mode: To run the scripts in hadoop (mapreduce) mode, you need access to a Hadoop cluster and an HDFS installation, both available through the Hadoop Virtual Machine provided with this tutorial.
The Pig tutorial files are installed on the Hadoop Virtual Machine in the "/home/hadoop-user/pig" directory. The installation includes the Pig JAR file (pig.jar) and the tutorial files (tutorial.jar, Pig scripts, log files). These files work with Hadoop 0.18.0 and provide everything you need to run the Pig scripts. This Pig tutorial is also available on the Apache Pig website.
JAVA INSTALLATION (NOTE: ALREADY SET-UP ON THE HADOOP VM.)
- Java 1.6.x (from Sun) is installed on /usr/jre16.
- The JAVA_HOME environment variable is set to the root of your Java installation in the "/home/hadoop-user/.profile" file.
PIG INSTALLATION (NOTE: ALREADY SET-UP ON THE HADOOP VM.)
- The pig.jar and tutorial files are stored in "/home/hadoop-user/pig" directory.
- The PIGDIR environment variable is set to "/home/hadoop-user/pig/" in the .profile file of hadoop-user.
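Based on these notes, the environment setup in the hadoop-user .profile would look roughly like this (a sketch reconstructed from the paths above; the exact file contents are not shown in this tutorial):

```shell
# Assumed ~/.profile entries on the Hadoop VM, per the notes above
export JAVA_HOME=/usr/jre16
export PIGDIR=/home/hadoop-user/pig/
```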
PIG SCRIPTS: LOCAL MODE
To run the Pig scripts in local mode, do the following:
- Go to the /home/hadoop-user/pig directory on Hadoop VM.
- Review Pig Script 1 and Pig Script 2.
- Execute the following command (using either script1-local.pig or script2-local.pig).
- Review the result file (either script1-local-results.txt or script2-local-results.txt).
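The command itself does not appear in this copy of the tutorial. In the standard Apache Pig tutorial for this release, the local-mode invocation has roughly this shape (a reconstruction; verify the classpath against your installation):

```shell
cd /home/hadoop-user/pig
# Run Pig in local mode; pig.jar supplies the Pig runtime on the classpath
java -cp $PIGDIR/pig.jar org.apache.pig.Main -x local script1-local.pig
# Inspect the output written to the local file system
cat script1-local-results.txt
```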
PIG SCRIPTS: HADOOP MODE
To run the Pig scripts in hadoop (mapreduce) mode, do the following:
- Go to the /home/hadoop-user/pig directory on Hadoop VM.
- Review Pig Script 1 and Pig Script 2.
- Copy the excite.log.bz2 file from the pigtmp directory to the HDFS directory.
- Make sure the HADOOPSITEPATH environment variable is set to the location of your hadoop-site.xml file, that is, the "/home/hadoop-user/hadoop-tutorial-conf/" directory.
- Execute the following command (using either script1-hadoop.pig or script2-hadoop.pig).
- Review the result files (located in either the script1-hadoop-results or script2-hadoop-results HDFS directory).
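The commands are not shown in this copy of the tutorial. In the standard Apache Pig tutorial, the hadoop-mode steps look roughly like this (a reconstruction; check the paths against your cluster):

```shell
cd /home/hadoop-user/pig
# Copy the full Excite log into your HDFS home directory
hadoop fs -copyFromLocal excite.log.bz2 .
# Run Pig in mapreduce mode; HADOOPSITEPATH puts hadoop-site.xml on the classpath
java -cp $PIGDIR/pig.jar:$HADOOPSITEPATH org.apache.pig.Main script1-hadoop.pig
# Review the results in HDFS
hadoop fs -ls script1-hadoop-results
hadoop fs -cat 'script1-hadoop-results/*' | less
```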
PIG TUTORIAL FILE
The contents of the Pig tutorial are described here.
- pig.jar: Pig JAR file
- tutorial.jar: User-defined functions (UDFs) and Java classes
- script1-local.pig: Pig Script 1, Query Phrase Popularity (local mode)
- script1-hadoop.pig: Pig Script 1, Query Phrase Popularity (Hadoop cluster)
- script2-local.pig: Pig Script 2, Temporal Query Phrase Popularity (local mode)
- script2-hadoop.pig: Pig Script 2, Temporal Query Phrase Popularity (Hadoop cluster)
- excite-small.log: Log file, Excite search engine (local mode)
- excite.log.bz2: Log file, Excite search engine (Hadoop cluster)
The user-defined functions (UDFs) are described here.
- ExtractHour: Extracts the hour from the record.
- NGramGenerator: Composes n-grams from the set of words.
- NonURLDetector: Removes the record if the query field is empty or a URL.
- ScoreGenerator: Calculates a "popularity" score for the n-gram.
- ToLower: Changes the query field to lowercase.
- TutorialUtil: Divides the query string into a set of words.
PIG SCRIPT 1: QUERY PHRASE POPULARITY
The Query Phrase Popularity script (script1-local.pig or script1-hadoop.pig) processes a search query log file from the Excite search engine and finds search phrases that occur with particularly high frequency during certain times of the day.
The script performs the following steps:
- Register the tutorial JAR file so that the included UDFs can be called in the script.
- Use the PigStorage function to load the excite log file (excite.log or excite-small.log) into the “raw” bag as an array of records with the fields user, time, and query.
- Call the NonURLDetector UDF to remove records if the query field is empty or a URL.
- Call the ToLower UDF to change the query field to lowercase.
- Because the log file only contains queries for a single day, we are only interested in the hour. The excite query log timestamp format is YYMMDDHHMMSS. Call the ExtractHour UDF to extract the hour (HH) from the time field.
- Call the NGramGenerator UDF to compose the n-grams of the query.
- Use the DISTINCT command to get the unique n-grams for all records.
- Use the GROUP command to group records by n-gram and hour.
- Use the COUNT function to get the count (occurrences) of each n-gram.
- Use the GROUP command to group records by n-gram only. Each group now corresponds to a distinct n-gram and has the count for each hour.
- For each group, identify the hour in which this n-gram is used with a particularly high frequency. Call the ScoreGenerator UDF to calculate a "popularity" score for the n-gram.
- Use the FOREACH-GENERATE command to assign names to the fields.
- Use the FILTER command to remove all records with a score less than or equal to 2.0.
- Use the ORDER command to sort the remaining records by hour and score.
- Use the PigStorage function to store the results. The output file contains a list of n-grams with the following fields: hour, ngram, score, count, mean.
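Putting the steps together, the script has roughly this shape (a reconstruction from the step descriptions above, not the exact shipped file; the alias names are illustrative, while the UDF class names follow the org.apache.pig.tutorial package provided by tutorial.jar):

```pig
REGISTER ./tutorial.jar;

-- Load the log; each record has user, time, and query fields
raw = LOAD 'excite-small.log' USING PigStorage('\t') AS (user, time, query);

-- Drop records whose query field is empty or a URL
clean1 = FILTER raw BY org.apache.pig.tutorial.NonURLDetector(query);

-- Lowercase the query field
clean2 = FOREACH clean1 GENERATE user, time,
         org.apache.pig.tutorial.ToLower(query) AS query;

-- Extract the hour (HH) from the YYMMDDHHMMSS timestamp
houred = FOREACH clean2 GENERATE user,
         org.apache.pig.tutorial.ExtractHour(time) AS hour, query;

-- Compose n-grams and keep only unique (user, hour, ngram) records
ngramed1 = FOREACH houred GENERATE user, hour,
           FLATTEN(org.apache.pig.tutorial.NGramGenerator(query)) AS ngram;
ngramed2 = DISTINCT ngramed1;

-- Count occurrences of each n-gram per hour
hour_frequency1 = GROUP ngramed2 BY (ngram, hour);
hour_frequency2 = FOREACH hour_frequency1 GENERATE FLATTEN($0), COUNT($1) AS count;

-- Regroup by n-gram alone and score each n-gram's hourly counts
uniq_frequency1 = GROUP hour_frequency2 BY group::ngram;
uniq_frequency2 = FOREACH uniq_frequency1 GENERATE FLATTEN($0),
                  FLATTEN(org.apache.pig.tutorial.ScoreGenerator($1));

-- Name the fields, keep only high-scoring n-grams, and sort
uniq_frequency3 = FOREACH uniq_frequency2 GENERATE $1 AS hour, $0 AS ngram,
                  $2 AS score, $3 AS count, $4 AS mean;
filtered = FILTER uniq_frequency3 BY score > 2.0;
ordered  = ORDER filtered BY hour, score;

STORE ordered INTO 'script1-local-results.txt' USING PigStorage();
```

The hadoop-mode variant differs only in its input (excite.log.bz2) and in writing the results to an HDFS directory instead of a local file.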
PIG SCRIPT 2: TEMPORAL QUERY PHRASE POPULARITY
The Temporal Query Phrase Popularity script (script2-local.pig or script2-hadoop.pig) processes a search query log file from the Excite search engine and compares the frequency of search phrases across two time periods separated by twelve hours.
The script performs the following steps:
- Register the tutorial JAR file so that the user-defined functions (UDFs) can be called in the script.
- Use the PigStorage function to load the excite log file (excite.log or excite-small.log) into the “raw” bag as an array of records with the fields user, time, and query.
- Call the NonURLDetector UDF to remove records if the query field is empty or a URL.
- Call the ToLower UDF to change the query field to lowercase.
- Because the log file only contains queries for a single day, we are only interested in the hour. The excite query log timestamp format is YYMMDDHHMMSS. Call the ExtractHour UDF to extract the hour from the time field.
- Call the NGramGenerator UDF to compose the n-grams of the query.
- Use the DISTINCT command to get the unique n-grams for all records.
- Use the GROUP command to group the records by n-gram and hour.
- Use the COUNT function to get the count (occurrences) of each n-gram.
- Use the FOREACH-GENERATE command to assign names to the fields.
- Use the FILTER command to get the n-grams for hour ‘00’.
- Use the FILTER command to get the n-grams for hour ‘12’.
- Use the JOIN command to get the n-grams that appear in both hours.
- Use the FOREACH-GENERATE command to record their frequency.
- Use the PigStorage function to store the results. The output file contains a list of n-grams with the following fields: ngram, count00, count12.
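As with Script 1, the steps above can be sketched as a Pig Latin script (a reconstruction, not the exact shipped file; alias names are illustrative, and the old-Pig string comparison operator eq is assumed for the era of this release):

```pig
REGISTER ./tutorial.jar;

-- Load, clean, and lowercase as in Script 1
raw = LOAD 'excite-small.log' USING PigStorage('\t') AS (user, time, query);
clean1 = FILTER raw BY org.apache.pig.tutorial.NonURLDetector(query);
clean2 = FOREACH clean1 GENERATE user, time,
         org.apache.pig.tutorial.ToLower(query) AS query;

-- Extract the hour and compose unique n-grams
houred = FOREACH clean2 GENERATE user,
         org.apache.pig.tutorial.ExtractHour(time) AS hour, query;
ngramed1 = FOREACH houred GENERATE user, hour,
           FLATTEN(org.apache.pig.tutorial.NGramGenerator(query)) AS ngram;
ngramed2 = DISTINCT ngramed1;

-- Count occurrences of each n-gram per hour and name the fields
hour_frequency1 = GROUP ngramed2 BY (ngram, hour);
hour_frequency2 = FOREACH hour_frequency1 GENERATE FLATTEN($0), COUNT($1) AS count;
hour_frequency3 = FOREACH hour_frequency2 GENERATE $0 AS ngram, $1 AS hour, $2 AS count;

-- Split the counts into the two hours of interest
hour00 = FILTER hour_frequency3 BY hour eq '00';
hour12 = FILTER hour_frequency3 BY hour eq '12';

-- Keep only the n-grams that appear in both hours, with both counts
same  = JOIN hour00 BY ngram, hour12 BY ngram;
same1 = FOREACH same GENERATE $0 AS ngram, $2 AS count00, $5 AS count12;

STORE same1 INTO 'script2-local-results.txt' USING PigStorage();
```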