Monday, November 28, 2011

Visual Studio 11 Features


Develop Metro style Apps for Windows 8
Visual Studio 11 includes a set of templates that get you started quickly developing Metro style applications with JavaScript, C#, VB or C++. The Blank Application template provides the simplest starting point, with a default project structure that includes sample resources and images. The Grid View, Split View, and Navigation templates provide starting points for more complex user interfaces.
From Visual Studio 11, you can seamlessly open your Metro style app with JavaScript in Expression Blend to refine the style and structure of your application.

Visual Studio 2011-2012 features


Code window resize
A feature you can turn on/off is automatic resizing of the code window.
When you are editing your code, selecting the code window maximizes it at the expense of Solution Explorer, the Output pane, etc.


Creating a new template for custom lists


To create a new template for custom lists, do the following:
  1. Go to the Program Files\Common Files\Microsoft Shared\web server extensions\50\Templates\1033\Layouts directory and copy the CustList folder, renaming the new folder appropriately.
  2. In this new directory, open the file SCHEMA.XML.
  3. Add as many field definitions as needed to SCHEMA.XML in the empty Fields element just after the opening <MetaData> tag. The following example defines fields for a sign-up sheet that has drop-down lists:
    <Fields>
      <Field Name="LinkTitle" DisplayName="Driver" Required="TRUE"/>
      <Field Name="Title" DisplayName="Driver" Required="TRUE"/>
      <Field Name="ParkingLocation" Type="Choice" DisplayName="Park & Ride Location" Required="TRUE">
        <CHOICES>
          <CHOICE>Eastgate Mall</CHOICE>
          <CHOICE>North Park</CHOICE>
          <CHOICE>South Terrace Center</CHOICE>
          <CHOICE>Lake Shores Park &amp; Ride</CHOICE>
        </CHOICES>
      </Field>
      <Field Name="ToWork" Type="Choice" DisplayName="To Work" Required="TRUE">
        <CHOICES>
          <CHOICE>7am</CHOICE>
          <CHOICE>8am</CHOICE>
          <CHOICE>9am</CHOICE>
          <CHOICE>10am</CHOICE>
        </CHOICES>
      </Field>
      <Field Name="FromWork" Type="Choice" DisplayName="From Work" Required="TRUE">
        <CHOICES>
          <CHOICE>4pm</CHOICE>
          <CHOICE>5pm</CHOICE>
          <CHOICE>6pm</CHOICE>
          <CHOICE>7pm</CHOICE>
        </CHOICES>
      </Field>
      <Field Name="Capacity" Type="Number" DisplayName="Capacity" Required="TRUE"/>
      <Field Name="Preferences" Type="Note" DisplayName="Personal Preferences"/>
    </Fields>
    
    

Adding a title link to an item in a document list


To add a title link in a document list, do the following:
  1. In the Program Files\Common Files\Microsoft Shared\web server extensions\50\Templates\1033\xml directory, open ONET.XML.
  2. Find the BaseTypes section and, within this section, find the BaseType element where the Type attribute is set to 1.
  3. In the Fields section within the MetaData element for this base type, add the following entry:
    <Field ReadOnly="TRUE" Type="Computed" Name="LinkedToTitle"
      DisplayName="Document Title" DisplayNameSrcField="Title"
      AuthoringInfo="(linked to document)">
      <FieldRefs>
        <FieldRef Name="Title"/>
        <FieldRef Name="FileRef"/>
      </FieldRefs>
      <DisplayPattern>
        <!-- the rendering body of the DisplayPattern is omitted here -->
      </DisplayPattern>
    </Field>
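The original snippet elides the body of the DisplayPattern. As a purely illustrative sketch (not the exact stock pattern), a pattern that renders the title as a link to the document could combine the CAML rendering elements HTML, URL, and Column roughly like this:

    <DisplayPattern>
      <HTML><![CDATA[<a href="]]></HTML>
      <URL Cmd="Display"/>
      <HTML><![CDATA[">]]></HTML>
      <Column Name="Title" HTMLEncode="TRUE"/>
      <HTML><![CDATA[</a>]]></HTML>
    </DisplayPattern>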

Adding a field to a list in a custom document library


To add a field to a list for a document library, do the following:
  1. Open the Program Files\Common Files\Microsoft Shared\web server extensions\50\Templates\1033\Layouts directory.
  2. Copy the DocLib folder within this directory, renaming the copy, for example, MyDocLib.
  3. In the new MyDocLib directory, open SCHEMA.XML.
  4. Find the Fields element within the opening MetaData section. Between the opening and closing <Fields> tags, add a Field element like the following:
    <Field Name="EmployeeID" DisplayName="Employee ID" Type="Number" Required="TRUE"
      Description="Enter the ID from your employee badge."/>
    
    

Adding a new document type and file type icon


To add a document type and type icon, do the following:
  1. In the Program Files\Common Files\Microsoft Shared\web server extensions\50\Templates\1033\xml directory, open DOCICON.XML.
  2. In the middle of the file, find the ByExtension section.
  3. For the sake of example, we will add a wav file type and an icon to represent what kind of file it is. Add a line like the following (a fuller ByExtension snippet appears after these steps):
    <Mapping Key="wav" Value="icwav.gif"/> 
  4. In the Program Files\Common Files\Microsoft Shared\web server extensions\50\Templates\1033\Images directory, add an appropriate icon and call it "icwav.gif".
  5. Restart Microsoft Internet Information Services (IIS).
  6. Create a new subweb. All sites created from now on will display icwav.gif in document libraries for files that have the .wav extension.
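For orientation, the new Mapping line sits alongside the existing entries inside the ByExtension section; the neighboring entries shown here are illustrative of what ships in DOCICON.XML:

    <ByExtension>
      <Mapping Key="doc" Value="icdoc.gif"/>
      <Mapping Key="xls" Value="icxls.gif"/>
      <Mapping Key="wav" Value="icwav.gif"/>
    </ByExtension>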

Customizing the top link bar


To add a new item to the top link bar of your SharePoint team Web site, do the following:
  1. In the Program Files\Common Files\Microsoft Shared\web server extensions\50\Templates\1033\xml directory, open ONET.XML.
  2. At the top of the ONET.XML file, find the TopMenuItems section. The first item listed in the collection corresponds to the item that appears farthest to the left on the link bar.
  3. Add a new node where you would like the new item to appear on the link bar, including the file name of the page you want to link to, such as follows:
    <TopMenuItem Name="NewNavigationItem" DisplayName="New Navigation Item" Url="_layouts/NewNavigationItem.htm"/>
    Note   The URL is relative to the root of the SharePoint team Web site.
  4. Add your new page to the Program Files\Common Files\Microsoft Shared\web server extensions\50\Templates\1033\Layouts directory.
  5. Create a new subweb. All new sites will contain your new page in their individual _layouts folder within the wwwroot directory, and the top link bar will now include a link to the new page.

Customizing the logo on your team Web sites


To customize the logo used on new SharePoint team Web sites, do the following:
  1. Open the Program Files\Common Files\Microsoft Shared\web server extensions\50\Templates\1033\Images directory.
  2. Copy the image you want to appear on the home page of your team Web site to this directory.
  3. Delete the SharePoint Team Services logo HOME.GIF.
  4. Rename the new image HOME.GIF.
  5. Create a new subweb. All sites created from now on will have the new image as a logo.

What Is CAML?


Collaborative Application Markup Language (CAML) is the XML-based language that is used to build and customize Web sites based on SharePoint™ Team Services from Microsoft®.
CAML can be used to do the following:
  • Provide schema definition to the Web site provisioning system about how the site looks and acts.
  • Define views and forms for data and page rendering or execution.
  • Act as a rendering language that performs functions in the DLL, such as pulling a value from a particular field.
  • Provide batch functionality for posting multiple commands to the server using the protocol (see the sketch after this list).
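To make the last point concrete, here is a hedged sketch of such a batch (the exact Method/SetList/SetVar details vary by version, and the list ID and field value here are hypothetical):

    <ows:Batch OnError="Return">
      <Method ID="A1">
        <SetList Scope="Request">myListGuid</SetList>
        <SetVar Name="Cmd">Save</SetVar>
        <SetVar Name="urn:schemas-microsoft-com:office:office#Title">New item title</SetVar>
      </Method>
    </ows:Batch>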
Why would you use CAML as opposed to just using Microsoft FrontPage® or other editing tools?

Deploy files to _layouts in SharePoint 2010


Steps to add a custom .aspx page to the _layouts folder in SharePoint 2010:
1. Open Visual Studio and create an empty project "DeployAspxToLayouts".
2. Next, select "Deploy as farm solution".
3. Next, right-click the project and navigate to "Add -> SharePoint "Layouts" Mapped Folder".
4. Once you add that, you will have a Layouts folder structure in your project. Next we will add our custom .aspx page (existing or new) to the Layouts folder that got created in our project. For this example I am creating a new .aspx file and adding it to my custom folder, i.e. "DeployAspxToLayouts", which I created under my Layouts folder.

Creating and Deploying Custom aspx Page as Feature and Solution Package

There are two ways to deploy a custom aspx page in SharePoint.

1. Using a VSeWSS extensions project. Here you won't need to create a solution package manually; all the files needed to create a solution package are created by the VS extensions themselves. See the related post Deploy Custom Css file in 12 hive - you can use the same method to deploy your custom .aspx page.

2. The second way is creating a solution package around your ASP.NET web application so that the pages in the web application can be deployed in SharePoint. This requires you to manually create all the solution-related files (manifest.xml, feature.xml, elements.xml and the .ddf file).


In this post, we will create a solution package manually for an ASP.NET web application project, so that the custom .aspx page created in the web application can be deployed to SharePoint's Layouts folder.

Below are the steps you can follow:

1. Create a new web application project.

2. Create a new folder "MyCustomFolder" in Solution Explorer and add your custom .aspx page (along with its .cs file) under it.

3. Add two more XML files in the same folder, named elements.xml and feature.xml.

The elements.xml file should look like this:

<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <Module Name="Pages" Url="_layouts">
    <File Url="CustomPage.aspx" Name="CustomPage.aspx" Type="GhostableInLibrary" />
  </Module>
</Elements>

Note: Set the Module Name to "Pages" and the Url to "_layouts".


The feature.xml file should look like this:

<Feature Id="79DD53E7-8719-45b0-8E25-C2450B3E3F14"
         Title="Project.CustomPage"
         Description="Custom Page"
         Scope="Web"
         Version="1.0.0.0"
         Hidden="false"
         xmlns="http://schemas.microsoft.com/sharepoint/">
  <ElementManifests>
    <ElementManifest Location="elements.xml" />
    <ElementFile Location="CustomPage.aspx" />
  </ElementManifests>
</Feature>


4. Now, create another XML file in the project and name it manifest.xml.

The manifest.xml should look like this:

<Solution xmlns="http://schemas.microsoft.com/sharepoint/"
          SolutionId="A5A9C1C2-4EBF-40db-935F-66085A9E0BE8">
  <RootFiles>
    <RootFile Location="TEMPLATE\LAYOUTS\MyCustomFolder\CustomPage.aspx" />
  </RootFiles>
  <Assemblies>
    <Assembly DeploymentTarget="GlobalAssemblyCache" Location="Project.CustomPage.dll">
      <SafeControls>
        <SafeControl Assembly="Project.CustomPage, Version=1.0.0.0, Culture=neutral, PublicKeyToken=a28586c97e90b41f"
                     Namespace="Project.CustomPage" TypeName="*" Safe="True" />
      </SafeControls>
    </Assembly>
  </Assemblies>
</Solution>

Note: If you are using code-behind with your .aspx page, change the Inherits attribute in the .aspx page to inherit from the assembly of the project.

For example, change the attribute to:

Inherits="NameSpace.Class, NameSpace, Version=1.0.0.0, Culture=neutral, PublicKeyToken=2ef8d0c4bab8980b" Debug="true"

You don't need to deploy the .cs file with the project; the code is accessed via its .dll.
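A hedged sketch of a complete @ Page directive using such an Inherits attribute (the class, namespace, and token are the placeholders from above; the master page is an assumption - layouts pages commonly use application.master):

    <%@ Page Language="C#"
        Inherits="NameSpace.Class, NameSpace, Version=1.0.0.0, Culture=neutral, PublicKeyToken=2ef8d0c4bab8980b"
        MasterPageFile="~/_layouts/application.master" %>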

5. Finally, create the .ddf file (it's simply a text file with a .ddf extension).

The .ddf file would look something like this:

.OPTION Explicit                          ; Generate errors
.Set CompressionType=MSZIP
.Set UniqueFiles=Off
.Set DiskDirectory1=Package
.Set CabinetNameTemplate=Project.CustomPage.wsp

manifest.xml
bin\Project.CustomPage.dll

; puts the page under the mapped layouts folder
.Set DestinationDir=TEMPLATE\LAYOUTS\MyCustomFolder
MyCustomFolder\CustomPage.aspx

; sets the feature directory
.Set DestinationDir=CustomPageFolder

; adds the feature to the feature directory
MyCustomFolder\feature.xml

; adds the element to the feature
MyCustomFolder\elements.xml

I have created an empty folder in the project named "Package" to save the .wsp file in.


6. Sign the project with a key (Project Properties -> Signing tab, then browse to your .snk key) and build the project.

7. Now, add and deploy the .wsp solution (found under the Package folder) to SharePoint using stsadm commands.
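For example, a minimal sketch of the stsadm calls, assuming the solution name and Package folder from this walkthrough:

    stsadm -o addsolution -filename Package\Project.CustomPage.wsp
    stsadm -o deploysolution -name Project.CustomPage.wsp -immediate -allowGacDeployment
    stsadm -o execadmsvcjobs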

Best practices for SharePoint site

Some of the best practices for a SharePoint publishing site are:

1. Keep files uncustomized - Try to keep your files uncustomized (ghosted) on the server. Avoid editing them in SharePoint Designer or through the SharePoint API unless needed. Customizing the files can cost you a lot in terms of performance and space, as every customized (unghosted) file is subjected to the safe mode parser. This is basically a check to ensure that everything on the page is allowed to run in SharePoint.


2. Avoid adding a lot of Web Parts on a single page - Check for closed Web Parts on your pages and make sure you delete them.

3. Memory management - Always dispose of SPSite and SPWeb objects if you have created them in your code. You can employ certain coding techniques to ensure object disposal (see the sketch after the note below). These techniques include using the following in your code:

* Dispose method

* using clause

* try, catch, and finally blocks

Note: SPContext objects are managed by the SharePoint framework and should not be explicitly disposed in your code. This is also true for the SPSite and SPWeb objects returned by SPContext.Site, SPContext.Current.Site, SPContext.Web, and SPContext.Current.Web.
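A minimal C# sketch of these techniques (the site URL is a placeholder):

    // Dispose of objects you created, via using blocks:
    using (SPSite site = new SPSite("http://myPortal"))   // placeholder URL
    using (SPWeb web = site.OpenWeb())
    {
        // ... work with site and web ...
    }   // both disposed here, even if an exception is thrown

    // Equivalent try/finally form:
    SPSite site2 = null;
    try
    {
        site2 = new SPSite("http://myPortal");
        // ... work with site2 ...
    }
    finally
    {
        if (site2 != null) site2.Dispose();
    }

    // Do NOT dispose objects owned by SPContext:
    SPWeb contextWeb = SPContext.Current.Web;   // never call Dispose() on this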


4. Reduce the page payload - A SharePoint page loads a lot of images from _layouts and various other paths, which can make page loads slow. To reduce this payload you can use clustering or stitching, which combines multiple images into a single image file. You can then use CSS to clip parts of the image, giving users the impression that multiple images are being used.

5. Enable output caching for a site collection - For each page request served from the output cache, the server does not have to:
* Make a round trip to the database to fetch the source code for the .aspx page and any .ascx controls on the page.
* Reload and re-render the controls.
* Requery any data sources that the controls rely on for data.
However, output caching consumes additional memory: each cached version of a page consumes memory on the front-end Web server. And when used with two or more front-end Web servers, output caching may affect consistency.

Using External Javascript, CSS or Image File in a WebPart


// Referring to an external JavaScript file
ClientScriptManager cs = Page.ClientScript;
// Include the required javascript file (registered only once per page).
if (!cs.IsClientScriptIncludeRegistered("jsfile"))
    cs.RegisterClientScriptInclude(this.GetType(), "jsfile", "/_wpresources/MyWP/1.0.0.0_9f4da00116c38ec5/jsfile.js");

// Test: wire the included function to a button's client-side click.
Button Testbutton = new Button();
Testbutton.Text = "Click me";
Testbutton.OnClientClick = "jsfile_Function()"; // specify the javascript function name here
this.Controls.Add(Testbutton);

// Referring to an external CSS file
Microsoft.SharePoint.WebControls.CssLink cssLink = new Microsoft.SharePoint.WebControls.CssLink();
cssLink.DefaultUrl = "/_wpresources/MyWP/1.0.0.0_9f4da00116c38ec5/styles.css";
this.Page.Header.Controls.Add(cssLink);

// Using an external image
Image img = new Image();
string imagePath = "/_wpresources/MyWP/1.0.0.0_9f4da00116c38ec5/Image.jpg";
img.ImageUrl = imagePath;
img.ID = "image1";
this.Controls.Add(img);

Impersonation in SharePoint (RunWithElevatedPrivileges)


The SPSecurity class provides a method (RunWithElevatedPrivileges) that allows you to run a subset of code in the context of an account with higher privileges than the current user.
The premise is that you wrap the RunWithElevatedPrivileges method around your code. In certain circumstances, such as when working with Web forms, you may also need to set the AllowUnsafeUpdates property to true to temporarily turn off security validation within your code. If you use this technique, it is imperative that you set AllowUnsafeUpdates back to false afterwards to avoid any potential security risks.

Code example

SPSite mySite = SPContext.Current.Site;
SPWeb myWeb = SPContext.Current.Web;

// Using RunWithElevatedPrivileges

SPSecurity.RunWithElevatedPrivileges(delegate()
{
    // Get new references to the site collection and site for the current context.
    // The using statements make sure these references are disposed properly.
    using (SPSite siteCollection = new SPSite(mySite.ID))
    {
        using (SPWeb web = siteCollection.OpenWeb(myWeb.ID))
        {
            web.AllowUnsafeUpdates = true;
            try
            {
                // Your code
            }
            finally
            {
                // Always reset the flag, even if your code throws.
                web.AllowUnsafeUpdates = false;
            }
        }
    }
});

Getting a large number of items from a SharePoint list

If you have to retrieve a large number of items and also need good performance, you should use one of the methods below:

1. Using SPQuery 

2. Using PortalSiteMapProvider Class


Let's see examples of both methods.
Our query: get all the items in a list where Category is "SP2007".

SPQuery -


// Get the site collection
SPSite curSite = new SPSite("http://myPortal");

// Get the root web of the site collection
SPWeb curWeb = curSite.OpenWeb();

// Create an SPQuery object
SPQuery curQry = new SPQuery();

// Write the query
curQry.Query = "<Where><Eq><FieldRef Name='Category'/>" +
    "<Value Type='Text'>SP2007</Value></Eq></Where>";

// Set the row limit
curQry.RowLimit = 100;

// Get the list (C# indexer syntax)
SPList curList = curWeb.Lists[new Guid("myListGUID")];

// Get the items using the query
SPListItemCollection curItems = curList.GetItems(curQry);

// Enumerate the resulting items
foreach (SPListItem curItem in curItems)
{
    string ResultItemTitle = curItem["Title"].ToString();
}


PortalSiteMapProvider class -
The class includes a method called GetCachedListItemsByQuery that retrieves data from a list based on an SPQuery object that is provided as a parameter to the method call.
The method then looks in its cache to see if the items already exist. If they do, the method returns the cached results; if not, it queries the list, stores the results in the cache, and returns them from the method call.


// Get the current web
SPWeb curWeb = SPControl.GetContextWeb(HttpContext.Current);

// Create the query
SPQuery curQry = new SPQuery();
curQry.Query = "<Where><Eq><FieldRef Name='Category'/><Value Type='Text'>SP2007</Value></Eq></Where>";

// Get the portal site map provider
PortalSiteMapProvider ps = PortalSiteMapProvider.WebSiteMapProvider;
PortalWebSiteMapNode pNode = ps.FindSiteMapNode(curWeb.ServerRelativeUrl) as PortalWebSiteMapNode;

// Get the items (note: the list is addressed by name, not by ID)
SiteMapNodeCollection pItems = ps.GetCachedListItemsByQuery(pNode, "myListName_NotID", curQry, curWeb);

// Enumerate all resulting items
foreach (PortalListItemSiteMapNode curItem in pItems)
{
    string ResultItemTitle = curItem["Title"].ToString();
}

Sunday, November 27, 2011

Running Hadoop in Pseudo Distributed Mode


This section contains instructions for installing Hadoop on Ubuntu. It is a quickstart tutorial that sets up Hadoop in pseudo-distributed mode (a single-node cluster); below you will find all the required commands, each with a description of what it does.


Install Java:
  sudo apt-get install sun-java6-jdk

If you don't have the Hadoop bundle, download it from the Apache Hadoop site.

Extract the Hadoop bundle:
  sudo tar xzf file_name.tar.gz

Go to your Hadoop installation directory (HADOOP_HOME).

Edit the configuration file conf/hadoop-env.sh and set JAVA_HOME to the root of your Java installation (e.g. /usr/lib/jvm/java-6-sun):
  vi conf/hadoop-env.sh
  export JAVA_HOME=/usr/lib/jvm/java-6-sun

Edit the configuration file conf/core-site.xml and type:
  vi conf/core-site.xml

  <configuration>
    <property>
      <name>fs.default.name</name>
      <value>hdfs://localhost:9000</value>
    </property>
  </configuration>

Edit the configuration file conf/hdfs-site.xml and type:
  vi conf/hdfs-site.xml

  <configuration>
    <property>
      <name>dfs.replication</name>
      <value>1</value>
    </property>
  </configuration>

Edit the configuration file conf/mapred-site.xml and type:
  vi conf/mapred-site.xml

  <configuration>
    <property>
      <name>mapred.job.tracker</name>
      <value>localhost:9001</value>
    </property>
  </configuration>

Install ssh:
  sudo apt-get install openssh-server openssh-client

Set up passwordless ssh:
  ssh-keygen -t rsa -P ""
  cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
  ssh localhost

Format the new distributed filesystem:
  bin/hadoop namenode -format
During this operation the name node starts, gets formatted, and then stops.

Start the Hadoop daemons:
  bin/start-all.sh

Check the running daemons with jps; it should give output like this:
  14799 NameNode
  14977 SecondaryNameNode
  15183 DataNode
  15596 JobTracker
  15897 TaskTracker

Congratulations, the Hadoop setup is complete.

Web-based interface for the name node: http://localhost:50070/
Web-based interface for the job tracker: http://localhost:50030/

Now let's run some examples.

Run the pi example:
  bin/hadoop jar hadoop-*-examples.jar pi 10 100

Run the grep example:
  bin/hadoop dfs -mkdir input
  bin/hadoop dfs -put conf input
  bin/hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+'
  bin/hadoop dfs -cat output/*

Run the wordcount example:
  bin/hadoop dfs -mkdir inputwords
  bin/hadoop dfs -put conf inputwords
  bin/hadoop jar hadoop-*-examples.jar wordcount inputwords outputwords
  bin/hadoop dfs -cat outputwords/*

Stop the Hadoop daemons:
  bin/stop-all.sh

Enhancing the Hilo Browser User Interface


The Hilo Browser application now allows you to share selected photos through an online photo sharing application. It also allows you to edit selected photos by launching the Hilo Annotator application. The Hilo Browser has been extended to make it easier to perform these two actions. In the first version of the Browser, double-clicking (or double-tapping) a photo launched the Annotator in order to edit the photo. The Browser now uses the double-click gesture to launch the slide show mode, where the carousel is hidden and the selected photo is shown at a larger scale (Figure 1).

Introducing the WIC



Both the Hilo Browser and Annotator display photos, and Annotator allows you to alter photos. For Hilo, the definition of a photo is any image type, so the Hilo applications have to be able to load and display a wide range of file types. The Windows Imaging Component (WIC) provides this functionality. The WIC can load images that are made up of multiple frames, and it can access metadata in the image file. The WIC supports all the common image formats and even allows developers to develop codecs (coder-decoder components) for new formats.
To use the WIC you must include the wincodec.h header file; it contains the definitions for the various interfaces used by the WIC, definitions of structures and GUIDs for the WIC objects, and the standard pixel formats. The WIC is not just one component; instead, there are several components used to encode and decode the different formats supported. Different image formats store image data in different ways, so to load an image file you need a decoder component to decode the data into a format your application can use. When you save image information you need an encoder component to encode the data into the format defined by the image file format. If you know the type of image you wish to load, you can create a decoder with a call to CoCreateInstanceEx and provide the Class ID (CLSID) of the decoder object. If you do not know the image type, you can use the WIC API to examine the file and choose the appropriate object. To do this you create an instance of the WIC factory object.
Listing 1 shows the Direct2DUtility::GetWICFactory method that is used by the Hilo applications to create an instance of the factory object. Like all the other WIC objects, the factory object is a COM object and so this code must be called in a COM apartment. You can initialize either an STA or an MTA apartment.

Introducing Hilo Annotator


The Hilo Annotator is a separate application that you can launch directly from the desktop, from the command line, or from within the Hilo Browser application itself. The Browser application was updated to support the integration of the Annotator. The user can launch the Annotator from within the Browser by double-tapping on a photo with a finger or double-clicking with the mouse. This action generates a WM_LBUTTONDBLCLK message, which is handled by the media pane through the MediaPaneMessageHandler::LaunchAnnotator method, passing the name of the selected photo. Listing 1 shows the code for the LaunchAnnotator method.
Listing 1 Hilo Browser code to launch the Annotator process

Using the Windows Ribbon


Introducing the Ribbon Framework

The Windows Ribbon control is a COM control, and since it has a user interface you must initialize an STA (single-threaded apartment). The Windows Ribbon control is not an ActiveX control. This means that you do not have to provide an ActiveX control site, which considerably simplifies the code that you have to write in your application.
The Ribbon control uses adaptive layout. This means that the developer provides information about the controls that will be used and how they will be grouped, and at run time the Ribbon control determines the actual position of the controls.
To see the effect of adaptive layout you can run Windows 7 Paint and resize the window. This is shown in Figure 1. The top left image shows the Ribbon control with the View tab selected. At this width the items on the Ribbon control are shown full size. When the window width is reduced, the Ribbon control width is reduced and adaptive layout resizes the controls to enable all of them to be shown.
The bottom left image in Figure 1 shows the first change: the Zoom group has compacted from a row of three buttons to a column of buttons. When the width is reduced further (bottom right, Figure 1) the Display group collapses to a column of buttons. At this size, there is no space to show the Customize Quick Access Toolbar button on the title bar, so instead there is a single button labeled "..", and when you click on this button the toolbar pops up. The most compact arrangement (top right, Figure 1) collapses the Zoom group to a drop-down menu. If the window width is reduced further, the items on the Ribbon control cannot be shown and it disappears completely.

Using Windows HTTP Services


Flickr Share Dialog

When Hilo uploads a photo to Flickr, it makes several calls to the Flickr web server. These calls are made to authenticate the Hilo application (and obtain a session token called a frob); to authorize the access of the Hilo Flickr application to upload a photo to a Flickr account (and obtain an access token) and then to upload the photo. These calls are made across the network, and potentially they can take a noticeable amount of time. Hilo has to wait for responses from the Flickr web server in such a way that the user is kept informed. This is the purpose of the Share dialog.
The Hilo Browser's user interface provides a button labeled Share (Figure 1). When you click this button you will see the Share dialog (Figure 2), which is implemented by the ShareDialog class. This class has static methods and allows you to upload either the selected photo or all photos in the current folder. The Share dialog effectively has three sets of controls, reflecting your progress through the mechanism of uploading photos. The first set of controls is shown in Figure 2. When you click the Upload button, a progress bar is shown under the radio buttons (Figure 3) to display the progress of the upload; when the upload is complete, all the initial controls are hidden except for the Cancel button, which is relabeled Close, and the View Photos link control is displayed (Figure 4). The same class is used for all versions of this dialog.

Sharing Photos with Hilo


Updating the Hilo Browser User Interface

Hilo’s photo sharing functionality is accessed through the Hilo Browser application. Previous versions of the Hilo Browser allowed the user to launch the Annotator by double-tapping (or double-clicking) a photo in the media pane. This worked adequately because there was just one action that could be performed on a photo. Now that there is an additional action—share—another approach must be used. In the final version of the Hilo Browser, double-tapping a photo shows the photo in the slideshow mode (Figure 1) while double-tapping the screen again returns to browsing mode.

Quick install HBase in “pseudo distributed” mode and connect from Java


On first reading of the HBase documentation, setting up pseudo distributed mode sounds very simple. The problem is that there are a lot of gotchas, which can make life very difficult indeed. Consequently, many people follow a twisted journey to their final destination, and when they finally get there, they aren't sure which of the measures they took were needed and which were not. This is reflected by a degree of misinformation on the Web, and I will try to present here a reasonably minimal way of getting up and running (that is not to say that every step I take is absolutely necessary; I'll mention where I'm not sure).
Step 1: Check your IP setup
I believe this is one of the main causes of the weirdness that can happen. So, if you’re on Ubuntu check your hosts file. If you see something like:
127.0.0.1 localhost
127.0.1.1 <server fqn> <server name, as in /etc/hostname>

get rid of the second line, and change to
127.0.0.1 localhost
<server ip> <server fqn> <server name, as in /etc/hostname>

e.g.
127.0.0.1 localhost
23.201.99.100 hbase.mycompany.com hbase

Hadoop Troubleshooting


General Advice

  • If you are having problems, check the logs in the logs directory to see if there are any Hadoop errors or Java Exceptions.
  • Logs are named by machine and job they carry out in the cluster, and this can help you figure out which part of your configuration is giving you trouble.
  • Even if you were very careful, the problem is probably with your configuration. Try running the grep example from the QuickStart. If it doesn't run then you need to check your configuration.
  • If you can't get it to work on a real cluster, try it on a single-node.
  • Sometimes it can just take some time and sweat to make complex systems run; but, it never hurts to ask for help so please ask the TA and your fellow students ASAP if you are having trouble making Hadoop run.

Symptoms and Possible Solutions

Symptom: You get an error that your cluster is in "safe mode".
Possible problem: Your cluster enters safe mode when it hasn't been able to verify that all the data nodes necessary to replicate your data are up and responding. Check the documentation to learn more about safe mode.
Possible solution:
  1. First, wait a minute or two and then retry your command. If you just started your cluster, it's possible that it isn't fully initialized yet.
  2. If waiting a few minutes didn't help and you still get a "safe mode" error, check your logs to see if any of your data nodes didn't start correctly (either they have Java exceptions in their logs or they have messages stating that they are unable to contact some other node in your cluster). If this is the case you need to resolve the configuration issue (or possibly pick some new nodes) before you can continue.

Symptom: You get a NoRouteToHostException in your logs or in stderr output from a command.
Possible problem: One of your nodes cannot be reached correctly. This may be a firewall issue, so you should report it to me.
Possible solution: The only workaround is to pick a new node to replace the unreachable one. Currently, I think that creusa is unreachable, but all other Linux boxes should be okay. None of the Macs will currently work in a cluster.

Symptom: You get an error that "remote host identification has changed" when you try to ssh to localhost.
Possible problem: You have moved your single-node cluster from one machine in the Berry Patch to another. The name localhost thus is pointing to a new machine, and your ssh client thinks that it might be a man-in-the-middle attack.
Possible solution: You can ask your login to skip checking the validity of localhost. You do this by setting NoHostAuthenticationForLocalhost to yes in ~/.ssh/config. You can accomplish this with the following command:
  echo "NoHostAuthenticationForLocalhost yes" >> ~/.ssh/config

Symptom: Your DataNode is started and you can create directories with bin/hadoop dfs -mkdir, but you get an error message when you try to put files into the HDFS (e.g., when you run a command like bin/hadoop dfs -put).
Possible problem: Creating directories is only a function of the NameNode, so your DataNode is not exercised until you actually want to put some bytes into a file. If you are sure that the DataNode is started, then it could be that your DataNodes are out of disk space.
Possible solution:
  • Go to the HDFS info web page (open your web browser and go to http://namenode:dfs_info_port where namenode is the hostname of your NameNode and dfs_info_port is the port you chose for dfs.info.port; if you followed the QuickStart on your personal computer then this URL will be http://localhost:50070). Once at that page click on the number where it tells you how many DataNodes you have to look at a list of the DataNodes in your cluster.
  • If it says you have used 100% of your space, then you need to free up room on the local disk(s) of the DataNode(s).
  • If you are on Windows then this number will not be accurate (there is some kind of bug either in Cygwin's df.exe or in Windows). Just free up some more space and you should be okay. On one Windows machine we tried the disk had 1GB free but Hadoop reported that it was 100% full. Then we freed up another 1GB and then it said that the disk was 99.15% full and started writing data into the HDFS again. We encountered this bug on Windows XP SP2.

Symptom: You try to run the grep example from the QuickStart but you get an error message like this:
  java.io.IOException: Not a file:
    hdfs://localhost:9000/user/ross/input/conf
Possible problem: You may have created a directory inside the input directory in the HDFS. For example, this might happen if you run bin/hadoop dfs -put conf input twice in a row (this would create a subdirectory in input... why?).
Possible solution: The easiest way to get the example to run is to just start over and make the input anew:
  bin/hadoop dfs -rmr input
  bin/hadoop dfs -put conf input

Symptom: Your DataNodes won't start, and you see something like this in logs/*datanode*:
  Incompatible namespaceIDs in /tmp/hadoop-ross/dfs/data
Possible problem: Your Hadoop namespaceID became corrupted. Unfortunately the easiest thing to do is reformat the HDFS.
Possible solution: You need to do something like this:
  bin/stop-all.sh
  rm -Rf /tmp/hadoop-your-username/*
  bin/hadoop namenode -format
Be VERY careful with rm -Rf.

Symptom: When you try the grep example in the QuickStart, you get an error like the following:
  org.apache.hadoop.mapred.InvalidInputException:
    Input path doesnt exist : /user/ross/input
Possible problem: You haven't created an input directory containing one or more text files.
Possible solution:
  bin/hadoop dfs -put conf input

Symptom: When you try the grep example in the QuickStart, you get an error like the following:
  org.apache.hadoop.mapred.FileAlreadyExistsException:
    Output directory /user/ross/output already exists
Possible problem: You might have already run the example once, creating an output directory. Hadoop doesn't like to overwrite files.
Possible solution: Remove the output directory before rerunning the example:
  bin/hadoop dfs -rmr output
Alternatively you can change the output directory of the grep example, something like this:
  bin/hadoop jar hadoop-*-examples.jar \
    grep input output2 'dfs[a-z.]+'

Symptom: You can run Hadoop jobs written in Java (like the grep example), but your HadoopStreaming jobs (such as the Python example that fetches web page titles) won't work.
Possible problem: You might have given only a relative path to the mapper and reducer programs. The tutorial originally just specified relative paths, but absolute paths are required if you are running in a real cluster.
Possible solution: Use absolute paths like this from the tutorial:
  bin/hadoop jar contrib/hadoop-0.15.2-streaming.jar \
    -mapper  $HOME/proj/hadoop/multifetch.py         \
    -reducer $HOME/proj/hadoop/reducer.py            \
    -input   urls/*                                  \
    -output  titles

 #Linux Commands Unveiled: #date, #uname, #hostname, #hostid, #arch, #nproc Linux is an open-source operating system that is loved by millio...