Thursday, 11 September 2014

Restart Nautilus in Ubuntu 12.04

Nautilus is the default file manager/explorer in Ubuntu. 
Sometimes you may need to kill and restart it to see the effect of installing or removing a program, when drives are not displayed properly, or after installing Nautilus plugins or upgrades:

$ nautilus -q

or

$ killall nautilus

Then open Nautilus again via the Unity menu.

Tuesday, 9 September 2014

Install Gstreamer 1.0 in Ubuntu 12.04



Below are instructions to install GStreamer 1.0 on Ubuntu 12.04.


sudo apt-get install ubuntu-restricted-extras

# backports from the GStreamer developers PPA
sudo add-apt-repository ppa:gstreamer-developers/ppa
sudo apt-get update
# install the base packages and plugins for GStreamer 1.0
sudo apt-get install gstreamer1.0*


Saturday, 23 August 2014

Android Touch Gestures Capturing Interface

Introduction

In this article we will look at an Android application to capture touch gestures. This module is the first part of a generic touch-based gesture recognition library.

Background
A gesture is a pre-recorded touch-screen motion sequence. Gesture recognition is an active research area in the fields of pattern recognition, image analysis and computer vision.
The application can operate in several modes. One of the options the user can select is to capture and store a candidate gesture.
The aim is to build a generic C/C++ library that can store gestures in a user-defined format.

Gesture Registration Android Interface

This process of capturing and storing information about candidate gesture classes is called gesture registration.

In the present article we will use the GestureOverlay method. A gesture overlay acts as a simple drawing board on which the user can draw gestures. Several visual properties can be modified, like the color and the width of the stroke used to draw gestures, and various listeners can be registered to follow what the user is doing.

To capture gestures and process them, the first step is to add a GestureOverlayView to the store_gesture.xml layout file.
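A minimal sketch of such a layout is shown below. The ids gestures, gesture_name and done match the ones looked up in onCreate and the stroke type matches the uni-stroke setting discussed below, while the remaining attribute values are assumptions:

```xml
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <android.gesture.GestureOverlayView
        android:id="@+id/gestures"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:layout_weight="1"
        android:gestureStrokeType="single" />

    <EditText
        android:id="@+id/gesture_name"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:hint="Gesture name" />

    <Button
        android:id="@+id/done"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:onClick="addGesture"
        android:text="Store" />
</LinearLayout>
```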


The properties specified on the gesture overlay include the gesture stroke type, which is set to single, indicating uni-stroke gestures.

Now in the main activity file we just need to set the content view to the layout file. In the present application the name of the layout file is "activity_open_vision_gesture.xml".

Since we also need to process gestures once they are performed, we add a gesture listener to the overlay. The most commonly used listener is GestureOverlayView.OnGesturePerformedListener, which fires whenever a user is done drawing a gesture. Here we use a class GesturesProcessor that implements GestureOverlayView.OnGestureListener.

Once the gesture is drawn by the user, control enters the onGestureEnded method. Here we copy the gesture and can perform a host of activities like storing, predicting etc.

Below is an image of the UI interface.


private class GesturesProcessor implements GestureOverlayView.OnGestureListener {
        public void onGestureStarted(GestureOverlayView overlay, MotionEvent event) {
            mDoneButton.setEnabled(false);
            mGesture = null;
        }

        public void onGesture(GestureOverlayView overlay, MotionEvent event) {
        }

        //callback function entered when the gesture registration is completed
        public void onGestureEnded(GestureOverlayView overlay, MotionEvent event) {
           //copy the gesture to local variable
            mGesture = overlay.getGesture();
          //ignore the gesture if length is below a threshold
            if (mGesture.getLength() < LENGTH_THRESHOLD) {
                overlay.clear(false);
            }
          //enable the store button
            mDoneButton.setEnabled(true);
        }

        public void onGestureCancelled(GestureOverlayView overlay, MotionEvent event) {
        }
    }


Upon clicking the store button the program enters the "addGesture" callback function.

    public void addGesture(View v) {
        Log.e("CreateGestureActivity", "Adding Gestures");
        // extracts information such as point locations from the Android Gesture
        // object and then makes native library calls to store the gesture
        extractGestureInfo();
    }


We define all the JNI interface functions in the class GestureLibraryInterface. Two JNI calls are made to the native C/C++ gesture library:

public class GestureLibraryInterface {
    static { Loader.load(); }
    // makes native calls to GestureLibrary to store gesture information in the local filesystem
    public native static void addGesture(ArrayList<Float> location, ArrayList<Long> time, String name);
    // makes native calls to GestureLibrary to set the gesture directory
    public native static void setDirectory(String name);
}

The first step is to set the directory where the gestures will be stored by making the "setDirectory" call. This is done when the Android activity is initialized in the "onCreate" function.

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // load the store-activity GUI layout file
        setContentView(R.layout.store_gesture);
        // get the store button object
        mDoneButton = findViewById(R.id.done);
        // get the EditText object
        eText = (EditText) findViewById(R.id.gesture_name);
        // configure the gesture overlay listener
        GestureOverlayView overlay = (GestureOverlayView) findViewById(R.id.gestures);
        overlay.addOnGestureListener(new GesturesProcessor());
        // set the gesture directory
        GestureLibraryInterface.setDirectory(DIR);
    }

The extractGestureInfo method reads the gesture strokes and stores the point locations in an ArrayList, which is passed to the native C/C++ code through the JNI interface. The gesture directory DIR is defined as:

private static final String DIR = Environment.getExternalStorageDirectory().getPath() + "/AndroidGesture/v1";

The JNI C/C++ code associated with the Java class is defined in the files GestureLibraryInterface.cpp and GestureLibraryInterface.hpp:

//function calls the GestureRecognizer methods to add a gesture to the class path
JNIEXPORT void JNICALL Java_com_openvision_androidgesture_GestureLibraryInterface_addGesture(JNIEnv *, jobject, jobject, jobject, jstring);

//function calls the GestureRecognizer methods to set the main gesture directory path
JNIEXPORT void JNICALL Java_com_openvision_androidgesture_GestureLibraryInterface_setDirectory(JNIEnv *, jobject, jstring);

//utility functions to convert from jobject to float and long
float getFloat(JNIEnv *env,jobject value);
long getLong(JNIEnv *env,jobject value);


The UniStrokeGesture library consists of the following classes:
  • UniStrokeGestureLibrary
  • UniStrokeGesture
  • GesturePoint
The UniStrokeGestureLibrary class encapsulates all the properties of uni-stroke gestures. It contains methods for storing, retrieving and predicting gestures, amongst others.

The "addGesture" JNI method calls the save routine implemented in the class to store the gestures.
The UniStrokeGestureLibrary consists of a sequence of objects of type UniStrokeGesture.

An object of class UniStrokeGesture encapsulates all the properties of a single gesture class. The UniStrokeGesture class can store multiple instances of a sample gesture, as a uni-stroke gesture can be represented by multiple candidate instances.

A UniStrokeGesture contains a sequence of objects of type GesturePoint. Each GesturePoint represents one element of the UniStrokeGesture and is characterized by its location in a 2D grid.
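The relationship between these classes can be sketched as below. This is a minimal illustration based only on the description above; the actual field and method names in the OpenVision sources may differ:

```cpp
#include <vector>
#include <string>

// a single gesture point: its location in the 2D grid and a timestamp
// (the timestamp field is an assumption, suggested by the ArrayList<Long>
// time argument passed through JNI)
struct GesturePoint {
    struct { float x, y; } position;
    long timestamp;
};

// a single gesture class: holds multiple candidate instances of the gesture
struct UniStrokeGesture {
    std::string name;                               // gesture class name
    std::vector<std::vector<GesturePoint>> samples; // candidate instances

    void addSample(const std::vector<GesturePoint> &pts) {
        samples.push_back(pts);
    }
};

// the library: a sequence of gesture classes
struct UniStrokeGestureLibrary {
    std::vector<UniStrokeGesture> gestures;
};
```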

/**
 *  function that stores the Gesture to a specified directory
 */
void UniStrokeGestureRecognizer::save(string dir,vector<GesturePoint> points)
{
    char abspath1[1000],abspath2[1000];
    sprintf(abspath1,"%s/%s",_path.c_str(),dir.c_str());

    int count=0;

    //check if directory exists, else create it
    int ret=ImgUtils::createDir((const char *)abspath1);

    //the number of existing samples determines the file name
    count=ImgUtils::getFileCount(abspath1);

    sprintf(abspath2,"%s/%d.csv",abspath1,count);
    //writing contents to the file in CSV format, one point per line
    ofstream file(abspath2,std::ofstream::out);
    for(int i=0;i<points.size();i++)
        file<<points[i].position.x<<","<<points[i].position.y<<endl;
    file.close();
    //creating a bitmap while storing the CSV file
    generateBitmap(abspath2);
}

Consider an example of a gesture stored in CSV format.
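With the save routine writing one gesture point per line, a short stroke might look like this (the coordinate values below are purely illustrative):

```
120.0,45.5
131.2,48.0
142.8,52.5
154.1,57.0
166.0,60.5
```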



The generateBitmap function loads the gesture points from the input CSV file and generates a bitmap image that is suitable for display.

void UniStrokeGestureRecognizer::generateBitmap(string file)
{
   string basedir=ImgUtils::getBaseDir(file);
   string name=ImgUtils::getBaseName(file);
   string line;

   float x,y;
   cv::Mat image=cv::Mat(640,480,CV_8UC3);
   image.setTo(cv::Scalar::all(0));
   Point x1,x2,x3;
   int first=-1;
   int delta=20;
   vector<GesturePoint> points;
   //loading the gesture from the CSV file
   points=loadTemplateFile(file.c_str(),"AA");

   //getting the bounding box
   Rect R=boundingBox(points);

   //drawing the gesture as a poly-line
   int i=0;
   cv::circle(image,cv::Point((int)points[i].position.x,(int)points[i].position.y),3,cv::Scalar(255,255,0),-1,CV_AA);
   for(i=1;i<points.size();i++)
       cv::line(image,cv::Point((int)points[i-1].position.x,(int)points[i-1].position.y),
                cv::Point((int)points[i].position.x,(int)points[i].position.y),
                cv::Scalar(255,255,0),2,CV_AA);

   //expand the bounding box by a margin and clip it to the image boundaries
   R.x=max(R.x-delta,0);
   R.y=max(R.y-delta,0);
   R.width=R.width+2*delta;
   R.height=R.height+2*delta;
   if(R.x+R.width>image.cols)
       R.width=image.cols-R.x-1;
   if(R.y+R.height>image.rows)
       R.height=image.rows-R.y-1;

   //extract the ROI
   Mat roi=image(R);
   Mat dst;
   cv::resize(roi,dst,Size(640,480));
   string bmpname=basedir+"/"+name+".bmp";
   //save the bitmap
   cv::imwrite(bmpname,dst);
}
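generateBitmap relies on a boundingBox(points) helper to crop the drawn stroke. A minimal OpenCV-free sketch of what such a helper computes is shown below; the Pt and Box structures stand in for the actual point and cv::Rect types and are assumptions:

```cpp
#include <vector>
#include <algorithm>

// stand-ins for the gesture point position and cv::Rect (assumptions)
struct Pt  { float x, y; };
struct Box { int x, y, width, height; };

// compute the axis-aligned bounding box of the gesture points,
// analogous to the boundingBox(points) call used by generateBitmap
Box boundingBox(const std::vector<Pt> &points)
{
    float minx = points[0].x, maxx = points[0].x;
    float miny = points[0].y, maxy = points[0].y;
    for (const Pt &p : points) {
        minx = std::min(minx, p.x); maxx = std::max(maxx, p.x);
        miny = std::min(miny, p.y); maxy = std::max(maxy, p.y);
    }
    return Box{ (int)minx, (int)miny,
                (int)(maxx - minx) + 1, (int)(maxy - miny) + 1 };
}
```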

 

Display the Gesture List

The next part of the application deals with displaying the gesture created in the above section on the Android UI.

Upon starting the application, all the .bmp files in the template directory are loaded.

The work of loading the gesture bitmaps is done asynchronously in the background.

In Android, a ListView is used to display the gesture bitmaps and associated text.

The layout of each item of the list is defined in the file gesture_item.xml.



The layout for the ListView is defined in the main layout file activity_open_vision.xml.

The displayGestures function is defined in OpenVisionGesture.java.

An object of type AsyncTask, GesturesLoadTask, is defined in the main class file. It is invoked from the displayGestures function and loads the gesture list in the background.

An AsyncTask subclass typically defines the following methods:
  • doInBackground - the main function executed in the background
  • onPreExecute - invoked before the background task is executed
  • onPostExecute - invoked after the background task has executed
  • onProgressUpdate - can be used to update UI contents while the background task is executing
In the background task, the code walks through the gesture template directory and reads all the bitmap files.

An ArrayAdapter takes an array and converts the items into View objects to be loaded into the ListView container. We define an adapter which maintains NamedGesture objects containing the gesture name and identifier. The "getView" function in the ArrayAdapter class is responsible for converting the Java object to a View.

We maintain a list of bitmaps identified by an ID, as well as a list of gesture names represented by the same ID.

Whenever a bitmap is read, we update the lists and the GUI by calling the "publishProgress" function, which leads to the onProgressUpdate function being called in the main UI thread.

Using this approach we can see the bitmaps being populated over time:
        @Override
        protected Integer doInBackground(Void... params) {
            if (isCancelled()) return STATUS_CANCELLED;
            if (!Environment.MEDIA_MOUNTED.equals(Environment.getExternalStorageState())) {
                return STATUS_NO_STORAGE;
            }

            Long id = new Long(0);
            File list = new File(CreateGestureActivity.DIR);

            // get the list of template classes
            File[] files = list.listFiles(new DirFilter());

            for (int i = 0; i < files.length; i++) {
                // get the list of image files in the template folder
                File[] list1 = files[i].listFiles(new ImageFileFilter());
                for (int k = 0; k < list1.length; k++) {
                    // load the image file
                    BitmapFactory.Options options = new BitmapFactory.Options();
                    options.inPreferredConfig = Bitmap.Config.ARGB_8888;
                    Bitmap bitmap = BitmapFactory.decodeFile(list1[k].getPath(), options);
                    Bitmap ThumbImage = ThumbnailUtils.extractThumbnail(bitmap, mThumbnailSize, mThumbnailSize);
                    final NamedGesture namedGesture = new NamedGesture();
                    namedGesture.id = id;
                    namedGesture.name = files[i].getName() + "_" + list1[k].getName();
                    // add the bitmap to the hashtable
                    mAdapter.addBitmap((Long) id, ThumbImage);
                    id = id + 1;
                    // update the GUI
                    publishProgress(namedGesture);
                    bitmap.recycle();
                }
            }
            return STATUS_SUCCESS;
        }


Once the add function in the adapter is called, the UI ListView is updated by displaying the gesture name and associated bitmap in the "getView" function:

        @Override
        public View getView(int position, View convertView, ViewGroup parent) {
            if (convertView == null) {
                //view associated with individual gesture item
                convertView = mInflater.inflate(R.layout.gestures_item, parent, false);
            }
            //get the gesture at specified position in the listView
            final NamedGesture gesture = getItem(position);
            final TextView label = (TextView) convertView;

            //set the gesture names
            label.setTag(gesture);
            label.setText(gesture.name);
            //get the bitmap from hashtable identified by id and display bitmap to left of text
            label.setCompoundDrawablesWithIntrinsicBounds(mThumbnails.get(gesture.id),null, null, null);

            return convertView;
        }

Code

The files are found in the ImgApp directory of the OpenVision repository, which is located at www.github.com/pi19404/OpenVision.

The complete Android project can be found in the samples/Android/AndroidGestureCapture directory of the OpenVision repository. It is an Android project source package that can be imported into Eclipse and run directly. The application was tested on a mobile device running Android 4.1.2; compatibility with other Android versions has not been tested or taken into consideration while developing the application.

You need to have OpenCV installed on your system. The present application was developed on Ubuntu 12.04, and the paths in Android.mk are specified accordingly. For Windows, other OSes, or different OpenCV paths, modify the makefile accordingly.

The APK and source files can be downloaded from

Wednesday, 20 August 2014

Compiling Native C/C++ library for Android

Introduction

This article describes a method to cross-compile a C/C++ library for mobile devices running the Android OS.

Installation and Code Compilation

Before proceeding, make sure that you have all the below software components installed and configured in Eclipse:
  • Eclipse IDE
Develop the code on a desktop computer and check that it compiles without errors.
The present example consists of files containing following classes
  • UniStrokeGestureRecognizer
  • UniStrokeGesture
  • GesturePoint
The library libOpenVision.so has been successfully compiled on Ubuntu, and now we proceed with cross-compilation of the library for ARM-based mobile devices running Android.
Cross Compilation
The simplest approach is to use the Eclipse IDE, which provides features for adding native C/C++ support to an existing Android project.
The project name is AndroidGesture. Right-click on the Android project and
select Android Tools -> Add native support.
Enter the desired library name as OpenVision.
This will configure the Android project for the native build and create a jni folder with an OpenVision.cpp file and an associated Android.mk makefile.
Copy all the C/C++ project files into the jni folder and then modify the Android.mk file to configure the native build.
Create a directory called OpenVision in the jni directory.
Copy all of the following files in the ImageApp subdirectory
  • UniStrokeGestureRecognizer.cpp
  • UniStrokeGestureRecognizer.hpp
  • UniStrokeGesture.cpp
  • UniStrokeGesture.hpp
  • GesturePoint.cpp
  • GesturePoint.hpp
Copy the file OpenCVCommon.hpp in the Common Subdirectory
The present code uses OpenCV libraries. Copy the attached OpenCV pre-compiled libraries for ARM into the libs/armeabi and libs/armeabi2 directories.
MakeFiles
Below are the contents of the Android.mk file. This is like a standard makefile containing the include paths, source files, library dependencies etc. A few of the constructs are specific to the Android build system, and explanations are provided in the comments.
Android.mk file
LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)

# name of the library to be built
LOCAL_MODULE    := OpenVision
#list of source files to be build as part of the library
LOCAL_SRC_FILES := ImgApp/GesturePoint.cpp ImgApp/UniStrokeGesture.cpp ImgApp/UniStrokeGestureRecognizer.cpp

# list of dependent 3rd party or external libraries are included in the LOCAL_SHARED_LIBRARY variable
LOCAL_SHARED_LIBRARIES := $(foreach module,$(OPENCV_LIBS3),opencv_$(module)) 
OPENCV_MODULES3:=core imgproc flann contrib features2d video  highgui legacy ml objdetect  
OPENCV_LIBS3:=$(OPENCV_MODULES3)

# list of dependent system libraries
LOCAL_LDLIBS +=  -fPIC -llog -ldl -lm  -lz -lm -lc -lgcc -Wl,-rpath,'libs/armeabi-v7a' 
LOCAL_LDLIBS += -L$(LOCAL_PATH)/../libs/armeabi -llog -Llibs/armeabi-v7a/ 

# include path for header files for C and C++ applications
LOCAL_C_INCLUDES +=/usr/local/include /usr/local/include/opencv2 /home/pi19404/repository/OpenVision/OpenVision
LOCAL_CPP_INCLUDES +=/usr/local/include /usr/local/include/opencv2 /home/pi19404/repository/OpenVision/OpenVision
#The compilation flags for C/C++ applications
LOCAL_CPPFLAGS += -DHAVE_NEON -fPIC -DANDROID -I/usr/local/include/opencv  -I/usr/local/include   -I/OpenVision -I/home/pi19404/repository/OpenVision/OpenVision -fPIC
LOCAL_CFLAGS += -DHAVE_NEON -fPIC -DANDROID  -I/usr/local/include/opencv -I/usr/local/include -I/OpenVision -I/home/pi19404/repository/OpenVision/OpenVision -fPIC
LOCAL_CPP_FEATURES += exceptions

#statement specifies build of a shared library
include $(BUILD_SHARED_LIBRARY)

#files in the libs/armeabi are deleted during each build
#we need to have 3rd party opencv libraries in this directory
#the files are placed in the armeabi2 directory
#when ever a native build is trigged the opencv library files specified in the OPENCV_MODULES2
#variable are copied from the armeabi2 directory to the armeabi or armeabi-v7a directory
#as per the specification of APP_ABI in the Application.mk file

include $(CLEAR_VARS)
OPENCV_MODULES2:= calib3d contrib  core features2d flann highgui imgproc  legacy ml nonfree objdetect photo stitching  video videostab
OPENCV_LIBS2:=$(OPENCV_MODULES2)  
OPENCV_LIB_SUFFIX:=so
OPENCV_LIB_TYPE:=SHARED


define add_opencv_module1 
include $(CLEAR_VARS)
 LOCAL_PATH := libs/armeabi2
 LOCAL_MODULE:=aaaopencv_$1
 LOCAL_SRC_FILES:=libopencv_$1.$(OPENCV_LIB_SUFFIX)
 include $(PREBUILT_$(OPENCV_LIB_TYPE)_LIBRARY) 
endef

$(foreach module,$(OPENCV_LIBS2),$(eval $(call add_opencv_module1,$(module))))

Application.mk make file
APP_ABI :=   armeabi-v7a armeabi
APP_STL := gnustl_static
APP_PLATFORM    := android-8
APP_CPPFLAGS := -frtti -fexceptions -ftree-vectorize  -mfpu=neon -O3 -mfloat-abi=softfp -ffast-math

After building the project, the libOpenVision.so files can be found in the libs/armeabi and libs/armeabi-v7a directories. These have been cross-compiled for use on Android-based devices and can now be loaded and called from a Java application using the JNI interface.

Files

The pre-compiled OpenCV libraries for Android can be found at www.github.com/pi19404/OpenCVAndroid
The source and make files used above can be found in the OpenVision repository at www.github.com/pi19404/OpenVision
The Android.mk and Application.mk files and the contents of the jni directory can be found below.

Thursday, 24 April 2014

Adaptive Skin Color Detector


Introduction

In this article we will look at the adaptive skin color detection technique described in the paper [1].
  • Skin color can be efficiently represented using the hue channel of the HSV color space.
  • A static/global skin color detector can be specified by lower and upper hue thresholds.
  • The hue range specified for the global skin color detector should detect the actual skin-colored pixels, but it will also falsely detect some non-skin pixels belonging to the background or to objects with a hue similar to skin, like wood.
  • The amount of falsely detected pixels may be large in some situations compared to actual skin pixels, if a significant part of the scene contains objects with a hue similar to skin color.
  • The choice of image acquisition system, lighting conditions, pre-processing etc. affects the choice of hue thresholds.
  • Hence the optimum thresholds need to be decided adaptively.
  • In HCI applications, hand or face regions are used to communicate with the computer, and the dominant motion in the scene is assumed to belong to hand and skin pixels.
  • One of the ways to detect regions belonging to skin is motion tracking.
  • The hue thresholds are adaptively changed by observing the motion of skin-colored pixels in the image.
  • Thus the first step is to determine the in-motion skin-colored pixels.

    Global Skin Color Detector

  • A global skin color detector is specified by lower and upper hue thresholds and lower and upper intensity thresholds.
  • The initial lower and upper hue thresholds are chosen as 3 and 33.
  • This hue range is a generic threshold that will cover all the possible skin colors.
  • Due to the generic nature of the skin thresholds, some background objects whose hue is similar to skin, or falls within the specified range, may also be detected.
  • The initial lower and upper intensity thresholds are chosen as 15 and 250.
  • These thresholds are chosen to avoid over- or under-exposed regions of the image.
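The thresholds above can be expressed as a simple per-pixel classifier. The sketch below uses the initial values from the text; in practice this check would be applied to the H and V channels of the HSV image (for instance with cv::inRange), and the structure name is an assumption:

```cpp
// sketch of the global (static) skin classifier described above:
// a pixel is skin if its hue lies in [3, 33] and its intensity in [15, 250]
struct GlobalSkinDetector {
    int hueLow = 3,  hueHigh = 33;   // initial hue thresholds
    int valLow = 15, valHigh = 250;  // intensity thresholds to skip over/under-exposed pixels

    bool isSkin(int hue, int value) const {
        return hue >= hueLow && hue <= hueHigh &&
               value >= valLow && value <= valHigh;
    }
};
```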

    Skin Color Histogram

  • The filtered skin-colored pixels can be used to construct a skin color histogram, which represents the statistical distribution of the skin-colored pixels in the scene.
  • In the case of the global skin color detector, this histogram also accumulates data from background pixels.
  • Let us now assume that we know which pixels belong to hand or face regions.
  • We again compute the histogram of these skin-colored pixels.
  • The actual skin color histogram is merged with the original skin color histogram.
  • The histograms are combined using a weighted average, H = a * H_roi + (1 - a) * H_global.
  • A good result is obtained by choosing a value in the range 0.02-0.05 for a.
  • For each frame, the range of hue thresholds is re-calculated based on the new histogram.
  • The criterion used for selection of the lower and upper thresholds is that the area under the histogram between them covers f% of the total.
  • In the paper a criterion of 90-96% was used.
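The merging and threshold-selection steps can be sketched as below. The histogram normalization, the weight a, and the coverage fraction f follow the text; the function names and the strategy of trimming equal mass from each tail are assumptions:

```cpp
#include <vector>
#include <utility>

// blend the normalized global and ROI hue histograms with weight a
std::vector<float> mergeHistograms(std::vector<float> global,
                                   std::vector<float> roi, float a)
{
    auto normalize = [](std::vector<float> &h) {
        float s = 0; for (float v : h) s += v;
        if (s > 0) for (float &v : h) v /= s;
    };
    // normalize both histograms to avoid bias from pixel counts
    normalize(global); normalize(roi);
    for (std::size_t i = 0; i < global.size(); i++)
        global[i] = (1 - a) * global[i] + a * roi[i];
    return global;
}

// select [low, high] hue thresholds so the area under the histogram
// inside the range covers the fraction f (e.g. 0.90-0.96),
// trimming equal mass from each tail
std::pair<int,int> selectThresholds(const std::vector<float> &h, float f)
{
    float tail = (1.0f - f) / 2.0f;
    int low = 0, high = (int)h.size() - 1;
    float acc = 0;
    while (low < high && acc + h[low] <= tail) acc += h[low++];
    acc = 0;
    while (high > low && acc + h[high] <= tail) acc += h[high--];
    return { low, high };
}
```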

    (Figures: original image, detection with the global thresholds, detection with the new thresholds)
  • Without any additional information, the criterion used for selection of the lower and upper thresholds is simply instrumental in removing outliers.
  • In the figures shown, some background pixels and pixels belonging to hair, lips etc. are also detected as skin-colored pixels.
  • Now let us consider pixels which belong to the face. This is given manually by specifying a mask: pixels in the ROI (168,63,50,50) are explicitly marked as skin-colored pixels.
  • A histogram is computed by considering only the pixels in the ROI.
  • The global histogram and the newly constructed histogram are combined by performing a weighted average.
  • Since the histogram computed over the entire image contains many more pixels than the one computed over the small ROI, the histograms are normalized to the range 0 to 1 before computing the linear combination, to avoid bias due to pixel counts.
  • Then we determine the range between which 90% of the pixels lie.
  • The hue range corresponding to this is (13,16).
  • The skin color detected using the new range is shown in the figure below.

    (Figures: detection with the new thresholds, and the resulting skin image)
  • Thus, incorporating cues about skin color enhances the detection performance for skin-colored pixels.
  • In the above example the cue was incorporated manually; if we can incorporate the cue automatically, the process of skin color detection becomes completely adaptive.
  • Some techniques suggested in the paper are based on motion cues like frame differencing and optical-flow tracking.
  • These techniques are suitable for HCI applications, assuming the object of interest is in motion.
  • Frame differencing provides a simple method to determine the regions where motion occurred and to use those pixels.
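The frame-differencing cue can be sketched as below: pixels whose intensity changed by more than a threshold between consecutive grayscale frames are marked as in motion, and skin-colored pixels inside this mask would then be used to update the histogram. The function name and threshold value are assumptions:

```cpp
#include <vector>
#include <cstdlib>

// mark pixels whose intensity changed by more than `threshold`
// between the previous and current grayscale frames (255 = in motion)
std::vector<unsigned char> motionMask(const std::vector<unsigned char> &prev,
                                      const std::vector<unsigned char> &curr,
                                      int threshold = 25)
{
    std::vector<unsigned char> mask(curr.size(), 0);
    for (std::size_t i = 0; i < curr.size(); i++)
        if (std::abs((int)curr[i] - (int)prev[i]) > threshold)
            mask[i] = 255;
    return mask;
}
```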

    Code

    The code for the same can be found in the OpenVision repository at https://github.com/pi19404/OpenVision. The class AdaptiveSkinDetector encapsulates the methods for implementing the skin detector; the code can be found in the files ImgProc/adaptiveskindetector.cpp and ImgProc/adaptiveskindetector.hpp. For histogram computation the class Histogram is used, which can be found in the files ImgProc/Histogram.cpp and ImgProc/Histogram.hpp.
    #include "ImgProc/adaptiveskindetector.hpp"
    ...
    AdaptiveSkinDetector ss1;
    ...
    Mat hmask;
    ss1.run(image,hmask);
    ...
References

  1. Farhad Dadgostar and Abdolhossein Sarrafzadeh. "An Adaptive Real-time Skin Detector Based on Hue Thresholding: A Comparison on Two Motion Tracking Methods". In: Pattern Recognition Letters 27.12 (Sept. 2006), pp. 1342-1352. ISSN: 0167-8655. DOI: 10.1016/j.patrec.2006.01.007. URL: http://dx.doi.org/10.1016/j.patrec.2006.01.007.


The PDF version of the document can be found below