Feature Matching in Android

The feature matching algorithm in this tutorial is written in Java with OpenCV4Android.

My purpose in writing this code is to find the perspective transformation between a pattern and an object in a scene. The basic steps to find a homography are: 1) keypoint detection, 2) descriptor calculation, 3) coarse matching, 4) finer matching, and 5) computing the transformation matrix. (Let's assume that the pattern has some distinctive features and that we are dealing with gray-scale images.)

First of all, we have to decide which feature to use for keypoint and descriptor calculation. SIFT or SURF features are usually preferred, but they are not compiled into OpenCV4Android because they are non-free. If you don't want to compile those libraries separately, you can use FAST or ORB features instead; they also work decently (a sketch of these alternatives follows the code below). As for the matching metric, the Hamming distance, L1, and L2 norms are the usual options.

// ORB provides both a detector and an extractor, and its binary
// descriptors are matched with the Hamming norm
FeatureDetector orbDetector = FeatureDetector.create(FeatureDetector.ORB);
DescriptorExtractor orbExtractor = DescriptorExtractor.create(DescriptorExtractor.ORB);
DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
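
If you go the FAST route, or manage to compile the non-free modules, the alternatives are created through the same factory API. The sketch below is illustrative rather than part of my pipeline: FAST only provides a detector, so it is paired here with the ORB extractor, and float descriptors such as SIFT/SURF would be matched with an L2 norm rather than Hamming.

// FAST detects keypoints quickly but has no descriptor of its own,
// so an extractor such as ORB is still needed
FeatureDetector fastDetector = FeatureDetector.create(FeatureDetector.FAST);
DescriptorExtractor extractorForFast = DescriptorExtractor.create(DescriptorExtractor.ORB);

// for float descriptors (e.g. SIFT/SURF, compiled separately), use the L2 norm
DescriptorMatcher l2Matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE);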

For each of the two images (the pattern and the scene), we compute its keypoints and descriptors.

Mat descriptors1 = new Mat();
MatOfKeyPoint keypoints1 = new MatOfKeyPoint();
orbDetector.detect(image1, keypoints1);
orbExtractor.compute(image1, keypoints1, descriptors1);
// repeat the same detect/compute calls on image2 to get keypoints2 and descriptors2

Then we match the two sets of descriptors. Before this step, it's better to check the sizes of descriptors1 and descriptors2 to see if they have enough points.
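
A minimal sanity check could look like the following; the bail-out value of 4 is my assumption here, chosen because a homography needs at least four correspondences.

if (descriptors1.rows() < 4 || descriptors2.rows() < 4) {
  // too few features to estimate a perspective transform; skip this frame
  return;
}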

MatOfDMatch matches = new MatOfDMatch();
// descriptors1 is the query set and descriptors2 the train set,
// so one best match is returned per descriptor in descriptors1
matcher.match(descriptors1, descriptors2, matches);

We want to discard the poor matches, so we set a threshold and keep only the matches whose distance falls below it.

List<DMatch> matchesList = matches.toList();

// find the smallest match distance to use as a reference for the threshold
double min_dist = Double.MAX_VALUE;
for (DMatch m : matchesList) min_dist = Math.min(min_dist, m.distance);

LinkedList<DMatch> good_matches = new LinkedList<DMatch>();
MatOfDMatch gm = new MatOfDMatch();
for (int i = 0; i < matchesList.size(); i++) {
  if (matchesList.get(i).distance < 3 * min_dist) { // 3*min_dist is my threshold here
    good_matches.addLast(matchesList.get(i));
  }
}
gm.fromList(good_matches);
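
As an aside, another common way to filter (not what this tutorial uses) is Lowe's ratio test on the two nearest neighbours returned by knnMatch: a match is kept only when its best distance is clearly smaller than the second-best one. A rough sketch, where the 0.75 ratio is a conventional value rather than anything tuned here:

// sketch of the ratio test as an alternative to the min_dist threshold
List<MatOfDMatch> knnMatches = new ArrayList<MatOfDMatch>();
matcher.knnMatch(descriptors1, descriptors2, knnMatches, 2);
LinkedList<DMatch> ratio_good = new LinkedList<DMatch>();
for (MatOfDMatch mm : knnMatches) {
  DMatch[] pair = mm.toArray();
  if (pair.length >= 2 && pair[0].distance < 0.75f * pair[1].distance) {
    ratio_good.addLast(pair[0]);
  }
}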

Once we have a list of good matches, we can extract the corresponding pairs of points from the two images. Because descriptors1 was passed to match() as the query set and descriptors2 as the train set, queryIdx indexes keypoints in image1 while trainIdx indexes keypoints in image2.

// convert the MatOfKeyPoint objects to lists so we can index them
List<KeyPoint> keypoints1_List = keypoints1.toList();
List<KeyPoint> keypoints2_List = keypoints2.toList();

LinkedList<Point> objList = new LinkedList<Point>();
LinkedList<Point> sceneList = new LinkedList<Point>();
for (int i = 0; i < good_matches.size(); i++) {
  objList.addLast(keypoints2_List.get(good_matches.get(i).trainIdx).pt);   // points in image2
  sceneList.addLast(keypoints1_List.get(good_matches.get(i).queryIdx).pt); // points in image1
}

The inputs to getPerspectiveTransform have to be in Mat format, so we convert the linked lists to MatOfPoint2f and then compute the transformation matrix between the two sets of matching points. Note that getPerspectiveTransform expects exactly four point pairs; a more robust option for a larger set of matches is sketched after the code.

MatOfPoint2f obj = new MatOfPoint2f();
MatOfPoint2f scene = new MatOfPoint2f();
obj.fromList(objList);
scene.fromList(sceneList);
Mat perspectiveTransform = Imgproc.getPerspectiveTransform(obj, scene);
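
When more than four good matches survive the filtering, the usual approach (not the one this tutorial takes) is Calib3d.findHomography with RANSAC, which tolerates remaining outliers. The sketch below assumes the obj and scene matrices from above, and that pattern is the Mat holding the pattern image; the reprojection threshold of 3 pixels is a common default, not a tuned value.

// robust alternative: estimate the homography with RANSAC
Mat H = Calib3d.findHomography(obj, scene, Calib3d.RANSAC, 3);

// the matrix can then project the pattern's corners into the scene,
// e.g. to draw a bounding box around the detected object
MatOfPoint2f patternCorners = new MatOfPoint2f(
    new Point(0, 0),
    new Point(pattern.cols(), 0),
    new Point(pattern.cols(), pattern.rows()),
    new Point(0, pattern.rows()));
MatOfPoint2f sceneCorners = new MatOfPoint2f();
Core.perspectiveTransform(patternCorners, sceneCorners, H);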
