A “cool” application of different aspects of image warping is image mosaicing. Using two or more photographs, you can create an image mosaic by registering, projectively warping, resampling, and compositing them. One significant method used in this project was the computation of homographies, which were used to warp the images.
I took photos at multiple locations by rotating my iPhone camera on a flat surface.
I implemented the approach from lecture and discussion in my code. Using the input points, im1_pts and im2_pts, I created the arrays below and applied least squares to solve for the homography matrix H.
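The least-squares setup can be sketched as below. This is a minimal stand-in for the project code, assuming the point arrays are (n, 2) with (x, y) rows and at least 4 correspondences:

```python
import numpy as np

def compute_homography(im1_pts, im2_pts):
    """Least-squares homography mapping im1_pts onto im2_pts."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(im1_pts, im2_pts):
        # each correspondence contributes two rows of the system A h = b
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
        b.extend([xp, yp])
    h, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return np.append(h, 1).reshape(3, 3)  # fix the scale by setting H[2, 2] = 1
```

With more than 4 correspondences the system is overdetermined, and least squares finds the H minimizing the squared residual.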
First, I set the source points to be the 4 corners of the input image. I calculated the destination points by multiplying the source by the homography matrix, H * source, and normalized with destination[:2] / destination[2]. Next, I shifted the destination points so that the upper-left point is at (0, 0). I then created a new output array sized to fit the shifted points. Next, I used the same inverse-warping procedure as in Project 3, except that I shifted the polygon returned by skimage.draw.polygon. Finally, I displayed the resulting warped image.
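The warping steps can be sketched as follows. This is a simplified illustration that samples a dense output grid with nearest-neighbor lookup rather than using skimage.draw.polygon, so it shows the idea of inverse warping, not the exact project code:

```python
import numpy as np

def warp_image(im, H):
    """Inverse-warp im with homography H (nearest-neighbor sketch)."""
    h, w = im.shape[:2]
    # forward-map the 4 corners to find the output bounding box
    corners = np.array([[0, 0, 1], [w - 1, 0, 1],
                        [0, h - 1, 1], [w - 1, h - 1, 1]], float).T
    dst = H @ corners
    dst = dst[:2] / dst[2]
    x_min, y_min = dst.min(axis=1)
    out_w = int(np.ceil(dst[0].max() - x_min)) + 1
    out_h = int(np.ceil(dst[1].max() - y_min)) + 1
    # shift so the upper-left destination point lands at (0, 0),
    # then send every output pixel back through H^-1 (inverse warping)
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:out_h, 0:out_w]
    pts = np.stack([xs.ravel() + x_min, ys.ravel() + y_min,
                    np.ones(xs.size)])
    src = Hinv @ pts
    src = src[:2] / src[2]
    sx = np.clip(np.round(src[0]).astype(int), 0, w - 1)
    sy = np.clip(np.round(src[1]).astype(int), 0, h - 1)
    out = im[sy, sx].reshape(out_h, out_w, -1)
    # zero out pixels whose preimage fell outside the source image
    valid = ((src[0] >= -0.5) & (src[0] <= w - 0.5) &
             (src[1] >= -0.5) & (src[1] <= h - 0.5)).reshape(out_h, out_w)
    out[~valid] = 0
    return out, (x_min, y_min)
```

Returning the (x_min, y_min) offset alongside the pixels makes it possible to place the warped image correctly on a larger mosaic canvas later.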
First, I found 4 source points in the input image that form a rectangle in the real world (but don't appear as one due to camera perspective). I then used the image dimensions to generate 4 destination points in the shape of a rectangle. I computed the homography from the source to the destination, and warped the image onto the rectangular plane with that homography.
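Rectification can be sketched as below. The corner coordinates are hypothetical (illustrative clicks on a planar surface photographed at an angle, not the actual points used in the project), and skimage's ProjectiveTransform stands in for the homography solver:

```python
import numpy as np
from skimage import transform

# hypothetical corner clicks, as (x, y) in source-image coordinates,
# ordered top-left, top-right, bottom-right, bottom-left
src = np.array([[120, 80], [530, 60], [560, 420], [100, 400]], float)

# destination: an axis-aligned rectangle sized from the source extents
w = int(np.linalg.norm(src[1] - src[0]))
h = int(np.linalg.norm(src[3] - src[0]))
dst = np.array([[0, 0], [w, 0], [w, h], [0, h]], float)

# with exactly 4 correspondences this solves the same system that the
# least-squares homography does
tform = transform.ProjectiveTransform()
tform.estimate(src, dst)
# rectified = transform.warp(image, tform.inverse, output_shape=(h, w))
```

The last (commented) line shows how the estimated transform would be applied to an image to produce the rectified view.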
In this part, I followed the approach from lecture again. First, I warped the right image onto the left. Next, I computed distance transforms of the left image and the warped right image using scipy.ndimage.distance_transform_edt. Then I applied the Python equivalent of logical(dtrans1 > dtrans2) from lecture to generate the mask. Finally, I used the blending code from Project 2 to blend everything together and generate the mosaics.
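The mask construction can be sketched as follows, assuming alpha1 and alpha2 (names are mine, not the project's) are binary coverage masks of the two warped images on a shared mosaic canvas:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def seam_mask(alpha1, alpha2):
    """Seam mask from distance transforms: True where image 1 wins."""
    # distance from each covered pixel to the edge of its image's footprint
    dtrans1 = distance_transform_edt(alpha1)
    dtrans2 = distance_transform_edt(alpha2)
    # Python equivalent of MATLAB's logical(dtrans1 > dtrans2)
    return dtrans1 > dtrans2
```

Each pixel is assigned to whichever image's interior it lies deeper inside, which places the seam along the middle of the overlap region.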
In Project 4A, we stitched mosaics with manually selected points. In this part, we automate the stitching process. Interest points are detected with the Harris Interest Point Detector and filtered with ANMS before being matched. RANSAC is then used to reject incorrect matches. Finally, we obtain a set of point correspondences and a homography matrix that we can use to warp and mosaic.
In order to find the interest points for each image, I first converted to grayscale before using the Harris Interest Point Detector code from the course website.
Next, to restrict the number of interest points, I implemented Adaptive Non-Maximal Suppression (ANMS). I followed the formula r_i = min_j |x_i - x_j|, s.t. f(x_i) < c_robust * f(x_j), x_j ∈ I from the paper, handling infinite radii by replacing them with the largest non-infinite radius. I used n_ip = 50 interest points and c_robust = 0.9.
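The suppression-radius computation can be sketched as below, assuming coords is an (n, 2) array of interest-point coordinates and strengths the corresponding Harris responses (a simple O(n²) sketch, not the project code):

```python
import numpy as np

def anms(coords, strengths, n_ip=50, c_robust=0.9):
    """Adaptive non-maximal suppression: keep the n_ip largest-radius points."""
    n = len(coords)
    radii = np.full(n, np.inf)
    for i in range(n):
        # points that sufficiently dominate point i: f(x_i) < c_robust * f(x_j)
        stronger = strengths[i] < c_robust * strengths
        if stronger.any():
            d = np.linalg.norm(coords[stronger] - coords[i], axis=1)
            radii[i] = d.min()  # r_i = distance to nearest dominating point
    # globally strongest points get infinite radii; cap them at the
    # largest finite radius, as described above
    finite = np.isfinite(radii)
    if finite.any():
        radii[~finite] = radii[finite].max()
    keep = np.argsort(-radii)[:n_ip]
    return coords[keep]
```

Selecting by radius rather than raw strength spreads the kept points across the image instead of clustering them on the few strongest corners.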
First, I applied my Gaussian blur function from Project 2 to blur the grayscale version of each image. Next, following the paper, I sampled a 40x40 window around every interest point, resized it to an 8x8 patch, and normalized it. Finally, I stored a flattened version of each patch in an (n_ip, 64) array.
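A sketch of the descriptor extraction is below. It uses scipy's gaussian_filter in place of the Project 2 blur, takes coords as (row, col) integer pairs, and assumes every point lies at least 20 pixels from the image border:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.transform import resize

def extract_descriptors(gray, coords, patch=40, out=8):
    """Axis-aligned MOPS-style descriptors: (n_ip, out*out) array."""
    blurred = gaussian_filter(gray, sigma=2)  # blur before downsampling
    half = patch // 2
    descs = []
    for r, c in coords:
        window = blurred[r - half:r + half, c - half:c + half]
        small = resize(window, (out, out), anti_aliasing=True)
        # bias/gain normalization: zero mean, unit standard deviation
        small = (small - small.mean()) / (small.std() + 1e-8)
        descs.append(small.ravel())
    return np.array(descs)
```

Normalizing each patch makes the descriptors invariant to affine changes in image brightness between the two photographs.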
To match features between the two images, I once again followed the paper. First, I set one image as the reference. I then found the 1- and 2-nearest neighbors for each reference feature by computing the distances between every pair of features across the two images. Finally, I filtered the candidate matches with Lowe's ratio test, 1-NN / 2-NN < threshold, and returned an array of matching indices. I used a threshold of 0.8 in this part.
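The ratio-test matching can be sketched as follows, assuming desc1 and desc2 are the (n_ip, 64) descriptor arrays from the previous step, with desc1 as the reference:

```python
import numpy as np

def match_features(desc1, desc2, threshold=0.8):
    """Lowe-ratio matching: returns (i, j) index pairs into desc1/desc2."""
    # pairwise Euclidean distances between every descriptor pair
    dists = np.sqrt(((desc1[:, None, :] - desc2[None, :, :]) ** 2).sum(-1))
    nn = np.argsort(dists, axis=1)[:, :2]  # 1-NN and 2-NN per reference row
    matches = []
    for i, (j1, j2) in enumerate(nn):
        # keep the match only if the 1-NN clearly beats the 2-NN
        if dists[i, j1] / (dists[i, j2] + 1e-12) < threshold:
            matches.append((i, j1))
    return np.array(matches)
```

The intuition behind the ratio test is that a correct match should be much closer than the second-best candidate; ambiguous features, whose two nearest neighbors are similar, are discarded.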
In the previous part, it can be observed that some points are matched incorrectly, so we use RANSAC to reduce these inaccuracies. For this part, I implemented 4-point RANSAC, following the lecture slides exactly. To calculate homographies, I used computeH from Project 4A, slightly modifying how I performed operations with the homography matrix to account for array shapes and the flipped x and y coordinates. My RANSAC function outputs the largest set of inliers and its homography matrix, as outlined in the paper.
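The RANSAC loop can be sketched as below. The bundled compute_h is a generic least-squares solver standing in for Project 4A's computeH, and the iteration count and inlier threshold are illustrative choices, not the project's:

```python
import numpy as np

def compute_h(pts1, pts2):
    """Least-squares homography (stand-in for Project 4A's computeH)."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(pts1, pts2):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
        b.extend([xp, yp])
    h, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return np.append(h, 1).reshape(3, 3)

def ransac(pts1, pts2, n_iter=500, eps=2.0, seed=0):
    """4-point RANSAC: largest inlier set and its refit homography."""
    n = len(pts1)
    homog = np.hstack([pts1, np.ones((n, 1))])
    best_inliers = np.arange(0)
    rng = np.random.default_rng(seed)
    for _ in range(n_iter):
        # fit an exact homography to a random 4-point sample
        sample = rng.choice(n, 4, replace=False)
        H = compute_h(pts1[sample], pts2[sample])
        # project all pts1 through H and measure reprojection error
        proj = H @ homog.T
        proj = (proj[:2] / proj[2]).T
        err = np.linalg.norm(proj - pts2, axis=1)
        inliers = np.flatnonzero(err < eps)
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # refit the homography on the largest inlier set
    return best_inliers, compute_h(pts1[best_inliers], pts2[best_inliers])
```

Because a single bad correspondence can badly skew a least-squares fit, refitting only on the consensus set is what makes the final homography robust to mismatches.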
Based on the points and homographies from RANSAC, I warped the right image onto the left using a slightly modified warpImage method (to account for different input shapes). The rest of the mosaicing process is identical to Project 4A.
The coolest thing I learned in this project was how we're able to mathematically match features, and how random sampling processes like RANSAC can reduce inaccuracies. Also, I really liked how autostitching can end up stitching images better than manually selecting points.