c++ - OpenCV stitch images by warping both


I have found a lot of questions and answers about image stitching and warping with OpenCV, but I still could not find an answer to my question.

I have two fisheye cameras which I calibrated, so the distortion is removed in both images.

Now I want to stitch those rectified images together. So I pretty much follow the example that is mentioned in a lot of other stitching questions: image stitching example.

So I do keypoint and descriptor detection, find matches and compute the homography matrix, and then warp one of the images. This gives me a stretched image as a result, while the other image stays untouched. It is this stretching that I want to avoid. I found a nice solution here: stretch solution.
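Roughly, what I currently do looks like the following sketch (img1 and img2 are the two rectified images; ORB is just one possible choice for the detector/descriptor):

    #include <opencv2/opencv.hpp>
    #include <vector>

    // img1 and img2 are the two rectified (undistorted) camera images
    cv::Mat stitchWarpOne(const cv::Mat& img1, const cv::Mat& img2)
    {
        // Keypoints and descriptors (ORB here; SIFT/AKAZE would work the same way)
        cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);
        std::vector<cv::KeyPoint> kp1, kp2;
        cv::Mat desc1, desc2;
        orb->detectAndCompute(img1, cv::noArray(), kp1, desc1);
        orb->detectAndCompute(img2, cv::noArray(), kp2, desc2);

        // Match descriptors
        cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
        std::vector<cv::DMatch> matches;
        matcher.match(desc1, desc2, matches);

        // Homography that maps img2 into the frame of img1 (RANSAC rejects outliers)
        std::vector<cv::Point2f> pts1, pts2;
        for (const cv::DMatch& m : matches) {
            pts1.push_back(kp1[m.queryIdx].pt);
            pts2.push_back(kp2[m.trainIdx].pt);
        }
        cv::Mat H = cv::findHomography(pts2, pts1, cv::RANSAC, 3.0);

        // Warp img2 onto img1's image plane: img2 gets stretched, img1 stays untouched
        cv::Mat pano;
        cv::warpPerspective(img2, pano, H, cv::Size(img1.cols * 2, img1.rows));
        img1.copyTo(pano(cv::Rect(0, 0, img1.cols, img1.rows)));
        return pano;
    }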

On slide 7 you can see that both images are warped. I think this reduces the stretching of a single image (in my opinion the stretching is split between the two images, for example 50:50). If I am wrong, please tell me.

The problem I have is that I don't know how to warp the two images so that they fit together. Do I have to calculate two homographies? Do I have to define a reference plane like a Rect() or something? How can I achieve a warping result like the one shown on slide 7?

Just to make it clear: I am not studying at TU Dresden, I only found this presentation while doing my research.

Warping one of the two images into the coordinate frame of the other is more common because it is easier: one can directly compute the 2D warping transformation from image correspondences.

Warping both images into a new coordinate frame is possible but more complex, because it involves 3D transformations and requires accurately defining a new 3D coordinate frame with respect to the initial two.

The basic idea is (very roughly) represented in the hand drawing on slide #2 of the linked presentation. I made a bigger one:

(hand drawing: the two original image planes and the new common image plane between them)

Basically, the procedure is as follows (rough OpenCV sketches of the individual steps are given after the list):

  1. If the cameras are calibrated, you can estimate the relative 3D pose between the two images exclusively from feature correspondences: by computing the fundamental matrix, deducing the essential matrix from it [HZ03, paragraph 9.6 and equation 9.12], and deducing the relative pose [HZ03, paragraph 9.6.2]. Hence, you can estimate for example the 3D rigid transformation T_{2<-1} mapping the coordinate frame of img1 onto the coordinate frame of img2:

T_{2<-1} = R_{2<-1} * [ I3 | 0 ]

  2. From this, you can accurately define the image plane of the new image with respect to the other two. For example:

T_{n<-1} = square_root( R_{2<-1} ) * [ I3 | 0 ]

T_{n<-2} = T_{n<-1} * T_{2<-1}^-1

  3. From these two relative poses, you can derive the pixel 2D transformations that warp the two images into the new image plane [HZ03, example 13.2]. Basically, the warping homographies mapping respectively img1 to the new image and img2 to the new image are:

H_{n<-1} = K * R_{n<-1} * K^-1

H_{n<-2} = K * R_{n<-2} * K^-1

  4. Then you can compute the range of valid pixels (i.e. xmin, xmax, ymin, ymax) in the new image plane, crop it accordingly and form the new image.
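For illustration, here is a minimal sketch of steps 1 to 3 with OpenCV. It assumes pts1/pts2 are matched, undistortion-corrected pixel coordinates from img1 and img2, and that both cameras share the same camera matrix K after rectification; the helper names are just placeholders:

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Step 1: relative pose T_{2<-1} = (R21, t21) from matched, undistorted points
    void estimateRelativePose(const std::vector<cv::Point2f>& pts1,
                              const std::vector<cv::Point2f>& pts2,
                              const cv::Mat& K,
                              cv::Mat& R21, cv::Mat& t21)
    {
        cv::Mat inliers;
        cv::Mat E = cv::findEssentialMat(pts1, pts2, K, cv::RANSAC, 0.999, 1.0, inliers);
        // recoverPose resolves the four-fold decomposition ambiguity via a cheirality check
        cv::recoverPose(E, pts1, pts2, K, R21, t21, inliers);
    }

    // Step 2: R_{n<-1} = square_root(R_{2<-1}), i.e. a rotation about the same axis by half the angle
    cv::Mat halfRotation(const cv::Mat& R21)
    {
        cv::Mat rvec;
        cv::Rodrigues(R21, rvec);          // rotation matrix -> axis-angle vector
        cv::Mat halfVec = 0.5 * rvec;      // half the angle, same axis
        cv::Mat Rn1;
        cv::Rodrigues(halfVec, Rn1);
        return Rn1;
    }

    // Step 3: homographies H_{n<-1} = K * R_{n<-1} * K^-1 and H_{n<-2} = K * R_{n<-2} * K^-1
    void warpingHomographies(const cv::Mat& K, const cv::Mat& R21,
                             cv::Mat& Hn1, cv::Mat& Hn2)
    {
        cv::Mat Rn1 = halfRotation(R21);
        cv::Mat Rn2 = Rn1 * R21.t();       // R_{n<-2} = R_{n<-1} * R_{2<-1}^-1
        Hn1 = K * Rn1 * K.inv();
        Hn2 = K * Rn2 * K.inv();
    }

Note that the translation returned by recoverPose is only known up to scale; for the pure-rotation stitching model below, only the rotations are actually used.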

Note that step #3 assumes the images were taken from the same point in space (pure camera rotation), otherwise there would be some parallax between the images, which would produce visible stitching imperfections.
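Under that pure-rotation assumption, a sketch of step 4 could look like the following: project the image corners with H_{n<-1} and H_{n<-2} to get the range of valid pixels, shift by an offset so all coordinates are positive, then warp both images into the common plane (a simple overwrite is used here instead of proper seam blending):

    #include <opencv2/opencv.hpp>
    #include <vector>

    cv::Mat warpBothImages(const cv::Mat& img1, const cv::Mat& img2,
                           const cv::Mat& Hn1, const cv::Mat& Hn2)
    {
        auto corners = [](const cv::Mat& img) {
            return std::vector<cv::Point2f>{
                {0.f, 0.f}, {(float)img.cols, 0.f},
                {(float)img.cols, (float)img.rows}, {0.f, (float)img.rows}};
        };

        // Project the corners of both images into the new plane -> xmin/xmax/ymin/ymax
        std::vector<cv::Point2f> c1, c2, all;
        cv::perspectiveTransform(corners(img1), c1, Hn1);
        cv::perspectiveTransform(corners(img2), c2, Hn2);
        all.insert(all.end(), c1.begin(), c1.end());
        all.insert(all.end(), c2.begin(), c2.end());
        cv::Rect bbox = cv::boundingRect(all);

        // Translation that moves the bounding box to the origin of the new image
        cv::Mat offset = (cv::Mat_<double>(3, 3) << 1, 0, -bbox.x,
                                                    0, 1, -bbox.y,
                                                    0, 0, 1);
        cv::Mat T1 = offset * Hn1;
        cv::Mat T2 = offset * Hn2;

        // Warp both images (and a validity mask for the second one) into the new plane
        cv::Mat pano, warped2, mask2;
        cv::warpPerspective(img1, pano, T1, bbox.size());
        cv::warpPerspective(img2, warped2, T2, bbox.size());
        cv::warpPerspective(cv::Mat(img2.size(), CV_8UC1, cv::Scalar(255)),
                            mask2, T2, bbox.size());

        // Naive overlay; a seam finder / multi-band blender would hide the transition
        warped2.copyTo(pano, mask2);
        return pano;
    }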

Hope this helps.

Reference: [HZ03] Hartley, Richard, and Andrew Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2003.

