
What Is the Correct Way to Undistort Points Captured Using a Fisheye Camera in OpenCV in Python?

INFO: I've calibrated my camera and have found the camera's intrinsic matrix (K) and its distortion coefficients (d) to be the following: import numpy as np K = np.asarray([[556.3…

Solution 1:

Answer to Q1:

You are not using map_1 and map_2 correctly.

The maps generated by the cv2.fisheye.initUndistortRectifyMap function describe the mapping from pixel locations in the destination image to pixel locations in the source image, i.e. dst(x,y) = src(map_x(x,y), map_y(x,y)); see remap in the OpenCV documentation.

In your code, map_1 holds the x-direction pixel mapping and map_2 holds the y-direction pixel mapping. For example, if (X_undistorted, Y_undistorted) is a pixel location in the undistorted image, map_1[Y_undistorted, X_undistorted] gives the x coordinate in the distorted image that this pixel should be sampled from, and map_2 gives the corresponding y coordinate.
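Here is a minimal sketch of that direction (the names K, D and img are assumed here for illustration, not taken from your code):

import cv2
import numpy as np

# K, D: fisheye intrinsics and distortion coefficients; img: the distorted image
h, w = img.shape[:2]
map_1, map_2 = cv2.fisheye.initUndistortRectifyMap(K, D, np.eye(3), K, (w, h), cv2.CV_32FC1)

# For a pixel (x, y) of the *undistorted* (destination) image, the maps say
# where to read from in the *distorted* (source) image:
x, y = 200, 100
src_x, src_y = map_1[y, x], map_2[y, x]

# cv2.remap performs exactly this lookup (with interpolation) for every pixel:
undistorted = cv2.remap(img, map_1, map_2, interpolation=cv2.INTER_LINEAR)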

So, map_1 and map_2 are useful for constructing an undistorted image from a distorted image, but not really suitable for the reverse process.

remapped_points = []
for corner in corners2:
    remapped_points.append(
        (map_1[int(corner[0][1]), int(corner[0][0])],
         map_2[int(corner[0][1]), int(corner[0][0])]))

This code for finding the undistorted pixel locations of the corners is not correct. You need to use the cv2.fisheye.undistortPoints function instead.
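A minimal sketch of that call (corners2, K and d are assumed to be the arrays from your question); note that it returns normalized coordinates rather than pixel coordinates, which is what Q2 below addresses:

import cv2
import numpy as np

# cv2.fisheye.undistortPoints expects an N x 1 x 2 float32 array of distorted
# pixel coordinates and returns *normalized* image coordinates.
pts = np.asarray(corners2, dtype=np.float32).reshape(-1, 1, 2)
normalized = cv2.fisheye.undistortPoints(pts, K, d)  # shape N x 1 x 2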


Answer to Q2:

The mapping and undistortion are different.

You can think of mapping as constructing the undistorted image by looking up, for each pixel location in the undistorted image, where to sample the distorted image (via the pixel maps), while undistortion computes the undistorted location of a given original pixel using the lens distortion model.

To find the correct pixel locations of the corners in the undistorted image, you need to convert the normalized coordinates of the undistorted points back to pixel coordinates using the newly estimated camera matrix (px = x*fx + cx, py = y*fy + cy), in your case final_K, because the undistorted image can be seen as taken by a camera with final_K and no distortion (there is a small zooming effect).

Here is the modified undistort function:

import cv2
import numpy as np

def undistort_list_of_points(point_list, in_K, in_d, in_K_new):
    K = np.asarray(in_K)
    d = np.asarray(in_d)
    # Input can be a list of bbox coords, poly coords, etc.
    # TODO -- Check if point behind camera?
    points_2d = np.asarray(point_list)

    points_2d = points_2d[:, 0:2].astype('float32')
    points2d_undist = np.empty_like(points_2d)
    points_2d = np.expand_dims(points_2d, axis=1)

    # Normalized (distortion-free) coordinates of the input points
    result = np.squeeze(cv2.fisheye.undistortPoints(points_2d, K, d))

    # Convert back to pixel coordinates using the newly estimated camera matrix
    K_new = np.asarray(in_K_new)
    fx = K_new[0, 0]
    fy = K_new[1, 1]
    cx = K_new[0, 2]
    cy = K_new[1, 2]

    for i, (px, py) in enumerate(result):
        points2d_undist[i, 0] = px * fx + cx
        points2d_undist[i, 1] = py * fy + cy

    return points2d_undist
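For example, a hypothetical call (assuming corners_subpix is your N x 1 x 2 array of detected corners, and final_K is the newly estimated camera matrix):

# Hypothetical usage with the question's variable names
corners_px = undistort_list_of_points(corners_subpix.reshape(-1, 2), K, d, final_K)
print(corners_px.shape)  # (N, 2) pixel coordinates in the undistorted image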

Here is my code for doing the same thing.

import cv2
import numpy as np
import matplotlib.pyplot as plt

K = np.asarray([[556.3834638575809, 0, 955.3259939726225],
                [0, 556.2366649196925, 547.3011305411478],
                [0, 0, 1]])
D = np.asarray([[-0.05165940570900624],
                [0.0031093602070252167],
                [-0.0034036648250202746],
                [0.0003390345044343793]])
print("K:\n", K)
print("D:\n", D.ravel())

# read image and get the original image on the left
image_path = "sample.jpg"
image = cv2.imread(image_path)
image = image[:, :image.shape[1]//2, :]
image_gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

fig = plt.figure()
plt.imshow(image_gray, "gray")

H_in, W_in = image_gray.shape
print("Grayscale Image Dimension:\n", (W_in, H_in))

scale_factor = 1.0
balance = 1.0

img_dim_out = (int(W_in*scale_factor), int(H_in*scale_factor))
# Use K as-is unless the output size is scaled
K_out = K.copy()
if scale_factor != 1.0:
    K_out = K*scale_factor
    K_out[2,2] = 1.0

K_new = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(K_out, D, img_dim_out, np.eye(3), balance=balance)
print("Newly estimated K:\n", K_new)

map1, map2 = cv2.fisheye.initUndistortRectifyMap(K, D, np.eye(3), K_new, img_dim_out, cv2.CV_32FC1)
print("Rectify Map1 Dimension:\n", map1.shape)
print("Rectify Map2 Dimension:\n", map2.shape)

undistorted_image_gray = cv2.remap(image_gray, map1, map2, interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
fig = plt.figure()
plt.imshow(undistorted_image_gray, "gray")
  
ret, corners = cv2.findChessboardCorners(image_gray, (6, 8), cv2.CALIB_CB_ADAPTIVE_THRESH + cv2.CALIB_CB_FAST_CHECK + cv2.CALIB_CB_NORMALIZE_IMAGE)
corners_subpix = cv2.cornerSubPix(image_gray, corners, (3,3), (-1,-1), (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.1))

undistorted_corners = cv2.fisheye.undistortPoints(corners_subpix, K, D)
undistorted_corners = undistorted_corners.reshape(-1,2)


fx = K_new[0,0]
fy = K_new[1,1]
cx = K_new[0,2]
cy = K_new[1,2]
undistorted_corners_pixel = np.zeros_like(undistorted_corners)

for i, (x, y) in enumerate(undistorted_corners):
    px = x*fx + cx
    py = y*fy + cy
    undistorted_corners_pixel[i,0] = px
    undistorted_corners_pixel[i,1] = py
    
undistorted_image_show = cv2.cvtColor(undistorted_image_gray, cv2.COLOR_GRAY2BGR)
for corner in undistorted_corners_pixel:
    image_corners = cv2.circle(np.zeros_like(undistorted_image_show), (int(corner[0]),int(corner[1])), 15, [0, 255, 0], -1)
    undistorted_image_show = cv2.add(undistorted_image_show, image_corners)

fig = plt.figure()
plt.imshow(cv2.cvtColor(undistorted_image_show, cv2.COLOR_BGR2RGB))
plt.show()
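As an optional sanity check (not part of the script above), passing the new camera matrix through the P argument of cv2.fisheye.undistortPoints should give the same pixel coordinates as the manual conversion:

# P=K_new makes undistortPoints return pixel coordinates in the undistorted image
direct_px = cv2.fisheye.undistortPoints(corners_subpix, K, D, P=K_new).reshape(-1, 2)
print(np.allclose(direct_px, undistorted_corners_pixel, atol=1e-3))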
