Issue
I am trying to apply a perspective transformation to an image using OpenCV. I have an image of a card in which I converted the background to black and the foreground object to white, as shown in the image below. Now I want to apply a perspective transformation to it so that the card is viewed properly. My code displays nothing but a completely black image.
Image:
Code:
import cv2
import numpy as np
from operator import itemgetter
from glob import glob
import matplotlib.pyplot as plt
input_image2 = cv2.imread("/home/hamza/Desktop/card_in_polygon_format.jpeg")
orig_im_coor = np.float32([[90, 261], [235, 386], [417, 178], [268, 83]])
height, width = 450, 350
new_image_coor = np.float32([[0, 0], [width, 0], [0, height], [width, height]])
P = cv2.getPerspectiveTransform(orig_im_coor, new_image_coor)
perspective = cv2.warpPerspective(input_image2, P, (width, height))
cv2.imshow("Perspective transformation", perspective)
cv2.waitKey(0)
cv2.destroyAllWindows()
Note: My code will always receive a black-and-white image like this as input. It would also be appreciated if the code could detect the corners by itself instead of me picking them out manually.
Solution
Automatic quadrangle fitting is not so trivial...
- There is a good example in the following post, but it's implemented in C++.
- The method I use is more like the following post - simpler, but less accurate.
The suggested solution uses the following stages:
- Find contours (and take the largest one - needed in case there is more than one).
- Approximate the contour to a polygon using cv2.approxPolyDP, and assume the polygon is a quadrangle.
- Sort the 4 corners in the right order.
Note: The method I used for sorting the corners is overly complicated - you may sort the corners using simpler logic (see the sketch below).
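For example, a simpler ordering heuristic (a common approach, shown here only as a sketch and not part of the answer's code) picks the corners by the sums and differences of their coordinates: the top-left point has the smallest x + y, the bottom-right the largest x + y, the top-right the smallest y - x, and the bottom-left the largest y - x:
import numpy as np

def order_corners_simple(pts):
    """Order 4 points as top-left, top-right, bottom-left, bottom-right
    using coordinate sums/differences (a common heuristic, assumed here)."""
    pts = np.asarray(pts, dtype=np.float32)
    s = pts.sum(axis=1)                # x + y for each point
    d = np.diff(pts, axis=1).ravel()   # y - x for each point
    top_left = pts[np.argmin(s)]
    bottom_right = pts[np.argmax(s)]
    top_right = pts[np.argmin(d)]
    bottom_left = pts[np.argmax(d)]
    return np.float32([top_left, top_right, bottom_left, bottom_right])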
Here is a code sample:
import cv2
import numpy as np
def find_corners(im):
    """
    Find "card" corners in a binary image.
    Return a list of points in the following format: [[640, 184], [1002, 409], [211, 625], [589, 940]]
    The points order is: top-left, top-right, bottom-left, bottom-right.
    """
    # Better approach: https://stackoverflow.com/questions/44127342/detect-card-minarea-quadrilateral-from-contour-opencv

    # Find contours in im.
    cnts = cv2.findContours(im, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2]  # [-2] indexing takes return value before last (due to OpenCV compatibility issues).

    # Find the contour with the maximum area (required if there is more than one contour).
    c = max(cnts, key=cv2.contourArea)

    # https://stackoverflow.com/questions/41138000/fit-quadrilateral-tetragon-to-a-blob
    epsilon = 0.1*cv2.arcLength(c, True)
    box = cv2.approxPolyDP(c, epsilon, True)

    # Draw box for testing
    tmp_im = cv2.cvtColor(im, cv2.COLOR_GRAY2BGR)
    cv2.drawContours(tmp_im, [box], 0, (0, 255, 0), 2)
    cv2.imshow("tmp_im", tmp_im)

    box = np.squeeze(box).astype(np.float32)  # Remove redundant dimensions

    # Sorting the points order is top-left, top-right, bottom-right, bottom-left.
    # Note:
    # The method I am using is a bit of an "overkill".
    # I am not sure if the implementation is correct.
    # You may sort the corners using simpler logic - find top-left and bottom-right, and match the other two points.
    ############################################################################
    # Find the center of the contour
    # https://docs.opencv.org/3.4/dd/d49/tutorial_py_contour_features.html
    M = cv2.moments(c)
    cx = M['m10']/M['m00']
    cy = M['m01']/M['m00']
    center_xy = np.array([cx, cy])

    cbox = box - center_xy  # Subtract the center from each corner

    # For a square the angles of the corners are:
    #   -135   -45
    #
    #
    #    135    45
    ang = np.arctan2(cbox[:, 1], cbox[:, 0]) * 180 / np.pi  # Compute the angles from the center to each corner

    # Sort the corners of box counterclockwise (sort box elements according to the order of ang).
    box = box[ang.argsort()]
    ############################################################################

    # Reorder points: top-left, top-right, bottom-left, bottom-right
    coor = np.float32([box[0], box[1], box[3], box[2]])

    return coor
input_image2 = cv2.imread("card_in_polygon_format.jpeg", cv2.IMREAD_GRAYSCALE) # Read image as Grayscale
input_image2 = cv2.threshold(input_image2, 0, 255, cv2.THRESH_OTSU)[1] # Convert to binary image (just in case...)
# orig_im_coor = np.float32([[640, 184], [1002, 409], [211, 625], [589, 940]])
# Find the corners of the card, and sort them
orig_im_coor = find_corners(input_image2)
height, width = 450, 350
new_image_coor = np.float32([[0, 0], [width, 0], [0, height], [width, height]])
P = cv2.getPerspectiveTransform(orig_im_coor, new_image_coor)
perspective = cv2.warpPerspective(input_image2, P, (width, height))
cv2.imshow("Perspective transformation", perspective)
cv2.waitKey(0)
cv2.destroyAllWindows()
Quadrangle fitting (not the most accurate):
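Since the final goal is to view the card itself properly, the same transformation can also be applied to the original (color) photo rather than the binary mask. A minimal sketch, continuing from the script above and assuming the color photo is available under the hypothetical filename "card_color.jpeg" and has the same size as the mask:
# Hypothetical filename - replace with the path of the original (non-thresholded) card photo.
color_image = cv2.imread("card_color.jpeg")

# Reuse the corners found on the binary mask and the transform P computed above.
warped_color = cv2.warpPerspective(color_image, P, (width, height))

cv2.imshow("Warped color card", warped_color)
cv2.waitKey(0)
cv2.destroyAllWindows()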
Answered By - Rotem
Answer Checked By - Willingham (PHPFixing Volunteer)