
OpenCV (10): Feature Point Detection and Matching

1. Basic Concepts of Feature Detection

  • Scenarios for OpenCV features

    • Image search, e.g. searching by image
    • Jigsaw puzzles
    • Image stitching: joining two related images into one
  • What is a feature?

    • An image feature is a meaningful image region: one that is distinctive and easy to identify, such as corners, blobs, and high-density regions
  • Corners

    • The most important kind of feature
    • Pixels where the gray-level gradient reaches its maximum
    • Intersections of two lines
    • Extreme points (maximum of the first derivative, where the second derivative is 0)

2. Harris Corner Detection


  • Harris corners
    • In flat regions, the response measure barely changes no matter which way the window moves
    • In edge regions, the measure changes sharply when the window moves perpendicular to the edge
    • At a corner, the measure changes sharply whichever way the window moves
```python
# Harris corner detection API
dst = cornerHarris(img, blockSize, ksize, k)
# blockSize: size of the detection window
# ksize: size of the Sobel kernel
# k: weighting coefficient, an empirical value, usually 0.02-0.04
```
```python
import cv2
import numpy as np

blockSize = 2
ksize = 3
k = 0.04

img = cv2.imread('chess.png')

# Convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Harris corner detection
dst = cv2.cornerHarris(gray, blockSize, ksize, k)

# Show the Harris corners by marking them in red
img[dst > 0.01 * dst.max()] = [0, 0, 255]

cv2.imshow('harris', img)
cv2.waitKey(0)
```


3. Shi-Tomasi Corner Detection

  • Shi-Tomasi is an improvement on Harris corner detection
  • The stability of Harris corner detection depends on the empirical value k, and the best value is hard to pick
```python
# API
def goodFeaturesToTrack(image: UMat,
        maxCorners: int,                 # maximum number of corners; 0 means no limit
        qualityLevel: float,             # positive value below 1.0, usually 0.01-0.1
        minDistance: float,              # minimum Euclidean distance between corners; closer ones are discarded
        mask: UMat,                      # region of interest
        blockSize: int,                  # detection window size
        gradientSize: int,
        corners: UMat | None = ...,
        useHarrisDetector: bool = ...,   # whether to use the Harris algorithm
        k: float = ...                   # defaults to 0.04
) -> UMat
```
```python
import cv2
import numpy as np

maxCorners = 1000
qualityLevel = 0.01
minDistance = 10

img = cv2.imread('chess.png')

# Convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Shi-Tomasi corner detection
corners = cv2.goodFeaturesToTrack(gray, maxCorners, qualityLevel, minDistance)

# The returned corners are 32-bit floats; convert to integers for drawing
corners = np.int32(corners)

# Draw the Shi-Tomasi corners
for i in corners:
    x, y = i.ravel()  # each point is multi-dimensional; flatten it to 1-D
    cv2.circle(img, (x, y), 3, (255, 0, 0), -1)

cv2.imshow('shi-tomasi', img)
cv2.waitKey(0)
```


4. SIFT Keypoint Detection

Scale-Invariant Feature Transform: feature point detection that is invariant to scaling

==Why SIFT exists==

  • Harris corners are rotation-invariant
  • But after scaling, what used to be a corner may no longer be one


==Steps for using SIFT==

  • Create a SIFT object
  • Run detection: kp = sift.detect(img, …)
  • Draw the keypoints: drawKeypoints(gray, kp, img)
```python
import cv2
import numpy as np

# Read the image
img = cv2.imread('chess.png')

# Convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Create the SIFT object
# (on OpenCV >= 4.4, SIFT lives in the main module: cv2.SIFT_create())
sift = cv2.xfeatures2d.SIFT_create()

# Run detection
kp = sift.detect(gray, None)

# Draw the keypoints
cv2.drawKeypoints(gray, kp, img)

cv2.imshow('img', img)
cv2.waitKey(0)
```


5. Computing SIFT Descriptors

  • Keypoints and descriptors
    • Keypoint: position, size, and orientation
    • Keypoint descriptor: a vector of values recording the surrounding pixels that contribute to the keypoint; it is not affected by affine transforms, illumination changes, and so on
```python
# Compute descriptors for already-detected keypoints
kp, des = sift.compute(img, kp)
# The descriptors are what feature matching operates on
```
```python
# Compute keypoints and descriptors in one call
kp, des = sift.detectAndCompute(img, mask)
# mask: specifies which region of img to process
```
```python
import cv2
import numpy as np

# Read the image
img = cv2.imread('chess.png')

# Convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Create the SIFT object
sift = cv2.xfeatures2d.SIFT_create()

# Detect keypoints and compute descriptors
kp, des = sift.detectAndCompute(gray, None)

print(des[0])

# Draw the keypoints
cv2.drawKeypoints(gray, kp, img)

cv2.imshow('img', img)
cv2.waitKey(0)
```


6. SURF Feature Detection

Speeded-Up Robust Features: an accelerated robust feature detector

  • SURF's advantage
    • SIFT's biggest problem is that it is slow; SURF exists to be faster
```python
# Steps for using SURF
surf = cv2.xfeatures2d.SURF_create()
kp, des = surf.detectAndCompute(img, mask)
```
```python
import cv2
import numpy as np

# Read the image
img = cv2.imread('chess.png')

# Convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Create the SURF object
# (SURF is patented: it needs an opencv-contrib build with non-free modules enabled)
surf = cv2.xfeatures2d.SURF_create()

# Detect with SURF
kp, des = surf.detectAndCompute(gray, None)

# Draw the keypoints
cv2.drawKeypoints(gray, kp, img)

cv2.imshow('img', img)
cv2.waitKey(0)
```


7. ORB Feature Detection

Oriented FAST and Rotated BRIEF

  • ORB's advantage
    • ORB can run in real time
  • FAST
    • Detects feature points in real time; it has no orientation of its own, which ORB adds
  • BRIEF
    • Describes the feature points that have already been detected
    • Speeds up building the feature descriptors
    • Reduces the time needed for feature matching
```python
# Steps for using ORB
orb = cv2.ORB_create()
kp, des = orb.detectAndCompute(img, mask)
```
```python
import cv2
import numpy as np

# Read the image
img = cv2.imread('chess.png')

# Convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Create the ORB object
orb = cv2.ORB_create()

# Detect with ORB
kp, des = orb.detectAndCompute(gray, None)

# Draw the keypoints
cv2.drawKeypoints(gray, kp, img)

cv2.imshow('img', img)
cv2.waitKey(0)
```


8. Brute-Force Feature Matching

  • How it works
    • Take the descriptor of ==each feature== in the first set
    • Match it against ==all feature descriptors== in the second set
    • Compute the distance between them and return the closest one as the match
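The principle above can be sketched directly in NumPy, with made-up random descriptors standing in for real SIFT output:

```python
import numpy as np

# Made-up descriptors: 5 queries and 7 candidates, 8-dimensional each
rng = np.random.default_rng(0)
des1 = rng.random((5, 8)).astype(np.float32)
des2 = rng.random((7, 8)).astype(np.float32)

# L2 distance from every descriptor in set 1 to every descriptor in set 2
dists = np.linalg.norm(des1[:, None, :] - des2[None, :, :], axis=2)

# For each query descriptor, keep the index of the closest candidate
nearest = dists.argmin(axis=1)
print(nearest.shape)  # (5,)
```

cv2.BFMatcher does this same exhaustive comparison in optimized C++, returning DMatch objects carrying queryIdx, trainIdx, and distance.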
```python
# OpenCV feature-matching steps
# 1. Create the matcher: BFMatcher(normType, crossCheck)
# 2. Match the features: bf.match(des1, des2)
# 3. Draw the matches: cv2.drawMatches(img1, kp1, img2, kp2, ...)

# BFMatcher
# normType: NORM_L1, NORM_L2, NORM_HAMMING, ...
# crossCheck: whether to cross-check the matches, defaults to False

# match()
# takes the descriptors computed by SIFT, SURF, ORB, etc.

# drawMatches
# the query image and its keypoints
# the train image and its keypoints
# the result of match()
```
```python
import cv2
import numpy as np

# Read the two images
img1 = cv2.imread('opencv_search.png')
img2 = cv2.imread('opencv_orig.png')

# Convert to grayscale
gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)

# Create the SIFT object
sift = cv2.xfeatures2d.SIFT_create()

# Detect keypoints and compute descriptors
kp1, des1 = sift.detectAndCompute(gray1, None)
kp2, des2 = sift.detectAndCompute(gray2, None)

# Brute-force matching
bf = cv2.BFMatcher(cv2.NORM_L1)
match = bf.match(des1, des2)

img3 = cv2.drawMatches(img1, kp1, img2, kp2, match, None)

cv2.imshow('img3', img3)
cv2.waitKey(0)
```


9. FLANN Feature Matching

  • FLANN's advantages
    • For batch feature matching, FLANN is faster
    • It uses approximate nearest neighbors, so its precision is lower
  • ==Steps for FLANN feature matching==
    • Create the FLANN matcher: FlannBasedMatcher(…)
    • Match the features: flann.match/knnMatch(…)
    • Draw the matches: cv2.drawMatches/drawMatchesKnn(…)


```python
import cv2
import numpy as np

# Open the two files
img1 = cv2.imread('opencv_search.png')
img2 = cv2.imread('opencv_orig.png')

# Convert to grayscale
g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)

# Create the SIFT detector
sift = cv2.xfeatures2d.SIFT_create()

# Compute keypoints and descriptors
kp1, des1 = sift.detectAndCompute(g1, None)
kp2, des2 = sift.detectAndCompute(g2, None)

# Create the matcher (algorithm=1 selects the KD-tree index)
index_params = dict(algorithm=1, trees=5)
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)

# Match the descriptors, keeping the 2 nearest neighbors of each
matches = flann.knnMatch(des1, des2, k=2)

# Ratio test: keep a match only if it is clearly better than the runner-up
good = []
for m, n in matches:
    if m.distance < 0.7 * n.distance:
        good.append(m)

ret = cv2.drawMatchesKnn(img1, kp1, img2, kp2, [good], None)

cv2.imshow('result', ret)
cv2.waitKey(0)
```


11. Image Lookup


```python
import cv2
import numpy as np

# Open the two files
img1 = cv2.imread('opencv_search.png')
img2 = cv2.imread('opencv_orig.png')

# Convert to grayscale
g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)

# Create the SIFT detector
sift = cv2.xfeatures2d.SIFT_create()

# Compute keypoints and descriptors
kp1, des1 = sift.detectAndCompute(g1, None)
kp2, des2 = sift.detectAndCompute(g2, None)

# Create the matcher
index_params = dict(algorithm=1, trees=5)
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)

# Match the descriptors
matches = flann.knnMatch(des1, des2, k=2)

good = []
for m, n in matches:
    if m.distance < 0.7 * n.distance:
        good.append(m)

# findHomography needs at least 4 point pairs
if len(good) >= 4:
    srcPts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dstPts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    H, _ = cv2.findHomography(srcPts, dstPts, cv2.RANSAC, 5.0)

    # Project img1's corners into img2 and outline the found region
    h, w = img1.shape[:2]
    pts = np.float32([[0, 0], [0, h-1], [w-1, h-1], [w-1, 0]]).reshape(-1, 1, 2)
    dst = cv2.perspectiveTransform(pts, H)

    cv2.polylines(img2, [np.int32(dst)], True, (0, 0, 255))
else:
    print('the number of good matches is less than 4.')
    exit()

ret = cv2.drawMatchesKnn(img1, kp1, img2, kp2, [good], None)

cv2.imshow('result', ret)
cv2.waitKey(0)
```


12. Final Project: Image Stitching Basics


13. Final Project: Image Stitching (Part 1)

```python
import cv2
import numpy as np

# Step 1: read the files and resize both images to 640x480
# Step 2: find keypoints and descriptors, compute the homography matrix
# Step 3: warp the image with the homography, then translate it
# Step 4: stitch and output the final result

img1 = cv2.imread('map1.png')
img2 = cv2.imread('map2.png')

# Resize both images to the same size
img1 = cv2.resize(img1, (640, 480))
img2 = cv2.resize(img2, (640, 480))

inputs = np.hstack((img1, img2))
cv2.imshow('input img', inputs)
cv2.waitKey(0)
```


```python
import cv2
import numpy as np


def stitch_image(img1, img2, H):
    # 1. Get the four corner points of each image
    # 2. Warp img1 with the homography (rotation + translation)
    # 3. Create a large canvas and paste both images onto it
    # 4. Return the result

    # Heights/widths of the source images
    h1, w1 = img1.shape[:2]
    h2, w2 = img2.shape[:2]

    img1_dims = np.float32([[0, 0], [0, h1], [w1, h1], [w1, 0]]).reshape(-1, 1, 2)
    img2_dims = np.float32([[0, 0], [0, h2], [w2, h2], [w2, 0]]).reshape(-1, 1, 2)

    # Where img1's corners land after the homography
    img1_transform = cv2.perspectiveTransform(img1_dims, H)

    result_dims = np.concatenate((img2_dims, img1_transform), axis=0)

    [x_min, y_min] = np.int32(result_dims.min(axis=0).ravel() - 0.5)
    [x_max, y_max] = np.int32(result_dims.max(axis=0).ravel() + 0.5)

    # Translation distance to keep everything on the canvas
    transform_dist = [-x_min, -y_min]

    # [1, 0, dx]
    # [0, 1, dy]
    # [0, 0, 1 ]
    transform_array = np.array([[1, 0, transform_dist[0]],
                                [0, 1, transform_dist[1]],
                                [0, 0, 1]])

    result_img = cv2.warpPerspective(img1, transform_array.dot(H),
                                     (x_max - x_min, y_max - y_min))

    # Paste img2 into the canvas at its translated position
    result_img[transform_dist[1]:transform_dist[1] + h2,
               transform_dist[0]:transform_dist[0] + w2] = img2

    return result_img


def get_homo(img1, img2):
    # 1. Create the feature detector
    # 2. Get keypoints and descriptors from it
    # 3. Create the feature matcher
    # 4. Match the features
    # 5. Filter the matches, keeping the valid ones

    sift = cv2.xfeatures2d.SIFT_create()

    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)

    # Create the feature matcher
    bf = cv2.BFMatcher()
    matches = bf.knnMatch(d1, d2, k=2)

    # Filter the matches with the ratio test
    verify_ratio = 0.8
    verify_matches = []
    for m1, m2 in matches:
        if m1.distance < verify_ratio * m2.distance:
            verify_matches.append(m1)

    min_matches = 8
    if len(verify_matches) > min_matches:
        img1_pts = []
        img2_pts = []

        for m in verify_matches:
            img1_pts.append(k1[m.queryIdx].pt)
            img2_pts.append(k2[m.trainIdx].pt)
        # [(x1, y1), (x2, y2), ...] -> [[x1, y1], [x2, y2], ...]

        img1_pts = np.float32(img1_pts).reshape(-1, 1, 2)
        img2_pts = np.float32(img2_pts).reshape(-1, 1, 2)
        H, mask = cv2.findHomography(img1_pts, img2_pts, cv2.RANSAC, 5.0)
        return H
    else:
        print('err: Not enough matches!')
        exit()


# Step 1: read the files and resize both images to 640x480
# Step 2: find keypoints and descriptors, compute the homography matrix
# Step 3: warp the image with the homography, then translate it
# Step 4: stitch and output the final result

# Read the two images
img1 = cv2.imread('map1.png')
img2 = cv2.imread('map2.png')

# Resize both images to the same size
img1 = cv2.resize(img1, (640, 480))
img2 = cv2.resize(img2, (640, 480))

# Get the homography matrix
H = get_homo(img1, img2)

# Stitch the images
result_image = stitch_image(img1, img2, H)

cv2.imshow('result', result_image)
cv2.waitKey()
```


------------- End of this article. Thank you for reading. -------------