Part measuring method based on heterogeneous stereoscopic vision
Technical Field
The invention relates to the field of part size measurement, in particular to a part measurement method based on heterogeneous stereoscopic vision.
Background
Research on machine-vision-based part inspection began in the 1990s and has gradually entered various industrial fields, with measuring means and methods developing rapidly. Machine vision uses a machine in place of the human eye for measurement and judgment: an image-capturing device (CMOS or CCD) converts the observed target into an image signal, transmits it to a dedicated image processing system, and converts it into digital signals according to information such as pixel distribution, brightness and color; the image system then performs various computations on these signals to extract the features of the target, and the operation of on-site equipment is controlled according to the result of this discrimination.
Prior art 1:
Inspired by the way human eyes sense environmental depth through the parallax principle, a binocular stereo vision measurement system observes a measured part simultaneously from two points in space (shown in figure 1), obtaining two images of the part under different viewing angles. From the registration relation of pixels between the two images, the position deviation between pixels is calculated by the triangulation principle to obtain the depth information of any point in three-dimensional space; finally the three-dimensional shape of the measured part is reconstructed and multi-element measurement of the part is performed.
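The parallax principle above can be sketched in a few lines. The following illustrative snippet (all names and numbers are hypothetical, assuming a rectified camera pair with known focal length and baseline) recovers the depth of one matched point pair from its pixel disparity:

```python
def depth_from_disparity(f_px, baseline_mm, u_left, u_right):
    """Depth of a point seen by a rectified stereo pair.

    f_px        -- focal length in pixels (both cameras, after rectification)
    baseline_mm -- distance between the two optical centres
    u_left, u_right -- column coordinates of the matched pixel pair
    """
    disparity = u_left - u_right           # pixels; > 0 for points in front
    if disparity <= 0:
        raise ValueError("non-positive disparity: bad match or point at infinity")
    return f_px * baseline_mm / disparity  # Z = f * B / d (triangulation)

# Example: 1200 px focal length, 60 mm baseline, 30 px disparity -> 2400 mm
z = depth_from_disparity(1200.0, 60.0, 640.0, 610.0)
```

The formula also shows why accuracy is limited: a one-pixel matching error changes the disparity directly, and depth error grows quadratically with distance.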
Aiming at the problem of stepped-shaft measurement, Wu Xiang of Changchun University of Technology established a binocular vision measurement system capable of measuring multiple elements of a measured part. Experimental results show that the minimum and maximum measurement errors of the binocular system are 0.1 mm and 0.9 mm, respectively (Wu Xiang. Key technology research of a binocular-vision-based part dimension measurement system [D]. Changchun: Changchun University of Technology, 2017.). Zhang Junyong of Wuhan University of Science and Technology proposed a part multi-size measurement method and system based on binocular vision, realizing multi-element three-dimensional measurement of a measured part through a series of processes such as calibration, epipolar rectification, epipolar-constraint matching, stereo fitting and three-dimensional reconstruction (Zhang Yong, Wu Shiqian, Xu Zhang. A part multi-size measurement method and system based on binocular vision: China, CN107588721A [P]. 2018.01.16.). For three-dimensional measurement of targets in narrow spaces, Zhao et al. of Beihang University provided a binocular vision measuring and positioning device and method for narrow spaces (Zhao, Su Qing, Wu Fanglin, Yang Kui, Zhang Xiaocheng. A binocular vision measuring and positioning device and method for narrow spaces: China, ZL201210191014.7 [P]. 2012.11.05.).
Binocular vision measurement is a method based on the bionic principle and has many advantages; ideally it is very suitable for online non-contact geometric precision inspection and quality control on the manufacturing floor. However, binocular vision has not been applied as widely as other methods. The most critical limiting factor is measurement accuracy: most binocular systems only reach sub-millimeter accuracy.
Prior art 2:
The theoretical basis of structured light vision measurement is the laser triangulation ranging principle. A structured light source with a known spatial mathematical model is projected onto the measured part; the machine vision system images the part together with the structured light (shown in figure 2), and by combining the spatial mathematical model of the structured light with the imaging mathematical model of the machine vision system, the depth information can be recovered by solving the combined equations. The light projection modes include four types, namely light point, light plane, curved light surface and light beam, so common structured light vision measuring systems accordingly also fall into four types.
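The laser triangulation principle for a light-plane projector can be illustrated as follows. This is a minimal sketch, not the patented method, assuming a calibrated pinhole camera and a calibrated light-plane model n·X = d expressed in the camera frame (all names and numbers are hypothetical):

```python
import numpy as np

def intersect_ray_plane(K, plane, pixel):
    """Recover the 3-D point where the camera ray through `pixel` meets the
    structured-light plane.

    K     -- 3x3 camera intrinsic matrix
    plane -- (n, d): plane n . X = d in the camera frame (the calibrated
             spatial model of the projected light plane)
    pixel -- (u, v) image coordinates of a point on the laser stripe
    """
    n, d = np.asarray(plane[0], float), float(plane[1])
    uv1 = np.array([pixel[0], pixel[1], 1.0])
    ray = np.linalg.inv(K) @ uv1           # direction of the back-projected ray
    t = d / (n @ ray)                      # scale so the ray lies on the plane
    return t * ray                         # 3-D point in camera coordinates

K = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])
# light plane z = 500 mm (n = (0, 0, 1), d = 500): the principal ray hits it at depth 500
P = intersect_ray_plane(K, ((0.0, 0.0, 1.0), 500.0), (320.0, 240.0))
```

Because the light plane is known in advance, a single image suffices: each stripe pixel yields one ray-plane intersection, which is why no left-right registration is needed.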
Xiong of Harbin Institute of Technology constructed a structured light vision measuring system whose measuring range is not less than 200 mm x 200 mm at an object distance of 250 mm. The system was used to make repeated measurements on standard gauge blocks of 8.845 mm thickness and 20.000 mm width, respectively. In the thickness measurement experiment, the maximum and minimum measurement errors were 50 μm and 30 μm, respectively; in the width measurement experiment, they were 50 μm and 18 μm, respectively (Development of a three-dimensional structured light vision measuring apparatus [D]. Harbin: Harbin Institute of Technology, 2017.). Zhang Congpeng et al. proposed a structured light vision measuring method mainly comprising calibration of the structured light vision system by a plane-model method, acquisition and preprocessing of characteristic images, feature extraction and calculation of characteristic parameters, with the application characteristics of high precision, high adaptability and high efficiency (Zhang Congpeng, Hou, Cao Wen, Lu Lei, Yu.). Yu Wanbo, Guo Yanyan et al. provided a surface structured light three-dimensional measuring device and method for high-reflectivity parts, mainly solving the problem that such parts are difficult to measure (A surface structured light three-dimensional measuring device and method for high-reflectivity parts: China, ZL201310717211.2 [P]. 2016.09.07.).
Tan et al. of Jilin University measured the radial run-out error of shaft parts with a structured light vision system: on the basis of establishing a radial run-out error structured light vision model, the vision sensor and the structured light were calibrated by Zhang's two-step plane calibration method and a template matching method, respectively, and the spatial coordinates of the intersection of the structured light with the part surface were solved with the measurement model (Tan Chang, Ba Haohan, Zhang Yachao, et al. An on-line measurement method for the radial run-out error of shaft parts based on structured light vision: China, CN107101582A [P]. 2017.08.29.).
Structured light vision and its improved methods both use a light source with a known pose for active illumination, and image depth can be recovered by extracting features from a single image, which avoids the difficult left-right image registration problem of binocular vision; the measurement accuracy of structured light vision is therefore markedly better than that of binocular vision. Nevertheless, it still remains at the level of hundredths of a millimeter. Moreover, compared with binocular vision, the point cloud density of structured light vision is obviously reduced, which is unfavorable for three-dimensional reconstruction of fine part features. In addition, the calibration of the structured light, as well as the performance and cost of the laser, remain to be studied.
Prior art 3:
Multi-view vision has the same theoretical basis and working principle as binocular vision and is formed by adding more imaging sensors to an original binocular setup. A multi-view system observes the measured part simultaneously from three or more points in space, obtaining several images under different views; from the registration relations between the images and the triangulation ranging principle, a mathematical model representing the depth of any point in three-dimensional space is established, so that the three-dimensional shape of the measured part can be reconstructed and multi-element measurement performed. Compared with binocular vision, multi-view vision introduces more constraint conditions and theoretically has higher measurement accuracy. The simplest multi-view system is trinocular vision, and its industrial applications are increasing.
Ye et al introduces a third camera into the binocular system to form a binocular vision measurement system, which adds more constraint conditions for stereo matching, reduces uncertainty of binocular vision system matching, eliminates interference information, and effectively improves result precision (Yepan, LiLi, JinWei-Qi, Jiang Yu-Tong. research. creating and transforming binocular vision [ C ]// Proc. SPIE9301, IPTA2014: image processing and Pattern recognition, Beijing, China, y13-15, 2014.). Lu and Shao establish a set of three-purpose system, design a spherical calibration piece, and obtain a translation and rotation matrix through singular value decomposition. The results show that the relative accuracy of the trinocular system is improved to 0.105%, the root mean square error is 0.026mm, and the accuracy and robustness of the system are proved (LuRui, ShaoMingwei. sphere-base calibration method for three-ocular vision sensor [ J ]. Opt. Laser Eng., 2017, 90: 119-127.). Aiming at the problem of low precision of binocular vision measurement of large-amplitude swinging objects, a target tracking method and a target tracking device based on trinocular vision are proposed by the technical company of intelligent-made-to-future (Beijing) robot system (Korea Xin. trinocular vision identification and tracking device and method: China, CN107507231A [ P ]. 2017.12.22.).
Inspired by multi-sensor fusion measurement technology, trinocular and higher-order multi-view systems are constructed by introducing more imaging sensors into a binocular system, which reduces measurement uncertainty and improves measurement precision to a certain extent. However, the precision of trinocular systems still cannot meet the requirements of high-precision measurement of mechanical parts. The reason is that all the cameras in a multi-view vision system use the pinhole imaging model, so principle errors such as parallax and distortion caused by ordinary lenses still cannot be eliminated.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a part measuring method based on heterogeneous stereoscopic vision, which effectively combines binocular stereo vision with monocular telecentric vision, compensates the principle errors of an ordinary optical system through telecentric imaging, integrates the advantages of binocular vision with the high measurement precision of telecentric vision, and solves the problem of low three-dimensional non-contact measurement precision on the production floor.
The technical scheme adopted by the invention is as follows:
a part measuring method based on heterogeneous stereoscopic vision comprises the following steps:
s1, determining the positions of a No. 1 telecentric industrial camera, a No. 2 ordinary industrial camera and a No. 3 ordinary industrial camera according to the size of the part to be measured and the depth of field of the cameras, keeping the position relation among the cameras, calibrating the cameras, and enabling the cameras to image the part to be measured to obtain a left image, a right image and a telecentric image;
s2, registering the left-right image pair and the left-right-telecentric image pair through processes including image preprocessing, feature extraction and feature matching;
s3, constructing a binocular vision system mathematical model according to the registration relation of the left image and the right image and the internal and external parameters calibrated by the common industrial camera to obtain the depth information of the characteristic points of the left image and the right image;
s4, grouping the feature points of the left-right-telecentric image into a group according to each 2 feature points, calculating the absolute difference of the depths of the 2 feature points in each group according to the depth information obtained in S3, and excluding the group if the absolute difference is greater than a preset threshold;
s5, projecting the feature points reserved in the S4 to a focal plane of a telecentric industrial camera, and calculating the distance of 2 projection points in each group along the horizontal and vertical coordinate directions;
s6, correcting the internal and external parameters marked in S1 according to the distance obtained in S5, reconstructing a binocular vision system mathematical model according to the registration relation of the left image and the right image obtained in S2 and the corrected internal and external parameters of the common industrial camera, and obtaining the depth information of the feature points of the updated left image and the updated right image;
and S7, constructing a three-dimensional model of the measured part according to the updated depth information, performing three-dimensional feature recognition, and extracting outline elements of the measured part, thereby realizing the measurement of the geometric features of the measured part.
Preferably, in S1, the telecentric industrial camera is calibrated as follows: the telecentric camera images a calibration piece of known size, and the ratio of the actual size of the calibration piece to its pixel size is acquired to obtain the equivalent pixel of the telecentric camera.
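Because a telecentric lens has constant magnification, this calibration reduces to a single ratio. A minimal sketch, with hypothetical numbers:

```python
def equivalent_pixel(actual_size_mm, size_in_pixels):
    """Equivalent pixel of a telecentric camera (mm per pixel).

    A telecentric lens has constant magnification, so one imaged calibration
    feature of known length is enough: the ratio of its actual size to its
    size in pixels gives the scale factor directly.
    """
    return actual_size_mm / size_in_pixels

# A 20.000 mm gauge block spanning 2000.0 px in the image -> 0.01 mm/pixel
k = equivalent_pixel(20.000, 2000.0)
```

In practice the ratio would be averaged over several features and both image axes (giving separate horizontal and vertical equivalent pixels).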
Preferably, in S1, the ordinary industrial cameras are calibrated as follows: after an ordinary industrial camera is installed, it is calibrated by Zhang Zhengyou's plane calibration method, and its internal and external parameters are determined from the calibration result.
Preferably, in S4, feature points in the left image, the right image and the telecentric image are extracted, homonymous features of the left and right images are found, the absolute difference of the depths of the two feature points in each group is computed, and if this absolute difference is greater than the preset threshold, the group of feature points is rejected.
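The depth-difference screening of S4 can be sketched as follows. This is a simplified illustration with hypothetical data; the threshold plays the role of the preset constant that keeps both points of a pair within the telecentric depth of field:

```python
def filter_pairs_by_depth(points, threshold):
    """Group reconstructed feature points into pairs of 2 and keep only the
    pairs whose absolute depth (z) difference is below `threshold`.

    points    -- list of (x, y, z) coordinates from the binocular solution
    threshold -- preset non-negative depth-difference limit
    """
    kept = []
    for i in range(0, len(points) - 1, 2):   # groups of 2 feature points
        p, q = points[i], points[i + 1]
        if abs(p[2] - q[2]) <= threshold:
            kept.append((p, q))
    return kept

pts = [(0, 0, 100.0), (5, 0, 100.4),   # kept: depth difference 0.4
       (0, 5, 100.0), (5, 5, 103.0)]   # rejected: depth difference 3.0
pairs = filter_pairs_by_depth(pts, 0.5)
```

Only the surviving pairs are projected onto the telecentric focal plane in S5, since the telecentric scale is trustworthy only inside its depth of field.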
Preferably, the working procedure of the part measuring method is as follows:
Establishing a world coordinate system O_w-x_w y_w z_w, a camera coordinate system O_1-x_1 y_1 z_1 of the No. 1 telecentric industrial camera, a camera coordinate system O_2-x_2 y_2 z_2 of the No. 2 ordinary industrial camera, and a camera coordinate system O_3-x_3 y_3 z_3 of the No. 3 ordinary industrial camera.

A certain point P on the part to be measured has world coordinates X_w = (x_w, y_w, z_w)^T; the coordinates of the point under the coordinate systems of the No. 1, No. 2 and No. 3 cameras are respectively:

  (x_j, y_j, z_j)^T = R_j (x_w, y_w, z_w)^T + T_j,  j = 1, 2, 3    (1)

in the formula, R_j and T_j represent the rotation matrix and translation vector of the coordinate transformation from the world coordinate system to the camera coordinate system of the j-th (j = 1, 2, 3) imaging system, also called the extrinsic parameters of the imaging system.

p_j denotes the image of P in the j-th imaging system, with pixel coordinates (u_j, v_j), where j = 1, 2, 3; the relationship between the image point and the space point is:

  u_1 = x_1 / k_x + u_01,  v_1 = y_1 / k_y + v_01,  where j = 1    (2)

  z_j (u_j, v_j, 1)^T = K_j (x_j, y_j, z_j)^T,  K_j = [f_xj 0 u_0j; 0 f_yj v_0j; 0 0 1],  where j = 2, 3    (3)

in the formula, k_x, k_y, f_xj, f_yj, u_0j and v_0j are internal parameters of the imaging systems (k_x and k_y are the equivalent pixels of the telecentric camera), and (u_j, v_j) are the pixel coordinates of the image point p_j.

Extracting the feature points of the images collected by the No. 1, No. 2 and No. 3 imaging systems, and establishing corresponding feature point sets F_1, F_2 and F_3.

From the set F_2, take any element a_i (a_i ∈ F_2); using a search strategy and a measure function, find its "same name" feature b_i in F_3 (b_i ∈ F_3); deposit a_i and b_i into new sets A and B, respectively.

From the set A, take any element a_i; its "same name" feature b_i exists in B. Using the search strategy and the measure function, find in the set F_1 the feature c_i corresponding to a_i, and verify with the measure function whether c_i and b_i are also a pair of "same name" features. If so, deposit a_i, b_i and c_i into new sets M_2, M_3 and M_1, respectively; if not, take another element a_k (a_k ∈ A, a_k ≠ a_i) from A. Repeat the above steps until all elements in A have been processed.

Equations (1) and (3) are combined and rewritten as follows, with the spatial coordinates to be found written on the left side of the equations:

  [f_xj r_j1 - (u_j - u_0j) r_j3] X_w = (u_j - u_0j) t_jz - f_xj t_jx
  [f_yj r_j2 - (v_j - v_0j) r_j3] X_w = (v_j - v_0j) t_jz - f_yj t_jy    (4)

in the formula, r_jk (k = 1, 2, 3) denotes the k-th row of R_j, and T_j = (t_jx, t_jy, t_jz)^T.

Let a_i ∈ M_2, b_i ∈ M_3, c_i ∈ M_1, i = 1, 2, …, m. The feature point a_i has pixel coordinates (u_2i, v_2i), i.e. j = 2; the feature point b_i has pixel coordinates (u_3i, v_3i), i.e. j = 3. Substituting the pixel coordinates of a_i and b_i into equation (4) gives four equations in the three unknown world coordinates X_wi:

  [f_x2 r_21 - (u_2i - u_02) r_23] X_wi = (u_2i - u_02) t_2z - f_x2 t_2x
  [f_y2 r_22 - (v_2i - v_02) r_23] X_wi = (v_2i - v_02) t_2z - f_y2 t_2y
  [f_x3 r_31 - (u_3i - u_03) r_33] X_wi = (u_3i - u_03) t_3z - f_x3 t_3x
  [f_y3 r_32 - (v_3i - v_03) r_33] X_wi = (v_3i - v_03) t_3z - f_y3 t_3y    (5)

The spatial coordinates X_wi = (x_wi, y_wi, z_wi)^T in equation (5) are unknown and the remaining parameters are known; equation (5) is rewritten as equation (6):

  A_i X_wi = B_i    (6)

in the formula, the 4 x 3 coefficient matrix A_i and the 4 x 1 vector B_i collect the left-hand rows and right-hand values of equation (5), as shown in equations (7) and (8), respectively:

  A_i = [f_x2 r_21 - (u_2i - u_02) r_23; f_y2 r_22 - (v_2i - v_02) r_23; f_x3 r_31 - (u_3i - u_03) r_33; f_y3 r_32 - (v_3i - v_03) r_33]    (7)

  B_i = [(u_2i - u_02) t_2z - f_x2 t_2x; (v_2i - v_02) t_2z - f_y2 t_2y; (u_3i - u_03) t_3z - f_x3 t_3x; (v_3i - v_03) t_3z - f_y3 t_3y]    (8)

Solving equation (6) by the least square method gives X_wi = (A_i^T A_i)^(-1) A_i^T B_i; a new set S is defined with the solved spatial coordinates as elements. From the set S take any two elements P_p and P_q (p ≠ q) and calculate the difference of their z coordinates, Δz = |z_wp - z_wq|. If Δz is less than a given non-negative constant δ, go to the set M_1, find the pixel points corresponding to the space points P_p and P_q, whose pixel coordinates are (u_1p, v_1p) and (u_1q, v_1q), and measure with the No. 1 imaging system to obtain:

  d_k = sqrt( [k_x (u_1p - u_1q)]^2 + [k_y (v_1p - v_1q)]^2 )    (9)

in the formula, k_x and k_y are the equivalent pixels in the horizontal and vertical directions of the No. 1 imaging system, respectively, in mm/pixel.

At a confidence level of 95%, with the expanded uncertainty of the No. 1 imaging system denoted U, the true value D_k of the distance d_k should satisfy:

  d_k - U ≤ D_k ≤ d_k + U    (10)

in the formula, D_k = ||X_wp - X_wq||. Change p and q and repeat the above process until all pairs (P_p, P_q) satisfying Δz < δ are linked; the subscript k used in equations (9) and (10) therefore takes the values k = 1, 2, …, n, and n should not be less than a preset minimum.

Equation (6) is equivalent to the nonlinear unconstrained extremum problem shown in equation (11):

  min over X_wi of ||A_i X_wi - B_i||^2    (11)

The measurement accuracy of the binocular vision system can reach 1/10 mm, which is within the depth-of-field range of the telecentric vision system; based on these two points, all instances of equation (10) are used as constraint conditions, and equation (11) becomes the nonlinear constrained extremum problem:

  min over X_w1, …, X_wm of Σ_i ||A_i X_wi - B_i||^2
  subject to d_k - U ≤ ||X_wp - X_wq|| ≤ d_k + U,  k = 1, 2, …, n    (12)

Using the least-squares solutions as initial values, i.e. X_wi^(0) = (A_i^T A_i)^(-1) A_i^T B_i, equation (12) is solved to obtain the updated solutions X_wi. The updated X_wi and the pixel coordinates are substituted into equations (1) and (3) to update the internal and external parameters of the cameras, namely f_xj, f_yj, u_0j, v_0j, R_j and T_j (j = 2, 3). The updated internal and external parameters are substituted into equation (6), and equation (6) is used to calculate the three-dimensional coordinates of the elements in the sets M_2 and M_3.
Compared with the prior art, the invention has the following implementation effects:
The invention organically combines prior art 1 with a telecentric vision system and obtains the measurement result by fusing data from multiple sensors, which increases the utilization of system information, enhances data reliability and improves system reliability.
The invention does not require a high-performance laser light source, so there is neither the cost problem of such a source nor the reduction in point cloud density caused by the intermittent motion of structured light.
The third camera introduced by the invention is a telecentric industrial camera, which has characteristics such as constant magnification, no parallax and small image distortion, and therefore higher measurement precision.
Drawings
Fig. 1 is a schematic diagram of the measurement of a binocular stereo vision system in the prior art 1.
Fig. 2 is a schematic diagram of a line structured light vision measurement in prior art 2.
Fig. 3 is a schematic diagram of the measurement of a multi-vision system in prior art 3.
Fig. 4 is a schematic structural diagram of the present invention.
Fig. 5 is a schematic diagram of the principle of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the invention.
Referring to fig. 4 and 5, in the drawings:
1 (a): telecentric industrial camera No. 1;
2 (a): no. 2 general industrial camera;
3 (a): no. 3 general industrial camera;
1 (b): image plane of camera 1 (a);
2 (b): image plane of camera 2 (a);
3 (b): image plane of camera 3 (a);
4: a part to be measured;
5: any measured point p on the measured part;
6: any measured point q on the measured part;
7: image point of measured point p in 1 (b);
8: image point of measured point q in 1 (b);
9: image point of measured point q in 2 (b);
10: image point of measured point p in 3 (b);
11: and (5) installing a foundation for the heterogeneous stereoscopic vision system.
Before measurement, the positions of a No. 1 telecentric industrial camera, a No. 2 common industrial camera and a No. 3 common industrial camera are determined according to the size of a measured part and the depth of field of the cameras, the position relation among the cameras is maintained, a three-dimensional measurement space shown in figure 4 is established, and then the three cameras are calibrated.
O_w-x_w y_w z_w is the world coordinate system, O_1-x_1 y_1 z_1 is the camera coordinate system of the No. 1 telecentric industrial camera, O_2-x_2 y_2 z_2 is the camera coordinate system of the No. 2 ordinary industrial camera, and O_3-x_3 y_3 z_3 is the camera coordinate system of the No. 3 ordinary industrial camera.

P is a certain point on the part to be measured, with world coordinates (x_w, y_w, z_w). The coordinates of the point under the coordinate systems of the No. 1, No. 2 and No. 3 cameras are respectively:

  (x_j, y_j, z_j)^T = R_j (x_w, y_w, z_w)^T + T_j,  j = 1, 2, 3    (1)

in the formula, R_j and T_j represent the coordinate transformation from the world coordinate system to the camera coordinate system of the j-th (j = 1, 2, 3) imaging system, also called the extrinsic parameters of the imaging system.

R_j is the rotation transformation matrix from the world coordinate system to the camera coordinate system; if j = 1, it is the rotation transformation matrix from the world coordinate system to the camera coordinate system of the No. 1 camera. The matrix is 3 x 3 and contains 9 elements in total, so its elements carry row and column subscripts, i.e. r_11, r_12, …, r_33 in equation (1).

T_j represents the translation transformation from the world coordinate system to the camera coordinate system, i.e. a translation vector; it is a three-dimensional vector with elements t_x, t_y and t_z.

p_j is used to represent the image of P in the j-th imaging system, with pixel coordinates (u_j, v_j), where j = 1, 2, 3; the relationship between the image point and the space point is:

  u_1 = x_1 / k_x + u_01,  v_1 = y_1 / k_y + v_01,  where j = 1    (2)

  z_j (u_j, v_j, 1)^T = K_j (x_j, y_j, z_j)^T,  K_j = [f_xj 0 u_0j; 0 f_yj v_0j; 0 0 1],  where j = 2, 3    (3)

in the formula, k_x, k_y, f_xj, f_yj, u_0j and v_0j are internal parameters of the imaging systems, and (u_j, v_j) are the pixel coordinates of the image point p_j.
Generally, the No. 1 telecentric imaging system is not calibrated for full internal and external parameters but only for its equivalent pixels, i.e. by imaging a calibration piece of known dimensions to obtain the ratio of the actual dimensions of the calibration piece to its pixel dimensions. There are various solutions to the calibration problem of the No. 2 and No. 3 imaging systems, for example Zhang's method (Zhang Zhengyou. A flexible new technique for camera calibration [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(11): 1330-1334.). It is assumed hereinafter that all three imaging systems are calibrated, that is, the equivalent pixels of the No. 1 imaging system and the internal and external parameters of the No. 2 and No. 3 imaging systems are known.
The three cameras image the measured part to obtain a left image, a right image and a telecentric image; assume that the images acquired by the No. 1, No. 2 and No. 3 imaging systems are image1, image2 and image3, respectively. Feature point extraction is performed on image1, image2 and image3, and corresponding feature descriptors are established, for example with the classical SIFT feature detection and description algorithm (Lowe D G. Distinctive image features from scale-invariant keypoints [J]. International Journal of Computer Vision, 2004, 60(2): 91-110.). The sets F_1, F_2 and F_3 are the feature points of image1, image2 and image3, respectively.
From the set F_2 take any element a_i (a_i ∈ F_2); using a search strategy and a measure function, find its "same name" feature b_i in F_3 (b_i ∈ F_3), i.e. perform retrieval and registration of features. Commonly used search strategies are KD-trees and their modified algorithms, such as the Best-Bin-First (BBF) search strategy (Beis J S, Lowe D G. Shape indexing using approximate nearest-neighbour search in high-dimensional spaces [C]// Conference on Computer Vision and Pattern Recognition, Puerto Rico, USA, 17-19 June 1997: 1000-1006.). The most commonly used measure function is the Euclidean distance. Theoretically the distance between a_i and b_i should be zero; in practice their Euclidean distance is usually not zero. Therefore, a relatively small non-negative number ε is often taken; when the Euclidean distance between a_i and b_i is less than ε, a_i and b_i are judged to be a pair of "same name" features. a_i and b_i are deposited into new sets A and B, respectively.
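This retrieval-and-registration step can be illustrated with a brute-force Euclidean matcher. This is a sketch only: a real system would replace the linear scan with a KD-tree/BBF search, and the descriptors and ε below are hypothetical:

```python
import numpy as np

def match_features(desc_a, desc_b, eps):
    """'Same name' feature retrieval: for each descriptor in desc_a, find its
    nearest neighbour in desc_b by Euclidean distance (the measure function)
    and accept the pair only if that distance is below the small non-negative
    constant eps.
    """
    matches = []
    for i, a in enumerate(desc_a):
        d = np.linalg.norm(desc_b - a, axis=1)   # distances to all candidates
        j = int(np.argmin(d))
        if d[j] < eps:                           # judge "same name" pair
            matches.append((i, j))
    return matches

A = np.array([[0.0, 0.0], [1.0, 1.0]])           # descriptors from one image
B = np.array([[1.0, 1.05], [5.0, 5.0], [0.02, 0.0]])  # candidates from the other
m = match_features(A, B, eps=0.1)
```

Accepted index pairs then populate the sets A and B of the text; the same routine, run against F_1, yields the third member c_i of each matched triple.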
From the set A take any element a_i (a_i ∈ A); its "same name" feature b_i is in B. According to the feature retrieval and registration algorithm of the previous step, find in the set F_1 the feature c_i corresponding to a_i (c_i ∈ F_1), and verify with the measure function whether c_i and b_i are also a pair of "same name" features. If so, deposit a_i, b_i and c_i into new sets M_2, M_3 and M_1, respectively; if not, take another element a_k (a_k ∈ A and a_k ≠ a_i) from the set A, and repeat the above steps until all elements in A have been processed.
Equations (1) and (3) are combined and rewritten as follows, with the spatial coordinates to be found written on the left side of the equations:

  [f_xj r_j1 - (u_j - u_0j) r_j3] X_w = (u_j - u_0j) t_jz - f_xj t_jx
  [f_yj r_j2 - (v_j - v_0j) r_j3] X_w = (v_j - v_0j) t_jz - f_yj t_jy    (4)

in the formula, X_w = (x_w, y_w, z_w)^T, r_jk (k = 1, 2, 3) denotes the k-th row of R_j, and T_j = (t_jx, t_jy, t_jz)^T.

Let a_i ∈ M_2, b_i ∈ M_3, c_i ∈ M_1, i = 1, 2, …, m. The feature point a_i has pixel coordinates (u_2i, v_2i), i.e. j = 2; the feature point b_i has pixel coordinates (u_3i, v_3i), i.e. j = 3. Substituting the pixel coordinates of a_i and b_i into equation (4) gives four equations in the three unknown coordinates of the space point:

  [f_x2 r_21 - (u_2i - u_02) r_23] X_wi = (u_2i - u_02) t_2z - f_x2 t_2x
  [f_y2 r_22 - (v_2i - v_02) r_23] X_wi = (v_2i - v_02) t_2z - f_y2 t_2y
  [f_x3 r_31 - (u_3i - u_03) r_33] X_wi = (u_3i - u_03) t_3z - f_x3 t_3x
  [f_y3 r_32 - (v_3i - v_03) r_33] X_wi = (v_3i - v_03) t_3z - f_y3 t_3y    (5)

The spatial coordinates X_wi = (x_wi, y_wi, z_wi)^T in equation (5) are unknown, and the remaining parameters are known. Equation (5) is rewritten as equation (6):

  A_i X_wi = B_i    (6)

in the formula, A_i and B_i are given by equations (7) and (8), respectively:

  A_i = [f_x2 r_21 - (u_2i - u_02) r_23; f_y2 r_22 - (v_2i - v_02) r_23; f_x3 r_31 - (u_3i - u_03) r_33; f_y3 r_32 - (v_3i - v_03) r_33]    (7)

  B_i = [(u_2i - u_02) t_2z - f_x2 t_2x; (v_2i - v_02) t_2z - f_y2 t_2y; (u_3i - u_03) t_3z - f_x3 t_3x; (v_3i - v_03) t_3z - f_y3 t_3y]    (8)
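The linear system of equations (4)-(8) and its least-squares solution can be sketched as follows, assuming two calibrated pinhole cameras. All matrices and pixel coordinates below are hypothetical test values:

```python
import numpy as np

def triangulate(K2, R2, T2, K3, R3, T3, uv2, uv3):
    """Stack the two linear rows contributed by each pinhole view, as in
    equations (4)-(8): four equations in the three unknown world coordinates,
    solved in the least-squares sense (A^T A)^-1 A^T B.
    """
    rows, rhs = [], []
    for K, R, T, (u, v) in ((K2, R2, T2, uv2), (K3, R3, T3, uv3)):
        fx, fy, u0, v0 = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
        r1, r2, r3 = R                      # rows of the rotation matrix
        tx, ty, tz = T
        rows.append(fx * r1 - (u - u0) * r3); rhs.append((u - u0) * tz - fx * tx)
        rows.append(fy * r2 - (v - v0) * r3); rhs.append((v - v0) * tz - fy * ty)
    A, b = np.array(rows), np.array(rhs)
    X, *_ = np.linalg.lstsq(A, b, rcond=None)
    return X

K = np.array([[1000.0, 0.0, 320.0], [0.0, 1000.0, 240.0], [0.0, 0.0, 1.0]])
# camera 2 at the origin, camera 3 shifted 60 mm along x; a point at (10, 20, 500)
# projects to (340, 280) and (220, 280) in the two views
X = triangulate(K, np.eye(3), np.zeros(3),
                K, np.eye(3), np.array([-60.0, 0.0, 0.0]),
                (340.0, 280.0), (220.0, 280.0))
```

With noise-free inputs the four equations are consistent and the least-squares solution is exact; with real pixel noise the residual of this system is precisely the quantity that equation (11) minimizes.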
Solving equation (6) by the least square method gives X_wi = (A_i^T A_i)^(-1) A_i^T B_i, and a new set S is defined with the solved spatial coordinates as elements. From the set S take any two elements P_p and P_q (p ≠ q), and calculate the difference of their z coordinates, Δz = |z_wp - z_wq|. If Δz is less than a given non-negative constant δ, go to the set M_1, find the pixel points corresponding to the space points P_p and P_q, whose pixel coordinates are (u_1p, v_1p) and (u_1q, v_1q), and measure with the No. 1 imaging system to obtain:

  d_k = sqrt( [k_x (u_1p - u_1q)]^2 + [k_y (v_1p - v_1q)]^2 )    (9)

in the formula, k_x and k_y are the equivalent pixels of the No. 1 imaging system in the horizontal and vertical directions, in mm/pixel; that is, the No. 1 camera is used to perform a point-to-point distance measurement between the two points within its measuring range.
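Equation (9) reduces to scaling the pixel offsets by the equivalent pixels and combining them. A minimal sketch with hypothetical values:

```python
import math

def telecentric_distance(kx, ky, p1_px, p2_px):
    """Distance between two image points as measured by the telecentric camera:
    pixel offsets along each axis are scaled by the equivalent pixels kx, ky
    (mm/pixel) and combined, per equation (9). Valid only when both object
    points lie within the telecentric depth of field.
    """
    du = kx * (p1_px[0] - p2_px[0])
    dv = ky * (p1_px[1] - p2_px[1])
    return math.hypot(du, dv)

# 300 px apart horizontally, 400 px vertically, at 0.01 mm/pixel -> 5.0 mm
d = telecentric_distance(0.01, 0.01, (100.0, 100.0), (400.0, 500.0))
```

Note that any principal-point offset in equation (2) cancels in the pixel differences, which is why only the equivalent pixels of the telecentric camera need to be calibrated for this measurement.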
At a confidence level of 95%, with the expanded uncertainty of the No. 1 imaging system denoted U, the true value D_k of the distance d_k should satisfy:

  d_k - U ≤ D_k ≤ d_k + U    (10)

in the formula, D_k = ||X_wp - X_wq||.

Change p and q and repeat the above process until all pairs (P_p, P_q) satisfying Δz < δ are linked; the subscript k used in equations (9) and (10) therefore takes the values k = 1, 2, …, n, and n should not be less than a preset minimum.
Equation (6) is equivalent to the nonlinear unconstrained extremum problem shown in equation (11):

  min over X_wi of ||A_i X_wi - B_i||^2    (11)

The telecentric vision system has good optical performance, and its measurement accuracy is far higher than that of the binocular vision system. In addition, the measurement accuracy of the binocular vision system can reach 1/10 mm, which is within the depth-of-field range of the telecentric vision system. Therefore, based on the above two points, all instances of equation (10) are used as constraint conditions, and equation (11) becomes the nonlinear constrained extremum problem:

  min over X_w1, …, X_wm of Σ_i ||A_i X_wi - B_i||^2
  subject to d_k - U ≤ ||X_wp - X_wq|| ≤ d_k + U,  k = 1, 2, …, n    (12)

Using the least-squares solutions as initial values, i.e. X_wi^(0) = (A_i^T A_i)^(-1) A_i^T B_i, equation (12) is solved to obtain the updated solutions X_wi. The updated X_wi and the pixel coordinates are substituted into equations (1) and (3) to update the internal and external parameters of the cameras, namely f_xj, f_yj, u_0j, v_0j, R_j and T_j (j = 2, 3). The updated internal and external parameters are substituted into equation (6), and equation (6) is used to calculate the three-dimensional coordinates of the elements in the sets M_2 and M_3.
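The constrained refinement of equation (12) can be sketched with a generic SLSQP solver. This is a simplified stand-in, assuming SciPy is available; the identity "observation" matrices and all numbers are hypothetical, and the two smooth inequality constraints per pair encode equation (10):

```python
import numpy as np
from scipy.optimize import minimize

def refine_with_telecentric(A_list, b_list, pairs, d_meas, U, X0):
    """Minimise the stacked residuals of equation (11) subject to each
    telecentric distance d_k holding within the expanded uncertainty U
    (equation (12)). X0 is the least-squares solution used as start point.
    """
    n = len(A_list)

    def cost(x):
        X = x.reshape(n, 3)
        return sum(float(np.sum((A @ X[i] - b) ** 2))
                   for i, (A, b) in enumerate(zip(A_list, b_list)))

    cons = []
    for (p, q), d in zip(pairs, d_meas):
        def gap(x, p=p, q=q):
            X = x.reshape(n, 3)
            return float(np.linalg.norm(X[p] - X[q]))
        # d - U <= ||Xp - Xq|| <= d + U as two inequality constraints
        cons.append({'type': 'ineq', 'fun': lambda x, d=d, g=gap: (d + U) - g(x)})
        cons.append({'type': 'ineq', 'fun': lambda x, d=d, g=gap: g(x) - (d - U)})
    res = minimize(cost, np.asarray(X0, float).ravel(),
                   constraints=cons, method='SLSQP')
    return res.x.reshape(n, 3)

A_list = [np.eye(3), np.eye(3)]            # stand-ins for the A_i of eq. (6)
b_list = [np.array([0.0, 0.0, 100.2]),     # noisy binocular solutions: their
          np.array([3.1, 0.0, 99.8])]      # distance 3.126 exceeds d + U
X = refine_with_telecentric(A_list, b_list, pairs=[(0, 1)],
                            d_meas=[3.0], U=0.02, X0=b_list)
```

The refined points are pulled toward the telecentric distance band while staying as close as possible to the binocular solution, which is exactly the role equation (12) plays before the camera parameters are updated.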
Finally, the three-dimensional coordinates obtained in the previous step are used to reconstruct a three-dimensional model of the part and perform three-dimensional multi-element measurement.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.