
CN110487286B - Robot pose judgment method based on point feature projection and laser point cloud fusion - Google Patents

Robot pose judgment method based on point feature projection and laser point cloud fusion

Info

Publication number
CN110487286B
CN110487286B
Authority
CN
China
Prior art keywords
point
robot
pose
information
feature
Prior art date
Legal status
Active
Application number
CN201910733133.2A
Other languages
Chinese (zh)
Other versions
CN110487286A (en)
Inventor
梁生贤
管兆杰
史运伟
徐学敏
李园园
汪双灿
吴燕茹
Current Assignee
Shanghai Electrical Equipment Testing Co ltd
Shanghai Tilva Certification Technology Co ltd
Shanghai Electrical Apparatus Research Institute Group Co Ltd
Original Assignee
Shanghai Electrical Equipment Testing Co ltd
Shanghai Tilva Certification Technology Co ltd
Shanghai Electrical Apparatus Research Institute Group Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Electrical Equipment Testing Co ltd, Shanghai Tilva Certification Technology Co ltd, Shanghai Electrical Apparatus Research Institute Group Co Ltd
Priority to CN201910733133.2A
Publication of CN110487286A
Application granted
Publication of CN110487286B
Legal status: Active

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30Map- or contour-matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention relates to a robot pose judgment method based on point feature projection and laser point cloud fusion. Image feature points are extracted with the ORB algorithm, the feature-point coordinates are projected onto a two-dimensional plane and fused with the lidar point cloud to obtain information about the surrounding environment, which is then matched against the existing map to judge the pose of the robot and obtain an accurate pose. Compared with existing algorithms, the method effectively improves the matching between the point cloud information and the map, improves the positioning accuracy of the robot, and offers better stability.

Description

Robot pose judgment method based on point feature projection and laser point cloud fusion
Technical Field
The invention belongs to the technical field of robot detection methods.
Background
SLAM has for many years been the key and core technology of autonomous robot navigation, and it plays an important role in patrol robots, warehouse transfer robots, and autonomous driving. SLAM refers to incrementally building a map of an unknown environment from the information acquired by the sensors carried by a mobile robot as it moves, while simultaneously using the map being built to update the robot's position and pose. SLAM technology is broadly divided into laser SLAM and visual SLAM.
Laser SLAM uses a lidar as the main sensor and estimates the robot pose within a particle-filter framework during simultaneous localization and mapping to complete robot positioning. Doucet and other researchers proposed a SLAM method based on the Rao-Blackwellized particle filter (RBPF); the algorithm approximates the joint probability density of the mobile robot pose and the environment map well, but it suffers from high computational complexity, large memory usage, and poor real-time performance, so many researchers have proposed improvements of various forms. Castellanos J. A. et al. studied SLAM that fuses lidar and monocular vision, combining relevant features of the mobile robot's environment into meaningful landmarks; although this effectively reduces data uncertainty, the computation is heavy and real-time performance is poor. Luo Yuan, Su Qin, Zhang Yi, et al. proposed an RBPF algorithm whose mixture proposal distribution is optimized by annealing parameters; it reduces the number of particles required while preserving particle diversity, but experiments show that the system's robustness is low.
Visual SLAM mainly uses a vision sensor to acquire information about the surroundings to complete robot localization and mapping. Davison proposed the first real-time monocular visual SLAM system, a milestone, in 2007, but its application scenarios are narrow, the number of landmarks is limited, and sparse feature points easily lose tracking. In the same year, Klein et al. proposed PTAM (Parallel Tracking and Mapping), which parallelizes tracking and mapping, is applicable to augmented reality, and was the first such system to use nonlinear optimization. Building on PTAM, Mur-Artal et al. proposed ORB-SLAM in 2015, which works with monocular cameras, can map and localize in real time on a mainstream CPU, and has become one of the most complete visual SLAM systems to date. It contains all the stages of the classical visual SLAM framework: camera image reading, visual odometry, back-end optimization, loop closure detection, and map construction.
Localization and navigation are among the key technologies in current autonomous mobile robot research. For the localization problem, Dellaert, Fox, et al. proposed Monte Carlo Localization (MCL) for mobile robots, which estimates the posterior probability distribution of the robot pose with a set of particles. However, this method requires a fixed number of samples to estimate the robot's initial pose; even after the pose has been determined, the number of particle samples cannot be reduced, wasting computing resources. Fox et al. later introduced KLD (Kullback-Leibler divergence) sampling with an adaptive sample set, i.e., adaptive sampling for particle filtering, which lets the number of particle samples change over time and thereby significantly reduces the number of samples needed.
In the existing method, a laser coordinate system pose change matrix is obtained from the chassis pose change matrix, and a positioning result is obtained from the laser coordinate system pose change matrix together with a laser matching algorithm.
Disclosure of Invention
The technical problem to be solved by the invention is that existing robot visual positioning and navigation methods have low positioning accuracy.
In order to solve this technical problem, the technical scheme of the invention is to provide a robot pose judgment method based on point feature projection and laser point cloud fusion, which is characterized by comprising the following steps:
step 1, place k different markers at key positions of the operating environment, and extract a feature point of each marker with a binocular camera to obtain k feature points, wherein each feature point is used as a word and the k words form a dictionary;
step 2, obtaining sample particles;
step 3, localize the robot: estimate the pose of the robot in the particle filter prediction stage with the particles obtained in the previous step, and use the odometry motion model to apply the state transition function

$$x_t^{(i)} \sim p\left(x_t \mid x_{t-1}^{(i)}, u_{t-1}\right)$$

to the robot pose, obtaining the proposal distribution of the pose, where x_t^(i) denotes the pose of particle i at time t and u_{t-1} denotes the odometry data input at time t-1;
step 4, during operation of the robot in the operating environment, extract feature points of the surrounding environment while scanning it with the binocular camera, and, using the intrinsic and extrinsic parameters of the binocular camera, convert the pixel coordinates of each extracted feature point p_i into coordinates in the camera coordinate system, denoted (x_i, y_i, z_i);
step 5, match the feature point p_i extracted in step 4 against the words in the dictionary established in step 1; if the match succeeds, go to step 6, otherwise go to step 8;
step 6, project the three-dimensional coordinates (x_i, y_i, z_i) of the feature point p_i onto a two-dimensional plane, so that p_i has coordinates (x_i, y_i); meanwhile, let M denote the point cloud information collected by the lidar, let M_j(x_j, y_j) be one point in M, and let L be the straight line from the lidar origin to M_j(x_j, y_j); if the distance from the feature point p_i to the line L is less than threshold one, the feature point p_i falls on the line L and the method goes to step 7, otherwise it goes to step 8;
step 7, if the Euclidean distance between the feature point p_i and M_j(x_j, y_j) is less than threshold two, then p_i and M_j(x_j, y_j) are the same point, the visual information and the lidar information are fused successfully, the fused observation at time t is obtained and used as the observation information z_t at time t, and the method goes to step 9; if the Euclidean distance between p_i and M_j(x_j, y_j) is not less than threshold two, go to step 8;
step 8, use the point cloud information M collected by the lidar as the observation information z_t at time t, and go to step 9;
step 9, in the resampling stage of the particle filter, input the observation information z_t obtained in the previous step, compute for this iteration the weights of the particles used in the particle filter prediction stage, judge the quality of each particle by its weight, keep the particles with large weights and discard those with small weights, then return to step 3 for the next iteration; after the set number of iterations is reached, the correct pose of the robot is obtained.
Preferably, in step 4, feature points are extracted from the surrounding environment using the ORB algorithm.
Preferably, in step 9, the calculation formula for the weight of a particle is:

$$w_t^{(i)} = \eta \, p\left(z_t \mid m_{t-1}, x_t^{(i)}\right)$$

where w_t^(i) denotes the weight of particle i at time t, η denotes the normalization factor, and m_{t-1} denotes the map information at time t-1.
The invention has the following advantages:
(1) In robot visual positioning and navigation, existing methods rely only on point features on the image-processing side to estimate the robot pose, so the pose estimation accuracy is low and the error is large. For obstacle recognition, the present method places markers at key positions in the surrounding environment, captures images with a binocular camera, extracts feature points, and stores them as a dictionary (each feature point is defined as a word, and the different words form the dictionary). When the robot passes a marker, the dictionary is retrieved and the feature points are matched.
(2) In robot visual positioning and navigation, environment matching is usually performed with image feature-point information from vision alone, so the information received is limited and the positioning accuracy is low. The invention takes the particle filter algorithm as its framework, projects the feature-point coordinates in the camera coordinate system onto the xy plane, and converts them into the lidar coordinate system through a rotation transformation (a minimal sketch of this projection appears after this list). With the lidar as the origin, each laser beam direction defines a straight line. If a feature point lies on such a line, a threshold test is applied: if the Euclidean distance between the point and the point cloud scanned along that line is smaller than a threshold, the visual feature point and the lidar point cloud are matched successfully, completing the matching of surrounding obstacles.
(3) Environment information is processed more effectively: more detailed environment information is obtained and matched against the map, which improves the positioning accuracy of the robot.
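A minimal sketch of the projection and rotation described in advantage (2), assuming a planar camera-to-lidar extrinsic calibration given by a yaw angle and an in-plane offset; the choice of the xy plane follows the description above, but the specific parameterization and calibration values are assumptions.

```python
import numpy as np

def camera_point_to_lidar_plane(p_cam, yaw, t_xy):
    """Project a camera-frame feature point (x, y, z) onto the xy plane and then
    rotate/translate it into the lidar coordinate system.

    p_cam : (x, y, z) feature coordinates in the camera frame
    yaw   : rotation angle between the camera and lidar frames (radians)
    t_xy  : planar offset of the camera origin in the lidar frame
    """
    x, y = p_cam[0], p_cam[1]            # drop the third coordinate: projection onto the xy plane
    rot = np.array([[np.cos(yaw), -np.sin(yaw)],
                    [np.sin(yaw),  np.cos(yaw)]])
    return rot @ np.array([x, y]) + np.asarray(t_xy, dtype=float)
```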
Drawings
Fig. 1 is a flowchart for judging the pose of a robot.
Detailed Description
The invention will be further illustrated with reference to the following specific examples. It should be understood that these examples are for illustrative purposes only and are not intended to limit the scope of the present invention. Further, it should be understood that various changes or modifications of the present invention may be made by those skilled in the art after reading the teaching of the present invention, and such equivalents may fall within the scope of the present invention as defined in the appended claims.
As shown in fig. 1, the invention provides a robot pose determination method based on point feature projection and laser point cloud fusion, which comprises the following steps:
step 1, place k different markers at key positions of the operating environment, and extract a feature point of each marker with a binocular camera to obtain k feature points, wherein each feature point is used as a word and the k words form a dictionary;
step 2, obtaining sample particles;
step 3, localize the robot: estimate the pose of the robot in the particle filter prediction stage with the particles obtained in the previous step, and use the odometry motion model to apply the state transition function

$$x_t^{(i)} \sim p\left(x_t \mid x_{t-1}^{(i)}, u_{t-1}\right)$$

to the robot pose, obtaining the proposal distribution of the pose, where x_t^(i) denotes the pose of particle i at time t and u_{t-1} denotes the odometry data input at time t-1;
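As an illustration of this prediction step, the following is a minimal sketch that assumes a planar pose [x, y, θ], an odometry increment expressed in the robot frame, and Gaussian noise; the noise scales `alphas` and the particle layout are assumptions for illustration, not values from the patent.

```python
import numpy as np

def predict_particles(particles, u, alphas=(0.05, 0.05, 0.02)):
    """Propagate each particle x_{t-1}^(i) through an odometry motion model
    p(x_t | x_{t-1}^(i), u_{t-1}) to form the proposal distribution.

    particles : (N, 3) array of poses [x, y, theta]
    u         : odometry increment [dx, dy, dtheta] in the robot frame
    alphas    : illustrative noise scales (translation x, translation y, rotation)
    """
    n = particles.shape[0]
    dx, dy, dtheta = u
    # Sample a noisy control for every particle (Gaussian noise is an assumption).
    ndx = dx + np.random.normal(0.0, alphas[0], n)
    ndy = dy + np.random.normal(0.0, alphas[1], n)
    ndth = dtheta + np.random.normal(0.0, alphas[2], n)
    theta = particles[:, 2]
    # Rotate the body-frame increment into the world frame and apply it.
    particles[:, 0] += ndx * np.cos(theta) - ndy * np.sin(theta)
    particles[:, 1] += ndx * np.sin(theta) + ndy * np.cos(theta)
    particles[:, 2] = (theta + ndth + np.pi) % (2.0 * np.pi) - np.pi  # wrap to [-pi, pi)
    return particles
```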
step 4, during operation of the robot in the operating environment, while the binocular camera scans the surrounding environment, extract feature points of the surroundings with the ORB (Oriented FAST and Rotated BRIEF) algorithm, and, using the intrinsic and extrinsic parameters of the binocular camera, convert the pixel coordinates of each extracted feature point p_i into coordinates in the camera coordinate system, denoted (x_i, y_i, z_i);
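A sketch of this step, assuming OpenCV's ORB detector and a rectified stereo pair from the binocular camera; the intrinsics fx, fy, cx, cy, the stereo baseline, and the block-matching parameters are illustrative stand-ins for the camera's actual calibration.

```python
import cv2
import numpy as np

def extract_camera_points(img_left, img_right, fx, fy, cx, cy, baseline, n_features=500):
    """Detect ORB feature points p_i in the left image and back-project them to
    camera-frame coordinates (x_i, y_i, z_i) using stereo disparity.
    img_left, img_right: rectified 8-bit grayscale images."""
    orb = cv2.ORB_create(nfeatures=n_features)
    keypoints, descriptors = orb.detectAndCompute(img_left, None)

    # Dense block matching as a simple stand-in for per-feature stereo matching.
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(img_left, img_right).astype(np.float32) / 16.0

    points = []
    for kp in keypoints:
        u, v = int(round(kp.pt[0])), int(round(kp.pt[1]))
        d = disparity[v, u]
        if d <= 0:                      # no valid stereo match for this feature
            continue
        z = fx * baseline / d           # depth from disparity
        x = (u - cx) * z / fx           # pinhole back-projection
        y = (v - cy) * z / fy
        points.append((x, y, z))
    return keypoints, descriptors, points
```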
step 5, match the feature point p_i extracted in step 4 against the words in the dictionary established in step 1; if the match succeeds, go to step 6, otherwise go to step 8;
step 6, project the three-dimensional coordinates (x_i, y_i, z_i) of the feature point p_i onto a two-dimensional plane, so that p_i has coordinates (x_i, y_i); meanwhile, let M denote the point cloud information collected by the lidar, let M_j(x_j, y_j) be one point in M, and let the straight line L from the lidar origin to M_j(x_j, y_j) be Ax + By + C = 0. If the distance from the feature point p_i to the line L is smaller than threshold one, Δ, the feature point p_i falls on the line L, i.e.

$$\frac{\left|A x_i + B y_i + C\right|}{\sqrt{A^2 + B^2}} < \Delta$$

If this holds, go to step 7; otherwise go to step 8;
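The point-to-line test above can be sketched as follows; since the line L passes through the lidar origin, the general form Ax + By + C = 0 has C = 0 with A and B derived from M_j. Both points are assumed to already be expressed in the lidar frame, and `delta` is an illustrative value for threshold one (Δ).

```python
import numpy as np

def on_lidar_ray(p_i, m_j, delta=0.05):
    """Return True if the projected feature p_i = (x_i, y_i) lies within `delta`
    of the line L through the lidar origin (0, 0) and the cloud point m_j = (x_j, y_j)."""
    x_i, y_i = p_i
    x_j, y_j = m_j
    # Line through the origin and (x_j, y_j):  y_j * x - x_j * y = 0  (A = y_j, B = -x_j, C = 0)
    a, b, c = y_j, -x_j, 0.0
    norm = np.hypot(a, b)
    if norm == 0.0:                      # degenerate cloud point at the origin
        return False
    return abs(a * x_i + b * y_i + c) / norm < delta
```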
step 7, if the Euclidean distance between the feature point p_i and M_j(x_j, y_j) is less than threshold two, λ, i.e.

$$\sqrt{\left(x_i - x_j\right)^2 + \left(y_i - y_j\right)^2} < \lambda$$

then p_i and M_j(x_j, y_j) are the same point, the visual information and the lidar information are fused successfully, the fused observation at time t is obtained and used as the observation information z_t at time t, and the method goes to step 9; if the Euclidean distance between p_i and M_j(x_j, y_j) is not less than threshold two, go to step 8;
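And a matching sketch of the fusion test in step 7, with `lam` standing in for threshold two (λ). Averaging the two points when the test passes is an assumption; the patent states only that the visual and lidar information are fused into one observation z_t.

```python
import numpy as np

def fuse_observation(p_i, m_j, lam=0.10):
    """If the Euclidean distance between the projected feature p_i and the cloud
    point m_j is below the threshold, treat them as the same physical point and
    return a fused observation; otherwise fall back to the raw cloud point."""
    d = np.hypot(p_i[0] - m_j[0], p_i[1] - m_j[1])
    if d < lam:
        fused = 0.5 * (np.asarray(p_i, dtype=float) + np.asarray(m_j, dtype=float))
        return fused, True               # fusion succeeded
    return np.asarray(m_j, dtype=float), False
```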
step 8, use the point cloud information M collected by the lidar as the observation information z_t at time t, and go to step 9;
step 9, in the resampling stage of the particle filter, input the observation information z_t obtained in the previous step, compute for this iteration the weights of the particles used in the particle filter prediction stage, judge the quality of each particle by its weight, keep the particles with large weights and discard those with small weights, then return to step 3 for the next iteration; after the set number of iterations is reached, the correct pose of the robot is obtained, wherein:
the calculation formula of the weight of the particle is:
Figure GDA0003898148560000053
in the formula (I), the compound is shown in the specification,
Figure GDA0003898148560000061
representing the weight of the particle i at time t, η representing the normalization factor, m t-1 Which represents the map information at time t-1.
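A sketch of the re-weighting and resampling in step 9; `observation_likelihood` is a placeholder for a model of p(z_t | m_{t-1}, x_t^(i)) (for example a likelihood field over the map built so far), and systematic resampling is used here as one common way of keeping high-weight particles and discarding low-weight ones.

```python
import numpy as np

def reweight_and_resample(particles, z_t, map_prev, observation_likelihood):
    """Weight each particle by w_t^(i) = eta * p(z_t | m_{t-1}, x_t^(i)), then
    resample so that particles with large weights are kept and particles with
    small weights are discarded."""
    n = particles.shape[0]
    w = np.array([observation_likelihood(z_t, map_prev, particles[i]) for i in range(n)])
    w = w + 1e-300                       # avoid an all-zero weight vector
    w /= w.sum()                         # eta: normalization factor

    # Systematic (low-variance) resampling.
    positions = (np.arange(n) + np.random.uniform()) / n
    idx = np.searchsorted(np.cumsum(w), positions)
    idx = np.minimum(idx, n - 1)         # guard against floating-point round-off
    return particles[idx].copy(), np.full(n, 1.0 / n)
```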

Claims (3)

1. A robot pose judgment method based on point feature projection and laser point cloud fusion is characterized by comprising the following steps:
step 1, place k different markers at key positions of the operating environment, and extract a feature point of each marker with a binocular camera to obtain k feature points, wherein each feature point is used as a word and the k words form a dictionary;
step 2, obtaining sample particles;
step 3, localize the robot: estimate the pose of the robot in the particle filter prediction stage with the particles obtained in the previous step, and use the odometry motion model to apply the state transition function

$$x_t^{(i)} \sim p\left(x_t \mid x_{t-1}^{(i)}, u_{t-1}\right)$$

to the robot pose, obtaining the proposal distribution of the pose, where x_t^(i) denotes the pose of particle i at time t and u_{t-1} denotes the odometry data input at time t-1;
step 4, during operation of the robot in the operating environment, extract feature points of the surrounding environment while scanning it with the binocular camera, and, using the intrinsic and extrinsic parameters of the binocular camera, convert the pixel coordinates of each extracted feature point p_i into coordinates in the camera coordinate system, denoted (x_i, y_i, z_i);
step 5, match the feature point p_i extracted in step 4 against the words in the dictionary established in step 1; if the match succeeds, go to step 6, otherwise go to step 8;
step 6, project the three-dimensional coordinates (x_i, y_i, z_i) of the feature point p_i onto a two-dimensional plane, so that p_i has coordinates (x_i, y_i); meanwhile, let M denote the point cloud information collected by the lidar, let M_j(x_j, y_j) be one point in M, and let L be the straight line from the lidar origin to M_j(x_j, y_j); if the distance from the feature point p_i to the line L is less than threshold one, the feature point p_i falls on the line L and the method goes to step 7, otherwise it goes to step 8;
step 7, if the Euclidean distance between the feature point p_i and M_j(x_j, y_j) is less than threshold two, then p_i and M_j(x_j, y_j) are the same point, the visual information and the lidar information are fused successfully, the fused observation at time t is obtained and used as the observation information z_t at time t, and the method goes to step 9; if the Euclidean distance between p_i and M_j(x_j, y_j) is not less than threshold two, go to step 8;
step 8, use the point cloud information M collected by the lidar as the observation information z_t at time t, and go to step 9;
step 9, in the resampling stage of the particle filter, input the observation information z_t obtained in the previous step, compute for this iteration the weights of the particles used in the particle filter prediction stage, judge the quality of each particle by its weight, keep the particles with large weights and discard those with small weights, then return to step 3 for the next iteration; after the set number of iterations is reached, the correct pose of the robot is obtained.
2. The robot pose determination method based on the fusion of the point feature projection and the laser point cloud as claimed in claim 1, wherein in step 4, the feature point extraction is performed on the surrounding environment by using an ORB algorithm.
3. The method for determining the pose of the robot based on the fusion of the point feature projection and the laser point cloud as claimed in claim 1, wherein in step 9, the formula for calculating the weight of the particles is as follows:
$$w_t^{(i)} = \eta \, p\left(z_t \mid m_{t-1}, x_t^{(i)}\right)$$

where w_t^(i) denotes the weight of particle i at time t, η denotes the normalization factor, and m_{t-1} denotes the map information at time t-1.
CN201910733133.2A 2019-08-09 2019-08-09 Robot pose judgment method based on point feature projection and laser point cloud fusion Active CN110487286B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910733133.2A CN110487286B (en) 2019-08-09 2019-08-09 Robot pose judgment method based on point feature projection and laser point cloud fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910733133.2A CN110487286B (en) 2019-08-09 2019-08-09 Robot pose judgment method based on point feature projection and laser point cloud fusion

Publications (2)

Publication Number Publication Date
CN110487286A CN110487286A (en) 2019-11-22
CN110487286B (en) 2022-12-20

Family

ID=68550416

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910733133.2A Active CN110487286B (en) 2019-08-09 2019-08-09 Robot pose judgment method based on point feature projection and laser point cloud fusion

Country Status (1)

Country Link
CN (1) CN110487286B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111553937B (en) * 2020-04-23 2023-11-21 东软睿驰汽车技术(上海)有限公司 Laser point cloud map construction method, device, equipment and system
CN112764053B (en) * 2020-12-29 2022-07-15 深圳市普渡科技有限公司 Fusion positioning method, device, equipment and computer readable storage medium
CN113421356B (en) * 2021-07-01 2023-05-12 北京华信傲天网络技术有限公司 Inspection system and method for equipment in complex environment
CN114489036B (en) * 2021-07-25 2023-07-14 西北农林科技大学 Indoor robot navigation control method based on SLAM
CN114018236B (en) * 2021-09-30 2023-11-03 哈尔滨工程大学 Laser vision strong coupling SLAM method based on self-adaptive factor graph
CN114721001A (en) * 2021-11-17 2022-07-08 长春理工大学 Mobile robot positioning method based on multi-sensor fusion

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107085422A (en) * 2017-01-04 2017-08-22 北京航空航天大学 A kind of tele-control system of the multi-functional Hexapod Robot based on Xtion equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9020637B2 (en) * 2012-11-02 2015-04-28 Irobot Corporation Simultaneous localization and mapping for a mobile robot
US9037396B2 (en) * 2013-05-23 2015-05-19 Irobot Corporation Simultaneous localization and mapping for a mobile robot

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107085422A (en) * 2017-01-04 2017-08-22 北京航空航天大学 A kind of tele-control system of the multi-functional Hexapod Robot based on Xtion equipment

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"Corrective Gradient Refinement for Mobile Robot Localization";Joydeep Biswas;《2011 IEEE/RSJ International Conference on Intelligent Robots and Systems》;20111231;正文第73-78页 *
"基于激光雷达传感器的RBPF-SLAM 系统优化设计";王依人 等;《传感器与微系统》;20171231;第36卷(第9期);正文第77-80页 *
基于图优化的同时定位与地图创建综述;梁明杰等;《机器人》;20130715(第04期);正文第500-511页 *

Also Published As

Publication number Publication date
CN110487286A (en) 2019-11-22

Similar Documents

Publication Publication Date Title
CN110487286B (en) Robot pose judgment method based on point feature projection and laser point cloud fusion
CN111563442B (en) Slam method and system for fusing point cloud and camera image data based on laser radar
CN110223348B (en) Robot scene self-adaptive pose estimation method based on RGB-D camera
EP3633615A1 (en) Deep learning network and average drift-based automatic vessel tracking method and system
CN109186606B (en) Robot composition and navigation method based on SLAM and image information
CN113516664A (en) Visual SLAM method based on semantic segmentation dynamic points
CN105652871A (en) Repositioning method for mobile robot
CN112965063B (en) Robot mapping and positioning method
CN110986956A (en) Autonomous learning global positioning method based on improved Monte Carlo algorithm
CN113920198B (en) Coarse-to-fine multi-sensor fusion positioning method based on semantic edge alignment
WO2022228391A1 (en) Terminal device positioning method and related device therefor
CN108469729B (en) Human body target identification and following method based on RGB-D information
CN110751123A (en) Monocular vision inertial odometer system and method
CN115496900A (en) Sparse fusion-based online carbon semantic map construction method
CN116912404A (en) Laser radar point cloud mapping method for scanning distribution lines in dynamic environment
CN113554705B (en) Laser radar robust positioning method under changing scene
CN113379915B (en) Driving scene construction method based on point cloud fusion
CN112733971B (en) Pose determination method, device and equipment of scanning equipment and storage medium
CN114387576A (en) Lane line identification method, system, medium, device and information processing terminal
CN113838129A (en) Method, device and system for obtaining pose information
CN117333846A (en) Detection method and system based on sensor fusion and incremental learning in severe weather
CN117419719A (en) IMU-fused three-dimensional laser radar positioning and mapping method
CN117075158A (en) Pose estimation method and system of unmanned deformation motion platform based on laser radar
Zhong et al. A factor graph optimization mapping based on normaldistributions transform
CN115482282A (en) Dynamic SLAM method with multi-target tracking capability in automatic driving scene

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant