
WO2020186493A1 - Method and system for navigating and dividing cleaning region, mobile robot, and cleaning robot - Google Patents

Method and system for navigating and dividing cleaning region, mobile robot, and cleaning robot

Info

Publication number
WO2020186493A1
Authority
WO
WIPO (PCT)
Prior art keywords
position information
area
mobile robot
candidate
cleaning
Prior art date
Application number
PCT/CN2019/078963
Other languages
French (fr)
Chinese (zh)
Inventor
周圣靓
崔彧玮
李重兴
Original Assignee
珊口(深圳)智能科技有限公司
珊口(上海)智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 珊口(深圳)智能科技有限公司 and 珊口(上海)智能科技有限公司
Priority to PCT/CN2019/078963
Priority to CN201980060807.5A (CN112867424B)
Priority to CN202210292338.3A (CN114947652A)
Publication of WO2020186493A1

Links

Images

Classifications

    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L: DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L 11/00: Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L 11/24: Floor-sweeping machines, motor-driven
    • A47L 11/40: Parts or details of machines not provided for in groups A47L 11/02 - A47L 11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L 11/4002: Installations of electric equipment
    • A47L 11/4011: Regulation of the cleaning machine by electric means; Control systems and remote control systems therefor
    • A47L 11/4061: Steering means; Means for avoiding obstacles; Details related to the place where the driver is accommodated

Definitions

  • This application relates to the field of mobile robot technology, and in particular to a method and system for navigating and dividing a cleaning region, a mobile robot, and a cleaning robot.
  • Mobile robots are mechanical devices that perform tasks automatically. They can accept human commands, run pre-arranged programs, or act according to principles and programs formulated with artificial-intelligence technology. Such mobile robots can be used indoors or outdoors, in industry or at home; they can replace security patrols or replace people in cleaning floors, and can also be used for family companionship, assisted office work, and the like.
  • During movement, the mobile robot can, on the one hand, construct map data of the site where it is located and, on the other hand, provide route planning, route adjustment, and navigation services based on the constructed map data, which makes the mobile robot's movement more efficient.
  • In practice, however, the locations of physical objects are not marked on the constructed map. The mobile robot therefore cannot accurately locate the physical objects in the scene, and thus cannot realize accurate navigation-route planning or region division based on those objects.
  • The purpose of this application is to provide a method and system for navigating and dividing a cleaning region, a mobile robot, and a cleaning robot, so as to solve the problem in the prior art that a mobile robot cannot accurately locate physical objects in the scene and therefore cannot achieve precise navigation-route planning and area division based on those objects.
  • The first aspect of the present application provides a navigation method for a mobile robot.
  • The mobile robot includes a measuring device and a camera device.
  • The method includes the following steps: causing the measuring device to measure the position information, relative to the mobile robot, of obstacles in the area where the mobile robot is located, and determining the position information occupied by candidate recognition objects in the area; according to the determined position information of a candidate recognition object, causing the camera device to obtain an image containing the candidate recognition object, and determining the entity-object information corresponding to it; and determining the navigation route of the mobile robot in the area according to the entity-object information and its position information.
  • The step of determining the position information occupied by candidate recognition objects in the area includes: obtaining a scanning contour and the position information it occupies by measuring the position information of each obstacle measurement point in the area; and dividing the scanning contour into a plurality of candidate recognition objects according to the discontinuous parts on the contour, and determining the position information occupied by each candidate recognition object.
  • The step of obtaining a scanning contour and the position information it occupies, based on measuring the position information of each obstacle measurement point in the area, includes: fitting the traveling plane of the mobile robot based on a planar array of position information of the obstacle measurement points measured by the measuring device, and determining the scanning contour and its occupied position information on that traveling plane; or determining the scanning contour and its occupied position information on the traveling plane based on a line array of position information, parallel to the traveling plane, measured by the measuring device.
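As a rough illustration of the plane-fitting step above, the traveling plane can be estimated from the measured point array by least squares. The application does not specify a fitting method; the function below, its name, and the z = a·x + b·y + c parameterization are an assumed, minimal sketch:

```python
import numpy as np

def fit_travel_plane(points):
    """Least-squares fit of a plane z = a*x + b*y + c to an (N, 3) array of
    obstacle measurement points. Illustrative only: the patent states that a
    traveling plane is fitted from the planar array of measured positions,
    but does not prescribe this particular formulation."""
    pts = np.asarray(points, dtype=float)
    # Design matrix [x, y, 1] for each measurement point.
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    (a, b, c), *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return a, b, c
```

Once the plane is known, each measurement point can be projected onto it to yield the two-dimensional scanning contour.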
  • The step of dividing the scanning contour into a plurality of candidate recognition objects based on the discontinuous parts on the contour includes: determining, from a gap formed by a discontinuous part, the corresponding candidate recognition object as a first candidate recognition object containing the gap; and determining, from a continuous part separated by discontinuous parts on the contour, the corresponding candidate recognition object as a second candidate recognition object that obstructs the movement of the mobile robot.
  • The step of determining the corresponding candidate recognition object as a first candidate recognition object containing a gap, based on a gap formed by a discontinuous part on the scanning contour, includes: screening the formed gaps according to preset screening conditions, where the screening conditions include that the gap lies along the line of the continuous part adjacent to at least one of its sides, and/or that the gap satisfies a preset gap-width threshold; and determining, based on the screened gaps, the corresponding candidate recognition object as a first candidate recognition object containing the gap.
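The contour-splitting and gap-screening steps above can be sketched as follows. The function name and all thresholds (`jump`, `min_gap`, `max_gap`) are illustrative assumptions, and only the gap-width screening condition is shown; the collinearity condition is omitted for brevity:

```python
import math

def split_scan_contour(scan, jump=0.25, min_gap=0.6, max_gap=1.2):
    """Split a laser scan into continuous segments (second candidate
    recognition objects) and screened gaps (first candidate recognition
    objects, e.g. doorways).

    `scan` is a list of (angle_rad, distance_m) pairs ordered by angle.
    Consecutive points farther apart than `jump` mark a discontinuity on
    the contour; the Euclidean width of each discontinuity is screened
    against the assumed gap-width window [min_gap, max_gap].
    """
    pts = [(d * math.cos(a), d * math.sin(a)) for a, d in scan]
    segments, gaps, current = [], [], [pts[0]]
    for prev, cur in zip(pts, pts[1:]):
        width = math.dist(prev, cur)
        if width > jump:                      # discontinuous part of contour
            segments.append(current)
            current = []
            if min_gap <= width <= max_gap:   # plausible doorway width
                gaps.append((prev, cur, width))
        current.append(cur)
    segments.append(current)
    return segments, gaps
```

For example, two wall sections at 2 m with a ~0.8 m opening between them would produce two segments and one candidate gap.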
  • The step of causing the measuring device to measure the position information of obstacles relative to the mobile robot in the area where the mobile robot is located includes: causing the measuring device to measure the position information, relative to the mobile robot, of obstacles within the field of view of the camera device.
  • The step of causing the camera device to obtain an image containing the candidate recognition object, according to the determined position information occupied by the candidate recognition object, includes: causing the camera device to capture an image of the candidate recognition object as projected onto the traveling plane of the mobile robot; or controlling the mobile robot to move according to the obtained position information of the candidate recognition object, and causing the camera device to capture an image containing the corresponding candidate recognition object.
  • The step of determining the entity-object information corresponding to the candidate recognition object includes: determining, according to the angle range in the position information occupied by the candidate recognition object, the image area within the corresponding angle range in the image; and performing feature recognition on that image area to determine the entity-object information corresponding to the candidate recognition object.
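The mapping from a measured angle range to an image area can be illustrated under the simplifying assumption of a linear angle-to-column mapping across the camera's horizontal field of view; a real system would use the camera's calibrated intrinsics instead. The function name and parameters are hypothetical:

```python
def angle_range_to_columns(theta_min, theta_max, fov, width):
    """Map an angular range (radians, measured from the optical axis) to a
    pixel-column range in an image `width` columns wide, assuming an ideal
    linear mapping across a horizontal field of view `fov`. Columns are
    clamped to the image bounds. Illustrative simplification only."""
    def col(theta):
        t = (theta + fov / 2) / fov          # 0..1 across the image
        return max(0, min(width - 1, round(t * (width - 1))))
    return col(theta_min), col(theta_max)
```

Feature recognition can then be restricted to the column band returned here, instead of the full image.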
  • When the candidate recognition object includes a first candidate recognition object with a gap, the step of determining the image area within the corresponding angle range in the image, based on the angle range in the position information occupied by the candidate recognition object, correspondingly includes: determining at least one angle range based on the position information of the two ends of the candidate recognition object; and determining, from the image according to the determined angle range, the image area used to identify the entity-object information of the corresponding first candidate recognition object.
  • When the candidate recognition object includes a first candidate recognition object with a gap, the step of determining the entity-object information corresponding to the candidate recognition object correspondingly includes: identifying in the image, based on the position information occupied by the first candidate recognition object, at least two characteristic lines perpendicular to the traveling plane; and determining, based on the identified characteristic lines, that the first candidate recognition object is entity-object information representing a door.
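The characteristic-line test above can be sketched as a simple verticality filter over line segments already extracted from the image (for example, by an edge or line detector). The tilt tolerance and minimum line count below are assumed values, not taken from the application:

```python
import math

def looks_like_door(segments, max_tilt_deg=10.0, min_lines=2):
    """Decide whether the detected line segments contain at least
    `min_lines` near-vertical characteristic lines (e.g. door jambs).

    `segments` is a list of (x1, y1, x2, y2) tuples in image coordinates;
    lines perpendicular to the traveling plane appear near-vertical in the
    image. Both thresholds are illustrative assumptions.
    """
    vertical = 0
    for x1, y1, x2, y2 in segments:
        dx, dy = x2 - x1, y2 - y1
        if dy == 0:
            continue  # perfectly horizontal segment, cannot be a jamb
        tilt = math.degrees(math.atan2(abs(dx), abs(dy)))  # 0 deg = vertical
        if tilt <= max_tilt_deg:
            vertical += 1
    return vertical >= min_lines
```

Two nearly vertical segments flanking a detected gap would thus classify the first candidate recognition object as a door.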
  • The step of determining the entity-object information corresponding to the candidate recognition object includes: identifying the entity-object information of the candidate recognition object in the image based on feature information of a plurality of preset known entity objects; or using a preset image-recognition algorithm to construct a mapping relationship between the candidate recognition object in the image and various known entity-object information, so as to determine the entity-object information corresponding to the candidate recognition object.
  • The method further includes: marking the determined entity-object information and its position information in a map used for setting a navigation route.
  • The cleaning area includes any one of the following: a room area determined based on the entity-object information; or an area divided according to a preset area range and the position information occupied by entity-object information located within that range.
  • When the determined entity-object information includes a physical door, the method further includes the step of setting a virtual wall at the position information corresponding to the physical door, dividing the cleaning area of the mobile robot according to the virtual wall and the area where the mobile robot is located, and designing a navigation route within the walking area.
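Dividing the area with a virtual wall can be illustrated on a grid map: the cells at the physical door are marked impassable, and connected free-space regions are then labeled by flood fill. This is a hypothetical sketch; the application does not prescribe any particular map representation:

```python
from collections import deque

def divide_rooms(grid, walls):
    """Label connected free-space regions of an occupancy grid after
    inserting virtual walls at physical-door positions.

    `grid` is a list of equal-length strings ('.' free, '#' obstacle);
    `walls` is a set of (row, col) cells treated as impassable virtual-wall
    cells. Returns a dict mapping each reachable free cell to a room id.
    """
    rows, cols = len(grid), len(grid[0])
    blocked = {(r, c) for r in range(rows) for c in range(cols)
               if grid[r][c] == '#'} | set(walls)
    room_of, next_id = {}, 0
    for r in range(rows):
        for c in range(cols):
            if (r, c) in blocked or (r, c) in room_of:
                continue
            # Breadth-first flood fill of one connected region.
            queue = deque([(r, c)])
            room_of[(r, c)] = next_id
            while queue:
                cr, cc = queue.popleft()
                for nr, nc in ((cr + 1, cc), (cr - 1, cc),
                               (cr, cc + 1), (cr, cc - 1)):
                    if (0 <= nr < rows and 0 <= nc < cols
                            and (nr, nc) not in blocked
                            and (nr, nc) not in room_of):
                        room_of[(nr, nc)] = next_id
                        queue.append((nr, nc))
            next_id += 1
    return room_of
```

With a virtual wall spanning a doorway, the flood fill yields one region per room, and a cleaning route can then be planned per region.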
  • The second aspect of the present application provides a method for dividing a cleaning area for a cleaning robot. The cleaning robot includes a measuring device and a camera device, and the method includes the following steps: causing the measuring device to measure the position information of obstacles relative to the cleaning robot in the area where the cleaning robot is located, and determining the position information occupied by candidate doors in the area; according to the determined position information occupied by a candidate door, causing the camera device to acquire an image containing the candidate door, and determining whether the candidate door is a physical door; and dividing the cleaning area of the cleaning robot according to the physical door and its position information, so as to restrict the walking range of the cleaning robot.
  • The step of determining the position information occupied by candidate doors in the area includes: obtaining a scanning contour and the position information it occupies according to the position information of each obstacle measurement point in the area; and determining the position information occupied by each candidate door according to the discontinuous parts on the scanning contour.
  • The step of obtaining a scanning contour and the position information it occupies, based on measuring the position information of each obstacle measurement point in the area, includes: fitting the traveling plane of the cleaning robot based on a planar array of position information of the obstacle measurement points measured by the measuring device, and determining the scanning contour and its occupied position information on that plane; or determining the scanning contour and its occupied position information on the traveling plane based on a line array of position information, parallel to the traveling plane, measured by the measuring device.
  • The step of determining the position information occupied by each candidate door based on the discontinuous parts on the scanning contour includes: screening the gaps formed by the discontinuous parts according to preset screening conditions, and determining that the screened gaps belong to candidate doors; the screening conditions include that the gap lies along the line of the continuous part adjacent to at least one of its sides, and/or that the gap satisfies a preset gap-width threshold.
  • The step of causing the measuring device to measure the position information of obstacles relative to the cleaning robot in the area where the cleaning robot is located includes: causing the measuring device to measure the position information, relative to the cleaning robot, of obstacles within the field of view of the camera device.
  • The step of causing the camera device to obtain an image containing the candidate door, according to the determined position information occupied by the candidate door, includes: causing the camera device to capture an image of the candidate door as projected onto the traveling plane of the cleaning robot; or controlling the movement of the cleaning robot according to the obtained position information of the candidate door, and causing the camera device to capture an image containing the corresponding candidate door.
  • The step of determining that the candidate door is a physical door includes: determining, according to the angle range in the position information occupied by the candidate door, the image area within the corresponding angle range in the image; and performing feature recognition on that image area to determine that the candidate door is a physical door.
  • The step of determining the image area within the corresponding angle range in the image, according to the angle range in the position information occupied by the candidate door, includes: determining at least one angle range based on the position information of the two ends of the candidate door; and determining, from the image according to the determined angle range, an image area used to identify whether the candidate door is a physical door.
  • The step of determining that the candidate door is a physical door includes: identifying in the image at least two characteristic lines perpendicular to the traveling plane, and determining that the candidate door is a physical door based on the identified characteristic lines.
  • The method further includes: marking the determined physical door and its position information in a map used for setting a cleaning route.
  • The step of dividing the cleaning area of the cleaning robot according to the physical door and its position information includes: setting a virtual wall at the physical door; and dividing the cleaning area of the cleaning robot according to the virtual wall and the area where the cleaning robot is located.
  • The cleaning area includes any one of the following: a room area determined based on the physical door; or an area divided according to a preset area range and the position information of physical doors located within that range.
  • The third aspect of the present application provides a navigation system for a mobile robot, comprising: a measuring device, provided on the mobile robot and used to measure the position information of obstacles relative to the mobile robot in the area where the mobile robot is located; a camera device, provided on the mobile robot and used to obtain an image containing a candidate recognition object; and a processing device, connected to the measuring device and the camera device and used to run at least one program so as to perform any of the navigation methods described above.
  • The camera device is embedded in the mobile robot, with its main optical axis perpendicular to the traveling plane of the mobile robot.
  • The measuring device is embedded on the body side of the mobile robot, and includes: a distance-measuring sensor device and an angle-sensing device, or a TOF measuring device.
  • The fourth aspect of the present application provides a mobile robot, including: a measuring device, provided on the mobile robot and used to measure the position information of obstacles relative to the mobile robot in the area where it is located; a camera device, provided on the mobile robot and used to obtain an image containing a candidate recognition object; a first processing device, connected to the measuring device and the camera device and used to run at least one program to execute any of the navigation methods described above and generate a navigation route; a mobile device, arranged on the mobile robot, for adjusting the position and posture of the mobile robot under control; and a second processing device, connected to the first processing device and the mobile device, configured to run at least one program to control the mobile device to adjust the position and posture based on the navigation route provided by the first processing device, so as to move autonomously along the navigation route.
  • The camera device is embedded in the mobile robot, with its main optical axis perpendicular to the traveling plane of the mobile robot.
  • The measuring device is embedded on the body side of the mobile robot, and includes: a distance-measuring sensor device and an angle-sensing device, or a TOF measuring device.
  • The fifth aspect of the present application provides a system for dividing a cleaning area for a cleaning robot, including: a measuring device, provided on the cleaning robot, for measuring the position information of obstacles relative to the cleaning robot in the area where the cleaning robot is located; a camera device, provided on the cleaning robot and used to obtain an image containing a candidate door; and a processing device, connected to the measuring device and the camera device and used to run at least one program so as to implement any of the above-mentioned methods for dividing a cleaning area, and to set a navigation route within the generated cleaning area.
  • The camera device is embedded in the cleaning robot, with its main optical axis perpendicular to the traveling plane of the cleaning robot.
  • The measuring device is embedded on the body side of the cleaning robot, and includes: a distance-measuring sensor device and an angle-sensing device, or a TOF measuring device.
  • The sixth aspect of the present application provides a cleaning robot, including: a measuring device, provided on the cleaning robot and used to measure the position information of obstacles relative to the cleaning robot in the area where it is located; a camera device, provided on the cleaning robot and used to obtain an image containing a candidate recognition object; a first processing device, connected to the measuring device and the camera device and used to run at least one program to perform any of the above methods for dividing a cleaning area and to generate a navigation route using the obtained cleaning area; a mobile device, provided on the cleaning robot, for adjusting the position and posture of the cleaning robot under control; a cleaning device, provided on the cleaning robot, for cleaning the traveling plane passed during the movement of the cleaning robot; and a second processing device, connected to the first processing device and controlling the cleaning device and the mobile device respectively, used to run at least one program to control, based on the navigation route provided by the first processing device, the mobile device to adjust the position and posture so as to move autonomously along the navigation route, and to control the cleaning device to perform cleaning operations.
  • The camera device is embedded in the cleaning robot, with its main optical axis perpendicular to the traveling plane of the cleaning robot.
  • The measuring device is embedded on the body side of the cleaning robot, and includes: a distance-measuring sensor device and an angle-sensing device, or a TOF measuring device.
  • The seventh aspect of the present application provides a data-processing device for a mobile robot, including: a data interface, for connecting the camera device and the measuring device of the mobile robot; a storage unit, storing at least one program; and a processing unit, connected to the storage unit and the data interface, used to obtain, through the data interface, the position information provided by the measuring device and the image taken by the camera device, and to run the at least one program so as to execute the navigation method described above, or the method for dividing a cleaning area described above.
  • The eighth aspect of the present application provides a computer-readable storage medium storing at least one program which, when called, executes any of the navigation methods described above, or the method for dividing a cleaning area described above.
  • With the method and system for navigating and dividing the cleaning area, and the mobile robot and cleaning robot of the present application, the position of obstacles relative to the mobile robot in its area can be measured by a distance-measuring sensor device and an angle-sensing device, or by a TOF measuring device; the position information of candidate recognition objects in the area can thereby be accurately determined, the camera can be made to obtain an image containing a candidate recognition object, the entity-object information corresponding to the candidate recognition object can then be determined, and the navigation route of the mobile robot in the area can be determined according to the entity-object information and its position information.
  • In this way, this application directly plans an accurate navigation route and partitions the area according to the entity-object information, which increases the accuracy of navigation-route planning and area division and improves the mobile robot's human-computer interaction.
  • FIG. 1 shows a schematic flowchart of a specific embodiment of the mobile robot navigation method of this application.
  • FIG. 2 shows a schematic diagram of the process of determining the position information occupied by the candidate recognition object in the area in a specific embodiment of this application.
  • FIG. 3 shows a schematic diagram of a plane array containing the position information of the stool obtained according to the installation position of the measuring device in the cleaning robot in a specific embodiment of this application.
  • FIG. 4 shows a schematic diagram of the projection of the foot of a stool on the ground determined based on the position information plane array of FIG. 3 in a specific embodiment of this application.
  • FIG. 5 shows a top view of the scanning profile projected on the traveling plane obtained by the measurement device in a specific embodiment of this application.
  • FIG. 6 shows a top view of the scanning contour projected on the traveling plane after linearizing the scanning contour shown in FIG. 5.
  • FIG. 7 shows a schematic diagram of the positional relationship between the mobile robot and the entity object a in the corresponding physical space when the mobile robot captures a projection image containing the entity object a.
  • FIG. 8 shows a schematic diagram of scene application in a specific embodiment of this application.
  • FIG. 9 shows a schematic diagram of a scenario application in a specific embodiment of this application.
  • FIG. 10 shows a schematic flowchart of a method for dividing a clean area according to this application in a specific embodiment.
  • FIG. 11 is a schematic diagram of a process of determining the position information occupied by candidate doors in an area in a specific embodiment of this application.
  • FIG. 12 shows a schematic diagram of the composition of the navigation system of the mobile robot of this application in a specific embodiment.
  • FIG. 13 shows a schematic diagram of the composition of the mobile robot of this application in a specific embodiment.
  • FIG. 14 shows a schematic diagram of the composition of the system for dividing a clean area of this application in a specific embodiment.
  • FIG. 15 shows a schematic diagram of the composition of the cleaning robot in a specific embodiment of this application.
  • FIG. 16 shows a schematic diagram of the composition of the data processing device of this application in a specific embodiment.
  • Although the terms first, second, etc. are used herein to describe various elements in some instances, these elements should not be limited by these terms. These terms are only used to distinguish one element from another.
  • For example, the first preset threshold may be referred to as the second preset threshold, and similarly, the second preset threshold may be referred to as the first preset threshold, without departing from the scope of the various described embodiments.
  • The first preset threshold and the second preset threshold both describe a threshold, but unless the context clearly indicates otherwise, they are not the same preset threshold.
  • A similar situation also applies to the first volume and the second volume.
  • As used herein, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C".
  • An exception to this definition occurs only when a combination of elements, functions, steps or operations is inherently mutually exclusive in some way.
  • The mobile robot performs movement operations based on navigation control technology. Depending on the scene in which the mobile robot is applied, when the robot is at an unknown location in an unknown environment, VSLAM (Visual Simultaneous Localization and Mapping) technology can help it build a map and perform navigation operations. Specifically, the mobile robot constructs a map from the visual information provided by a visual sensor and the movement information provided by a movement sensor, and navigates according to the constructed map, so that it can move autonomously.
  • The visual sensor includes, for example, an imaging device, and the corresponding visual information is image data (hereinafter referred to as images).
  • The mobile robot moves on the traveling plane of its area according to a pre-built map, but the constructed map only records the location information of the objects included in the application scene.
  • For example, when a user remotely controls the mobile robot, the user needs to identify the position to be indicated in the map saved by the mobile robot, and then send a control instruction to the robot according to the coordinates of that position in the map, which results in poor human-computer interaction.
  • This application provides a method for navigating a mobile robot.
  • In this method, the position of an obstacle relative to the mobile robot is accurately measured by a measuring device; the image containing the obstacle, captured by the camera device, is recognized to obtain the specific physical object corresponding to the obstacle; and the navigation route of the mobile robot in the area is then determined according to the located physical object and its position information.
  • The physical object includes any physical object that can be identified from the obstacles measured by the measuring device in the physical space in which the mobile robot moves.
  • The physical object is a physical entity, such as but not limited to: balls, shoes, walls, doors, flower pots, coats and hats, trees, tables, chairs, refrigerators, TVs, sofas, socks, cups, etc.
  • The camera device includes, but is not limited to, any one of a fisheye camera module and a wide-angle (or non-wide-angle) camera module.
  • The mobile robots include, but are not limited to: family-companion mobile robots, cleaning robots, patrol mobile robots, glass-cleaning robots, and the like.
  • FIG. 1 shows a schematic flowchart of a specific embodiment of a navigation method for a mobile robot according to this application.
  • The navigation method of the mobile robot may be executed by a processing device included in the mobile robot.
  • The processing device is an electronic device capable of performing numerical operations, logical operations, and data analysis; it includes, but is not limited to: a CPU, a GPU, an FPGA, etc., and a memory for temporarily storing intermediate data generated during operations.
  • The mobile robot includes a measuring device and a camera device.
  • the measuring device may be installed on the body side of the mobile robot, and the measuring device may be, for example, a scanning laser or a TOF (Time of Flight) sensor.
  • the scanning laser includes an angle sensing device and a distance measuring sensor; the angle sensing device obtains the angle information corresponding to the distance information measured by the distance measuring sensor, and the distance measuring sensor measures, by laser or infrared, the distance from the obstacle measurement point to the distance measuring sensor at the current angle of the scanning laser.
  • the scanning laser is a laser that changes direction, starting point or pattern of propagation with time relative to a fixed frame of reference.
  • the scanning laser is based on the principle of laser distance measurement, which forms a two-dimensional scanning surface through a rotatable optical component (laser transmitter) to achieve area scanning and profile measurement functions.
  • the ranging principle of a scanning laser includes: a laser transmitter emits a laser pulse wave; when the laser wave hits an object, part of the energy returns; when the laser receiver receives the returned laser wave and the energy of the returned wave is sufficient to trigger the threshold, the scanning laser calculates its distance to the object.
  • the scanning laser continuously emits laser pulse waves.
  • the laser pulse waves hit a high-speed rotating mirror surface, which emits the laser pulse waves in all directions to form a two-dimensional area scan.
  • the scanning of this two-dimensional area can, for example, realize the following two functions: 1) protection areas of different shapes can be set within the scanning range of the scanning laser, and an alarm signal is sent out when an object enters such an area; 2) within the scanning range of the scanning laser, the scanning laser outputs the distance of each obstacle measurement point, and according to this distance information the outline and coordinate position of the object can be calculated.
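The pulse time-of-flight relation underlying this ranging principle can be sketched as follows (a minimal illustrative sketch; the function name, trigger-threshold value, and energy scale are assumptions, not part of this application):

```python
# Sketch of the pulse time-of-flight relation described above.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def pulse_distance(round_trip_seconds, return_energy, trigger_threshold=0.5):
    """Distance to an obstacle from the round-trip time of a laser pulse.

    The return wave is only accepted when its energy is sufficient to
    trigger the threshold, mirroring the behaviour of the scanning laser.
    """
    if return_energy < trigger_threshold:
        return None  # echo too weak: no measurement for this direction
    # The pulse travels to the object and back, hence the division by two.
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0
```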
  • the TOF measuring device is based on TOF technology.
  • TOF technology is one of the optical non-contact three-dimensional depth measurement and perception methods: it continuously sends light pulses to the target, uses a sensor to receive the light returned from the object, and obtains the target distance by detecting the flight (round-trip) time of these transmitted and received light pulses.
  • the irradiating unit of the TOF measuring device emits light after high-frequency modulation.
  • the light source may be an LED or a laser, including a laser diode or a VCSEL (Vertical Cavity Surface Emitting Laser); a laser is used to emit high-performance pulsed light.
  • the pulse frequency can reach about 100 MHz, and infrared light is mainly used.
  • the wavelength of the lighting module is generally in the infrared band, and high-frequency modulation is required.
  • the TOF photosensitive module is similar to an ordinary mobile phone camera module and is composed of a chip, lens, circuit board and other components. Each pixel of the TOF photosensitive chip records the specific phase of the emitted light wave between the camera and the object; a data processing unit extracts the phase difference and calculates the depth information by formula.
  • the TOF measuring device is small in size and can directly output the depth data of the detected object, and the depth calculation result of the TOF measuring device is not affected by the grayscale and characteristics of the surface of the object, so it can perform three-dimensional detection very accurately.
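The phase-based depth calculation described above can be illustrated with the standard continuous-wave TOF formula, depth = c · Δφ / (4π · f). This is a hedged sketch: the modulation frequency used below and the function name are illustrative assumptions, not values stated in this application.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def phase_depth(phase_rad, mod_freq_hz):
    """Depth from the phase shift of continuously modulated light.

    depth = c * phase / (4 * pi * f): the modulated wave covers the
    distance twice (out and back), hence 4*pi rather than 2*pi.
    """
    return SPEED_OF_LIGHT * phase_rad / (4.0 * math.pi * mod_freq_hz)
```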
  • the navigation method S1 of the mobile robot includes steps S11 to S13 as shown in FIG. 1.
  • in step S11, the measuring device is made to measure the position information of the obstacle relative to the mobile robot in the area where the mobile robot is located, and the position information occupied by the candidate recognition object in the area is determined;
  • the area is, for example, a room, and the obstacle may be any physical object in the room that can reflect the measurement medium.
  • the position information of the obstacle relative to the measurement device can be measured to obtain the contour information of the obstacle, and the contour information is used to determine the candidate recognition object and the position information it occupies in the area.
  • the position information includes deflection angle information and corresponding distance information; the distance information and deflection angle information are called the position information of the obstacle relative to the cleaning robot, or simply the position information of the obstacle.
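The (distance, deflection angle) pairs described above are polar coordinates around the robot; converting them to planar coordinates yields the contour points used later. A minimal sketch (function names are illustrative):

```python
import math

def to_cartesian(distance, deflection_deg):
    """Convert one (distance, deflection angle) measurement into x/y
    coordinates in the robot's frame (x along the robot's heading)."""
    theta = math.radians(deflection_deg)
    return distance * math.cos(theta), distance * math.sin(theta)

def scan_to_points(scan):
    """Turn a list of (distance, angle) pairs into contour points."""
    return [to_cartesian(d, a) for d, a in scan]
```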
  • FIG. 2 shows a schematic diagram of the process of determining the position information of the candidate recognition object in the area in a specific embodiment of this application. That is, in some embodiments, the step of determining the position information occupied by the candidate recognition object in the area in step S11 includes step S111 and step S112 shown in FIG. 2.
  • the processing device may obtain a scan profile and its occupied position information according to the position information of the measurement points of the obstacles in the measurement area.
  • the measurement device mentioned in any of the above examples is used to traverse and measure the obstacles in the two-dimensional or three-dimensional plane in the area, obtaining the scanning contour formed by the obstacle measurement points in the two-dimensional or three-dimensional plane in the area.
  • the obstacle measurement point is a reflection point on the obstacle that is used to reflect the measurement medium emitted by the ranging sensor.
  • the measurement medium is, for example, a laser beam, an LED light beam, or an infrared beam.
  • the obtained scan profile is a lattice matrix composed of the position information of each obstacle measurement point, where the position information includes the distance information and deflection angle information of the obstacle measurement point relative to the measurement device, or simply the position information of the obstacle measurement point.
  • the two-dimensional or three-dimensional array formed by the measured position information of the measured obstacle points is used to construct the scanning contour of the obstacle.
  • the step S111 includes: fitting the traveling plane of the mobile robot based on the position information area array of the measurement points of each obstacle measured by the measuring device, and determining the scan contour on the traveling plane and the position information it occupies.
  • taking the measurement device as a TOF measurement device including a laser sensor array as an example, the position information plane array is measured by the laser sensor array.
  • the description will be made by taking the mobile robot as a cleaning robot as an example.
  • the measuring device is installed on the side of the body close to the traveling plane; for example, the measuring device is installed on the body side of the cleaning robot. Therefore, the acquired position information area array of each obstacle measurement point may include the position information of the measurement points of various obstacles, such as the ground, objects placed on the ground, and objects suspended in the air.
  • the measured obstacle measurement points usually include the traveling plane of the cleaning robot, such as the ground; the plane formed by the most obstacle measurement points is determined by a plane fitting method and considered as the traveling plane, and then, according to the determined traveling plane, the scan contour placed on the traveling plane and the position information it occupies are determined.
  • specifically, the position information of a number of obstacle measurement points is taken from the position information surface array, and a plane is selected using a plane fitting method, where the number of obstacle measurement points constituting the selected plane is the largest; the obstacle measurement points on the selected plane in the position information plane array are taken as obstacle measurement points on the traveling plane of the cleaning robot; and according to the position information of each pixel in the position information plane array, the position information of the pixel points located above the traveling plane is projected onto the traveling plane, thereby obtaining the scan contour on the traveling plane and the position information it occupies.
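The plane-fitting and projection steps above can be sketched in a simplified form. The sketch assumes the travel plane is horizontal in the sensor frame and votes for the height shared by the most measurement points (a real implementation would fit an arbitrary plane, e.g. with RANSAC); all names and tolerances are illustrative:

```python
from collections import Counter

def split_floor_and_obstacles(points, z_tol=0.02):
    """Simplified travel-plane fit: take the height shared by the most
    measurement points as the travel plane, then project every point
    above it straight down onto that plane.

    points: iterable of (x, y, z) obstacle measurement points.
    Returns (floor_height, projected), where projected lists the (x, y)
    positions of above-floor points projected onto the travel plane.
    """
    # Quantise heights so nearby measurements vote for the same plane.
    buckets = Counter(round(z / z_tol) for _, _, z in points)
    floor_height = buckets.most_common(1)[0][0] * z_tol
    projected = [(x, y) for x, y, z in points if z > floor_height + z_tol]
    return floor_height, projected
```

Applied to the stool example of Figures 3 and 4, the leg points above the fitted floor collapse into block projections on the travel plane.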
  • Figure 3 only schematically provides a schematic diagram of a plane array containing the position information of a stool obtained according to the installation position of the measuring device in the cleaning robot, and Figure 4 is a schematic diagram of the projection of the feet of the stool on the ground, determined based on the position information surface array of Figure 3.
  • the processing device projects the position information from the height of the stool feet down to the ground onto the travel plane according to the obtained position information area array, thereby obtaining a block projection corresponding to each foot of the stool.
  • the step S111 includes: based on the line array of position information parallel to the traveling plane measured by the measuring device, determining the scanning profile on the traveling plane and the position information occupied by it.
  • taking the measurement device as a scanning laser as an example, the line array of the position information is measured by the scanning laser.
  • the laser scanner may be installed on the top middle, top edge or body side of the cleaning robot.
  • the laser emission direction of the scanning laser can be parallel to the traveling plane, and the scanning laser rotates and scans through 360 degrees at the position of the cleaning robot; the angle sensing device of the scanning laser acquires the angle of each obstacle measurement point with respect to the mobile robot, and the ranging sensing device (laser or infrared ranging device) of the scanning laser measures the distance between the obstacle measurement point and the cleaning robot, thereby obtaining a position information line array parallel to the travel plane; since the position information line array is parallel to the travel plane, the scan contour on the travel plane and the position information it occupies can be determined directly from the position information line array.
  • the line array of position information obtained by the scanning laser can indicate the position information of obstacles on the ground that hinder the movement of the cleaning robot.
  • the processing device causes the measuring device to measure the position information of the obstacle relative to the cleaning robot in the field of view of the camera device, so that the camera device can obtain an image containing the obstacle measured by the measuring device.
  • the processing device screens the position information measured by the measuring device, that is, eliminates the position information of obstacle measurement points in the area beyond the imaging range of the camera device, so as to obtain, based on the remaining effective position information, the position information of the obstacle relative to the cleaning robot within the field of view of the camera device.
  • the effective position information is used to obtain the scan profile and its occupied position information.
  • the processing device enables the measuring device to obtain position information within a preset distance, and the preset distance is a fixed value.
  • the preset distance is determined according to the usual indoor use area to ensure that the measuring device can obtain the position information of the obstacles in a room, and obtain the scan contour and its occupied position information according to the obtained position information.
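The two screening steps above — discarding points outside the camera's imaging range and points beyond the preset distance — can be sketched together. The field-of-view and distance values below are illustrative assumptions, not values from this application:

```python
def filter_measurements(scan, fov_deg=120.0, max_distance=8.0):
    """Keep only measurement points that the camera device can also see.

    scan: list of (distance, deflection_deg) pairs, angle 0 = camera axis.
    Points outside the camera's field of view or beyond the preset
    distance are eliminated; the rest is the effective position information.
    """
    half_fov = fov_deg / 2.0
    return [(d, a) for d, a in scan
            if d <= max_distance and -half_fov <= a <= half_fov]
```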
  • the processing device can use a combination of feature lines, feature points, etc. to determine the candidate recognition object depicted by the scan contour and its position information.
  • the processing device also uses a linearization algorithm to linearize the dot matrix information that constitutes the scan contour to obtain a scan contour described by long lines and short lines.
  • examples of the linearization algorithm include dilation (expansion) and erosion algorithms. Referring to Figures 5 and 6, Figure 5 only schematically shows a top view of the scan contour projected on the travel plane as measured by the measuring device, and Figure 6 shows a schematic top view of the scan contour of Figure 5, after linearization, projected on the travel plane.
  • the originally acquired scan contour includes contour parts B1-B2 composed of obstacle measurement points whose intervals are less than a preset threshold, and contour parts B2-B3 and B4-B5 composed of obstacle measurement points whose intervals are greater than the preset threshold.
  • the scan contour processed by the linearization algorithm includes contour parts A1-A2 composed of continuous long lines, and contour parts A2-A3 and A4-A5 composed of discontinuous short lines.
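The dilation-and-erosion linearization mentioned above (morphological closing) can be sketched on a one-dimensional occupancy strip: gaps smaller than the structuring radius are joined into a continuous line, larger gaps stay open. This is a hedged illustration with assumed names; a real implementation would operate on the 2D dot matrix:

```python
def dilate(cells, radius):
    """Mark a cell occupied if any neighbour within `radius` is occupied."""
    n = len(cells)
    return [any(cells[max(0, i - radius):i + radius + 1]) for i in range(n)]

def erode(cells, radius):
    """Keep a cell occupied only if all neighbours within `radius` are."""
    n = len(cells)
    return [all(cells[max(0, i - radius):min(n, i + radius + 1)])
            for i in range(n)]

def close_contour(cells, radius=1):
    """Morphological closing (dilate then erode): joins obstacle points
    separated by gaps smaller than the structuring radius into one line
    while leaving larger gaps open."""
    return erode(dilate(cells, radius), radius)
```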
  • the scan profile can be composed of continuous and discontinuous parts.
  • the condition pre1 constituting the continuous part includes at least one or more of the following combinations: 1) the contour part formed by obstacle measurement points whose intervals to adjacent obstacle measurement points in the scan contour are less than a preset length threshold and whose number is greater than a preset number threshold, such as B1-B2 shown in Figure 5; 2) the contour part formed by continuous lines whose line length is greater than the preset length threshold in the scan contour, for example, A1-A2 shown in Figure 6; 3) the contour part in which the position information of each obstacle measurement point meets a preset continuous change condition, wherein the continuous change condition includes: the difference between the distance information of adjacent obstacle measurement points is less than a preset distance mutation threshold.
  • the B4-B5 outline part shown in FIG. 5 and the A4-A5 outline part shown in FIG. 6 do not constitute a continuous part.
  • the aforementioned complete scan profile is composed of a discontinuous part and a continuous part. Therefore, the discontinuous part and the continuous part can be regarded as a logical “or” relationship.
  • the contour parts of B2-B3 and B4-B5 in Fig. 5 are discontinuous parts
  • the contour parts of A2-A3 and A4-A5 in Fig. 6 are discontinuous parts.
  • the condition pre2 constituting the discontinuous part includes at least one or more of the following combinations: 1) the contour part in which the distance between adjacent obstacle measurement points in the scan contour is greater than a preset length threshold and whose obstacle measurement points at both ends are connected with the continuous part, such as B2-B3 and B4-B5 shown in Figure 5; 2) the contour part of the scan contour composed of at least one continuous short line whose line length is less than the preset length threshold, such as A2-A3 and A4-A5 shown in Figure 6.
  • step S11 includes step S112, which is the step of dividing the scanning contour into a plurality of candidate recognition objects according to the discontinuous parts on the scanning contour, and determining the position information occupied by each candidate recognition object .
  • the processing device performs segmentation processing on the scanned contour at the boundary of the discontinuous part to obtain a contour part composed of a continuous part and a contour part composed of a discontinuous part.
  • the continuous part and the discontinuous part are respectively used as candidate recognition objects, and the position information of the corresponding candidate recognition object is determined according to the position information of the obstacle measurement points in the continuous part and the discontinuous part respectively.
  • alternatively, at least one candidate recognition object is determined from the continuous part using a combination of preset feature lines, feature points, etc., the discontinuous part is used as a separate candidate recognition object, and the position information of the obstacle measurement points in the continuous part and the discontinuous part respectively determines the position information occupied by the corresponding candidate recognition object.
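The segmentation of the scan contour at the boundaries of discontinuous parts can be sketched as follows. The jump threshold and the tuple-based representation are illustrative assumptions; real code would apply the pre1/pre2 conditions above in full:

```python
import math

def segment_contour(points, jump_threshold=0.15):
    """Split an ordered list of contour points into alternating continuous
    parts and gaps: a jump between neighbouring measurement points larger
    than the threshold closes the current segment and records a gap.

    points: list of (x, y) contour points in scan order.
    Returns a list of (kind, segment) pairs, kind in {"continuous", "gap"}.
    """
    segments = []
    current = [points[0]]
    for prev, cur in zip(points, points[1:]):
        if math.dist(prev, cur) > jump_threshold:
            segments.append(("continuous", current))
            segments.append(("gap", [prev, cur]))  # endpoints bounding the gap
            current = [cur]
        else:
            current.append(cur)
    segments.append(("continuous", current))
    return segments
```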
  • the processing device may also perform segmentation processing of the scan contour according to the boundary of the continuous part, which should be regarded as the same or similar to the method of segmenting the scan contour according to the boundary of the discontinuous part.
  • the method of determining the candidate recognition object based on the discontinuous part in the scan contour includes: determining, based on the gap formed by the discontinuous part on the scan contour, that the corresponding candidate recognition object is a first candidate recognition object including the gap; and determining, according to the continuous part separated by the discontinuous part on the scan contour, that the corresponding candidate recognition object is a second candidate recognition object that hinders the movement of the cleaning robot.
  • the first candidate recognition object and the second candidate recognition object represent the entity object information to be recognized named according to the type of the object.
  • the naming of the categories is exemplified but not limited to: doors, windows, walls, tables, chairs, balls, cabinets, socks, etc.
  • a candidate identification object may contain one or more entity object information to be identified.
  • the second candidate recognition object intends to include the entity object to be recognized that hinders the movement of the cleaning robot, and examples thereof include but are not limited to at least one of the following: wall, cabinet, fan, sofa, box, socks, ball, table ( Chair) feet etc.
  • the first candidate recognition object is intended to represent a physical object to be recognized that can connect/separate two spatial regions; wherein, when the two spatial regions are connected, the physical object can form a gap in the scan contour.
  • the physical object is a door. When the door is open, it connects the two space areas inside and outside the house. When the door is closed, it separates the two space areas inside and outside the house.
  • in this case, the first candidate identification object mainly provides a candidate recognition object to be further screened and confirmed as a physical door in an open state.
  • the gap formed on the scanning contour may also be caused by a gap formed between two solid objects or by the shape of a solid object.
  • the gap in the scanned contour can be caused by the interval between two wardrobes, the interval between the wardrobe and the wall, and so on.
  • for example, gaps in the scan contour may be caused by the space between the legs of a table; therefore, it is necessary to further screen and identify the obtained first candidate identification object.
  • step S1121 and step S1122 can be performed.
  • in step S1121, the formed gaps are screened according to preset screening conditions; wherein the screening conditions include: the gap is located along the line where the continuous part of at least one side adjacent to it is located, and/or the gap width is within a preset gap width threshold.
  • in one example, the screening condition includes that the gap is located along the line of a continuous portion on at least one side adjacent to the gap; a gap meeting this condition is a gap corresponding to the first candidate identification object.
  • the gap is the gap corresponding to the physical door. Since the physical door is generally set up by attaching to the wall, at least one side wall of the inlaid physical door is located along the continuous part adjacent to the gap. Therefore, the corresponding gap formed when the physical door is opened is the gap corresponding to the first candidate recognition object.
  • the gaps corresponding to the two stool legs of the stool are generally placed independently in the physical space. Therefore, the two stool legs of the stool are not located along any continuous part. They are isolated gaps. Then these isolated gaps correspond to The candidate recognition objects are excluded from the first candidate recognition object, and the corresponding gaps are screened out.
  • the screening condition includes a preset gap width threshold.
  • the gap width threshold can be a single value or a range of values. For example, if the width of the gap is within a preset gap width threshold (for example, 60 cm to 120 cm), the gap is a gap corresponding to the first candidate recognition object.
  • the processing device calculates the width of the gap based on the position information of the obstacle measurement points that constitute the gap, and screens the obtained gaps according to the screening conditions; that is, gaps whose widths are too large or too small are not gaps corresponding to the first candidate recognition object and are screened out.
  • the screening condition includes that the gap is located along a continuous part of at least one side adjacent to the gap, and the width of the corresponding gap is within the preset gap width threshold range.
  • the processing device determines, according to the screening conditions, that the candidate recognition object corresponding to the gap is the first candidate recognition object including the gap; in other words, a gap on the scan contour that is not located along the line of the adjacent continuous part on either side, or whose width is not within the preset gap width threshold range, is determined to be a gap that needs to be filtered out.
  • in step S1122, based on the screened gaps, it is determined that the corresponding candidate recognition object is the first candidate recognition object containing the gap.
  • the corresponding candidate recognition object is determined to be the first candidate recognition object that includes the gap. For example, when the notch obtained after screening is located along the line of the continuous part of at least one side adjacent to it and the width of the notch is within the preset notch width threshold range, the notch and both ends are determined to include The first candidate recognition object of the gap. For another example, when the gap is located along the line where the continuous part on at least one side adjacent to it is located or the width of the gap is within the preset gap width threshold range, the gap and its two ends are determined to include the gap The first candidate for recognition.
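The screening of steps S1121/S1122 can be sketched as a filter over candidate gaps. The dict layout, key names, and the 60 cm to 120 cm range (taken from the example above) are illustrative assumptions:

```python
def screen_gaps(gaps, min_width=0.6, max_width=1.2):
    """Screen candidate gaps for the first candidate recognition object
    (e.g. an open door), keeping only gaps that lie along the line of an
    adjacent continuous part and whose width is inside the preset range.

    gaps: list of dicts with keys "width" (metres) and "on_wall_line"
    (True when at least one adjacent continuous part is collinear with
    the gap).
    """
    return [g for g in gaps
            if g["on_wall_line"] and min_width <= g["width"] <= max_width]
```

A gap between two free-standing stool legs fails the `on_wall_line` test, and an over-wide opening fails the width test, matching the screening behaviour described above.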
  • in step S12, according to the determined position information of the candidate recognition object, the camera device is made to acquire an image containing the candidate recognition object.
  • the mobile robot includes at least one camera.
  • the camera device captures a physical object in the field of view at the location of the mobile robot and projects it onto the traveling plane of the mobile robot to obtain a projected image.
  • a mobile robot includes a camera device, which is arranged on the top, shoulder or back of the mobile robot, and the main optical axis is perpendicular to the traveling plane of the mobile robot.
  • a mobile robot includes a plurality of camera devices, and the main optical axis of one camera device is perpendicular to the traveling plane of the mobile robot.
  • the camera device included in the cleaning robot is embedded on the side or top of the body, and its main optical axis has a non-vertical tilt angle with the traveling plane; the tilt angle is, for example, between 0° and 60°.
  • the main optical axis of the camera device is perpendicular to the traveling plane, and the plane where the two-dimensional image captured by the camera device is located has a parallel relationship with the traveling plane.
  • FIG. 7 shows a schematic diagram of the mobile robot in the corresponding physical space with the entity object a when it shoots a projection image containing the entity object a.
  • the main optical axis of at least one camera device of the mobile robot in FIG. 7 is perpendicular to the traveling plane of the mobile robot.
  • the solid object a at position D1 is projected to position D2 in the traveling plane M2, where positions D1 and D2 have the same angle characteristics relative to the position D of the mobile robot.
  • the processing device causes the measurement device to measure the position information of the obstacle relative to the mobile robot within the field of view of the camera device, and causes the camera device to capture an image of the candidate recognition object projected onto the traveling plane of the mobile robot.
  • the position of the candidate recognition object in the image captured by the camera device indicates the position of the candidate recognition object projected onto the traveling plane of the mobile robot, and the angle of the candidate recognition object in the image relative to the moving direction of the mobile robot characterizes the angle, relative to the moving direction of the mobile robot, of the position of the candidate recognition object projected onto the traveling plane.
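The angle-preservation property used here — that vertical projection onto the travel plane leaves the deflection angle unchanged, so the angle seen in the top-down image matches the angle measured by the ranging device — can be verified with a short sketch (names are illustrative):

```python
import math

def deflection_angle(obj_xyz, robot_xy=(0.0, 0.0)):
    """Deflection angle (degrees, relative to the robot's heading along +x)
    of an entity object after vertical projection onto the travel plane.
    The height z drops out of the calculation, which is why positions D1
    and D2 in Figure 7 share the same angle relative to the robot."""
    x, y, _z = obj_xyz
    return math.degrees(math.atan2(y - robot_xy[1], x - robot_xy[0]))
```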
  • the mobile robot further includes a mobile device.
  • the processing device controls the operation of the mobile device according to the imaging parameters of the camera device, that is, it controls the movement of the cleaning robot according to the obtained position information of the candidate identification object to capture an image containing the candidate identification object.
  • the imaging parameters include field of view range, zoom interval, etc.
  • the main optical axis of the camera device is perpendicular to the traveling plane, and the processing device controls the mobile device to move in the angular direction indicated by the angle information of the candidate recognition object provided by the measuring device, and causes the camera device to capture an image of the candidate recognition object projected onto the traveling plane of the cleaning robot.
  • the processing device controls the moving device to move in the angular direction indicated by the angle information of the candidate recognition object provided by the measuring device, and makes the camera device capture an image containing the candidate recognition object.
  • the mobile robot may be a cleaning robot
  • the moving device of the cleaning robot may include a walking mechanism and a walking driving mechanism, wherein the walking mechanism may be provided at the bottom of the robot body, and the walking driving mechanism is built into the robot body.
  • the walking mechanism may, for example, include a combination of two straight walking wheels and at least one auxiliary steering wheel; the two straight walking wheels are respectively provided on opposite sides of the bottom of the robot body and can be independently driven by two corresponding walking driving mechanisms, that is, the left straight walking wheel is driven by the left walking driving mechanism and the right straight walking wheel is driven by the right walking driving mechanism.
  • the universal walking wheel or the straight walking wheel may have a biased drop suspension system, fastened in a movable manner, for example rotatably mounted on the robot body, and receiving a spring bias that is biased downward and away from the robot body; the spring bias allows the universal walking wheel or the straight walking wheel to maintain contact and traction with the ground with a certain ground force.
  • the walking driving mechanism may include a driving motor and a control circuit that controls the driving motor, and the driving motor can drive the walking wheels in the walking mechanism to move.
  • the drive motor can be, for example, a reversible drive motor, and a speed change mechanism can also be provided between the drive motor and the axle of the walking wheel.
  • the walking driving mechanism can be detachably installed on the robot body, which is convenient for disassembly, assembly and maintenance.
  • the step S12 includes step S121: based on preset known feature information of multiple entity objects, identifying the entity object information of the candidate recognition object in the image; the feature information may be image features of the various entity objects, the image features can identify entity object information in the image, and the image features are, for example, contour features of the entity object information.
  • the preset known multiple entity objects include, but are not limited to: tables, chairs, sofas, flower pots, shoes, socks, doors, cabinets, cups, etc.
  • the image feature includes a preset graphic feature corresponding to the type of entity object, or an image feature obtained through an image processing algorithm.
  • the image processing algorithm includes but is not limited to at least one of the following: grayscale processing, sharpening processing, contour extraction, angle extraction, line extraction, and image processing algorithms obtained through machine learning.
  • Image processing algorithms obtained through machine learning include, but are not limited to: neural network algorithms, clustering algorithms, etc.
  • the processing device can identify the respective corresponding entity object information from the second candidate recognition object and the first candidate recognition object divided based on the continuous part and the discontinuous part of the scan contour. For example, an image processing algorithm is used to determine whether the first candidate recognition object is a physical door, the second candidate recognition object is determined to include a wall, a wardrobe, etc., and the determined physical object information and location information are obtained.
  • the step S12 includes step S122: using a preset image recognition algorithm to construct the mapping relationship between the candidate recognition object in the image and the known multiple entity object information, to determine the entity object information corresponding to the candidate recognition object; the program stored in the storage device of the mobile robot includes the network structure and connection mode of a neural network model.
  • the neural network model may be a convolutional neural network, and the network structure includes an input layer, at least one hidden layer, and at least one output layer.
  • the input layer is used to receive the captured image or the preprocessed image;
  • the hidden layer includes a convolutional layer and an activation function layer, and may even include a normalization layer, a pooling layer, and a fusion layer.
  • the output layer is used to output images marked with object type tags.
• the connection mode is determined according to the connection relationship of each layer in the neural network model. For example, the connection relationship between front and back layers is set based on data transmission, the connection with the data of the previous layer in each hidden layer is set based on the size of the convolution kernel, and full connection is set.
  • the neural network model classifies each object recognized from the image.
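The layered structure described above (an input layer receiving the image, hidden convolution and activation layers, pooling, and an output layer emitting an object-type tag) can be sketched in miniature. This is an illustrative sketch only: the weights are random rather than trained, and the label set is hypothetical, so the emitted tag shows the data flow rather than a real recognition result.

```python
import numpy as np

LABELS = ["door", "wall", "table", "wardrobe"]  # hypothetical type tags

def conv2d(img, kernel):
    # valid-mode 2-D convolution (correlation) over a single-channel image
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def classify(image, rng):
    # hidden layer: one convolution followed by an activation function
    feat = relu(conv2d(image, rng.standard_normal((3, 3))))
    # pooling layer: collapse the feature map to a single descriptor
    pooled = feat.mean()
    # output layer: fully connected scores over the known type tags
    scores = pooled * rng.standard_normal(len(LABELS))
    return LABELS[int(np.argmax(scores))]

rng = np.random.default_rng(0)
image = rng.random((16, 16))   # stands in for a captured camera frame
print(classify(image, rng))    # one of the LABELS (untrained, so arbitrary)
```
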
• its corresponding feature information may include two feature lines perpendicular to the traveling plane of the cleaning robot, with the distance between the two feature lines falling within a preset width threshold range. That is, the image recognition algorithm is used to construct the mapping relationship between the candidate recognition object in the image and the known entity object information; when the candidate recognition object in the image is found to correspond to the known entity door, the entity object information corresponding to the candidate recognition object is determined to be that of a door.
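As a toy illustration of this width check, the sketch below converts the pixel separation of the two vertical feature lines into a physical width and tests it against a preset door-width range. The focal length, the threshold range, and the measured distance are all assumed values, not ones given in this application.

```python
FOCAL_PX = 500.0            # assumed camera focal length, in pixels
WIDTH_RANGE_M = (0.7, 1.2)  # assumed door-width threshold range, in meters

def looks_like_door(x_left_px, x_right_px, distance_m):
    # pinhole-camera relation: physical width = pixel width * depth / focal
    width_m = abs(x_right_px - x_left_px) * distance_m / FOCAL_PX
    return WIDTH_RANGE_M[0] <= width_m <= WIDTH_RANGE_M[1]

print(looks_like_door(100, 250, 3.0))  # 150 px at 3 m -> 0.9 m -> True
print(looks_like_door(100, 130, 3.0))  # 30 px at 3 m -> 0.18 m -> False
```
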
• the processing device can obtain the entity object information corresponding to each recognized object among the second candidate recognition objects and the first candidate recognition objects obtained based on the continuous parts and discontinuous parts of the scan contour.
  • the step S12 includes step S123 and step S124.
• in step S123, the image area within the corresponding angle range in the image is determined according to the angle range in the position information occupied by the candidate recognition object.
• in step S124, feature recognition is performed on the image area to determine the entity object information corresponding to the candidate recognition object.
• the main optical axis of the camera device is perpendicular to the traveling plane, and referring to FIG. 3 and the related descriptions, the angle range of the candidate recognition object in the image can represent the angle range of the entity object corresponding to the candidate recognition object as projected onto the traveling plane of the mobile robot; therefore, the angle range in the position information occupied by the candidate recognition object, as measured by the measuring device, is used to determine the image area within the corresponding angle range in the image.
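Under the perpendicular-optical-axis arrangement above, an object at a given azimuth around the robot appears at the same azimuth around the image center, so the image area for a measured angle range is a wedge of pixels. The sketch below is a minimal illustration under assumed conventions (image size, and 0 degrees aligned with the +x image axis); it is not the implementation of this application.

```python
import numpy as np

def wedge_mask(height, width, angle_min_deg, angle_max_deg):
    # polar angle of every pixel about the image center, in [0, 360)
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    ys, xs = np.mgrid[0:height, 0:width]
    theta = np.degrees(np.arctan2(ys - cy, xs - cx)) % 360.0
    # keep pixels whose azimuth falls inside the measured angle range
    return (theta >= angle_min_deg) & (theta <= angle_max_deg)

mask = wedge_mask(100, 100, 10.0, 25.0)
print(mask.shape, int(mask.sum()))  # pixels inside the 10-25 degree wedge
```
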
  • the processing device may use the recognition method provided in step S121 or S122 to recognize candidate recognition objects in the image area to improve the efficiency of recognition calculations.
  • the step S123 may include step S1231 and step S1232.
• in step S1231, at least one angle range is determined based on the position information of the two ends of the first candidate recognition object; in step S1232, the image area used to identify the entity object information corresponding to the first candidate recognition object is determined from the image according to the determined angle range.
• in one example, an angle range containing the position information of the two ends of the candidate recognition object is determined, that is, an angle range that contains the entire gap of the first candidate recognition object; the image area within the angle range containing the gap corresponding to the candidate recognition object is then used as the image area for identifying the entity object information of the first candidate recognition object.
• FIG. 8 shows a schematic diagram of a scene application in a specific embodiment of this application. In FIG. 8:
• the first candidate recognition object is, for example, a candidate door 91;
• the mobile robot is a cleaning robot 92;
• the angles between the two ends of the candidate door 91 and the moving direction of the cleaning robot 92 are 10 degrees and 25 degrees, respectively;
• a first angle range of 9 degrees to 11 degrees and a second angle range of 24 degrees to 26 degrees are selected with respect to the moving direction of the cleaning robot 92;
• the image areas within the first angle range and the second angle range are selected as the image areas for identifying the entity object information of the candidate door 91.
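The FIG. 8 example amounts to widening each measured end angle of the candidate door into a narrow angle range. In the sketch below, the 1-degree margin on each side is an assumption inferred from the 24-to-26-degree range in the text:

```python
def end_angle_ranges(end_angles_deg, margin_deg=1.0):
    # widen each measured end angle by an assumed margin on each side
    return [(a - margin_deg, a + margin_deg) for a in end_angles_deg]

# door ends measured at 10 and 25 degrees from the moving direction
print(end_angle_ranges([10.0, 25.0]))  # [(9.0, 11.0), (24.0, 26.0)]
```
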
  • the step S124 may include the foregoing step S121 or S122, that is, the processing device may recognize the first candidate recognition object in the selected image area according to the recognition method provided in the foregoing step S121 or S122.
• the projection of the door frame of a solid door will show a vanishing-point feature in the image; therefore, if the first candidate recognition object obtained by the measuring device can be recognized as a solid door, the selected image area will contain the characteristic lines corresponding to this feature.
• taking the perpendicular relationship between the main optical axis of the camera and the traveling plane of the cleaning robot as an example: in practical applications, the height of the cleaning robot is generally low, so the camera generally views the door from bottom to top.
  • the step S124 further includes steps S1241 and S1242, that is, the processing device determines whether the first candidate recognition object is a physical door by executing the following steps S1241 and S1242.
• Step S1241: according to the position information occupied by the first candidate recognition object, at least two characteristic lines indicating a direction perpendicular to the traveling plane are recognized in the image.
• Step S1242: based on the identified characteristic lines, the first candidate recognition object is determined to be the entity object information used to represent a door.
• the image area within the angle range related to the position information is recognized, and when at least three feature lines whose straight-line extensions intersect at one point are identified in the image area, those at least three feature lines are determined to indicate a direction perpendicular to the traveling plane; then, based on the identified feature lines, the first candidate recognition object is determined to be the entity object information used to represent a door, that is, the entity object information of the first candidate recognition object is determined to be the door.
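The convergence test above can be illustrated as follows. This is a hypothetical sketch, not the application's implementation: candidate feature lines are given as endpoint pairs, their pairwise intersections are computed, and the object is accepted as a door when at least three lines meet near a common point (the clustering tolerance is an assumed parameter).

```python
import itertools
import numpy as np

def intersect(l1, l2):
    # each line: (x1, y1, x2, y2); solve for the crossing point, or None
    (x1, y1, x2, y2), (x3, y3, x4, y4) = l1, l2
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-9:
        return None  # parallel lines
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / d
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def is_door(lines, tol=5.0):
    # accept when >= 3 pairwise intersections cluster near one point
    pts = [p for a, b in itertools.combinations(lines, 2)
           if (p := intersect(a, b)) is not None]
    if len(pts) < 3:
        return False
    pts = np.array(pts)
    spread = np.max(np.linalg.norm(pts - pts.mean(axis=0), axis=1))
    return bool(spread <= tol)

# three lines all passing through the point (50, 0), as door-frame edges
# seen by a low camera would converge toward a vanishing point
lines = [(50, 0, 40, 100), (50, 0, 50, 100), (50, 0, 60, 100)]
print(is_door(lines))  # True
```
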
  • the navigation method S1 of the mobile robot further includes a step of marking the determined physical object information and its position information in a map for setting a navigation route.
  • the map is a grid map, the mapping relationship between the unit grid size and the unit size of the physical space is predetermined, and the obtained entity object information and its position information are marked on the map The corresponding grid position.
• the text description, image identification, or number corresponding to each entity object information can be marked on the map. The text description can be a name description of the type of each entity object, such as the names of tables, chairs, flower pots, televisions, refrigerators, etc.
  • the name corresponding to the table is described as "table”
  • the name corresponding to the TV is described as "television”.
  • the image identifier may be an actual image icon corresponding to the type of entity object information.
  • the number may be a digital label that is arranged in advance corresponding to the entity object information. For example, "001" represents a refrigerator, "002" represents a chair, "003" represents a table, and "004" represents a door.
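A minimal sketch of such marking, using the numeric labels listed above: a predefined scale maps physical coordinates (meters) to unit grids, and each entity's label is written into the corresponding grid cell. The cell size and the example coordinates are illustrative assumptions.

```python
CELL_M = 0.25  # assumed physical size of one unit grid, in meters

def mark(grid, x_m, y_m, label):
    # map physical coordinates to the corresponding grid cell and label it
    grid[(round(y_m / CELL_M), round(x_m / CELL_M))] = label

grid = {}  # sparse grid map: (row, col) -> numeric entity label
mark(grid, 2.0, 1.0, "003")  # a table ("003") at (2.0 m, 1.0 m)
mark(grid, 0.5, 3.0, "004")  # a door ("004") at (0.5 m, 3.0 m)
print(grid[(4, 8)])  # "003"
```
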
  • the mobile robot is a cleaning robot.
• the mobile robot designs a navigation route to traverse the cleaning area based on a predetermined cleaning area. For example, according to the marking information of the physical objects located in the cleaning area on the map, the mobile robot determines a navigation route that is convenient for cleaning.
  • the cleaning area includes but is not limited to at least one of the following: a cleaning area divided according to a preset number of grids, a cleaning area divided according to a room, and the like. For example, in a cleaning area in the acquired map, the table and its position information are marked. Therefore, when designing a navigation route, the design includes a navigation route rotating around the table legs.
  • the navigation method further includes step S13, that is, determining the navigation route of the mobile robot in the area according to the physical object information and its position information.
  • the mobile robot is a cleaning robot, and the cleaning area of the mobile robot is divided according to the physical object information and the area where the cleaning robot is located, and a navigation route in the walking area is designed.
  • the position information includes the distance and angle of the measurement point of the physical object relative to the cleaning robot.
• the cleaning area is a room area determined based on the physical object information. For example, when the room area composed of the physical object "wall" and the physical object "door" contains the physical object "bed", the room area containing the bed is a bedroom; and when the room area composed of the physical object "wall" and the physical object "door" contains the physical object "sofa", the room area containing the sofa is a living room.
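The room-type inference above amounts to a small rule table. The sketch below encodes only the two rules stated in the text (bed implies bedroom, sofa implies living room) and is purely illustrative.

```python
ROOM_RULES = {"bed": "bedroom", "sofa": "living room"}  # rules from the text

def room_type(entities):
    # a room bounded by "wall"/"door" is labeled by the furniture it contains
    for obj, room in ROOM_RULES.items():
        if obj in entities:
            return room
    return "unknown"

print(room_type({"wall", "door", "bed"}))   # bedroom
print(room_type({"wall", "door", "sofa"}))  # living room
```
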
• the cleaning robot may preset a cleaning unit range to traverse the cleaning area, and each cleaning unit range may include nine grid areas. Each time, the cleaning robot plans the next nine grids to be cleaned; after those nine grid areas have been cleaned, the next cleaning unit range is planned for the cleaning robot. When the planned cleaning unit range cannot reach nine grids due to obstacles (such as walls or cabinets), the obstacle is taken as the cut-off point, and the grid areas not blocked by the obstacle are used as the cleaning range that the cleaning robot needs to traverse next. For example, when, due to the barrier of a wall, the next planned cleaning range can only reach six grid areas, those six grid areas are used as the cleaning range that the cleaning robot needs to traverse next, and so on, until the cleaning robot has traversed the current cleaning area.
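The unit planning above can be sketched as follows; the 3x3 block layout and the obstacle set are illustrative assumptions, and the six-cell result mirrors the six-grid example in the text.

```python
def next_unit(origin, obstacles):
    # plan a 3x3 (nine-grid) cleaning unit ahead of the robot, cutting off
    # any cells blocked by an obstacle such as a wall or cabinet
    r0, c0 = origin
    cells = [(r0 + dr, c0 + dc) for dr in range(3) for dc in range(3)]
    return [c for c in cells if c not in obstacles]

wall = {(0, 2), (1, 2), (2, 2)}  # a wall blocking the third column
unit = next_unit((0, 0), wall)
print(len(unit))  # 6 grids remain, as in the six-grid example in the text
```
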
• the cleaning area is an area divided according to a preset area range and the position information occupied by physical object information within that area range. When the determined physical object information includes a physical door, the method further includes the step of setting a virtual wall at the position information corresponding to the physical door, so as to divide the cleaning area of the cleaning robot according to the virtual wall and the area where the cleaning robot is located, and to design a navigation route in the walking area.
  • the preset area range is, for example, the user's home.
• the user's home may include four areas: a living room, a bedroom, a kitchen, and a bathroom, and each area has a physical door. After the position information of each entity object is obtained through the measuring device and the camera device, a virtual wall is set at the position information corresponding to each physical door; the combination of a virtual wall and the physical walls connected to it forms an independent area, and the cleaning area of the cleaning robot is then divided according to the virtual walls and the area where the cleaning robot is located. For example, the area of the user's home is divided into four cleaning areas according to the virtual walls, namely the living room, the bedroom, the kitchen, and the bathroom, and traversal cleaning is performed in each cleaning area in a preset traversal manner.
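One way to picture how a virtual wall divides areas: cells occupied by physical walls or by the virtual wall set at a door's position block a flood fill, so the cells reachable from the robot form one independent cleaning area. The tiny 4x4 map below is an illustrative sketch, not the application's algorithm.

```python
from collections import deque

def reachable(start, blocked, rows, cols):
    # 4-connected flood fill that cannot cross blocked (wall) cells
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and (nr, nc) not in blocked and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen

# a virtual wall down column 1 splits a 4x4 map into two independent areas
virtual_wall = {(r, 1) for r in range(4)}
area = reachable((0, 0), virtual_wall, 4, 4)
print(len(area))  # 4: only column 0 is reachable from the robot
```
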
• in some embodiments, the step of determining the navigation route further includes: setting a navigation route for navigating to the entity object information based on instruction information containing the entity object information. In this embodiment, the entity object information is, for example, a name description of the type of each entity object, such as the names of tables, chairs, flower pots, TVs, refrigerators, and doors.
  • the method of obtaining the instruction information including the entity object information includes but is not limited to: voice mode, text mode, etc.
  • the instruction may also include an execution instruction of the mobile robot.
  • the instructions also include cleaning instructions, patrol instructions, remote control instructions, and the like.
  • the step of setting a navigation route for navigating to the entity object information based on the instruction information containing the entity object information may include: acquiring a piece of voice information, and identifying the entity object contained in the voice information Information instructions.
  • the mobile robot can directly receive the user's voice information and recognize the instruction of the entity object information included in the information.
  • the user can directly voice "table" to the mobile robot, and the mobile robot moves to the table after receiving the instruction to perform preset corresponding processing.
  • the navigation route for the mobile robot to move from the current position to the table can be planned according to the information of the entity objects passing by the route.
  • the mobile robot moves from the current position to the navigation route of the table and can pass through flower pots, televisions, sofas, etc.
  • the mobile robot plans a navigation route according to the constructed map after receiving an instruction from the user containing the entity object information, so that the mobile robot can move to the location corresponding to the entity object information for cleaning .
  • the mobile robot forms a navigation route based on the flowerpot, TV, and sofa according to the constructed map, and the mobile robot The robot moves to the table and performs a cleaning operation after passing the navigation route formed according to the flowerpot, TV and sofa.
• the voice information is not limited to short instructions that only indicate physical object information, but may also be long instructions that include physical object information. For example, if the user voices "go to the table", the mobile robot can recognize the entity object information "table" included in the voice information and then perform the subsequent operations.
  • the step of setting a navigation route for navigating to the entity object information based on the instruction information containing the entity object information further includes: obtaining the instruction containing the entity object information from a terminal device.
  • the terminal device is wirelessly connected with the mobile robot.
  • the user inputs an instruction containing physical object information in a text manner via a terminal device.
  • the user enters "table" in text form through a mobile phone APP.
  • it is used to input an instruction containing physical object information via a terminal device in a voice manner.
  • the user enters "table” by voice through the mobile APP.
  • the voice information input by the user is not limited to short instructions that only indicate physical object information, but can also be long instructions that include physical object information.
• the terminal device translates it into text, extracts keywords such as "table", matches the translated text to the corresponding instruction, and sends the instruction to the mobile robot.
  • the terminal device can be connected with the mobile robot in a wireless manner such as wifi connection, near field communication, or Bluetooth pairing, so as to transmit the instructions received by the terminal device to the mobile robot for subsequent operations.
  • the terminal device is, for example, a smart phone, a tablet computer, a wearable device, or other smart devices with smart processing functions.
• in summary, the navigation method of the mobile robot of the present application can accurately determine the angle and distance of an obstacle relative to the mobile robot in the area where the mobile robot is located according to the distance measuring sensor device and the angle sensor device, or the TOF measuring device, that is, the position information of the candidate recognition object; the camera device can obtain an image containing the candidate recognition object, from which the corresponding entity object information of the candidate recognition object is determined; and the navigation route of the mobile robot in the area is determined based on the entity object information and its position information.
• this application plans the navigation route directly based on the entity object information after obtaining accurate position information about the entity object information, which increases the accuracy of navigation route planning and improves the human-computer interaction of the mobile robot.
  • some cleaning robots design the navigation route to traverse the corresponding area by dividing the area according to the preset length and width dimensions, so as to complete the cleaning work during the movement.
  • Some other cleaning robots design the navigation route traversing the room area according to the room division method, so as to complete the cleaning work during the movement.
• the cleaning robot tends to move out of the corresponding room to clean other rooms when the room has not yet been fully cleaned. This is because the cleaning robot sets the priority of adjacent cleaning areas according to a preset direction when dividing areas, which leaves areas in the room that still need supplementary cleaning.
• the cleaning robot is also likely to misjudge the area of a single room, and likewise moves out of the corresponding room to clean other rooms before the room has been fully cleaned. This is because the cleaning robot mistakenly treats a door as a passage within the room and incorrectly divides the room, so that areas needing supplementary cleaning remain in the room. When the cleaning robot leaves too many supplementary sweeping areas, these areas need to be swept one by one afterwards, which reduces the working efficiency of the cleaning robot.
• this application also provides a method for dividing a cleaning area, which aims to identify physical doors, especially those in an open state, and their positions relative to the cleaning robot, and to refer to the physical door and the position it occupies when dividing the cleaning area, so as to reduce the supplementary cleaning areas in a room and improve single-pass cleaning efficiency.
  • FIG. 10 shows a schematic flowchart of a method for dividing a clean area according to a specific embodiment of the present application.
  • the method S2 of dividing a cleaning area is applied to a cleaning robot, and the method S2 of dividing a cleaning area can be executed by the cleaning robot.
  • the cleaning robot includes a processing device, a measuring device, a camera device, and the like.
• the processing device is an electronic device capable of performing numerical operations, logical operations, and data analysis, and includes but is not limited to: a CPU, GPU, FPGA, etc., volatile memory for temporarily storing intermediate data generated during operations, non-volatile memory for storing programs that can execute the method, and the like.
  • the cleaning robot includes a measuring device and a camera device.
  • the camera device includes but is not limited to any one of a fisheye camera module and a wide-angle (or non-wide-angle) camera module.
  • the measuring device may be installed on the body side of the cleaning robot, and the measuring device may be, for example, a scanning laser or a TOF sensor.
• the scanning laser includes an angle sensing device and a distance measuring sensor; the angle information corresponding to the distance information measured by the distance measuring sensor is obtained through the angle sensing device, and the distance from an obstacle measurement point to the distance measuring sensor at the current angle of the scanning laser is measured by laser or infrared.
  • the scanning laser is a laser that changes direction, starting point or pattern of propagation with time relative to a fixed frame of reference.
  • the scanning laser is based on the principle of laser distance measurement, which forms a two-dimensional scanning surface through a rotatable optical component (such as a laser transmitter) to realize area scanning and profile measurement functions.
• the ranging principle of a scanning laser includes: a laser transmitter emits a laser pulse wave; when the laser wave hits an object, part of the energy returns; when the laser receiver receives the returned laser wave and the energy of the returned wave is sufficient to trigger the threshold, the scanning laser calculates its distance to the object.
  • the scanning laser continuously emits laser pulse waves.
• the laser pulse waves strike a high-speed rotating mirror surface, which reflects them in all directions to form a two-dimensional area scan.
• the scanning of this two-dimensional area can, for example, realize the following two functions: 1) protection areas of different shapes can be set within the scanning range of the scanning laser, and an alarm signal is sent out when an object enters such an area; 2) within the scanning range, the scanning laser outputs the distance of each obstacle measurement point, and from this distance information the contour and coordinate position of the object can be calculated.
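The distance calculation behind the ranging principle above is the standard pulse time-of-flight relation: the laser pulse travels to the object and back, so the distance is half the round-trip time multiplied by the speed of light.

```python
C = 299_792_458.0  # speed of light, m/s

def pulse_distance_m(round_trip_s):
    # distance = (speed of light * round-trip time) / 2
    return C * round_trip_s / 2.0

# a return after ~33.356 nanoseconds corresponds to about 5 meters
print(round(pulse_distance_m(33.356e-9), 3))
```
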
  • the TOF measuring device is based on TOF technology.
• TOF technology is one of the optical non-contact three-dimensional depth measurement and perception methods. It continuously sends light pulses to the target, uses a sensor to receive the light returned from the object, and obtains the target distance by detecting the flight (round-trip) time of these transmitted and received light pulses.
• the irradiation unit of a TOF device emits light only after high-frequency modulation; generally, an LED or laser (including laser diodes and VCSELs, Vertical Cavity Surface Emitting Lasers) is used to emit high-performance pulsed light.
• the pulse frequency can reach about 100 MHz, and infrared light is mainly used.
  • the wavelength of the lighting module is generally in the infrared band, and high frequency modulation is required.
• the TOF photosensitive module is similar to an ordinary mobile phone camera module and is composed of a chip, a lens, a circuit board, and other components. Each pixel of the TOF photosensitive chip records the specific phase of the light wave between the camera and the emitting object; a data processing unit extracts the phase difference and calculates the depth information by formula.
  • the TOF measuring device is small in size and can directly output the depth data of the detected object, and the depth calculation result of the TOF measuring device is not affected by the grayscale and characteristics of the surface of the object, so it can perform three-dimensional detection very accurately.
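The phase-based depth formula mentioned above can be sketched as follows: for light modulated at frequency f, a measured phase difference dphi between the emitted and received wave corresponds to a depth of c * dphi / (4 * pi * f), where the factor 4*pi (rather than 2*pi) accounts for the round trip. The example modulation frequency is an assumption consistent with the ~100 MHz figure in the text.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def phase_depth_m(dphi_rad, mod_freq_hz):
    # continuous-wave TOF: depth from phase difference and modulation frequency
    return C * dphi_rad / (4.0 * math.pi * mod_freq_hz)

# at 100 MHz modulation, a phase difference of pi gives a quarter wavelength
print(round(phase_depth_m(math.pi, 100e6), 4))  # ~0.7495 m
```
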
  • the method S2 for dividing a cleaning area includes steps S21 to S23 as shown in FIG. 10, for obtaining the position of the physical door in the area where the cleaning robot is located according to the measuring device and the camera device of the cleaning robot , And restrict the cleaning range of the cleaning robot according to the physical door and its position information.
• in step S21, the measuring device is caused to measure the position information, relative to the cleaning robot, of the obstacles in the area where the cleaning robot is located, and the position information occupied by the candidate doors in the area is determined.
  • the area is, for example, a room.
  • the obstacle may be any physical object in the room that can reflect the measurement medium.
• the position information of the obstacle relative to the measuring device can be measured to obtain the contour information of the obstacle, and the contour information is used to determine the candidate doors in the area and the position information they occupy.
  • the position information includes: deflection angle information and corresponding distance information, and the distance information and deflection angle information are called the position information of the obstacle relative to the cleaning robot, or simply the position information of the obstacle.
  • FIG. 11 is a schematic diagram of a process for determining the position information of candidate doors in an area in a specific embodiment of this application. That is, in some embodiments, the step of determining the position information occupied by the candidate door in the area in step S21 includes step S211 and step S212 shown in FIG. 11.
  • the processing device may obtain a scan profile and its occupied position information according to the position information of the measurement points of the obstacles in the measurement area.
• the measuring device mentioned in any of the above examples is used to traverse and measure the obstacles in a two-dimensional or three-dimensional plane in the area, and to obtain the scan contour of the obstacle measurement points in that two-dimensional or three-dimensional plane.
  • the obstacle measurement point is a reflection point on the obstacle that is used to reflect the measurement medium emitted by the ranging sensor.
  • the measurement medium is, for example, a laser beam, an LED light beam, or an infrared beam.
  • the obtained scan profile is a lattice matrix composed of the position information of each obstacle measurement point, where the position information includes distance information and deflection angle information of the obstacle measurement point relative to the measurement device, or simply referred to as obstacle The location information of the object measurement point.
  • the two-dimensional or three-dimensional array formed by the measured position information of the measured obstacle points is used to construct the scanning contour of the obstacle.
  • the step S211 includes: fitting the travel plane of the cleaning robot based on the area array of the position information of each obstacle measurement point measured by the measuring device, and determining to scan on the travel plane Information about the contour and its position.
• taking the measuring device as a TOF measuring device that includes a laser sensor array as an example, the position information surface array is measured by the laser sensor array.
• in order to measure the obstacles around the cleaning robot, the measuring device is installed on the side of the body close to the traveling plane, for example, on the body side of the cleaning robot. Therefore, the acquired position information surface array of the obstacle measurement points may include the position information of measurement points of various obstacles, such as the ground, objects placed on a surface, and objects suspended in the air.
  • the measured obstacle measurement points usually include the traveling plane of the cleaning robot, such as the ground, and the plane formed by the obstacle measurement points is determined by the plane fitting method. It is considered as the traveling plane, and then according to the determined traveling plane, the scanning contour placed on the traveling plane and the position information occupied by it are determined.
• select the position information of a number of obstacle measurement points from the position information surface array, and fit a plane using a plane fitting method such that the number of obstacle measurement points constituting the plane is the largest; take the obstacle measurement points on the selected plane in the position information surface array as obstacle measurement points on the traveling plane of the cleaning robot. Then, according to the position information of each pixel in the position information surface array, project the position information of the pixel points located above the traveling plane onto the traveling plane, thereby obtaining the scan contour on the traveling plane and the position information it occupies.
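A simplified sketch of this plane-fitting step: among the measured 3-D points, find the plane supported by the most measurement points (for brevity, assumed horizontal here, so it is the most common z level), treat it as the traveling plane, and project every point above it onto that plane to obtain the scan contour's (x, y) positions. The point counts, tolerance, and the synthetic "stool foot" are illustrative assumptions.

```python
import numpy as np

def travel_plane_and_contour(points, z_tol=0.02):
    z = points[:, 2]
    # the z level supported by the most points is taken as the travel plane
    levels, counts = np.unique(np.round(z / z_tol) * z_tol, return_counts=True)
    ground_z = levels[np.argmax(counts)]
    # points above the travel plane are projected down onto it
    above = points[z > ground_z + z_tol]
    return ground_z, above[:, :2]  # contour = the (x, y) projection

rng = np.random.default_rng(1)
# 200 ground points at z = 0, plus 20 points along a "stool foot"
ground = np.column_stack([rng.random((200, 2)) * 4, np.zeros(200)])
stool_foot = np.column_stack([np.full((20, 1), 1.0), np.full((20, 1), 2.0),
                              np.linspace(0.05, 0.4, 20)[:, None]])
gz, contour = travel_plane_and_contour(np.vstack([ground, stool_foot]))
print(gz, contour.shape)  # ground plane at z ~ 0; 20 projected foot points
```
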
• please refer to FIGS. 3 and 4, where FIG. 3 schematically shows the position information surface array obtained according to the installation position of the measuring device in the cleaning robot, and FIG. 4 shows the scan contour obtained by projecting the position information surface array of FIG. 3 onto the traveling plane.
• taking a stool foot as an example, the processing device projects the position information from the height of the stool foot down to the ground onto the traveling plane according to the obtained position information surface array, thereby obtaining a block projection corresponding to the stool foot.
• in other embodiments, the step S211 further includes: determining the scan contour on the traveling plane and the position information it occupies based on a line array of position information parallel to the traveling plane measured by the measuring device. Taking the measuring device as a scanning laser as an example, the line array of position information is measured by the scanning laser.
  • the laser scanner may be installed on the top middle, top edge or body side of the cleaning robot.
• the laser emission direction of the scanning laser can be parallel to the traveling plane, and the scanning laser rotates and scans through an angle of 360 degrees at the position where the cleaning robot is located. The angle sensing device of the scanning laser acquires the angle of each obstacle measurement point relative to the mobile robot, and the distance measuring sensing device of the scanning laser (a laser or infrared ranging device) measures the distance between the obstacle measurement point and the cleaning robot, thereby obtaining a position information line array parallel to the traveling plane. Since the position information line array is parallel to the traveling plane, the scan contour on the traveling plane and the position information it occupies can be determined directly from the line array.
• the line array of position information obtained by the scanning laser can indicate the position information of obstacles on the ground that hinder the movement of the cleaning robot.
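Converting such a line array of (angle, distance) pairs into Cartesian points on the traveling plane is a direct polar-to-Cartesian step, sketched below; the readings are illustrative.

```python
import math

def polar_to_plane(readings):
    # each reading: (angle in degrees relative to the robot, distance in m)
    return [(d * math.cos(math.radians(a)), d * math.sin(math.radians(a)))
            for a, d in readings]

# three measurement points at 0, 90 and 180 degrees, each 2 m away
pts = polar_to_plane([(0, 2.0), (90, 2.0), (180, 2.0)])
print([(round(x, 2), round(y, 2)) for x, y in pts])
# [(2.0, 0.0), (0.0, 2.0), (-2.0, 0.0)]
```
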
  • the measuring range of the measuring device can reach 8 meters, and the camera device usually cannot capture clear images at the corresponding distance.
  • the processing device causes the measuring device to measure the position information of the obstacle relative to the cleaning robot in the field of view of the camera device, so that the camera device can obtain information including The image of the obstacle measured by the measuring device.
  • the processing device screens the location information measured by the measurement device, that is, removes the location information of the obstacle measurement point in the area beyond the imaging range of the camera by the measurement device to obtain the remaining effective location information
  • the measurement device measures the position information of the obstacle relative to the cleaning robot in the field of view of the camera device.
  • the effective position information is used to obtain the scan profile and its occupied position information.
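The screening of measured position information against the imaging range of the camera device can be sketched as follows (assuming, for illustration, polar `(angle, distance)` samples and a field of view bounded by two angles; the helper name `filter_to_camera_fov` is hypothetical):

```python
def filter_to_camera_fov(points_polar, fov_min_deg, fov_max_deg, max_range):
    """Keep only obstacle measurement points that fall inside the camera's
    field of view and within a usable imaging distance.

    points_polar : list of (angle_deg, distance) pairs from the measuring
                   device (assumed format).
    Returns the remaining effective position information.
    """
    return [(a, d) for a, d in points_polar
            if fov_min_deg <= a <= fov_max_deg and d <= max_range]
```

Points outside the angular field of view, or farther than the camera can image clearly, are dropped before the scan contour is built.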
  • the processing device enables the measuring device to obtain position information within a preset distance, and the preset distance is a fixed value.
  • the preset distance is determined according to the usual indoor use area to ensure that the measuring device can obtain the position information of the obstacles in a room, and obtain the scan contour and its occupied position information according to the obtained position information.
  • the processing device can use a combination of feature lines, feature points, etc. to determine the candidate recognition object depicted by the scan contour and its position information.
  • the processing device also uses a linearization algorithm to linearize the dot matrix information that constitutes the scan contour to obtain a scan contour described by long lines and short lines.
  • examples of the linearization algorithm include dilation and erosion algorithms. Refer to Figures 5 and 6: Figure 5 schematically shows a top view of the scan contour, as measured by the measuring device and projected on the traveling plane; Figure 6 shows a schematic top view of the scan contour of Figure 5 after linearization, projected on the traveling plane.
  • in Figure 5, the originally acquired scan contour includes contour parts B1-B2 formed by obstacle measurement points whose intervals are less than a preset threshold, and contour parts B2-B3 and B4-B5 formed by obstacle measurement points whose intervals are greater than the preset threshold; in Figure 6, the linearized scan contour includes contour parts A1-A2 composed of continuous long lines, and contour parts A2-A3 and A4-A5 composed of discontinuous short lines.
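A minimal sketch of the dilation-and-erosion step (morphological closing) on an occupancy grid: gaps narrower than the structuring element are bridged into continuous line segments, while wider gaps such as B2-B3 and B4-B5 survive. This is a pure-Python illustration with a 4-neighborhood; the grid representation is an assumption:

```python
def dilate(grid):
    """4-neighborhood morphological dilation on a 0/1 occupancy grid."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c]:
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        out[rr][cc] = 1
    return out

def erode(grid):
    """4-neighborhood erosion; out-of-bounds neighbors count as occupied."""
    rows, cols = len(grid), len(grid[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and all(
                not (0 <= r + dr < rows and 0 <= c + dc < cols)
                or grid[r + dr][c + dc]
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
            ):
                out[r][c] = 1
    return out

def close_contour(grid):
    """Morphological closing (dilation then erosion): bridges small gaps
    between obstacle measurement points into continuous segments."""
    return erode(dilate(grid))
```

A one-cell gap in a row of occupied cells is closed, while a three-cell gap is preserved, mirroring how sub-threshold intervals become continuous lines after linearization.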
  • the scan profile can be composed of continuous and discontinuous parts.
  • the condition pre1 constituting the continuous part includes at least one or a combination of the following: 1) a contour part formed by obstacle measurement points whose spacing between adjacent points is less than a preset length threshold and whose number is greater than a preset number threshold, such as B1-B2 shown in Figure 5; 2) a contour part formed by continuous lines whose line length is greater than a preset length threshold, such as A1-A2 shown in Figure 6; 3) a contour part in which the position information of the obstacle measurement points meets a preset continuous-change condition, where the continuous-change condition includes: the difference between the distance information of adjacent obstacle measurement points is less than a preset distance mutation threshold.
  • the B4-B5 outline part shown in FIG. 5 and the A4-A5 outline part shown in FIG. 6 do not constitute a continuous part.
  • the aforementioned complete scan profile is composed of a discontinuous part and a continuous part. Therefore, the discontinuous part and the continuous part can be regarded as a logical “or” relationship.
  • the contour parts of B2-B3 and B4-B5 in Fig. 5 are discontinuous parts
  • the contour parts of A2-A3 and A4-A5 in Fig. 6 are discontinuous parts.
  • the condition pre2 constituting the discontinuous part includes at least one or a combination of the following: 1) a contour part in which the distance between adjacent obstacle measurement points is greater than a preset length threshold and whose obstacle measurement points at both ends connect with continuous parts, such as B2-B3 and B4-B5 shown in Figure 5; 2) a contour part of the scan contour composed of at least one continuous short line whose line length is less than the preset length threshold, such as A2-A3 and A4-A5 shown in Figure 6.
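The split of the scan contour into continuous parts (condition pre1) and discontinuous parts (condition pre2) by a spacing threshold can be sketched as follows. This is an illustrative simplification covering only the spacing-based conditions; the function name is hypothetical:

```python
import math

def segment_contour(points, length_threshold):
    """Split an ordered list of obstacle measurement points into
    continuous parts and the discontinuous parts between them.

    A stretch where adjacent points are closer than length_threshold is a
    continuous part (condition pre1); a jump wider than length_threshold
    between two continuous parts is recorded as a discontinuous part
    bounded by its two end points (condition pre2).
    Returns (continuous_parts, discontinuous_parts).
    """
    continuous, discontinuous = [], []
    run = [points[0]]
    for prev, cur in zip(points, points[1:]):
        if math.dist(prev, cur) < length_threshold:
            run.append(cur)
        else:
            continuous.append(run)
            discontinuous.append((prev, cur))  # gap bounded on both ends
            run = [cur]
    continuous.append(run)
    return continuous, discontinuous
```

Each discontinuous part is bounded on both ends by obstacle measurement points that belong to continuous parts, matching condition pre2 above.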
  • the step S21 includes step S212, which is to determine the position information occupied by each candidate door according to the discontinuous part on the scan contour.
  • the candidate door is mainly used for further screening and confirmation of a physical door in the open state.
  • the processing device performs segmentation processing on the scanned contour at the boundary of the discontinuous part to obtain a contour part composed of a continuous part and a contour part composed of a discontinuous part.
  • the discontinuous part is used as a candidate recognition object, and the position information occupied by each candidate door is determined according to the position information of the obstacle measurement point in the discontinuous part.
  • the discontinuous part is used as a separate candidate identification object by using a combination of preset characteristic lines and characteristic points, and the position information of the obstacle measurement point in the discontinuous part is used to determine the corresponding candidate door location information.
  • the processing device may also perform segmentation processing of the scan contour according to the boundary of the continuous part, which should be regarded as the same or similar to the method of segmenting the scan contour according to the boundary of the discontinuous part.
  • the discontinuous part of the scan contour may form a gap corresponding to a physical object.
  • the physical object is a door. When the door is opened, it connects the two space areas inside and outside the house. When the door is closed, it separates the two space areas inside and outside the house.
  • the gap formed on the scanning contour may also be caused by an interval between two solid objects or by the shape of a solid object itself.
  • the gap in the scanned contour can be caused by the interval between two wardrobes, the interval between the wardrobe and the wall, and so on.
  • for example, gaps in the scan contour may be caused by the space between the legs of a table; therefore, it is necessary to further screen and identify the obtained gaps.
  • step S2121 may be performed.
  • the gaps formed by the discontinuous parts are screened according to preset screening conditions, and it is determined that the screened gaps belong to the candidate gate.
  • the filtering conditions include: the gap is located along the line where the continuous part of at least one side adjacent to the gap is located, and/or a preset gap width threshold.
  • the screening condition includes that, if a gap is located along the line of a continuous part adjacent to at least one of its sides, the gap is regarded as a gap corresponding to a candidate door.
  • the gap is the gap corresponding to the physical door. Since the physical door is generally set up by attaching to the wall, at least one side wall of the inlaid physical door is located along the continuous part adjacent to the gap. Therefore, the corresponding gap formed when the physical door is opened is the gap corresponding to the candidate door.
  • in contrast, the two legs of a stool are generally placed independently in the physical space, so the gap between them is not located along any continuous part and is an isolated gap; objects corresponding to such isolated gaps are excluded from the candidate doors, and the corresponding gaps are screened out.
  • the screening condition includes a preset gap width threshold.
  • the gap width threshold can be a single value or a range of values.
  • the width between the door frames of a door is generally between 60cm and 120cm, so this parameter can also be used as a condition for screening candidate doors; that is, if the width of a gap is within the preset gap width threshold (for example, 60cm to 120cm), the gap is a gap corresponding to a candidate door.
  • the processing device calculates the gap width based on the position information of the obstacle measurement points constituting the gap, and screens the obtained gaps according to the screening conditions; that is, gaps that are too large or too small are not gaps corresponding to candidate doors.
  • the screening condition includes that the gap is located along a continuous part of at least one side adjacent to the gap, and the width of the corresponding gap is within the preset gap width threshold range.
  • the processing device determines the candidate door corresponding to the gap according to the screening condition. In other words, gaps on the scan contour that are not located along the continuous part adjacent to either side, or whose width is not within the preset gap width threshold range, are determined to be gaps that need to be filtered out.
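The two screening conditions, adjacency to a continuous part and the preset gap width threshold (for example 60cm to 120cm), can be sketched as follows; the gap representation and the helper name `screen_gaps` are assumptions for illustration:

```python
import math

def screen_gaps(gaps, min_width=0.6, max_width=1.2):
    """Screen the gaps formed by discontinuous parts according to the
    preset screening conditions.

    gaps : list of dicts with keys
           'ends'                -> ((x1, y1), (x2, y2)), the gap endpoints
           'adjacent_continuous' -> True if at least one side of the gap
                                    lies along an adjacent continuous part
           (hypothetical representation).
    A gap is kept as a candidate door only if it adjoins a continuous
    part and its width falls within the gap width threshold range,
    here 0.6 m to 1.2 m by default.
    """
    kept = []
    for gap in gaps:
        width = math.dist(*gap['ends'])
        if gap['adjacent_continuous'] and min_width <= width <= max_width:
            kept.append(gap)
    return kept
```

An isolated gap (for example between two stool legs) fails the adjacency test, and an overly wide or narrow gap fails the width test, so neither becomes a candidate door.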
  • in step S22, according to the determined position information of the candidate door, the camera device is made to acquire an image containing the candidate door, and whether the candidate door is a physical door is determined from the image.
  • the mobile robot includes at least one camera.
  • the camera device captures a physical object in the field of view at the location of the mobile robot and projects it onto the traveling plane of the mobile robot to obtain a projected image.
  • a mobile robot includes a camera device, which is arranged on the top, shoulder or back of the mobile robot, and the main optical axis is perpendicular to the traveling plane of the mobile robot.
  • a mobile robot includes a plurality of camera devices, and the main optical axis of one camera device is perpendicular to the traveling plane of the mobile robot.
  • the camera device included in the cleaning robot is embedded on the side or top of the body, and its main optical axis has a non-vertical tilt angle with respect to the traveling plane; the tilt angle is, for example, between 0° and 60°.
  • the main optical axis of the camera device is perpendicular to the traveling plane, and the plane where the two-dimensional image captured by the camera device is located has a parallel relationship with the traveling plane.
  • FIG. 7 shows a schematic diagram of the mobile robot in the corresponding physical space with the entity object a when it shoots a projection image containing the entity object a.
  • the main optical axis of at least one camera device of the mobile robot in FIG. 7 is perpendicular to the traveling plane of the mobile robot.
  • the entity object a at position D1 is projected to position D2 on the traveling plane M2, where positions D1 and D2 have the same angle characteristics relative to position D of the mobile robot.
  • the processing device causes the measuring device to measure the position information of the obstacle relative to the cleaning robot within the field of view of the imaging device, and causes the imaging device to capture an image of the candidate door projected onto the traveling plane of the cleaning robot; the position of the candidate door in the captured image indicates the position of the candidate door projected onto the traveling plane of the cleaning robot, and the angle of the candidate door in the image relative to the moving direction of the cleaning robot represents the angle of the projected position of the candidate door relative to the moving direction of the cleaning robot.
  • the cleaning robot further includes a mobile device; when the position information of the candidate door measured by the measuring device is outside the field of view of the camera device, the processing device controls the operation of the mobile device according to the imaging parameters of the camera device, that is, it controls the movement of the cleaning robot according to the obtained position information of the candidate door so as to capture an image containing the candidate door.
  • the imaging parameters include field of view range, zoom interval, etc.
  • the main optical axis of the camera device is perpendicular to the traveling plane, and the processing device controls the mobile device to move in the angular direction indicated by the angle information of the candidate door provided by the measuring device, and causes the camera device to capture an image of the candidate door projected onto the traveling plane of the cleaning robot.
  • the processing device controls the moving device to move in the angular direction indicated by the angle information of the candidate door provided by the measuring device, and makes the The camera device captures an image including candidate doors.
  • the moving device of the cleaning robot may include a walking mechanism and a walking driving mechanism, wherein the walking mechanism may be provided at the bottom of the robot body, and the walking driving mechanism is built in the robot body.
  • the walking mechanism may, for example, include a combination of two straight walking wheels and at least one auxiliary steering wheel; the two straight walking wheels are respectively provided on opposite sides of the bottom of the robot body and can be driven independently by two corresponding traveling driving mechanisms, that is, the left straight traveling wheel is driven by the left traveling drive mechanism and the right straight traveling wheel is driven by the right traveling drive mechanism.
  • the universal walking wheel or the straight walking wheel may have a biased drop suspension system, which is fastened in a movable manner, for example rotatably mounted on the robot body, and receives a spring bias that biases it downward and away from the robot body. The spring bias allows the universal walking wheel or the straight walking wheel to maintain contact and traction with the ground with a certain grounding force.
  • the walking driving mechanism may include a driving motor and a control circuit that controls the driving motor, and the driving motor can drive the walking wheels in the walking mechanism to move.
  • the drive motor can be, for example, a reversible drive motor, and a speed change mechanism can also be provided between the drive motor and the axle of the walking wheel.
  • the walking driving mechanism can be detachably installed on the robot body, which is convenient for disassembly, assembly and maintenance.
  • the step S22 includes step S221 and step S222.
  • step S221 the image area within the corresponding angle range in the image is determined according to the angle range in the position information occupied by the candidate door.
  • in step S222, feature recognition is performed on the image area to determine whether the candidate door is a physical door.
  • the main optical axis of the camera device is perpendicular to the traveling plane; referring to FIG. 3 and the related descriptions, the angle range of the candidate door in the image can represent the angle range of the projection of the entity object corresponding to the candidate door onto the traveling plane of the mobile robot, so the angle range in the position information of the candidate door measured by the measuring device is used to determine the image area within the corresponding angle range in the image.
  • step S221 also includes step S2211 and step S2212.
  • in step S2211, at least one angle range is determined based on the position information of the two ends of the candidate door; in step S2212, the image area used to identify whether the candidate door is a physical door is determined from the image according to the determined angle range.
  • in one example, an angle range containing the position information of both ends of the candidate door is determined, that is, the angle range covers the entire gap of the candidate door, and the image area within this angle range containing the gap corresponding to the candidate door is used as the image area for identifying whether the candidate door is a physical door.
  • FIG. 8 shows a schematic diagram of a scene application in a specific embodiment of this application.
  • the angles between the two ends of the candidate door 81 and the moving direction of the cleaning robot 82 are 10 degrees and 25 degrees, and the area within the angle range of 10 to 25 degrees is selected as the image area for identifying whether the candidate door is a physical door.
  • in another example, a small single-ended angle range is selected at each of the two ends of the candidate door; that is, two small angle ranges corresponding to the two ends of the candidate door are selected and used as the image areas for identifying whether the candidate door is a physical door.
  • FIG. 9 shows a schematic diagram of a scenario application in a specific embodiment of this application.
  • assume the angles between the two ends of the candidate door 91 and the moving direction of the cleaning robot 92 are 10 degrees and 25 degrees. At the end forming the 10-degree angle, a first angle range of 9 to 11 degrees relative to the moving direction of the cleaning robot 92 is selected; at the other end, forming the 25-degree angle, a second angle range of 24 to 26 degrees relative to the moving direction of the cleaning robot 92 is selected. The image areas within the first angle range and the second angle range are used as the image areas for identifying whether the candidate door 91 is a physical door.
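Selecting the image area from the angle range can be sketched as follows, under the stated assumption that the main optical axis is perpendicular to the traveling plane, so that the angle of a pixel about the image center matches the angle of the corresponding point about the robot. The helper name and the pixel-mask representation are illustrative:

```python
import math

def angle_range_mask(width, height, heading_deg, ranges_deg):
    """Mark the pixels of a top-view image that fall inside the angle
    ranges occupied by the candidate door.

    The robot projects to the image center; heading_deg gives the robot's
    moving direction in image coordinates. ranges_deg is a list of
    (low, high) angle ranges in degrees, e.g. [(9, 11), (24, 26)].
    Returns a height x width boolean mask.
    """
    cx, cy = width / 2.0, height / 2.0
    mask = [[False] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            ang = (math.degrees(math.atan2(y - cy, x - cx)) - heading_deg) % 360
            if any(lo <= ang <= hi for lo, hi in ranges_deg):
                mask[y][x] = True
    return mask
```

With two narrow ranges, as in the example above, only the image areas around the two ends of the candidate door are selected for feature recognition, reducing the area that must be processed.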
  • the projection of the door frame of the solid door will show the feature of vanishing point in the image.
  • if the candidate door obtained by the measuring device can be recognized as a physical door, the selected image area will contain characteristic lines corresponding to the above feature.
  • because the main optical axis of the camera device is perpendicular to the traveling plane and the cleaning robot itself is generally low, the camera device generally captures the door from bottom to top.
  • the step S222 may include step S2221, that is, the processing device determines whether the candidate door is a physical door by executing step S2221.
  • in step S2221, at least two feature lines representing lines perpendicular to the traveling plane are identified in the image, and the candidate door is determined to be a physical door based on the identified feature lines.
  • specifically, the image area within the angle range related to the position information is recognized; at least three feature lines whose extension lines intersect at one point are identified in the image area, where the at least three feature lines represent lines perpendicular to the traveling plane; and the candidate door is determined to be a physical door based on the identified feature lines.
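A sketch of the concurrency test on detected feature lines: if at least three lines, for example door-frame edges perpendicular to the traveling plane, extend through a common point (the vanishing point under a top-view camera), the candidate door can be accepted as a physical door. Line detection itself (for example by a Hough transform) is assumed to have been done elsewhere; the helper below, whose name is hypothetical, only checks concurrency:

```python
def lines_concur(segments, tol=2.0):
    """Check whether at least three feature lines meet at one point.

    segments : list of ((x1, y1), (x2, y2)) line segments detected in the
               image area (illustrative representation).
    Intersects the first two infinite lines, then counts how many other
    lines pass within `tol` pixels of that intersection.
    """
    if len(segments) < 3:
        return False
    (x1, y1), (x2, y2) = segments[0]
    (x3, y3), (x4, y4) = segments[1]
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-9:          # first two lines are parallel
        return False
    px = ((x1*y2 - y1*x2)*(x3 - x4) - (x1 - x2)*(x3*y4 - y3*x4)) / d
    py = ((x1*y2 - y1*x2)*(y3 - y4) - (y1 - y2)*(x3*y4 - y3*x4)) / d
    hits = 2
    for (ax, ay), (bx, by) in segments[2:]:
        # distance from (px, py) to the infinite line through the segment
        num = abs((by - ay)*px - (bx - ax)*py + bx*ay - by*ax)
        den = ((by - ay)**2 + (bx - ax)**2) ** 0.5
        if den > 1e-9 and num / den <= tol:
            hits += 1
    return hits >= 3
```

A more robust implementation would cluster all pairwise intersections rather than trust the first pair, but this sketch shows the geometric criterion the step relies on.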
  • the candidate door may also be determined to be a physical door based on preset known characteristic information of the physical door; the characteristic information may be an image feature of the physical door by which the physical door can be identified in the image, for example, a contour feature of the physical door.
  • the image feature includes a preset graphic feature corresponding to the entity gate, or an image feature obtained through an image processing algorithm.
  • the image processing algorithm includes but is not limited to at least one of the following: grayscale processing, sharpening processing, contour extraction, angle extraction, line extraction, and image processing algorithms obtained through machine learning.
  • Image processing algorithms obtained through machine learning include, but are not limited to: neural network algorithms, clustering algorithms, etc.
  • the program stored in the memory includes the network structure and connection mode of the neural network model.
  • the neural network model may be a convolutional neural network, and the network structure includes an input layer, at least one hidden layer, and at least one output layer.
  • the input layer is used to receive the captured image or the preprocessed image;
  • the hidden layer includes a convolutional layer and an activation function layer, and may even include a normalization layer, a pooling layer, and a fusion layer.
  • the output layer is used to output images marked with object type tags.
  • the connection mode is determined according to the connection relationship of each layer in the neural network model; for example, the connection relationship between front and back layers is set based on data transmission, the connection with the data of the previous layer is set based on the size of the convolution kernel in each hidden layer, fully connected layers are set, and so on.
  • the neural network model classifies each object recognized from the image.
  • the feature information corresponding to the physical door may include two feature lines perpendicular to the traveling plane of the cleaning robot whose spacing is within a preset width threshold range; that is, the image recognition algorithm constructs a mapping relationship between the candidate door and the physical door in the image, thereby determining that the candidate door is a physical door.
  • the method S2 for dividing a cleaning area further includes a step of marking the determined physical door and its position information in a map for setting a cleaning route.
  • the map is a grid map, the mapping relationship between the unit grid size and the unit size of the physical space is predetermined, and the obtained physical door and its position information are marked on the map.
  • the text description, image identification, or number of the corresponding physical door can be marked on the map, and the text description can be a description of the name of the physical door, for example, the name of the physical door is described as "door".
  • the image identifier may be an icon corresponding to the actual image of the physical door.
  • the number can be a preset number label related to the physical door, such as "001".
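Marking a determined physical door on a grid map reduces to mapping its physical coordinates to a unit grid through the predetermined cell size; a minimal sketch (the dict-based map structure and the labels such as "door" and "001" are illustrative assumptions):

```python
def mark_door_on_map(grid_map, door_pos, cell_size, label="door", number="001"):
    """Mark a determined physical door and its position information on a
    grid map.

    door_pos  : (x, y) position of the door in meters.
    cell_size : edge length of one unit grid in meters, i.e. the
                predetermined mapping between unit grid and physical space.
    Stores the door's text description, number, and position at the cell
    and returns the cell coordinates.
    """
    cell = (int(door_pos[0] // cell_size), int(door_pos[1] // cell_size))
    grid_map[cell] = {"type": label, "number": number, "position": door_pos}
    return cell
```

The cleaning route planner can then look up marked door cells when designing a navigation route through the cleaning area.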
  • the cleaning robot designs a navigation route to traverse the cleaning area based on a predetermined cleaning area, and determines a cleaning route that is convenient for cleaning according to the marking information of the physical door located in the cleaning area on the map.
  • the method for dividing the cleaning area further includes step S23, that is, dividing the cleaning area of the cleaning robot according to the physical door and its position information to restrict the walking range of the cleaning robot.
  • a virtual wall is provided at the physical door; and the cleaning area of the cleaning robot is divided according to the virtual wall and the area where the cleaning robot is located.
  • the cleaning area is a room area determined based on the physical door; for example, each room area is enclosed by the virtual wall and physical walls, multiple room areas can be determined based on the set virtual walls and the measured physical walls, and the cleaning area in the area where the cleaning robot is located is then divided accordingly.
  • the cleaning robot may traverse the cleaning area by preset cleaning unit ranges. Each cleaning unit range may include nine grid areas, and the cleaning robot plans the next nine grid areas to be cleaned each time; after those nine grid areas have been cleaned, the next cleaning unit range is planned. When a planned cleaning unit range cannot reach nine grid areas due to an obstacle (such as a wall or a cabinet), the obstacle is taken as the cut-off point, and the grid areas not blocked by the obstacle are used as the cleaning range the cleaning robot needs to traverse next. For example, when the next planned cleaning range can only reach six grid areas because of the barrier of a wall, those six grid areas are used as the cleaning range the cleaning robot needs to traverse next, and so on, until the cleaning robot has traversed the current cleaning area.
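The nine-grid cleaning unit planning described above can be sketched as follows; when an obstacle such as a wall truncates the unit, fewer than nine grid areas are returned. The dict-based grid representation is an assumption:

```python
def plan_cleaning_unit(grid, origin, unit=3):
    """Plan the next cleaning unit range of unit x unit (nine by default)
    grid areas starting at `origin`, truncating at obstacles.

    grid   : dict mapping (row, col) -> True if the cell is free.
    Returns the free cells of the unit; an obstacle (wall, cabinet) or
    unknown cell acts as a cut-off point and is excluded.
    """
    r0, c0 = origin
    cells = []
    for r in range(r0, r0 + unit):
        for c in range(c0, c0 + unit):
            if grid.get((r, c), False):
                cells.append((r, c))
    return cells
```

With a wall occupying one column of a 3 x 3 unit, only the six unblocked grid areas are returned, matching the six-grid example above.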
  • the cleaning area is an area divided according to a preset area range and location information of physical doors located within the area range.
  • the cleaning area of the cleaning robot is divided according to the virtual wall and the area where the cleaning robot is located, and the walking range of the cleaning robot is restricted.
  • the preset area range is, for example, the user's home.
  • the user's home may include four areas, namely a living room, a bedroom, a kitchen, and a bathroom, each provided with a physical door. After the position information of each entity object is obtained through the measuring device and the camera device, a virtual wall is set at the position information corresponding to each physical door; the combination of a virtual wall and the physical walls connected to it forms an independent area, and the cleaning area of the cleaning robot is then divided according to the virtual walls and the area where the cleaning robot is located. For example, the area of the user's home is divided into four cleaning areas according to the virtual walls, namely the living room, bedroom, kitchen, and bathroom, and traversal cleaning is performed in each cleaning area in a preset traversal manner.
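Dividing cleaning areas from virtual walls can be sketched as a flood fill: cells where virtual walls are placed at physical doors are treated like physical walls, and each remaining connected component of free cells becomes one cleaning area. This is an illustrative sketch over a set-based grid:

```python
from collections import deque

def divide_regions(free, virtual_walls):
    """Divide the map into cleaning areas separated by virtual walls.

    free          : set of traversable (row, col) cells.
    virtual_walls : set of cells where virtual walls are set at doors.
    Returns a list of cleaning areas, each a set of connected cells.
    """
    open_cells = set(free) - set(virtual_walls)
    regions, seen = [], set()
    for start in sorted(open_cells):
        if start in seen:
            continue
        region, queue = set(), deque([start])
        seen.add(start)
        while queue:              # breadth-first flood fill
            r, c = queue.popleft()
            region.add((r, c))
            for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if nb in open_cells and nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        regions.append(region)
    return regions
```

Placing one virtual-wall cell at a doorway splits a corridor into two cleaning areas, just as the virtual walls at the living room, bedroom, kitchen, and bathroom doors split the home into four cleaning areas.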
  • the method of dividing the cleaning area in this application can measure the angle and distance of obstacles relative to the cleaning robot in the area where the cleaning robot is located based on a distance measuring sensor device and an angle sensing device, or a TOF measuring device, and accurately determine the position information of the candidate door in the area; the camera device then acquires an image containing the candidate door, whereupon the candidate door is determined to be a physical door, and the cleaning area of the cleaning robot is divided according to the physical door and its position information to restrict the walking range of the cleaning robot. Since this application divides the cleaning area directly according to the physical door after obtaining relatively accurate position information about the physical door, it can divide the cleaning area more accurately and reasonably, which accords with the user's usual area division and can improve the human-computer interaction of the cleaning robot running this method.
  • FIG. 12 shows a schematic diagram of the composition of the navigation system of the mobile robot of this application in a specific embodiment.
  • the navigation system 30 of the mobile robot includes a measurement device 31, a camera device 32 and a processing device 33.
  • the mobile robots include, but are not limited to: family companion mobile robots, cleaning robots, patrol mobile robots, glass cleaning robots, and the like.
  • the measuring device 31 is provided in the mobile robot, and is used to measure the position information of the obstacle relative to the mobile robot in the area where the mobile robot is located.
  • the measuring device may be installed on the body side of the mobile robot (embedded on the body side of the mobile robot), and the measuring device 31 may be, for example, a scanning laser or a TOF sensor.
  • the scanning laser includes an angle sensing device and a distance measuring sensor; the angle information corresponding to the distance information measured by the distance measuring sensor is obtained through the angle sensing device, and the distance from the obstacle measurement point to the distance measuring sensor at the current angle of the scanning laser is measured by laser or infrared.
  • the scanning laser is a laser that changes direction, starting point or pattern of propagation with time relative to a fixed frame of reference.
  • the scanning laser is based on the principle of laser distance measurement, which forms a two-dimensional scanning surface through a rotatable optical component (laser transmitter) to achieve area scanning and profile measurement functions.
  • the ranging principle of a scanning laser includes: a laser transmitter emits a laser pulse wave; when the laser wave hits an object, part of the energy returns; when the laser receiver receives the returned laser wave and the energy of the returned wave is sufficient to trigger the threshold, the scanning laser calculates its distance to the object.
  • the scanning laser continuously emits laser pulse waves.
  • the laser pulse waves hit a high-speed rotating mirror surface, which reflects them in all directions to form a two-dimensional area scan.
  • the scanning of this two-dimensional area can, for example, realize the following two functions: 1) Set protection areas of different shapes within the scanning range of the scanning laser, and send out an alarm signal when an object enters the area; 2) Scanning the laser Within the range, the scanning laser outputs the distance of each obstacle measurement point. According to this distance information, the outline and coordinate positioning of the object can be calculated.
  • the TOF measuring device 31 is based on TOF technology.
  • TOF technology is one of the optical non-contact three-dimensional depth measurement and perception methods. It continuously sends light pulses to the target, uses a sensor to receive the light returned from the object, and detects the flight (round-trip) time of these transmitted and received light pulses to obtain the target distance.
  • the irradiation unit of a TOF device emits light only after high-frequency modulation; for example, an LED or laser (including a laser diode or a VCSEL, Vertical Cavity Surface Emitting Laser) is used to emit high-performance pulsed light. The pulse frequency can reach about 100MHz, and infrared light is mainly used.
  • the wavelength of the lighting module is generally in the infrared band, and high frequency modulation is required.
  • the TOF photosensitive module is similar to an ordinary mobile phone camera module and is composed of a chip, a lens, a circuit board, and other components. Each pixel of the TOF photosensitive chip records the specific phase of the emitted light wave between the camera and the object; a data processing unit extracts the phase difference and calculates the depth information by formula.
  • the TOF measuring device 31 has a small size and can directly output the depth data of the detected object, and the depth calculation result of the TOF measuring device 31 is not affected by the grayscale and features of the surface of the object, and can perform three-dimensional detection very accurately.
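The TOF ranging relation itself is simple: the light pulse travels to the target and back, so the one-way distance is half the speed of light multiplied by the round-trip time. A minimal sketch:

```python
def tof_distance(round_trip_time_s, c=299_792_458.0):
    """Distance from time of flight: the light pulse travels to the
    target and back, so the one-way distance is c * t / 2."""
    return c * round_trip_time_s / 2.0
```

For example, a round-trip time of 20 nanoseconds corresponds to a distance of roughly 3 meters; phase-based TOF sensors recover this time indirectly from the phase difference of the modulated light, as described above.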
  • the camera device 32 is set in the mobile robot and is used to obtain an image containing the candidate recognition object; the camera device 32 includes but is not limited to any of a fisheye camera module and a wide-angle (or non-wide-angle) camera module.
  • the mobile robot includes at least one camera device 32.
  • the camera device 32 captures a physical object in the field of view at the location of the mobile robot and projects it onto the traveling plane of the mobile robot to obtain a projected image.
  • a mobile robot includes a camera device, which is arranged on the top, shoulder or back of the mobile robot, and the main optical axis is perpendicular to the traveling plane of the mobile robot.
  • a mobile robot includes a plurality of camera devices 32, and the main optical axis of one camera device 32 is perpendicular to the traveling plane of the mobile robot.
  • the projection image formed by projecting the image captured by the imaging device 32 arranged in the above manner onto the traveling plane of the mobile robot is equivalent to the vertical projection of the captured image on the traveling plane; for example, the camera device 32 is embedded in the mobile robot with its main optical axis perpendicular to the traveling plane of the mobile robot.
  • the processing device 33 is connected to the measuring device 31 and the camera device 32.
  • the processing device 33 is an electronic device capable of performing numerical operations, logical operations, and data analysis, including but not limited to a CPU, GPU, or FPGA, together with volatile memory used to temporarily store intermediate data generated during operation.
  • the processing device 33 is used to run at least one program to execute the navigation method of the mobile robot. For the navigation method of the mobile robot, refer to FIG. 1 and the related description about FIG. 1, which will not be repeated here.
  • the mobile robot 40 includes a measuring device 41, an imaging device 42, a first processing device 43, a moving device 44, and a second processing device 45.
  • the measuring device 41 is provided in the mobile robot, and is used to measure the position information of the obstacle relative to the mobile robot in the area where the mobile robot is located.
  • the measuring device 41 can be installed on the body side of the mobile robot (embedded on the body side of the mobile robot), and the measuring device 41 can be, for example, a scanning laser or a TOF sensor.
  • the camera device 42 is set in the mobile robot and is used to obtain an image containing the candidate recognition object; the camera device 42 includes but is not limited to any of a fisheye camera module and a wide-angle (or non-wide-angle) camera module.
  • the mobile robot includes at least one camera 42.
  • the camera device 42 captures a physical object in the field of view at the location of the mobile robot and projects it onto the traveling plane of the mobile robot to obtain a projected image.
  • a mobile robot includes a camera device, which is arranged on the top, shoulder or back of the mobile robot, and the main optical axis is perpendicular to the traveling plane of the mobile robot.
  • a mobile robot includes a plurality of camera devices 42, wherein the main optical axis of one camera device 42 is perpendicular to the traveling plane of the mobile robot.
  • the projection image formed by projecting the image captured by the imaging device 42 arranged in the above manner onto the traveling plane of the mobile robot is equivalent to the vertical projection of the captured image on the traveling plane; for example, the camera device 42 is embedded in the mobile robot with its main optical axis perpendicular to the traveling plane of the mobile robot.
  • the first processing device 43 is connected to the measuring device 41 and the camera 42.
  • the first processing device 43 is an electronic device capable of performing numerical operations, logical operations, and data analysis. It includes but is not limited to: CPU, GPU, FPGA, etc., and volatile memory used to temporarily store intermediate data generated during the operation.
  • the first processing device 43 is configured to run at least one program to execute the navigation method of the mobile robot to generate a navigation route. For the navigation method of the mobile robot, refer to FIG. 1 and the related description about FIG. 1, which will not be repeated here.
  • the mobile device 44 is provided on the mobile robot for controlling the position and posture of the mobile robot; the mobile device 44 may include a walking mechanism and a walking driving mechanism, wherein the walking mechanism may be set at At the bottom of the robot body, the walking driving mechanism is built in the robot body.
  • the walking mechanism may, for example, include a combination of two straight walking wheels and at least one auxiliary steering wheel; the two straight walking wheels are respectively provided on opposite sides of the bottom of the robot body and can be driven independently by two corresponding walking driving mechanisms, that is, the left straight walking wheel is driven by the left walking driving mechanism and the right straight walking wheel is driven by the right walking driving mechanism.
  • the universal walking wheel or the straight walking wheel may have a biased drop-down suspension system, in which the wheel is fastened in a movable manner, for example rotatably mounted on the robot body, and receives a spring bias directed downward and away from the robot body.
  • the spring bias allows the universal traveling wheel or the straight traveling wheel to maintain contact and traction with the ground with a certain ground force.
  • the traveling driving mechanism may include a driving motor, and the traveling wheels in the traveling mechanism can be driven to move by using the driving motor.
  • the drive motor can be, for example, a reversible drive motor, and a speed change mechanism can also be provided between the drive motor and the axle of the walking wheel.
  • the walking driving mechanism can be detachably installed on the robot body, which is convenient for disassembly, assembly and maintenance.
  • the second processing device 45 is connected to the first processing device 43 and the moving device 44, and is used to run at least one program that controls the moving device 44 to adjust its position and posture based on the navigation route provided by the first processing device 43, so that the mobile robot moves autonomously along the navigation route.
  • the second processing device 45 is, for example, a control circuit that controls the operation of the driving motor (motor) of the mobile device 44.
  • after receiving the navigation route sent by the first processing device 43, the second processing device 45 sends a driving command to the driving motor to control the moving device to adjust its position and posture and to move several unit grids according to a preset grid map, so that the mobile robot moves along the navigation route.
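The grid-based movement just described can be sketched as follows; this is only an illustration under assumed conventions (a 2-D integer grid and four compass drive commands), not the application's actual control code:

```python
# Sketch: advance a robot pose cell-by-cell along a grid-map navigation route.
# The route, grid resolution, and command names are hypothetical assumptions.
from typing import List, Tuple

Cell = Tuple[int, int]

def follow_route(start: Cell, route: List[Cell]) -> List[str]:
    """Emit one drive command per unit-grid step; return the command log."""
    commands = []
    pos = start
    for target in route:
        dx, dy = target[0] - pos[0], target[1] - pos[1]
        # "Move several unit grids": each step is a single-cell displacement.
        assert abs(dx) + abs(dy) == 1, "route must advance one cell at a time"
        commands.append({(1, 0): "east", (-1, 0): "west",
                         (0, 1): "north", (0, -1): "south"}[(dx, dy)])
        pos = target
    return commands

log = follow_route((0, 0), [(1, 0), (1, 1), (1, 2)])
```

A real second processing device would translate each step into motor driving commands rather than strings; the log here just makes the per-cell stepping visible.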
  • FIG. 14 shows a schematic diagram of the composition of the system for dividing a clean area of this application in a specific embodiment.
  • the system 50 for dividing a cleaning area is used for a cleaning robot, and the system 50 for dividing a cleaning area includes a measuring device 51, a camera 52 and a processing device 53.
  • the measuring device 51 is provided in the cleaning robot and is used to measure the position information of the obstacle relative to the cleaning robot in the area where the cleaning robot is located.
  • the measuring device 51 may be installed on the body side of the cleaning robot (embedded on the body side of the cleaning robot), and the measuring device 51 may be, for example, a scanning laser or a TOF sensor.
  • the camera device 52 is arranged in the cleaning robot and is used to obtain images including candidate doors; the camera device 52 includes but is not limited to any one of a fisheye camera module and a wide-angle (or non-wide-angle) camera module.
  • the cleaning robot includes at least one camera device 52.
  • the camera device 52 captures a solid object in the field of view at the location of the cleaning robot and projects it onto the traveling plane of the cleaning robot to obtain a projected image.
  • the cleaning robot includes a camera device 52, which is arranged on the top, shoulder or back of the cleaning robot, and the main optical axis is perpendicular to the traveling plane of the cleaning robot.
  • the cleaning robot includes a plurality of camera devices 52, and the main optical axis of one camera device 52 is perpendicular to the traveling plane of the cleaning robot.
  • the projection image formed by projecting the image captured by the imaging device 52 arranged in the above manner onto the traveling plane of the cleaning robot is equivalent to the vertical projection of the captured image on the traveling plane; for example, the camera device 52 is embedded in the cleaning robot with its main optical axis perpendicular to the traveling plane of the cleaning robot.
  • the processing device 53 is connected to the measuring device 51 and the camera device 52.
  • the processing device 53 is an electronic device capable of performing numerical operations, logical operations, and data analysis, including but not limited to a CPU, GPU, or FPGA, together with volatile memory used to temporarily store intermediate data generated during operation.
  • the processing device 53 is configured to run at least one program to execute the method of dividing a clean area. Refer to FIG. 10 and the related description about FIG. 10 for the method of dividing the clean area, and details are not repeated here.
  • FIG. 15 shows a schematic diagram of the composition of the cleaning robot in a specific embodiment of the present application.
  • the cleaning robot 60 includes a measuring device 61, an imaging device 62, a first processing device 63, a moving device 64, a cleaning device 65, and a second processing device 66.
  • the measuring device 61 is provided in the cleaning robot, and is used to measure the position information of the obstacle relative to the cleaning robot in the area where the cleaning robot is located.
  • the measuring device 61 may be installed on the body side of the cleaning robot (embedded on the body side of the cleaning robot), and the measuring device 61 may be, for example, a scanning laser or a TOF sensor.
  • the camera device 62 is provided in the cleaning robot and is used to obtain an image containing the candidate recognition object; the camera device 62 includes but is not limited to any of a fisheye camera module and a wide-angle (or non-wide-angle) camera module.
  • the cleaning robot includes at least one camera device 62.
  • the camera device 62 captures a physical object in the field of view at the location of the cleaning robot and projects it onto the traveling plane of the cleaning robot to obtain a projected image.
  • the cleaning robot includes a camera, which is arranged on the top, shoulder or back of the cleaning robot, and the main optical axis is perpendicular to the traveling plane of the cleaning robot.
  • the cleaning robot includes a plurality of camera devices 62, and the main optical axis of one camera device 62 is perpendicular to the traveling plane of the cleaning robot.
  • the projection image formed by projecting the image captured by the imaging device 62 arranged in the above manner onto the traveling plane of the cleaning robot is equivalent to the vertical projection of the captured image on the traveling plane; for example, the camera device 62 is embedded in the cleaning robot with its main optical axis perpendicular to the traveling plane of the cleaning robot.
  • the first processing device 63 is connected to the measuring device 61 and the camera 62.
  • the first processing device 63 is an electronic device capable of performing numerical operations, logical operations, and data analysis, including but not limited to: CPU, GPU, FPGA, etc., and volatile memory used to temporarily store intermediate data generated during the operation.
  • the first processing device 63 is configured to run at least one program to execute the method of dividing a clean area, and use the obtained clean area to generate a navigation route. Refer to FIG. 10 and the related description about FIG. 10 for the method of dividing the clean area, and details are not repeated here.
  • the moving device 64 is provided on the cleaning robot for controlled adjustment of the position and posture of the cleaning robot; the moving device 64 may include a walking mechanism and a walking driving mechanism, wherein the walking mechanism may be set at At the bottom of the robot body, the walking driving mechanism is built in the robot body.
  • the walking mechanism may, for example, include a combination of two straight walking wheels and at least one auxiliary steering wheel; the two straight walking wheels are respectively provided on opposite sides of the bottom of the robot body and can be driven independently by two corresponding walking driving mechanisms, that is, the left straight walking wheel is driven by the left walking driving mechanism and the right straight walking wheel is driven by the right walking driving mechanism.
  • the universal walking wheel or the straight walking wheel may have a biased drop-down suspension system, in which the wheel is fastened in a movable manner, for example rotatably mounted on the robot body, and receives a spring bias directed downward and away from the robot body.
  • the spring bias allows the universal traveling wheel or the straight traveling wheel to maintain contact and traction with the ground with a certain ground force.
  • the traveling driving mechanism may include a driving motor, and the traveling wheels in the traveling mechanism can be driven to move by using the driving motor.
  • the drive motor can be, for example, a reversible drive motor, and a speed change mechanism can also be provided between the drive motor and the axle of the walking wheel.
  • the walking driving mechanism can be detachably installed on the robot body, which is convenient for disassembly, assembly and maintenance.
  • the cleaning device 65 may at least include a cleaning component and a dust suction component.
  • the cleaning assembly may include side cleaning brushes located at the bottom of the housing of the cleaning robot and a side brush motor for controlling the side cleaning brushes; the number of side cleaning brushes may be two, symmetrically arranged on opposite sides of the rear end of the housing, and each side cleaning brush may be a rotating side brush that rotates under the control of the side brush motor.
  • the dust collection assembly may include a dust collection chamber and a vacuum cleaner, wherein the dust collection chamber is placed in the housing, the air outlet of the vacuum cleaner is in communication with the dust collection chamber, and the air inlet of the vacuum cleaner is arranged at The bottom of the housing.
  • the second processing device 66 is connected to the first processing device 63 and controls the cleaning device 65 and the moving device 64 respectively; it is used to run at least one program that, based on the navigation route provided by the first processing device 63, controls the moving device 64 to adjust its position and posture so as to move autonomously along the navigation route, and controls the cleaning device 65 to perform cleaning operations.
  • after the second processing device 66 receives the navigation route sent by the first processing device 63, it sends a driving command to the driving motor of the moving device 64 to control the moving device to adjust its position and posture and to move several unit grids according to the preset grid map, so that the cleaning robot moves along the navigation route; while the cleaning robot is moving, the second processing device 66 sends a control command to the side brush motor so that it drives the side cleaning brush to rotate, and controls the vacuum cleaner to start working.
  • FIG. 16 shows a schematic diagram of the composition of the data processing device of this application in a specific embodiment.
  • the data processing device 70 is used for a mobile robot, and the data processing device 70 includes a data interface 71, a storage unit 72 and a processing unit 73.
  • the data interface 71 is used to connect a camera device and a measuring device of the mobile robot; the camera device captures a physical object in the field of view at the location of the mobile robot and projects it onto the traveling plane of the mobile robot to obtain a projected image .
  • a mobile robot includes a camera device, which is arranged on the top, shoulder or back of the mobile robot, and the main optical axis is perpendicular to the traveling plane of the mobile robot.
  • a mobile robot includes a plurality of camera devices, and the main optical axis of one camera device is perpendicular to the traveling plane of the mobile robot.
  • the storage unit 72 is used to store at least one program
  • the processing unit 73 is connected to the storage unit 72 and the data interface 71, and is used to obtain, through the data interface 71, the position information provided by the measuring device and the image taken by the camera device, and to perform the navigation method or the method of dividing a cleaning area.
  • the navigation method refer to FIG. 1 and related descriptions about FIG. 1, and for the method of dividing a clean area, refer to FIG. 10 and related descriptions about FIG. 10, which will not be repeated here.
  • a computer-readable storage medium stores at least one program which, when called, executes the navigation method or the method of dividing a cleaning area.
  • the navigation method refer to FIG. 1 and related descriptions about FIG. 1, and for the method of dividing a clean area, refer to FIG. 10 and related descriptions about FIG. 10, which will not be repeated here.
  • the storage medium stores at least one program, and the program executes any of the aforementioned navigation methods when called.
  • the technical solution of the present application, in essence or in the part that contributes over the prior art, can be embodied in the form of a software product.
  • the computer software product may include one or more machine-executable instructions stored on a machine-readable medium; when these instructions are executed by one or more machines, such as a computer, a computer network, or other electronic devices, they can cause the one or more machines to perform operations according to the embodiments of the present application, for example, the steps in the robot positioning method.
  • machine-readable media may include, but are not limited to, floppy disks, optical disks, CD-ROM (compact disk read-only memory), magneto-optical disks, ROM (read-only memory), RAM (random access memory), EPROM (erasable programmable read-only memory), EEPROM (electrically erasable programmable read-only memory), magnetic or optical cards, flash memory, or other types of media/machine-readable media suitable for storing machine-executable instructions.
  • the storage medium may be located in a robot or on a third-party server, such as a server that provides an application store. The specific application store is not restricted; examples include the Huawei App Store and the Apple App Store.
  • this application can be used in many general or special computing system environments or configurations, for example: personal computers, server computers, handheld or portable devices, tablet devices, multi-processor systems, microprocessor-based systems, set-top boxes, programmable consumer electronic devices, network PCs, minicomputers, mainframe computers, and distributed computing environments including any of the above systems or devices.
  • program modules include routines, programs, objects, components, data structures, etc. that perform specific tasks or implement specific abstract data types.
  • This application can also be practiced in distributed computing environments. In these distributed computing environments, remote processing devices connected through a communication network perform tasks.
  • program modules can be located in local and remote computer storage media including storage devices.

Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

A method and a system for navigating and dividing a cleaning region, a mobile robot, and a cleaning robot. The method comprises: measuring, on the basis of a range sensing device and an angle sensing device or a TOF measurement device and with respect to a mobile robot, an angle and a distance to an obstacle in a region in which the mobile robot is located, accurately determining position information of a candidate identification object in the region, enabling a camera device to acquire an image containing the candidate identification object, accordingly determining physical object information corresponding to the candidate identification object, and determining, according to the physical object information and position information thereof, a navigation route for the mobile robot in the region. A cleaning robot directly performs, after sufficiently accurate position information related to physical object information is acquired, precision navigation route planning and region division according to the physical object information, thereby enhancing accuracy of the navigation route planning and region division, and improving human machine interaction of mobile robots.

Description

Method and system for navigating and dividing a cleaning area, mobile robot and cleaning robot

Technical field

This application relates to the field of mobile communication technology, and in particular to a method and system for navigating and dividing a cleaning area, and to a mobile robot and a cleaning robot.

Background art
A mobile robot is a machine that performs work automatically. It can accept human commands, run pre-programmed instructions, or act according to principles formulated with artificial-intelligence technology. Such mobile robots can be used indoors or outdoors, in industry or in the home; they can replace security patrols or manual floor cleaning, and can also serve as family companions, office assistants, and the like.
Based on the visual information provided by visual sensors combined with the movement data provided by other motion sensors, a mobile robot can, on the one hand, construct map data of the site where it is located, and on the other hand, provide route planning, route adjustment and navigation services based on the constructed map data, which makes the movement of the mobile robot more efficient. However, in practical applications, the positions of physical objects are not marked on the constructed map; therefore, the mobile robot cannot accurately locate the physical objects in the scene, and thus cannot achieve precise navigation route planning and area division based on the physical objects in the scene.
Summary of the invention
In view of the above shortcomings of the prior art, the purpose of this application is to provide a method and system for navigating and dividing a cleaning area, and a mobile robot and a cleaning robot, so as to solve the problem in the prior art that a mobile robot cannot accurately locate the physical objects in a scene and therefore cannot achieve precise navigation route planning and area division based on those objects.
To achieve the above and other related purposes, the first aspect of the present application provides a navigation method for a mobile robot. The mobile robot includes a measuring device and a camera device. The method includes the following steps: causing the measuring device to measure the position information of obstacles relative to the mobile robot in the area where the mobile robot is located, and determining the position information occupied by candidate recognition objects in the area; according to the determined position information occupied by a candidate recognition object, causing the camera device to obtain an image containing the candidate recognition object, and determining the entity object information corresponding to the candidate recognition object; and determining the navigation route of the mobile robot in the area according to the entity object information and its position information.
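The three steps of the first aspect can be summarized as a pipeline; the sketch below is only a hypothetical outline, with placeholder callables standing in for the measuring device, the camera device, the recognizer and the planner:

```python
# Hypothetical outline of the claimed navigation pipeline (placeholders only).
from typing import Dict, List, Tuple

def navigate(measure, locate_candidates, capture, identify, plan) -> List[Tuple[float, float]]:
    """measure -> candidate positions -> image -> entity info -> route."""
    obstacle_points = measure()                      # step 1: obstacle position info
    candidates = locate_candidates(obstacle_points)  # positions of candidate objects
    entities: Dict[str, object] = {}
    for cand in candidates:                          # step 2: image + entity info
        image = capture(cand["position"])
        entities[cand["id"]] = identify(image, cand["position"])
    return plan(entities)                            # step 3: navigation route

# With stub callables the pipeline runs end to end:
route = navigate(
    measure=lambda: [(1.0, 2.0)],
    locate_candidates=lambda pts: [{"id": "c1", "position": pts[0]}],
    capture=lambda pos: "image",
    identify=lambda img, pos: {"type": "door", "position": pos},
    plan=lambda ents: [(0.0, 0.0), (1.0, 2.0)],
)
```

Each stub corresponds to one claimed component; a real system would back them with the measuring device, camera device and processing device described in the embodiments.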
In some implementations of the first aspect of the present application, the step of determining the position information occupied by the candidate recognition objects in the area includes: measuring the position information of each obstacle measurement point in the area to obtain a scan contour and the position information it occupies; and dividing the scan contour into a plurality of candidate recognition objects according to the discontinuous parts on the scan contour, and determining the position information occupied by each candidate recognition object.
In some implementations of the first aspect of the present application, the step of obtaining a scan contour and its occupied position information by measuring the position information of each obstacle measurement point in the area includes: based on the area array of position information of the obstacle measurement points measured by the measuring device, fitting the traveling plane of the mobile robot and determining the scan contour on the traveling plane and the position information it occupies; or, based on the line array of position information parallel to the traveling plane measured by the measuring device, determining the scan contour on the traveling plane and the position information it occupies.
In some implementations of the first aspect of the present application, the step of dividing the scan contour into a plurality of candidate recognition objects based on the discontinuous parts on the scan contour includes: based on a gap formed by a discontinuous part on the scan contour, determining the corresponding candidate recognition object to be a first candidate recognition object containing the gap; and determining each continuous part on the scan contour separated by the discontinuous parts to be a second candidate recognition object that obstructs the movement of the mobile robot.
In some implementations of the first aspect of the present application, the step of determining, based on a gap formed by a discontinuous part on the scan contour, the corresponding candidate recognition object to be a first candidate recognition object containing the gap includes: screening the formed gaps according to preset screening conditions, wherein the screening conditions include that the gap lies on the line along which the continuous part on at least one adjacent side is located, and/or a preset gap width threshold; and, based on the screened gaps, determining the corresponding candidate recognition objects to be first candidate recognition objects containing the gaps.
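The contour segmentation and gap screening described in the implementations above can be sketched as follows. This is only an illustration: the point spacing, the 0.5 m gap threshold and the function name are assumptions, and the collinearity condition (the gap lying along the line of an adjacent continuous part) is omitted for brevity.

```python
# Sketch: split a 2-D scan contour into candidate objects at discontinuities,
# then report the widths of the gaps so they can be screened by a threshold.
import math
from typing import List, Tuple

Point = Tuple[float, float]

def split_contour(points: List[Point], gap_thresh: float):
    """Return (continuous segments, widths of the gaps that separate them)."""
    segments, gaps, current = [], [], [points[0]]
    for prev, cur in zip(points, points[1:]):
        d = math.dist(prev, cur)
        if d > gap_thresh:            # discontinuity: a gap in the contour
            segments.append(current)  # close the current continuous part
            gaps.append(d)            # gap width, e.g. for a door-width check
            current = [cur]
        else:
            current.append(cur)
    segments.append(current)
    return segments, gaps

# Hypothetical wall scan with a ~0.9 m opening (a possible doorway).
scan = [(0.0, 2.0), (0.2, 2.0), (0.4, 2.0), (1.3, 2.0), (1.5, 2.0)]
segments, gaps = split_contour(scan, gap_thresh=0.5)
```

Each element of `segments` corresponds to a second candidate recognition object (a continuous obstacle), and each screened gap to a first candidate recognition object such as a door opening.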
In some implementations of the first aspect of the present application, the step of causing the measuring device to measure the position information of obstacles relative to the mobile robot in the area where the mobile robot is located includes: causing the measuring device to measure the position information of obstacles relative to the mobile robot within the field of view of the camera device.
In some implementations of the first aspect of the present application, the step of causing the camera device to obtain an image containing the candidate recognition object according to the determined position information occupied by the candidate recognition object includes: causing the camera device to capture an image of the candidate recognition object projected onto the traveling plane of the mobile robot; or, according to the obtained position information occupied by the candidate recognition object, controlling the mobile robot to move and causing the camera device to capture an image containing the corresponding candidate recognition object.
In some implementations of the first aspect of the present application, the step of determining the entity object information corresponding to the candidate recognition object includes: determining, according to the angle range in the position information occupied by the candidate recognition object, the image area within the corresponding angle range in the image; and performing feature recognition on the image area to determine the entity object information corresponding to the candidate recognition object.
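Mapping the angle range occupied by a candidate recognition object to an image region might, under a simple pinhole-camera assumption, look like the sketch below; the image width, field of view and angle values are hypothetical, not parameters from the application.

```python
# Sketch: map a horizontal angle range (from the measuring device) to a
# pixel-column range in the camera image. Assumes an ideal pinhole model
# with the optical axis at the image centre; all parameters are hypothetical.
import math

def angle_range_to_columns(theta_min, theta_max, img_width, hfov):
    """Angles (rad) relative to the optical axis -> inclusive column range."""
    f = (img_width / 2) / math.tan(hfov / 2)  # focal length in pixels
    cx = img_width / 2
    def col(theta):
        c = int(round(cx + f * math.tan(theta)))
        return max(0, min(img_width - 1, c))  # clamp to the image
    return col(theta_min), col(theta_max)

# Hypothetical: 640-px-wide image, 90-degree horizontal field of view,
# candidate object spanning -10 to +10 degrees around the optical axis.
cols = angle_range_to_columns(math.radians(-10), math.radians(10),
                              640, math.radians(90))
```

Feature recognition would then run only on the column band `cols`, rather than on the whole image, which is the point of restricting the image area by angle range.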
In some implementations of the first aspect of the present application, if the candidate recognition object includes a first candidate recognition object with a gap, then correspondingly, the step of determining the image area within the corresponding angle range in the image according to the angle range in the position information occupied by the candidate recognition object includes: determining at least one angle range based on the position information of the two ends of the candidate recognition object; and, according to the determined angle range, determining from the image the image area used to recognize the entity object information of the corresponding first candidate recognition object.
In some implementations of the first aspect of the present application, the candidate recognition object includes a first candidate recognition object with a gap; correspondingly, the step of determining the entity object information corresponding to the candidate recognition object includes: according to the position information occupied by the first candidate recognition object, identifying in the image at least two characteristic lines that are perpendicular to the traveling plane; and determining, based on the identified characteristic lines, that the first candidate recognition object is entity object information representing a door.
在本申请的第一方面的某些实施方式中，所述确定候选识别对象相应的实体对象信息的步骤包括：基于预设已知的多种实体对象的特征信息，识别所述图像中的候选识别对象的实体对象信息；利用预设的图像识别算法，构建所述图像中候选识别对象与已知的多种实体对象信息的映射关系，以确定候选识别对象所对应的实体对象信息。In some implementations of the first aspect of the present application, the step of determining the entity object information corresponding to the candidate recognition object includes: identifying the entity object information of the candidate recognition object in the image based on preset feature information of a plurality of known entity objects; and constructing, by using a preset image recognition algorithm, a mapping relationship between the candidate recognition object in the image and the plurality of pieces of known entity object information, so as to determine the entity object information corresponding to the candidate recognition object.
在本申请的第一方面的某些实施方式中,还包括:将所确定的实体对象信息及其位置信息标记在用于设置导航路线的地图中。In some implementation manners of the first aspect of the present application, it further includes: marking the determined entity object information and its location information in a map for setting a navigation route.
在本申请的第一方面的某些实施方式中，所述移动机器人为清洁机器人；所述依据实体对象信息及其位置信息确定所述移动机器人在所述区域内的导航路线的步骤包括：依据所述实体对象信息及所述移动机器人所在区域划分移动机器人的清洁区域，并设计在所述行走区域中的导航路线。In some implementations of the first aspect of the present application, the mobile robot is a cleaning robot; the step of determining the navigation route of the mobile robot in the area based on the entity object information and its position information includes: dividing the cleaning area of the mobile robot according to the entity object information and the area where the mobile robot is located, and designing a navigation route in the walking area.
在本申请的第一方面的某些实施方式中，所述清洁区域包括以下任一种：基于所述实体对象信息而确定的房间区域；按照预设区域范围和位于所述区域范围内的实体对象信息所占位置信息而划分的区域。In some implementations of the first aspect of the present application, the cleaning area includes any one of the following: a room area determined based on the entity object information; or an area divided according to a preset area range and the position information occupied by the entity object information located within the area range.
在本申请的第一方面的某些实施方式中，当所确定的实体对象信息包含实体门时，还包括在所述实体门所对应的位置信息处设置虚拟墙的步骤；以便依据所述虚拟墙及所述移动机器人所在区域划分移动机器人的清洁区域，并设计在所述行走区域中的导航路线。In some implementations of the first aspect of the present application, when the determined entity object information includes a physical door, the method further includes the step of setting a virtual wall at the position information corresponding to the physical door, so that the cleaning area of the mobile robot is divided according to the virtual wall and the area where the mobile robot is located, and a navigation route is designed in the walking area.
为实现上述目的及其他相关目的，本申请的第二方面提供一种划分清洁区域的方法，用于清洁机器人，所述清洁机器人包含测量装置和摄像装置，所述方法包括以下步骤：令所述测量装置测量所述清洁机器人所在区域内障碍物相对于清洁机器人的位置信息，并确定在所述区域内的候选门所占的位置信息；根据所确定的候选门所占的位置信息，令所述摄像装置获取包含所述候选门的图像，并确定所述候选门为实体门；依据所述实体门及其位置信息划分所述清洁机器人的清洁区域，以约束所述清洁机器人的行走范围。In order to achieve the foregoing and other related objectives, a second aspect of the present application provides a method for dividing a cleaning area for a cleaning robot, where the cleaning robot includes a measuring device and a camera device, and the method includes the following steps: causing the measuring device to measure the position information of obstacles relative to the cleaning robot in the area where the cleaning robot is located, and determining the position information occupied by candidate doors in the area; causing the camera device, according to the determined position information occupied by a candidate door, to acquire an image containing the candidate door, and determining that the candidate door is a physical door; and dividing the cleaning area of the cleaning robot according to the physical door and its position information, so as to restrict the walking range of the cleaning robot.
在本申请的第二方面的某些实施方式中，所述确定在区域内候选门所占的位置信息的步骤包括：依据测量所述区域内各障碍物测量点的位置信息以获得一个扫描轮廓及其所占位置信息；按照所述扫描轮廓上的不连续部分，确定各候选门所占的位置信息。In some implementations of the second aspect of the present application, the step of determining the position information occupied by the candidate doors in the area includes: obtaining a scan contour and the position information it occupies according to the measured position information of each obstacle measurement point in the area; and determining the position information occupied by each candidate door according to the discontinuous parts on the scan contour.
在本申请的第二方面的某些实施方式中，所述依据测量所述区域内各障碍物测量点的位置信息以获得一个扫描轮廓及其所占位置信息的步骤包括：基于所述测量装置所测得的各障碍物测量点的位置信息面阵列，拟合所述清洁机器人的行进平面，以及确定位于所述行进平面上扫描轮廓及其所占位置信息；基于所述测量装置所测得的平行于行进平面的位置信息线阵列，确定位于所述行进平面上扫描轮廓及其所占位置信息。In some implementations of the second aspect of the present application, the step of obtaining a scan contour and the position information it occupies according to the measured position information of each obstacle measurement point in the area includes: fitting the traveling plane of the cleaning robot based on a plane array of position information of the obstacle measurement points measured by the measuring device, and determining the scan contour on the traveling plane and the position information it occupies; or determining the scan contour on the traveling plane and the position information it occupies based on a line array of position information parallel to the traveling plane measured by the measuring device.
在本申请的第二方面的某些实施方式中，所述基于扫描轮廓上的不连续部分，确定各候选门所占的位置信息的步骤包括：按照预设的筛选条件对由不连续部分所形成的缺口进行筛选，并确定筛选后的缺口属于候选门；其中，所述筛选条件包含：缺口位于与其相邻的至少一侧的连续部分所在沿线上、和/或预设的缺口宽度阈值。In some implementations of the second aspect of the present application, the step of determining the position information occupied by each candidate door based on the discontinuous parts on the scan contour includes: screening the gaps formed by the discontinuous parts according to preset screening conditions, and determining that the screened gaps belong to candidate doors; where the screening conditions include: the gap lies along the line of the continuous part on at least one side adjacent to it, and/or a preset gap width threshold.
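The gap screening described above can be sketched as follows. The contour representation (an angle-ordered list of 2D points), the door-like width range, the sampling step, and the collinearity tolerance are all illustrative assumptions rather than values fixed by the claimed method:

```python
import math

def find_candidate_gaps(points, min_width=0.6, max_width=1.2, step=0.15):
    """Scan an ordered contour (list of (x, y) points, e.g. one per laser
    angle step) and return gaps whose width lies in a door-like range.
    A gap is the segment between two consecutive points whose spacing
    greatly exceeds the normal sampling step."""
    gaps = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        width = math.hypot(x1 - x0, y1 - y0)
        if width > 3 * step and min_width <= width <= max_width:
            gaps.append(((x0, y0), (x1, y1), width))
    return gaps

def on_wall_line(prev_pt, p0, p1, tol=0.1):
    """Second screening condition: check whether the gap segment (p0 -> p1)
    lies roughly along the line of the adjacent continuous wall segment
    (prev_pt -> p0), using |sin(angle)| between the two segments."""
    vx, vy = p0[0] - prev_pt[0], p0[1] - prev_pt[1]
    wx, wy = p1[0] - p0[0], p1[1] - p0[1]
    cross = vx * wy - vy * wx
    norm = math.hypot(vx, vy) * math.hypot(wx, wy)
    return norm > 0 and abs(cross) / norm < tol
```

A gap that passes both the width test and the collinearity test would be retained as a candidate door for the subsequent image-based confirmation step.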
在本申请的第二方面的某些实施方式中，所述令测量装置测量所述清洁机器人所在区域内障碍物相对于清洁机器人的位置信息的步骤包括：令所述测量装置测量所述摄像装置的视场范围内障碍物相对于清洁机器人的位置信息。In some implementations of the second aspect of the present application, the step of causing the measuring device to measure the position information of obstacles relative to the cleaning robot in the area where the cleaning robot is located includes: causing the measuring device to measure the position information of obstacles relative to the cleaning robot within the field of view of the camera device.
在本申请的第二方面的某些实施方式中，所述根据所确定的候选门所占的位置信息，令所述摄像装置获取包含所述候选门的图像的步骤包括：令所述摄像装置摄取所述候选门投影至所述清洁机器人的行进平面的图像；或者根据所得到的候选门所占位置信息，控制所述清洁机器人移动，并令所述摄像装置摄取包含相应候选门的图像。In some implementations of the second aspect of the present application, the step of causing the camera device to acquire an image containing the candidate door according to the determined position information occupied by the candidate door includes: causing the camera device to capture an image of the candidate door projected onto the traveling plane of the cleaning robot; or controlling the cleaning robot to move according to the obtained position information occupied by the candidate door, and causing the camera device to capture an image containing the corresponding candidate door.
在本申请的第二方面的某些实施方式中，所述确定候选门为实体门的步骤包括：根据所述候选门所占位置信息中的角度范围，确定所述图像中对应角度范围内的图像区域；对所述图像区域进行特征识别以确定所述候选门为实体门。In some implementations of the second aspect of the present application, the step of determining that the candidate door is a physical door includes: determining an image area within a corresponding angle range in the image according to the angle range in the position information occupied by the candidate door; and performing feature recognition on the image area to determine that the candidate door is a physical door.
在本申请的第二方面的某些实施方式中，所述根据候选门所占位置信息中的角度范围，确定所述图像中对应角度范围内的图像区域的步骤包括：基于所述候选门两端的位置信息确定至少一个角度范围；按照所确定的角度范围从所述图像中确定用于识别该候选门是否为实体门的图像区域。In some implementations of the second aspect of the present application, the step of determining an image area within a corresponding angle range in the image according to the angle range in the position information occupied by the candidate door includes: determining at least one angle range based on the position information of the two ends of the candidate door; and determining, from the image according to the determined angle range, an image area used to identify whether the candidate door is a physical door.
在本申请的第二方面的某些实施方式中，所述确定候选门为实体门的步骤包括：在所述图像中识别出至少两条用于表示垂直于所述行进平面的特征线，并基于所识别出的特征线确定所述候选门为实体门。In some implementations of the second aspect of the present application, the step of determining that the candidate door is a physical door includes: identifying, in the image, at least two characteristic lines perpendicular to the traveling plane, and determining, based on the identified characteristic lines, that the candidate door is a physical door.
在本申请的第二方面的某些实施方式中,还包括:将所确定的实体门及其位置信息标记在用于设置清洁路线的地图中。In some implementations of the second aspect of the present application, it further includes: marking the determined physical door and its location information in a map for setting a cleaning route.
在本申请的第二方面的某些实施方式中，所述依据实体门及其位置信息划分所述清洁机器人的清洁区域的步骤包括：在所述实体门处设置虚拟墙；以及依据所述虚拟墙及所述清洁机器人所在区域划分清洁机器人的清洁区域。In some implementations of the second aspect of the present application, the step of dividing the cleaning area of the cleaning robot according to the physical door and its position information includes: setting a virtual wall at the physical door; and dividing the cleaning area of the cleaning robot according to the virtual wall and the area where the cleaning robot is located.
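Dividing the cleaning area with a virtual wall placed at a physical door can be sketched on an occupancy grid. The grid representation and the 4-connected flood-fill labeling below are illustrative assumptions about one possible implementation, not the claimed one:

```python
from collections import deque

def add_virtual_wall(grid, cells):
    """Mark the door cells as impassable (1) in an occupancy grid (0 = free)."""
    for r, c in cells:
        grid[r][c] = 1

def label_regions(grid):
    """4-connected flood fill; returns a grid of region ids (-1 = blocked).
    Each distinct id corresponds to one cleaning area bounded by walls
    (real or virtual)."""
    rows, cols = len(grid), len(grid[0])
    labels = [[-1] * cols for _ in range(rows)]
    region = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 0 and labels[r][c] == -1:
                queue = deque([(r, c)])
                labels[r][c] = region
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == 0 and labels[ny][nx] == -1):
                            labels[ny][nx] = region
                            queue.append((ny, nx))
                region += 1
    return labels
```

After labeling, a navigation (cleaning) route can be planned separately inside each region, which restricts the robot's walking range to one room at a time.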
在本申请的第二方面的某些实施方式中，所述清洁区域包括以下任一种：基于所述实体门而确定的房间区域；按照预设区域范围和位于所述区域范围内的实体门所占位置信息而划分的区域。In some implementations of the second aspect of the present application, the cleaning area includes any one of the following: a room area determined based on the physical door; or an area divided according to a preset area range and the position information occupied by physical doors located within the area range.
为实现上述目的及其他相关目的，本申请的第三方面提供一种移动机器人的导航系统，其特征在于，包括：测量装置，设置于所述移动机器人，用于测量所述移动机器人所在区域内障碍物相对于移动机器人的位置信息；摄像装置，设置于所述移动机器人，用于获取包含所述候选识别对象的图像；处理装置，连接所述测量装置和摄像装置，用于运行至少一程序，以执行如上任一所述的导航方法。In order to achieve the foregoing and other related objectives, a third aspect of the present application provides a navigation system for a mobile robot, including: a measuring device, provided on the mobile robot and used to measure the position information of obstacles relative to the mobile robot in the area where the mobile robot is located; a camera device, provided on the mobile robot and used to acquire an image containing the candidate recognition object; and a processing device, connected to the measuring device and the camera device and used to run at least one program to perform the navigation method described in any one of the above.
在本申请的第三方面的某些实施方式中,所述摄像装置嵌设于所述移动机器人,且主光轴垂直于所述移动机器人的行进平面。In some implementations of the third aspect of the present application, the camera device is embedded in the mobile robot, and the main optical axis is perpendicular to the traveling plane of the mobile robot.
在本申请的第三方面的某些实施方式中,所述测量装置嵌设于所述移动机器人的体侧,所述测量装置包括:测距传感装置和角度传感装置,或者TOF测量装置。In some embodiments of the third aspect of the present application, the measuring device is embedded on the body side of the mobile robot, and the measuring device includes: a distance measuring sensor device and an angle sensor device, or a TOF measuring device .
为实现上述目的及其他相关目的，本申请的第四方面提供一种移动机器人，包括：测量装置，设置于所述移动机器人，用于测量所述移动机器人所在区域内障碍物相对于移动机器人的位置信息；摄像装置，设置于所述移动机器人，用于获取包含所述候选识别对象的图像；第一处理装置，连接所述测量装置和摄像装置，用于运行至少一程序，以执行如上任一所述的导航方法，以生成导航路线；移动装置，设置于所述移动机器人，用于受控地调整所述移动机器人的位置和姿态；第二处理装置，连接于所述第一处理装置和移动装置，用于运行至少一程序，以基于所述第一处理装置所提供的导航路线，控制所述移动装置调整位置和姿态，以沿所述导航路线进行自主移动。In order to achieve the foregoing and other related objectives, a fourth aspect of the present application provides a mobile robot, including: a measuring device, provided on the mobile robot and used to measure the position information of obstacles relative to the mobile robot in the area where the mobile robot is located; a camera device, provided on the mobile robot and used to acquire an image containing the candidate recognition object; a first processing device, connected to the measuring device and the camera device and used to run at least one program to perform the navigation method described in any one of the above so as to generate a navigation route; a moving device, provided on the mobile robot and used to adjust the position and posture of the mobile robot in a controlled manner; and a second processing device, connected to the first processing device and the moving device and used to run at least one program to control, based on the navigation route provided by the first processing device, the moving device to adjust the position and posture so as to move autonomously along the navigation route.
在本申请的第四方面的某些实施方式中,所述摄像装置嵌设于所述移动机器人,且主光轴垂直于所述移动机器人的行进平面。In some implementations of the fourth aspect of the present application, the camera device is embedded in the mobile robot, and the main optical axis is perpendicular to the traveling plane of the mobile robot.
在本申请的第四方面的某些实施方式中,所述测量装置嵌设于所述移动机器人的体侧,所述测量装置包括:测距传感装置和角度传感装置,或者TOF测量装置。In some embodiments of the fourth aspect of the present application, the measuring device is embedded on the body side of the mobile robot, and the measuring device includes: a distance measuring sensor device and an angle sensor device, or a TOF measuring device .
为实现上述目的及其他相关目的，本申请的第五方面提供一种划分清洁区域的系统，用于清洁机器人，包括：测量装置，设置于所述清洁机器人，用于测量所述清洁机器人所在区域内障碍物相对于清洁机器人的位置信息；摄像装置，设置于所述清洁机器人，用于获取包含所述候选门的图像；处理装置，连接所述测量装置和摄像装置，用于运行至少一程序，以执行如上任一所述的划分清洁区域的方法，以便在所生成的清洁区域内设置导航路线。In order to achieve the foregoing and other related objectives, a fifth aspect of the present application provides a system for dividing a cleaning area for a cleaning robot, including: a measuring device, provided on the cleaning robot and used to measure the position information of obstacles relative to the cleaning robot in the area where the cleaning robot is located; a camera device, provided on the cleaning robot and used to acquire an image containing the candidate door; and a processing device, connected to the measuring device and the camera device and used to run at least one program to perform the method for dividing a cleaning area described in any one of the above, so as to set a navigation route in the generated cleaning area.
在本申请的第五方面的某些实施方式中,所述摄像装置嵌设于所述清洁机器人,且主光轴垂直于所述清洁机器人的行进平面。In some implementations of the fifth aspect of the present application, the camera device is embedded in the cleaning robot, and the main optical axis is perpendicular to the traveling plane of the cleaning robot.
在本申请的第五方面的某些实施方式中，所述测量装置嵌设于所述清洁机器人的体侧，所述测量装置包括：测距传感装置和角度传感装置，或者TOF测量装置。In some implementations of the fifth aspect of the present application, the measuring device is embedded on the body side of the cleaning robot, and the measuring device includes: a distance measuring sensor device and an angle sensor device, or a TOF measuring device.
为实现上述目的及其他相关目的，本申请的第六方面提供一种清洁机器人，包括：测量装置，设置于所述清洁机器人，用于测量所述清洁机器人所在区域内障碍物相对于清洁机器人的位置信息；摄像装置，设置于所述清洁机器人，用于获取包含所述候选识别对象的图像；第一处理装置，连接所述测量装置和摄像装置，用于运行至少一程序，以执行如上任一所述的划分清洁区域的方法，并利用所得到的清洁区域生成导航路线；移动装置，设置于所述清洁机器人，用于受控地调整所述清洁机器人的位置和姿态；清洁装置，设置于所述清洁机器人，用于在清洁机器人移动期间清洁所途经的行进平面；第二处理装置，连接于所述第一处理装置并分别控制清洁装置和移动装置，用于运行至少一程序，以基于所述第一处理装置所提供的导航路线，控制所述移动装置调整位置和姿态以沿所述导航路线进行自主移动，以及控制清洁装置执行清洁操作。In order to achieve the foregoing and other related objectives, a sixth aspect of the present application provides a cleaning robot, including: a measuring device, provided on the cleaning robot and used to measure the position information of obstacles relative to the cleaning robot in the area where the cleaning robot is located; a camera device, provided on the cleaning robot and used to acquire an image containing the candidate recognition object; a first processing device, connected to the measuring device and the camera device and used to run at least one program to perform the method for dividing a cleaning area described in any one of the above and to generate a navigation route by using the obtained cleaning area; a moving device, provided on the cleaning robot and used to adjust the position and posture of the cleaning robot in a controlled manner; a cleaning device, provided on the cleaning robot and used to clean the traveling plane passed during the movement of the cleaning robot; and a second processing device, connected to the first processing device and respectively controlling the cleaning device and the moving device, used to run at least one program to control, based on the navigation route provided by the first processing device, the moving device to adjust the position and posture so as to move autonomously along the navigation route, and to control the cleaning device to perform a cleaning operation.
在本申请的第六方面的某些实施方式中,所述摄像装置嵌设于所述清洁机器人,且主光轴垂直于所述清洁机器人的行进平面。In some embodiments of the sixth aspect of the present application, the camera device is embedded in the cleaning robot, and the main optical axis is perpendicular to the traveling plane of the cleaning robot.
在本申请的第六方面的某些实施方式中,所述测量装置嵌设于所述清洁机器人的体侧,所述测量装置包括:测距传感装置和角度传感装置,或者TOF测量装置。In some implementations of the sixth aspect of the present application, the measuring device is embedded on the body side of the cleaning robot, and the measuring device includes: a distance measuring sensor device and an angle sensor device, or a TOF measuring device .
为实现上述目的及其他相关目的，本申请的第七方面提供一种数据处理装置，用于移动机器人，包括：数据接口，用于连接所述移动机器人的摄像装置和测量装置；存储单元，用于存储至少一程序；处理单元，与所述存储单元和数据接口相连，用于藉由所述数据接口获取所述测量装置所提供的位置信息，以及获取所述摄像装置拍摄的图像，以及用于执行所述至少一程序以执行如上任一所述的导航方法；或者执行如上任一所述的划分清洁区域的方法。In order to achieve the foregoing and other related objectives, a seventh aspect of the present application provides a data processing device for a mobile robot, including: a data interface, used to connect the camera device and the measuring device of the mobile robot; a storage unit, used to store at least one program; and a processing unit, connected to the storage unit and the data interface, used to obtain, through the data interface, the position information provided by the measuring device and the image captured by the camera device, and used to execute the at least one program to perform the navigation method described in any one of the above, or to perform the method for dividing a cleaning area described in any one of the above.
为实现上述目的及其他相关目的，本申请的第八方面提供一种计算机可读的存储介质，其特征在于，存储至少一种程序，所述至少一种程序在被调用时执行如上任一所述的导航方法；或者执行如上任一所述的划分清洁区域的方法。In order to achieve the foregoing and other related objectives, an eighth aspect of the present application provides a computer-readable storage medium, which stores at least one program, where the at least one program, when called, performs the navigation method described in any one of the above, or performs the method for dividing a cleaning area described in any one of the above.
如上所述，本申请的导航、划分清洁区域方法及系统、移动及清洁机器人，可以根据测距传感装置和角度传感装置，或者TOF测量装置测量移动机器人所在区域内障碍物相对于移动机器人的角度和距离，准确的确定在所述区域内的候选识别对象的位置信息，并令摄像装置获取包含该候选识别对象的图像，进而确定该候选识别对象相应的实体对象信息，且根据该实体对象信息及其位置信息确定该移动机器人在所述区域内的导航路线。本申请在已获得较为精确的关于实体对象信息的位置信息后，直接根据该实体对象信息规划精确的导航路线以及进行区域的划分，增加导航路线规划以及区域划分的准确性，且提高移动机器人的人机交互性。As described above, the navigation and cleaning-area-division methods and systems, and the mobile and cleaning robots of the present application can measure, through a distance measuring sensor device and an angle sensor device or a TOF measuring device, the angle and distance of obstacles relative to the mobile robot in the area where the mobile robot is located, accurately determine the position information of a candidate recognition object in the area, cause the camera device to acquire an image containing the candidate recognition object, then determine the entity object information corresponding to the candidate recognition object, and determine the navigation route of the mobile robot in the area according to the entity object information and its position information. Having obtained relatively accurate position information about the entity object information, the present application directly plans an accurate navigation route and divides areas according to the entity object information, which increases the accuracy of navigation route planning and area division and improves the human-computer interactivity of the mobile robot.
附图说明Description of the drawings
图1显示为本申请的移动机器人的导航方法在一具体实施例中的流程示意图。FIG. 1 shows a schematic flowchart of a specific embodiment of the mobile robot navigation method of this application.
图2显示为本申请的一具体实施例中确定在区域内候选识别对象所占的位置信息的流程示意图。FIG. 2 shows a schematic diagram of the process of determining the position information occupied by the candidate recognition object in the area in a specific embodiment of this application.
图3显示为本申请的一具体实施例中按照清洁机器人中测量装置的安装位置而获取的包含凳子的位置信息面阵列的示意图。FIG. 3 shows a schematic diagram of a plane array containing the position information of the stool obtained according to the installation position of the measuring device in the cleaning robot in a specific embodiment of this application.
图4显示为本申请的一具体实施例中基于图3的位置信息面阵列而确定的投影在地面的凳脚投影示意图。FIG. 4 shows a schematic diagram of the projection of the foot of a stool on the ground determined based on the position information plane array of FIG. 3 in a specific embodiment of this application.
图5显示为本申请的一具体实施例中经测量装置测量而得到的扫描轮廓投影在行进平面的俯视图。FIG. 5 shows a top view of the scan contour, obtained through measurement by the measuring device, projected on the traveling plane in a specific embodiment of this application.
图6显示为对图5所示扫描轮廓进行线化处理后的扫描轮廓投影在行进平面的俯视图。FIG. 6 shows a top view of the scan contour of FIG. 5, after linearization, projected on the traveling plane.
图7显示为移动机器人拍摄包含实体对象a的投影图像的情况下与实体对象a在相应物理空间的示意图。FIG. 7 shows a schematic diagram of the mobile robot in the corresponding physical space with the entity object a when the mobile robot shoots a projection image containing the entity object a.
图8显示为本申请的一具体实施例中场景应用示意图。FIG. 8 shows a schematic diagram of scene application in a specific embodiment of this application.
图9显示为本申请的一具体实施例中场景应用示意图。FIG. 9 shows a schematic diagram of a scenario application in a specific embodiment of this application.
图10显示为本申请的划分清洁区域的方法在一具体实施例中的流程示意图。FIG. 10 shows a schematic flowchart of a method for dividing a clean area according to this application in a specific embodiment.
图11显示为本申请的一具体实施例中确定在区域内候选门所占的位置信息的流程示意图。FIG. 11 is a schematic diagram of a process of determining the position information occupied by candidate doors in an area in a specific embodiment of this application.
图12显示为本申请的移动机器人的导航系统在一具体实施例中的组成示意图。FIG. 12 shows a schematic diagram of the composition of the navigation system of the mobile robot of this application in a specific embodiment.
图13显示为本申请的移动机器人在一具体实施例中的组成示意图。FIG. 13 shows a schematic diagram of the composition of the mobile robot of this application in a specific embodiment.
图14显示为本申请的划分清洁区域的系统在一具体实施例中的组成示意图。FIG. 14 shows a schematic diagram of the composition of the system for dividing a clean area of this application in a specific embodiment.
图15显示为本申请的清洁机器人在一具体实施例中的组成示意图。FIG. 15 shows a schematic diagram of the composition of the cleaning robot in a specific embodiment of this application.
图16显示为本申请的数据处理装置在一具体实施例中的组成示意图。FIG. 16 shows a schematic diagram of the composition of the data processing device of this application in a specific embodiment.
具体实施方式detailed description
以下由特定的具体实施例说明本申请的实施方式,熟悉此技术的人士可由本说明书所揭露的内容轻易地了解本申请的其他优点及功效。The following specific examples illustrate the implementation of this application. Those familiar with this technology can easily understand other advantages and effects of this application from the content disclosed in this specification.
在下述描述中，参考附图，附图描述了本申请的若干实施例。应当理解，还可使用其他实施例，并且可以在不背离本公开的精神和范围的情况下进行机械组成、结构、电气以及操作上的改变。下面的详细描述不应该被认为是限制性的，并且本申请的实施例的范围仅由公布的专利的权利要求书所限定。这里使用的术语仅是为了描述特定实施例，而并非旨在限制本申请。空间相关的术语，例如“上”、“下”、“左”、“右”、“下面”、“下方”、“下部”、“上方”、“上部”等，可在文中使用以便于说明图中所示的一个元件或特征与另一元件或特征的关系。In the following description, reference is made to the drawings, which describe several embodiments of the present application. It should be understood that other embodiments can also be used, and mechanical, structural, electrical, and operational changes can be made without departing from the spirit and scope of the present disclosure. The following detailed description should not be considered restrictive, and the scope of the embodiments of the present application is limited only by the claims of the published patent. The terms used here are only for describing specific embodiments and are not intended to limit the application. Space-related terms, such as "upper", "lower", "left", "right", "below", "beneath", "lower part", "above", "upper part", etc., may be used in the text to facilitate explaining the relationship between one element or feature shown in the figures and another element or feature.
虽然在一些实例中术语第一、第二等在本文中用来描述各种元件,但是这些元件不应当被这些术语限制。这些术语仅用来将一个元件与另一个元件进行区分。例如,第一预设阈值可以被称作第二预设阈值,并且类似地,第二预设阈值可以被称作第一预设阈值,而不脱离各种所描述的实施例的范围。第一预设阈值和预设阈值均是在描述一个阈值,但是除非上下文以其他方式明确指出,否则它们不是同一个预设阈值。相似的情况还包括第一音量与第二音量。Although the terms first, second, etc. are used herein to describe various elements in some instances, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, the first preset threshold may be referred to as the second preset threshold, and similarly, the second preset threshold may be referred to as the first preset threshold without departing from the scope of the various described embodiments. The first preset threshold and the preset threshold are both describing a threshold, but unless the context clearly indicates otherwise, they are not the same preset threshold. The similar situation also includes the first volume and the second volume.
再者,如同在本文中所使用的,单数形式“一”、“一个”和“该”旨在也包括复数形式,除非上下文中有相反的指示。应当进一步理解,术语“包含”、“包括”表明存在所述的特征、步骤、操作、元件、组件、项目、种类、和/或组,但不排除一个或多个其他特征、步骤、操作、元件、组件、项目、种类、和/或组的存在、出现或添加。此处使用的术语“或”和“和/或”被解释为包括性的,或意味着任一个或任何组合。因此,“A、B或C”或者“A、B和/或C”意味着“以下任一个:A;B;C;A和B;A和C;B和C;A、B和C”。仅当元件、功能、步骤或操作的组合在某些方式下内在地互相排斥时,才会出现该定义的例外。Furthermore, as used herein, the singular forms "a", "an" and "the" are intended to also include the plural forms, unless the context dictates to the contrary. It should be further understood that the terms "comprising" and "including" indicate the existence of the described features, steps, operations, elements, components, items, types, and/or groups, but do not exclude one or more other features, steps, operations, The existence, appearance or addition of elements, components, items, categories, and/or groups. The terms "or" and "and/or" used herein are interpreted as inclusive, or mean any one or any combination. Therefore, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C" . An exception to this definition will only occur when the combination of elements, functions, steps or operations is inherently mutually exclusive in some way.
移动机器人基于导航控制技术执行移动操作。其中,受移动机器人所应用的场景影响,当移动机器人处于未知环境的未知位置时,利用VSLAM(Visual Simultaneous Localization and Mapping,基于视觉的即时定位与地图构建)技术可以帮助移动机器人构建地图并执行导航操作。具体地,移动机器人通过视觉传感器所提供的视觉信息以及移动传感器所提供的移动信息来构建地图,并根据所构建的地图为移动机器人提供导航能力,使得移动机器人能自主移动。其中,所述视觉传感器举例包括摄像装置,对应的视觉信息为图像数据(以下简称为图像)。所述移动传感器举例包括速度传感器、里程计传感器、距离传感器、悬崖传感器等。然而,在实际应用中,所述移动机器人根据预先构建的地图在其所在区域的行进平面进行移动,所构建的地图上仅显示应用场景中所包括物体的位置信息,当用户远程遥控移动机器人发送指定地点的视频或图片时,或者当用户遥控移动机器人清扫指定地点时,用户需要辨识移动机器人所保存的地图中待指示的位置,再依据地图中对应位置的坐标对移动机器人发送控制指令,这带来人机交互性差的问题。The mobile robot performs mobile operations based on navigation control technology. Among them, affected by the scene applied by the mobile robot, when the mobile robot is in an unknown location in an unknown environment, using VSLAM (Visual Simultaneous Localization and Mapping, vision-based instant positioning and map construction) technology can help the mobile robot build maps and perform navigation operating. Specifically, the mobile robot constructs a map through the visual information provided by the visual sensor and the movement information provided by the mobile sensor, and provides the mobile robot with navigation capabilities according to the constructed map, so that the mobile robot can move autonomously. Wherein, the visual sensor includes an imaging device for example, and the corresponding visual information is image data (hereinafter referred to as image for short). Examples of the movement sensors include speed sensors, odometer sensors, distance sensors, cliff sensors and the like. However, in actual applications, the mobile robot moves on the travel plane of its area according to a pre-built map, and the constructed map only displays the location information of the objects included in the application scene. 
When the user remotely controls the mobile robot to send a video or picture of a designated place, or remotely controls the mobile robot to clean a designated place, the user needs to identify the position to be indicated in the map saved by the mobile robot, and then send a control instruction to the mobile robot according to the coordinates of the corresponding position in the map, which brings the problem of poor human-computer interaction.
本申请提供一种移动机器人的导航方法，在该移动机器人的所在区域内（房间内），通过测量装置准确的测量障碍物相对于移动机器人的位置，并根据摄像装置对包含障碍物的图像的识别，获取与障碍物对应的具体的实体对象，进而根据所定位的实体对象及其位置信息，确定所述移动机器人在所述区域内的导航路线。其中，所述实体对象包含移动机器人所移动的物理空间中任何可根据所述测量装置测量的障碍物形成的实体对象，该实体对象为物理实体，其举例但不限于：球、鞋、墙壁、门、花盆、衣帽、树、桌子、椅子、冰箱、电视、沙发、袜以及杯子等。所述摄像装置包括但不限于鱼眼摄像模块、广角（或非广角）摄像模块中的任一种。所述移动机器人包括但不限于：家庭陪伴式移动机器人、清洁机器人、巡逻式移动机器人、擦玻璃的机器人等。This application provides a navigation method for a mobile robot. In the area where the mobile robot is located (in a room), the position of obstacles relative to the mobile robot is accurately measured by a measuring device; specific entity objects corresponding to the obstacles are obtained through recognition, by a camera device, of images containing the obstacles; and the navigation route of the mobile robot in the area is then determined according to the located entity objects and their position information. The entity objects include any entity object in the physical space where the mobile robot moves that can be formed from obstacles measurable by the measuring device. Such an entity object is a physical entity, for example but not limited to: balls, shoes, walls, doors, flower pots, coats and hats, trees, tables, chairs, refrigerators, televisions, sofas, socks, cups, and the like. The camera device includes but is not limited to any one of a fisheye camera module and a wide-angle (or non-wide-angle) camera module. The mobile robot includes, but is not limited to: a family companion mobile robot, a cleaning robot, a patrol mobile robot, a glass-cleaning robot, and the like.
Referring to FIG. 1, FIG. 1 shows a schematic flowchart of a navigation method for a mobile robot according to a specific embodiment of this application. The navigation method of the mobile robot may be executed by a processing device included in the mobile robot. The processing device is an electronic device capable of performing numerical operations, logical operations, and data analysis; it includes but is not limited to a CPU, a GPU, an FPGA, and the like, as well as volatile memory for temporarily storing intermediate data generated during operations and non-volatile memory for storing a program that executes the method. The mobile robot includes a measuring device and a camera device. The camera device includes but is not limited to any one of a fisheye camera module and a wide-angle (or non-wide-angle) camera module. The mobile robot includes but is not limited to: a home companion mobile robot, a cleaning robot, a patrol mobile robot, a window-cleaning robot, and the like.
The measuring device may be installed on the body side of the mobile robot and may be, for example, a scanning laser or a TOF (Time of Flight) sensor. The scanning laser includes an angle sensing device and a ranging sensor; the angle sensing device provides the angle information corresponding to each distance measured by the ranging sensor, and the ranging sensor measures, by laser or infrared light, the distance from itself to the obstacle measurement point at the scanning laser's current angle. A scanning laser is a laser whose direction, origin, or pattern of propagation changes over time relative to a fixed frame of reference. Based on the laser ranging principle, the scanning laser emits through a rotatable optical component (the laser emitter) to form a two-dimensional scanning plane, realizing area scanning and contour measurement. Its ranging principle is as follows: the laser emitter sends out a laser pulse; when the pulse hits an object, part of its energy returns; when the laser receiver receives the returning pulse and its energy is sufficient to trigger a threshold, the scanning laser computes the distance to the object. The scanning laser emits laser pulses continuously onto a high-speed rotating mirror, which reflects them in all directions to scan a two-dimensional area. Such a two-dimensional scan can, for example, realize the following two functions: 1) protection zones of different shapes can be set within the scanning range, and an alarm signal is issued when an object enters such a zone; 2) within the scanning range, the scanning laser outputs the distance of each obstacle measurement point, from which the outline and coordinate position of the object can be calculated.
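Since such a scanning laser reports each obstacle measurement point as an angle (from the angle sensing device) and a distance (from the ranging sensor), one sweep is a set of polar coordinates centred on the robot. A minimal sketch of converting one sweep into Cartesian obstacle points, using illustrative function and variable names not taken from this application, might be:

```python
import math

def polar_scan_to_points(scan):
    """Convert (angle_deg, distance) readings from one 360-degree sweep,
    measured relative to the robot, into Cartesian (x, y) obstacle points."""
    points = []
    for angle_deg, dist in scan:
        theta = math.radians(angle_deg)
        points.append((dist * math.cos(theta), dist * math.sin(theta)))
    return points

# One reading straight ahead (0 deg) at 2 m and one to the left (90 deg) at 1 m:
pts = polar_scan_to_points([(0.0, 2.0), (90.0, 1.0)])
```

The resulting point set is what later steps treat as the scan contour on the travel plane.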
The TOF measuring device is based on TOF technology. TOF technology is an optical, non-contact method of three-dimensional depth sensing: light pulses are continuously sent toward the target, a sensor receives the light returned from the object, and the target distance is obtained by detecting the flight (round-trip) time of the emitted and received pulses. The illumination unit of a TOF device modulates the light at high frequency before emission, generally using an LED or a laser (including laser diodes and VCSELs (Vertical Cavity Surface Emitting Lasers)) to emit high-performance pulsed light; in the embodiments of this application, a laser is used. The modulation can reach about 100 MHz, and infrared light is mainly used. TOF measuring devices operate on either of two principles. 1) The optical-shutter method: a pulsed light wave is emitted, and an optical shutter quickly and precisely captures the time difference t of the light wave reflected back from the three-dimensional object; since the speed of light c is known, the distance can be obtained from the time difference between emission and reception as d = c·t/2.
2) The continuous-wave intensity-modulation method: a beam of illuminating light is emitted, and the distance is measured from the phase shift between the emitted and reflected light waves. Here, the wavelength of the illumination module is generally in the infrared band, and high-frequency modulation is required. The TOF photosensitive module is similar to an ordinary mobile-phone camera module and consists of a chip, a lens, a circuit board, and other components; each pixel of the TOF photosensitive chip records the phase of the light wave over its round trip between the camera and the object, and a data processing unit extracts the phase difference and calculates the depth information by formula. A TOF measuring device is small, can directly output the depth data of the detected object, and its depth calculation is unaffected by the gray level and surface features of the object, so it can perform three-dimensional detection very accurately.
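The two TOF computations above reduce to short formulas: the optical-shutter method gives d = c·t/2 from the round-trip time t, and the continuous-wave method gives d = c·Δφ/(4π·f) from the phase difference Δφ at modulation frequency f. A sketch of both (illustrative only, not the application's implementation):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def shutter_distance(round_trip_time_s):
    """Optical-shutter method: d = c * t / 2."""
    return C * round_trip_time_s / 2.0

def phase_distance(phase_diff_rad, mod_freq_hz):
    """Continuous-wave method: d = c * dphi / (4 * pi * f)."""
    return C * phase_diff_rad / (4.0 * math.pi * mod_freq_hz)

# A 20 ns round trip corresponds to about 3 m:
d1 = shutter_distance(20e-9)
# A pi/2 phase shift at 100 MHz modulation corresponds to about 0.37 m:
d2 = phase_distance(math.pi / 2, 100e6)
```

Note that the phase method's unambiguous range is limited to c/(2f), which is why the modulation frequency is a design trade-off between range and resolution.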
Here, the navigation method S1 of the mobile robot includes steps S11 to S13 as shown in FIG. 1.
In step S11, the measuring device is caused to measure the position information of obstacles relative to the mobile robot within the area where the mobile robot is located, and the position information occupied by candidate recognition objects within the area is determined. The area is, for example, a room, and an obstacle may be any physical object in the room capable of reflecting the measuring medium. Using the measuring device mentioned in any of the above examples, the position information of an obstacle relative to the measuring device can be measured to obtain the contour information of the obstacle, and the contour information is used to determine the candidate recognition objects within the area and the position information they occupy. The position information includes deflection-angle information and corresponding distance information; the distance information and deflection-angle information are referred to as the position information of the obstacle relative to the robot, or simply the position information of the obstacle.
Referring to FIG. 2, FIG. 2 shows a schematic flowchart of determining the position information occupied by candidate recognition objects within the area in a specific embodiment of this application. That is, in some embodiments, the step in S11 of determining the position information occupied by candidate recognition objects within the area includes step S111 and step S112 shown in FIG. 2.
In step S111, the processing device may obtain a scan contour and the position information it occupies based on the measured position information of each obstacle measurement point within the area.
Here, by using the measuring device mentioned in any of the above examples to exhaustively measure obstacles in a two-dimensional or three-dimensional plane within the area, a scan contour composed of obstacle measurement points in that plane can be obtained. An obstacle measurement point is a reflection point on the obstacle at which the measuring medium emitted by the ranging sensor is reflected; the measuring medium is, for example, a laser beam, an LED light beam, or an infrared beam. The resulting scan contour is a dot matrix formed from the position information of the obstacle measurement points, where the position information includes the distance information and deflection-angle information of each obstacle measurement point relative to the measuring device, or simply the position information of the obstacle measurement point. The scan contour of the obstacles is constructed from the two-dimensional or three-dimensional array formed by the measured position information of the obstacle measurement points.
For an area array of position information, step S111 includes: fitting the travel plane of the mobile robot based on the area array of position information of the obstacle measurement points measured by the measuring device, and determining the scan contour lying on the travel plane and the position information it occupies. Taking as an example the case where the measuring device is a TOF measuring device containing a laser sensor array, the area array of position information is measured by the laser sensor array.
Here, a cleaning robot is taken as an example of the mobile robot. To measure obstacles around the cleaning robot, the measuring device is installed on the body side close to the travel plane, for example on the body side of the cleaning robot. The acquired area array of position information may therefore contain the position information of measurement points of various obstacles, such as the ground, objects placed on it, and objects suspended in the air. Given the installation position of the measuring device, the measured obstacle points usually include the travel plane of the cleaning robot, such as the ground; the plane formed by obstacle measurement points is determined by plane fitting and taken to be the travel plane, and then, according to the determined travel plane, the scan contour lying on the travel plane and the position information it occupies are determined.
For example, the position information of several obstacle measurement points is randomly selected from the area array, and a plane is chosen by plane fitting such that the number of obstacle measurement points forming that plane is largest; the obstacle measurement points of the area array lying on the chosen plane are taken as the obstacle measurement points on the travel plane of the cleaning robot. Then, according to the position information of each pixel in the area array, the position information of pixels above the travel plane is projected onto the travel plane, thereby obtaining the scan contour on the travel plane and the position information it occupies. For example, please refer to FIG. 3 and FIG. 4, where FIG. 3 schematically shows an area array of position information containing a stool, acquired according to the installation position of the measuring device in the cleaning robot, and FIG. 4 shows the projection of the stool legs onto the ground determined from the area array of FIG. 3. Following the aforementioned fitting and projection, and as shown in FIGS. 3 and 4, after the processing device projects the position information from the height of the stool legs down to the ground onto the travel plane, block-shaped projections corresponding to the stool legs are obtained.
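The "plane supported by the most measurement points" criterion just described is in the spirit of RANSAC plane fitting. The following is a minimal sketch under that reading; the sampling strategy, tolerance, and names are assumptions for illustration, not the application's actual procedure:

```python
import random

def fit_ground_plane(points, trials=200, tol=0.01):
    """RANSAC-style fit: pick the plane supported by the most measurement
    points. Each point is (x, y, z); returns (normal, d) with normal . p = d."""
    best, best_inliers = None, -1
    for _ in range(trials):
        p1, p2, p3 = random.sample(points, 3)
        # Plane normal via the cross product of two in-plane vectors.
        u = tuple(a - b for a, b in zip(p2, p1))
        v = tuple(a - b for a, b in zip(p3, p1))
        n = (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])
        norm = sum(c * c for c in n) ** 0.5
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        n = tuple(c / norm for c in n)
        d = sum(a * b for a, b in zip(n, p1))
        inliers = sum(abs(sum(a * b for a, b in zip(n, p)) - d) < tol for p in points)
        if inliers > best_inliers:
            best, best_inliers = (n, d), inliers
    return best

# Points mostly on the ground (z = 0) plus a few object points above it:
random.seed(0)
pts = [(x * 0.1, y * 0.1, 0.0) for x in range(10) for y in range(10)]
pts += [(0.3, 0.3, 0.4), (0.5, 0.2, 0.7)]
normal, offset = fit_ground_plane(pts)
```

Points above the fitted plane can then be projected down onto it, as in the stool-leg example.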
For a line array of position information, step S111 includes: determining the scan contour on the travel plane and the position information it occupies based on a line array of position information parallel to the travel plane, as measured by the measuring device. Taking the measuring device being a scanning laser as an example, the line array of position information is measured by the scanning laser.
Here, the scanning laser may be installed at the top middle, top edge, or body side of the cleaning robot. The laser emission direction of the scanning laser may be parallel to the travel plane, and the scanning laser rotates through 360 degrees at the position of the cleaning robot; the angle sensing device of the scanning laser acquires the angle of each obstacle measurement point relative to the robot, and the ranging sensing device of the scanning laser (a laser or infrared ranging device) measures the distance between the obstacle measurement point and the cleaning robot. A line array of position information parallel to the travel plane is thus obtained, and since this line array is parallel to the travel plane, the scan contour on the travel plane and the position information it occupies can be determined directly from it. Taking a cleaning robot as an example of the mobile robot, since the height of the scanning laser above the ground is comparable to the height of the cleaning robot, the line array of position information obtained by the scanning laser can represent the position information of obstacles on the ground that hinder the movement of the cleaning robot.
In some practical applications, the range of the measuring device may reach, for example, 8 meters, whereas the camera device usually cannot capture a clear image at such a distance. So that the two devices can be used together, in some embodiments the processing device causes the measuring device to measure the position information, relative to the robot, of obstacles within the field of view of the camera device, so that the camera device can capture images containing the obstacles measured by the measuring device. For example, the processing device filters the position information measured by the measuring device, i.e., it discards the position information of obstacle measurement points in regions beyond the imaging range of the camera device, and obtains from the remaining valid position information the positions, relative to the robot, of the obstacles within the field of view of the camera device; in other words, the scan contour and the position information it occupies are obtained from the valid position information. In other embodiments, the processing device causes the measuring device to acquire position information within a preset distance, the preset distance being a fixed value.
For example, the preset distance is determined according to typical indoor floor areas, so as to ensure that the measuring device can acquire the position information of the obstacles within a room, and the scan contour and the position information it occupies are obtained from the acquired position information.
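Restricting the measured points to the camera's field of view, as described above, amounts to keeping only bearings inside the camera's horizontal FOV. A sketch under an assumed 120-degree FOV (the figure is illustrative; the application does not specify one):

```python
def within_camera_fov(angle_deg, fov_deg=120.0):
    """Keep a measurement point only if its bearing (relative to the camera's
    optical axis) falls inside the horizontal field of view."""
    half = fov_deg / 2.0
    return -half <= angle_deg <= half

# (bearing in degrees, distance in metres) pairs from one sweep:
readings = [(-80.0, 1.2), (10.0, 2.0), (95.0, 0.8)]
valid = [r for r in readings if within_camera_fov(r[0])]
```

Only the points surviving this filter contribute to the scan contour that is later matched against the camera image.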
After the scan contour and the position information it occupies have been obtained, the processing device can use combinations of feature lines, feature points, and the like to determine the candidate recognition objects depicted by the scan contour and their position information. In some embodiments, the processing device also applies a linearization algorithm to the dot-matrix information constituting the scan contour, obtaining a scan contour described by long and short lines; examples of such linearization algorithms include dilation and erosion. Referring to FIG. 5 and FIG. 6, FIG. 5 schematically shows a top view of the scan contour measured by the measuring device and projected onto the travel plane, and FIG. 6 shows a top view of the scan contour of FIG. 5 after linearization, projected onto the travel plane. In FIG. 5, the originally acquired scan contour contains a contour part B1-B2 composed of obstacle measurement points whose spacing is less than a preset threshold, and contour parts B2-B3 and B4-B5 composed of obstacle measurement points whose spacing is greater than the preset threshold. Correspondingly, as seen in FIG. 6, the scan contour processed by the linearization algorithm contains a contour part A1-A2 formed by a continuous long line, and contour parts A2-A3 and A4-A5 formed by discontinuous short lines.
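As an illustration of how dilation followed by erosion (morphological closing) joins nearby measurement points into continuous lines while leaving genuine gaps open, consider a one-dimensional occupancy row; this is a simplification of the two-dimensional operation, and the names and kernel size are illustrative:

```python
def close_gaps(cells, k=1):
    """Morphological closing (dilation then erosion) on a 1-D occupancy row:
    gaps narrower than about 2*k cells between occupied cells are filled,
    while wider gaps are preserved."""
    def dilate(c):
        return [1 if any(c[max(0, i - k):i + k + 1]) else 0 for i in range(len(c))]

    def erode(c):
        return [1 if all(c[max(0, i - k):min(len(c), i + k + 1)]) else 0
                for i in range(len(c))]

    return erode(dilate(cells))

# A one-cell gap (index 2) is closed; the wider three-cell gap survives:
row = close_gaps([1, 1, 0, 1, 1, 0, 0, 0, 1])
```

The surviving wide gap is exactly the kind of discontinuity that the following paragraphs classify as a discontinuous part.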
Extending the examples of FIGS. 5 and 6 to a more general scan contour, a scan contour can consist of continuous parts and discontinuous parts. In some embodiments, the condition pre1 for constituting a continuous part includes at least one of, or a combination of, the following: 1) a contour part formed by obstacle measurement points whose spacing between adjacent points is less than a preset length threshold and whose number is greater than a preset count threshold, e.g., B1-B2 in FIG. 5; 2) a contour part formed by a continuous line whose length is greater than a preset length threshold, e.g., A1-A2 in FIG. 6; 3) among the obstacle measurement points of a contour part determined under 1) and/or 2), the position information satisfies a preset continuity condition, where the continuity condition includes: the difference between the distance information of adjacent obstacle measurement points is less than a preset distance-jump threshold. For example, the contour part B4-B5 in FIG. 5 and the contour part A4-A5 in FIG. 6 do not constitute continuous parts. Here, the aforementioned complete scan contour is composed of discontinuous parts and continuous parts; the discontinuous parts and continuous parts can therefore be regarded as being in a logical "or" relationship. For example, the contour parts B2-B3 and B4-B5 in FIG. 5 are discontinuous parts, and the contour parts A2-A3 and A4-A5 in FIG. 6 are discontinuous parts.
In other embodiments, the condition pre2 for constituting a discontinuous part includes at least one of, or a combination of, the following: 1) a contour part in which the spacing between adjacent obstacle measurement points is greater than a preset length threshold and whose obstacle measurement points at both ends connect to continuous parts, e.g., B2-B3 and B4-B5 in FIG. 5; 2) a contour part formed by at least one continuous short line whose length is less than the preset length threshold, e.g., A2-A3 and A4-A5 in FIG. 6. Here, the aforementioned complete scan contour is composed of discontinuous parts and continuous parts; the discontinuous parts and continuous parts can therefore be regarded as being in a logical "or" relationship. For example, the contour parts B2-B3 and B4-B5 in FIG. 5 are discontinuous parts, and the contour parts A2-A3 and A4-A5 in FIG. 6 are discontinuous parts.
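The segmentation implied by conditions pre1/pre2 can be reduced to cutting the ordered contour wherever the spacing between neighbouring measurement points exceeds the preset length threshold. A simplified, illustrative sketch (the application's full conditions also involve point counts and distance-jump thresholds):

```python
def split_contour(points, gap_thresh):
    """Split an ordered contour (a list of (x, y) points) into segments
    wherever the spacing between neighbours exceeds gap_thresh, separating
    candidate continuous parts from the gaps between them."""
    segments, current = [], [points[0]]
    for prev, cur in zip(points, points[1:]):
        d = ((cur[0] - prev[0]) ** 2 + (cur[1] - prev[1]) ** 2) ** 0.5
        if d > gap_thresh:
            segments.append(current)
            current = []
        current.append(cur)
    segments.append(current)
    return segments

# Three tightly spaced points, then a jump, then two more points:
segs = split_contour([(0.0, 0.0), (0.1, 0.0), (0.2, 0.0), (2.0, 0.0), (2.1, 0.0)],
                     gap_thresh=0.5)
```

Each resulting segment then becomes a candidate recognition object (or part of one) in step S112.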
To this end, step S11 includes step S112: dividing the scan contour into a plurality of candidate recognition objects according to the discontinuous parts on the scan contour, and determining the position information occupied by each candidate recognition object.
Here, the processing device segments the scan contour at the boundaries of the discontinuous parts, obtaining contour parts consisting of continuous parts and contour parts consisting of discontinuous parts. In some examples, the continuous parts and the discontinuous parts are each taken as candidate recognition objects, and the position information occupied by each candidate recognition object is determined from the position information of the obstacle measurement points in the corresponding continuous or discontinuous part. In other examples, at least one candidate recognition object is determined from the continuous parts using combinations of preset feature lines, feature points, and the like, while each discontinuous part is taken as a separate candidate recognition object, and the position information occupied by each is likewise determined from the position information of the obstacle measurement points in the corresponding part. It should be understood that the processing device may also segment the scan contour at the boundaries of the continuous parts, which should be regarded as the same as or similar to segmenting at the boundaries of the discontinuous parts.
In some examples, determining candidate recognition objects based on the discontinuous parts of the scan contour includes: determining, based on a gap formed by a discontinuous part on the scan contour, the corresponding candidate recognition object as a first candidate recognition object containing the gap; and determining, based on the continuous parts separated by the discontinuous parts on the scan contour, the corresponding candidate recognition objects as second candidate recognition objects that hinder the movement of the cleaning robot. The first and second candidate recognition objects represent to-be-recognized physical object information named by object category; examples of such category names include but are not limited to: door, window, wall, table, chair, ball, cabinet, sock, and so on. One candidate recognition object may contain information of one or more physical objects to be recognized.
The second candidate recognition object is intended to contain to-be-recognized physical objects that hinder the movement of the cleaning robot, examples of which include but are not limited to at least one of: a wall, a cabinet, a fan, a sofa, a box, socks, a ball, table (or chair) legs, and so on. The first candidate recognition object is intended to represent a to-be-recognized physical object capable of connecting/separating two spatial regions; when the two spatial regions are connected, that physical object can form a gap in the scan contour. For example, the physical object may be a door: when the door is open it connects the spatial regions inside and outside the room, and when closed it separates them. Here, the first candidate recognition object mainly serves to provide candidates, for further screening and confirmation, corresponding to a physical door in the open state. In fact, depending on the positional relationship between the cleaning robot and the physical objects in the room, the shapes of the physical objects, and so on, a gap in the scan contour may also be caused by the space between two physical objects or by the shape of a physical object. For example, a gap in the scan contour may arise from the space between two wardrobes, or between a wardrobe and a wall; as another example, a gap may arise from the space between table legs.
Therefore, the obtained first candidate recognition objects need further screening and identification.
To improve the efficiency of identifying first candidate recognition objects, in some examples the gaps in the obtained scan contour and the position information of the continuous parts forming their two ends are further analyzed, so as to screen out unsuitable gaps. To this end, a qualifying gap is restricted to one formed adjacent to a continuous part; isolated gaps not attached to any continuous part, such as the gaps formed between table legs or stool legs, do not belong to the gaps contained in first candidate recognition objects, so the candidate recognition objects corresponding to such isolated gaps are not first candidate recognition objects and need to be screened out. In addition, gaps that are too small or too large should also not belong to the gaps contained in first candidate recognition objects. Based on the above, step S1121 and step S1122 can be performed.
In step S1121, the formed gaps are screened according to preset screening conditions, where the screening conditions include: the gap lies along the line of the continuous part on at least one adjacent side, and/or a preset gap-width threshold.
In some examples, the screening condition is that the gap lies along the line of the continuous part on at least one adjacent side, in which case the gap corresponds to a first candidate recognition object. Consider, for example, the gap corresponding to a physical door: since a physical door is generally mounted in a wall, at least one side wall framing the door lies along the continuous part adjacent to the gap, so the gap formed when the door is open corresponds to a first candidate recognition object. By contrast, the gap corresponding to the two legs of a stool generally stands independently in the physical space; the two stool legs do not lie along any continuous part, so the gap is isolated. The candidate recognition objects corresponding to such isolated gaps are excluded from the first candidate recognition objects, and the corresponding gaps are screened out.
In further examples, the screening condition includes a preset gap-width threshold, which may be a single value or a range of values. For example, if the width of a gap lies within the preset gap-width threshold (for example, 60 cm to 120 cm), the gap corresponds to a first candidate recognition object. The processing device calculates the gap width from the position information of the obstacle measurement points forming the gap and screens out gaps according to this condition; that is, a gap that is too large or too small is not a gap corresponding to a first candidate recognition object.
In still other examples, the screening condition requires both that the gap lies along the line of the continuous part on at least one adjacent side and that the width of the gap is within the preset gap-width threshold range. The processing device then determines, according to this screening condition, that the candidate recognition object corresponding to such a gap is a first candidate recognition object containing the gap. In other words, a gap on the scan contour neither of whose sides lies along the adjacent continuous part, or whose width is outside the preset gap-width threshold range, is determined to be a gap that needs to be screened out.
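The combined screening condition just described reduces to a conjunction of the adjacency test and the width test. A one-function sketch, where the threshold values follow the 60 cm to 120 cm example above and the rest is illustrative:

```python
def is_door_candidate(gap_width_m, on_continuous_line, min_w=0.6, max_w=1.2):
    """A gap is kept as a first candidate recognition object (a possible open
    door) only if it lies along the line of an adjacent continuous part AND
    its width falls inside the preset range (here 0.6 m to 1.2 m)."""
    return on_continuous_line and (min_w <= gap_width_m <= max_w)
```

For example, a 0.9 m gap flanked by a wall line passes, while the same gap standing isolated (e.g., between stool legs) or a 1.5 m gap in a wall does not.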
In step S1122, based on the screened gaps, the corresponding candidate recognition objects are determined to be first candidate recognition objects containing the gaps. Here, based on the gaps remaining after the screening in step S1121, the corresponding candidate recognition objects are determined to be first candidate recognition objects containing those gaps. For example, when a gap remaining after screening lies along the line of an adjacent continuous portion on at least one of its sides and its width is within the preset gap width threshold range, the gap and its two ends are determined as a first candidate recognition object containing the gap. As another example, when a gap lies along the line of an adjacent continuous portion on at least one of its sides, or its width is within the preset gap width threshold range, the gap and its two ends are determined as a first candidate recognition object containing the gap.
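The two screening conditions described above (lying on the line of an adjacent continuous portion, and a width threshold) can be sketched as a simple filter. This is only an illustrative sketch, not the patented implementation; the field names `width` and `on_wall_line` are hypothetical.

```python
def screen_gaps(gaps, width_range=(0.6, 1.2)):
    """Keep only gaps that plausibly correspond to a physical door.

    Each gap is a dict with hypothetical fields:
      width        -- gap width in metres, computed from the position
                      information of the obstacle measurement points
      on_wall_line -- True if the gap lies along the line of an adjacent
                      continuous portion of the scan contour (e.g. the
                      wall the door frame is mounted in)

    A gap is kept only if both screening conditions hold: it sits on the
    line of an adjacent continuous portion AND its width falls inside the
    preset threshold interval (60 cm to 120 cm in the example above).
    """
    lo, hi = width_range
    return [g for g in gaps if g["on_wall_line"] and lo <= g["width"] <= hi]
```

Isolated gaps (such as between stool legs) fail the `on_wall_line` test, and over- or under-sized gaps fail the width test, matching the screening described in steps S1121 and S1122.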
To ensure that the first and second candidate recognition objects are accurately recognized, referring to FIG. 1, the method further includes step S12: according to the position information occupied by the determined candidate recognition objects, causing the camera device to capture an image containing the candidate recognition objects.
Here, the mobile robot includes at least one camera device. The camera device captures the physical objects within its field of view at the mobile robot's current position and projects them onto the mobile robot's travel plane to obtain a projected image. For example, the mobile robot may include one camera device arranged on its top, shoulder, or back, with its main optical axis perpendicular to the mobile robot's travel plane. As another example, the mobile robot may include multiple camera devices, one of which has its main optical axis perpendicular to the travel plane. As a further example, a cleaning robot may include a camera device embedded in the side or top of its body, with the main optical axis tilted at a non-perpendicular angle to the travel plane, for example between 0° and 60°.
In some embodiments, the main optical axis of the camera device is perpendicular to the travel plane, so the plane of the two-dimensional image captured by the camera device is parallel to the travel plane. Please refer to FIG. 7, a schematic diagram of the mobile robot and a physical object a in the corresponding physical space when the robot captures a projected image containing object a. In FIG. 7, the main optical axis of at least one camera device of the mobile robot is perpendicular to the robot's travel plane. When the camera device captures a projected image, the position D1 at which the captured physical object a is projected into the projected image M1 and the position D2 at which the same object a is projected onto the travel plane M2 have the same angle relative to the mobile robot's position D. By analogy, the position of a physical object in the projected image is used to represent the position of that object projected onto the mobile robot's travel plane, and the angle of the object's position in the projected image relative to the robot's moving direction is used to represent the angle, relative to the moving direction, of the object's projected position on the travel plane.
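The angle equivalence between D1 and D2 can be illustrated with a small helper that computes the bearing of an image point about the principal point. This is a hypothetical simplification: it assumes the principal point sits at a known pixel (cx, cy) and that the image +y axis is aligned with the robot's moving direction, neither of which is specified by the text.

```python
import math

def pixel_bearing(px, py, cx, cy):
    """Bearing (degrees) of an image point relative to the moving direction.

    With the main optical axis perpendicular to the travel plane, the
    image plane is parallel to the travel plane, so the angle of a pixel
    about the principal point equals the angle of the corresponding
    object position on the travel plane about the robot (D1 and D2 share
    the same angle about position D).
    """
    return math.degrees(math.atan2(px - cx, py - cy))
```

A point straight ahead of the assumed heading axis yields 0°, and a point 90° to the side yields 90°, mirroring how the object's in-image angle stands in for its travel-plane angle.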
Here, the processing device causes the measurement device to measure the position information of obstacles within the camera device's field of view relative to the mobile robot, and causes the camera device to capture an image of the candidate recognition object projected onto the mobile robot's travel plane. The position of the candidate recognition object in the captured image represents the position of the object projected onto the travel plane, and the angle of the object's position in the image relative to the robot's moving direction represents the angle, relative to the moving direction, of the object's projected position on the travel plane.
In other embodiments, the mobile robot further includes a movement device. When the position information of a candidate recognition object measured by the measurement device falls outside the camera device's field of view, the processing device controls the movement device to operate according to the camera device's imaging parameters; that is, it controls the cleaning robot to move according to the obtained position information of the candidate recognition object so as to capture an image containing it. The imaging parameters include the field of view, the zoom range, and the like. For example, when the main optical axis of the camera device is perpendicular to the travel plane, the processing device controls the movement device to move in the angular direction indicated by the angle information of the candidate recognition object provided by the measurement device, and causes the camera device to capture an image of the candidate recognition object projected onto the cleaning robot's travel plane. As another example, when the main optical axis of the camera device is tilted at the aforementioned angle to the travel plane, the processing device controls the movement device to move in the angular direction indicated by the angle information provided by the measurement device and causes the camera device to capture an image containing the candidate recognition object. The mobile robot may be a cleaning robot whose movement device includes a traveling mechanism and a traveling drive mechanism, where the traveling mechanism may be arranged at the bottom of the robot body and the traveling drive mechanism is built into the robot body. The traveling mechanism may, for example, include a combination of two straight traveling wheels and at least one auxiliary steering wheel. The two straight traveling wheels are arranged on opposite sides of the bottom of the robot body and may be driven independently by two corresponding traveling drive mechanisms: the left straight traveling wheel is driven by the left traveling drive mechanism and the right straight traveling wheel by the right traveling drive mechanism. The universal steering wheel or the straight traveling wheels may have a biased drop suspension system, fastened movably, for example mounted rotatably, to the robot body and receiving a spring bias directed downward and away from the robot body. The spring bias allows the universal steering wheel or the straight traveling wheels to maintain contact and traction with the ground with a certain ground force. In practical applications, when the at least one auxiliary steering wheel is not engaged, the two straight traveling wheels are mainly used for moving forward and backward; when the at least one auxiliary steering wheel is engaged and cooperates with the two straight traveling wheels, movements such as steering and rotation can be achieved. The traveling drive mechanism may include a drive motor and a control circuit that controls the drive motor; the drive motor can drive the traveling wheels of the traveling mechanism to move. In a specific implementation, the drive motor may, for example, be a reversible drive motor, and a speed-change mechanism may further be arranged between the drive motor and the axle of the traveling wheel. The traveling drive mechanism may be detachably mounted to the robot body for convenient disassembly, assembly, and maintenance.
After the camera device assembled in any of the above manners captures an image containing the candidate recognition object, recognition processing is performed on the acquired image. In some embodiments, step S12 includes step S121: based on preset known feature information of multiple kinds of physical objects, recognizing the physical object information of the candidate recognition object in the image. The feature information may be image features of the multiple kinds of physical objects, where an image feature can identify physical object information in the image; the image feature is, for example, a contour feature of the physical object. For example, when a cleaning robot is used in an indoor environment, the preset known physical objects include, but are not limited to, tables, chairs, sofas, flower pots, shoes, socks, doors, cabinets, cups, and the like. The image features include preset graphic features corresponding to physical object types, or image features obtained through image processing algorithms. The image processing algorithms include, but are not limited to, at least one of the following: grayscale processing, sharpening, contour extraction, corner extraction, line extraction, and image processing algorithms obtained through machine learning. Image processing algorithms obtained through machine learning include, but are not limited to, neural network algorithms, clustering algorithms, and the like. Using step S121, the processing device can recognize the respective physical object information of the second candidate recognition objects and first candidate recognition objects divided based on the continuous and discontinuous portions of the scan contour. For example, an image processing algorithm is used to determine whether a first candidate recognition object is a physical door and to determine that a second candidate recognition object includes a wall, a wardrobe, and so on, and the determined physical object information and its position information are obtained.
In other embodiments, step S12 includes step S122: using a preset image recognition algorithm, constructing a mapping relationship between the candidate recognition object in the image and multiple kinds of known physical object information, so as to determine the physical object information corresponding to the candidate recognition object. For example, the program stored in the storage device of the mobile robot contains the network structure and connection manner of a neural network model. In some embodiments, the neural network model may be a convolutional neural network whose network structure includes an input layer, at least one hidden layer, and at least one output layer. The input layer receives the captured image or a preprocessed image; a hidden layer contains a convolutional layer and an activation function layer, and may further contain at least one of a normalization layer, a pooling layer, and a fusion layer; the output layer outputs an image tagged with object type labels. The connection manner is determined by the connection relationships of the layers in the neural network model, for example front-to-back layer connections set up for data transfer, connections to the previous layer's data set according to the convolution kernel size in each hidden layer, and full connections. The neural network model classifies each object recognized from the image. When the physical object is a door, its corresponding feature information may include two feature lines perpendicular to the cleaning robot's travel plane, with the distance between the two feature lines within a preset width threshold range. That is, the image recognition algorithm constructs a mapping relationship between the candidate recognition object in the image and multiple kinds of known physical object information, and if the candidate recognition object in the image is found to correspond to a known physical door, the physical object information corresponding to the candidate recognition object is determined to be the physical object information corresponding to a door. Using step S122, the processing device can recognize the respective physical object information of the second candidate recognition objects and first candidate recognition objects obtained based on the continuous and discontinuous portions of the scan contour.
In still other embodiments, step S12 includes steps S123 and S124. In step S123, the image region within the corresponding angle range in the image is determined according to the angle range in the position information occupied by the candidate recognition object. In step S124, feature recognition is performed on that image region to determine the physical object information corresponding to the candidate recognition object. In this embodiment, the main optical axis of the camera device is perpendicular to the travel plane, and, referring to FIG. 3 and its related description, the angle range of the candidate recognition object in the image can represent the angle range of the corresponding physical object projected onto the mobile robot's travel plane; the angle range in the position information measured by the measurement device is therefore used to determine the image region within the corresponding angle range in the image. In some embodiments, the processing device may use the recognition manner provided in step S121 or S122 to recognize the candidate recognition object within that image region, thereby improving the efficiency of the recognition computation.
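With the main optical axis perpendicular to the travel plane, the image region corresponding to a measured angle range is the sector of pixels whose bearing about the principal point falls in that range. The following sketch builds such a sector mask; it is a hypothetical illustration assuming the principal point is at (cx, cy) and the image +y axis is aligned with the robot's moving direction, and is not the patented implementation.

```python
import math

def sector_mask(w, h, cx, cy, ang_lo, ang_hi):
    """Boolean mask selecting the image region within a bearing range.

    ang_lo/ang_hi are degrees about the robot's moving direction, as
    measured by the measurement device. Recognition (step S121/S122)
    is then run only on the pixels where the mask is True, which is
    what improves the recognition efficiency in step S123.
    """
    mask = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ang = math.degrees(math.atan2(x - cx, y - cy))
            mask[y][x] = ang_lo <= ang <= ang_hi
    return mask
```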
In other embodiments, when the candidate recognition objects include a first candidate recognition object with a gap, step S123 may include steps S1231 and S1232.
In step S1231, at least one angle range is determined based on the position information of the two ends of the first candidate recognition object. In step S1232, the image region used for recognizing the physical object information of the corresponding first candidate recognition object is determined from the image according to the determined angle range.
In some embodiments, according to the position information of the two ends of the first candidate recognition object, a single angle range containing the position information of both ends is determined; that is, the angle range covers the entire gap of the first candidate recognition object, and the region within this angle range containing the gap is taken as the image region for recognizing the physical object information of the first candidate recognition object. For example, please refer to FIG. 8, a schematic diagram of a scene application in a specific embodiment of the present application. In FIG. 8, the first candidate recognition object is, for example, a candidate door 81, and the mobile robot is a cleaning robot 82. The two ends of the candidate door 81 form angles of 10 degrees and 25 degrees with the moving direction of the cleaning robot 82, so the region within the angle range from 10 degrees to 25 degrees is selected as the image region for recognizing the physical object information of the first candidate recognition object. In other embodiments, a small angle range containing a single end is selected for each of the two ends of the first candidate recognition object; that is, two small angle ranges about the two ends are selected and taken together as the image region for recognizing the physical object information of the first candidate recognition object. For example, please refer to FIG. 9, a schematic diagram of a scene application in a specific embodiment of the present application. In FIG. 9, the first candidate recognition object is, for example, a candidate door 91, and the mobile robot is a cleaning robot 92. The two ends of the candidate door 91 form angles of 10 degrees and 25 degrees with the moving direction of the cleaning robot 92. At the end forming a 10-degree angle with the moving direction of the cleaning robot 92, a first angle range from 9 to 11 degrees about the moving direction is selected; at the other end, forming a 25-degree angle, a second angle range from 24 to 26 degrees about the moving direction is selected. The first and second angle ranges are selected as the image region for recognizing the physical object information of the candidate door 91.
In some examples, step S124 may include the aforementioned step S121 or S122; that is, the processing device may recognize the first candidate recognition object in the selected image region in the recognition manner provided by step S121 or S122.
In still other examples, owing to the angular relationship between the main optical axis of the camera device and the cleaning robot's travel plane, the projection of a physical door's frame in the image exhibits a vanishing point; if the first candidate recognition object obtained by the measurement device can be recognized as a physical door, the selected image region will therefore contain feature lines matching this characteristic. Take the case where the main optical axis of the camera device is perpendicular to the cleaning robot's travel plane as an example. In practical applications, a cleaning robot is generally low, so the camera device generally views a door from below. When the first candidate recognition object is a physical door to be recognized, the camera device is closer to the lower part of the physical door and farther from its upper part. Because of the perspective of the image, a physical object closer to the camera device appears larger in the captured image, while one farther away appears smaller; in the captured image, the feature lines within the angle range of the first candidate recognition object, together with their extensions, converge at one point, which is regarded as a vanishing point. To this end, step S124 further includes steps S1241 and S1242; that is, the processing device determines whether the first candidate recognition object is a physical door by performing the following steps S1241 and S1242.
Step S1241: according to the position information occupied by the first candidate recognition object, recognizing in the image at least two feature lines representing lines perpendicular to the travel plane.
Step S1242: based on the recognized feature lines, determining that the first candidate recognition object is physical object information representing a door. Here, according to the position information occupied by the first candidate recognition object, recognition is performed on the image region within the angle range related to that position information. If the straight lines on which at least three feature lines lie are recognized in that image region as intersecting at one point, those at least three lines are determined to be feature lines representing lines perpendicular to the travel plane; based on the recognized feature lines, the first candidate recognition object is then determined to be physical object information representing a door, that is, the physical object information of the first candidate recognition object is determined to be a door.
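The convergence test in step S1242 can be sketched as follows: given the extracted feature lines, compute their pairwise intersections and check whether they coincide within a tolerance (the vanishing point). This is a minimal geometric sketch under the assumption that lines are given in point-plus-direction form; it is not the patented implementation.

```python
def lines_converge(lines, tol=1e-6):
    """Check whether the straight lines through the given feature lines
    meet at a single (vanishing) point.

    Each line is ((x0, y0), (dx, dy)) -- a point and a direction. If at
    least three feature lines in the candidate's image region intersect
    at one point, they can be taken as the perspective projections of
    the vertical edges of a door frame (step S1242).
    """
    def intersect(l1, l2):
        (x1, y1), (dx1, dy1) = l1
        (x2, y2), (dx2, dy2) = l2
        det = dx1 * dy2 - dy1 * dx2
        if abs(det) < 1e-12:
            return None  # parallel lines never meet
        t = ((x2 - x1) * dy2 - (y2 - y1) * dx2) / det
        return (x1 + t * dx1, y1 + t * dy1)

    pts = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            p = intersect(lines[i], lines[j])
            if p is None:
                return False
            pts.append(p)
    px, py = pts[0]
    return all(abs(x - px) <= tol and abs(y - py) <= tol for x, y in pts)
```

In practice the feature lines themselves would come from a line extraction step such as the one named in S121 (line extraction), and the tolerance absorbs pixel noise.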
In some embodiments, the navigation method S1 of the mobile robot further includes a step of marking the determined physical object information and its position information in a map used for setting navigation routes. In some embodiments, the map is a grid map: the mapping relationship between the unit grid size and the unit size of the physical space is determined in advance, and the obtained physical object information and its position information are marked at the corresponding grid positions of the map. For example, a text description, an image identifier, or a number corresponding to each piece of physical object information may be marked in the map. The text description may be a name describing the type of each physical object, for example names of objects such as tables, chairs, flower pots, televisions, and refrigerators; for instance, the name description corresponding to a table is "table" and that corresponding to a television is "television". The image identifier may be an icon of the actual appearance corresponding to the physical object type. The number may be a numeric label assigned in advance to each piece of physical object information, for example "001" for a refrigerator, "002" for a chair, "003" for a table, and "004" for a door. The mobile robot is a cleaning robot. In some examples, the mobile robot designs a navigation route traversing a cleaning region based on a predetermined cleaning region; for example, according to the marking information of the physical objects located within the cleaning region in the map, the mobile robot determines a navigation route convenient for cleaning. The cleaning region includes, but is not limited to, at least one of the following: a cleaning region divided according to a preset number of grid cells, a cleaning region divided by room, and the like. For example, a cleaning region in the acquired map is marked with a table and its position information, so when designing the navigation route, a route that includes circling around the table legs is designed.
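The marking step above can be sketched as writing a label into the grid cell matched by the predetermined cell-to-metre mapping. This is a hypothetical sketch: the function name, the dict-based map, and the default cell size are all illustrative choices, not details from the patent.

```python
def mark_on_grid(grid_map, x_m, y_m, label, cell_size=0.05):
    """Mark recognised physical object info at the matching grid cell.

    grid_map  -- dict keyed by (row, col), standing in for a grid map
    x_m, y_m  -- object position in metres on the travel plane
    label     -- a name ("table"), an icon id, or a code such as "004"
                 for a door, as in the numbering example above
    cell_size -- metres of physical space per grid cell (the preset
                 mapping between unit grid size and unit physical size)
    """
    col = int(x_m / cell_size)
    row = int(y_m / cell_size)
    grid_map[(row, col)] = label
    return (row, col)
```

Route planning can then look up which cells of a cleaning region carry labels (for example a table) and adjust the traversal route accordingly.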
Using the map marked with the marking information of physical objects, the navigation method further includes step S13: determining the mobile robot's navigation route within the region according to the physical object information and its position information.
In some embodiments, the mobile robot is a cleaning robot; the cleaning regions of the mobile robot are divided according to the physical object information and the region where the cleaning robot is located, and a navigation route within the traveling region is designed. The position information includes the distance and angle of the measurement points of the physical objects relative to the cleaning robot. In some embodiments, a cleaning region is a room region determined based on the physical object information. For example, when a room region formed by the physical objects "wall" and "door" contains the physical object "bed", that room region is a bedroom; when a room region formed by the physical objects "wall" and "door" contains the physical object "sofa", that room region is a living room. Cleaning regions are divided according to the obtained regions such as the living room and the bedroom, and a cleaning instruction for traversing the living room and the bedroom in sequence can be sent to the cleaning robot. The cleaning robot may traverse the cleaning region in preset cleaning unit ranges. Each cleaning unit range may contain nine grid cells, and the next nine grid cells to be cleaned are planned for the cleaning robot each time. After those nine grid cells have been cleaned, the next cleaning unit range is planned for the cleaning robot. When the planned cleaning unit range cannot reach nine grid cells because it is blocked by an obstacle (for example, a wall or a cabinet), the obstacle is taken as the cut-off point, and the grid cells not blocked by the obstacle are taken as the cleaning range the cleaning robot traverses next. For example, when, because of a wall, the next planned cleaning area can reach only six grid cells, those six grid cells are taken as the cleaning area the cleaning robot traverses next, and so on, until the cleaning robot has traversed its current cleaning region.
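The unit planning just described can be sketched as taking up to nine cells in planning order and truncating at the first obstacle. This is a deliberately simplified one-dimensional sketch: the real planner works over a 2-D grid, and the function and parameter names here are hypothetical.

```python
def next_cleaning_unit(cells, blocked, unit_size=9):
    """Pick the next cleaning unit, truncated at obstacles.

    cells     -- candidate grid cells in planning order
    blocked   -- set of cells occupied by obstacles (walls, cabinets)
    unit_size -- preset cells per cleaning unit (nine in the text)

    The first blocked cell is the cut-off point, so the returned unit
    may hold fewer than unit_size cells (e.g. six when a wall blocks
    the remaining three).
    """
    unit = []
    for c in cells:
        if c in blocked:
            break  # obstacle is the cut-off point
        unit.append(c)
        if len(unit) == unit_size:
            break
    return unit
```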
In other embodiments, a cleaning region is a region divided according to a preset region range and the position information occupied by the physical object information within that range. When the determined physical object information includes a physical door, the method further includes a step of setting a virtual wall at the position information corresponding to the physical door, so that the cleaning regions of the cleaning robot are divided according to the virtual wall and the region where the cleaning robot is located, and a navigation route within the traveling region is designed. The preset region range is, for example, a user's home, which may include regions such as a living room, a bedroom, a kitchen, and a bathroom, each having a physical door. After the position information occupied by each physical object is obtained through the measurement device and the camera device, a virtual wall is set at the position information corresponding to each physical door; the virtual wall, combined with the physical walls connected to it, forms an independent region. The cleaning regions of the cleaning robot are then divided according to the virtual walls and the region where the cleaning robot is located; for example, according to the virtual walls, the region of the user's home is divided into four cleaning regions: the living room, the bedroom, the kitchen, and the bathroom. Traversal cleaning is performed in each cleaning region in a preset traversal manner.
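The region division just described (virtual walls at doors closing off rooms) amounts to a connected-components partition of the free space. The following flood-fill sketch illustrates this under the hypothetical encoding that 0 is a free cell and 1 is blocked, whether by a real wall or by a virtual wall placed at a recognised door; it is an illustration, not the patented algorithm.

```python
from collections import deque

def partition_regions(grid):
    """Partition free cells into cleaning regions separated by walls.

    grid -- 2-D list; 0 = free cell, 1 = blocked (real or virtual wall).
    Returns (region, count): a same-shaped grid of region ids (-1 for
    blocked cells) and the number of regions, e.g. four for a home
    split into living room, bedroom, kitchen, and bathroom.
    """
    h, w = len(grid), len(grid[0])
    region = [[-1] * w for _ in range(h)]
    next_id = 0
    for sy in range(h):
        for sx in range(w):
            if grid[sy][sx] != 0 or region[sy][sx] != -1:
                continue
            # 4-connected flood fill from an unlabelled free cell
            q = deque([(sy, sx)])
            region[sy][sx] = next_id
            while q:
                y, x = q.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and grid[ny][nx] == 0 and region[ny][nx] == -1):
                        region[ny][nx] = next_id
                        q.append((ny, nx))
            next_id += 1
    return region, next_id
```

Closing a doorway with a virtual wall simply sets its cells to 1, which splits one connected region into two and gives the robot separate regions to traverse in sequence.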
In other embodiments, after the determined physical object information and its location information are marked in the map used for setting navigation routes, the step of determining the navigation route of the mobile robot within the area according to the physical object information and its location information further includes: setting a navigation route to the physical object information based on instruction information containing the physical object information. In this embodiment, the physical object information is, for example, a name description of the type of each physical object, such as name descriptions of objects like tables, chairs, flower pots, televisions, refrigerators, and doors.
The methods of obtaining instruction information containing physical object information include, but are not limited to, voice and text. Depending on the user's purpose in operating the mobile robot, the instruction may also include an execution instruction for the mobile robot. For example, the instruction may further include a cleaning instruction, a patrol instruction, a remote control instruction, and the like.
In one embodiment, the step of setting a navigation route to the physical object information based on instruction information containing the physical object information may include: acquiring a piece of voice information and recognizing, from the voice information, an instruction containing physical object information. In one example, the mobile robot can directly receive the user's voice information and recognize the instruction containing the physical object information included therein. For example, the user can directly say "table" to the mobile robot, and after receiving the instruction, the mobile robot moves to the table to perform preset corresponding processing. The navigation route by which the mobile robot moves from its current position to the table can be planned according to the physical object information passed along the route; for example, the route from the current position to the table may pass a flower pot, a television, and a sofa. Taking a mobile robot as an example, the mobile robot is preset to plan a navigation route according to the constructed map after receiving a user instruction containing physical object information, so that the mobile robot moves to the location corresponding to that physical object information for cleaning. When the user says "table" to the mobile robot, the mobile robot, after receiving the voice instruction, forms a navigation route passing the flower pot, the television, and the sofa according to the constructed map, moves to the table along that route, and performs the cleaning operation. In addition, the voice information is not limited to short instructions that merely state the physical object information; it may also be a long instruction that includes the physical object information. For example, if the user says "go to the table", the mobile robot can recognize the physical object information "table" in the voice information and then perform the subsequent operations.
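Extracting the physical object information from a long voice instruction (after speech-to-text) can be sketched as a keyword match. This is purely illustrative: the vocabulary and function name are assumptions, not part of the claimed method.

```python
# Illustrative sketch: pull a known physical-object keyword out of a
# transcribed instruction such as "go to the table". The object
# vocabulary below is an assumption for the example.
KNOWN_OBJECTS = {"table", "chair", "flower pot", "tv", "refrigerator", "door"}

def extract_target(instruction_text):
    """Return the first known physical-object keyword found, or None."""
    text = instruction_text.lower()
    # Check longer names first so "flower pot" wins over any substring.
    for obj in sorted(KNOWN_OBJECTS, key=len, reverse=True):
        if obj in text:
            return obj
    return None

print(extract_target("Go to the table"))  # table
```

The matched keyword is then looked up in the map to obtain the goal position for route planning.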
In another embodiment, the step of setting a navigation route to the physical object information based on instruction information containing the physical object information further includes: obtaining the instruction containing the physical object information from a terminal device, where the terminal device is wirelessly connected to the mobile robot. In one example, the user inputs the instruction containing the physical object information in text form via the terminal device; for example, the user types "table" in a mobile phone app. In another example, the user inputs the instruction containing the physical object information by voice via the terminal device; for example, the user says "table" through the mobile phone app. The voice information input by the user is not limited to short instructions that merely state the physical object information; it may also be a long instruction that includes the physical object information. For example, if the user says "go to the table", the terminal device translates the speech into text, extracts keywords such as "table", matches the translated text to the corresponding instruction, and sends it to the mobile robot. Here, the terminal device can be connected to the mobile robot wirelessly, such as via a Wi-Fi connection, near-field communication, or Bluetooth pairing, so as to transmit the instructions received by the terminal device to the mobile robot for subsequent operations. The terminal device is, for example, a smart phone, a tablet computer, a wearable device, or another smart device with intelligent processing capability.
According to the navigation method of the mobile robot of the present application, the angle and distance of obstacles relative to the mobile robot within the area where the mobile robot is located can be measured by the distance measuring sensing device and the angle sensing device, or by the TOF measuring device, so that the position information of candidate recognition objects in the area can be accurately determined. The camera device is then caused to capture an image containing a candidate recognition object, the physical object information corresponding to the candidate recognition object is determined, and the navigation route of the mobile robot within the area is determined according to that physical object information and its position information. Having already obtained relatively accurate position information about the physical object information, the present application plans the navigation route directly based on that physical object information, which increases the accuracy of navigation route planning and improves the human-computer interactivity of a mobile robot running this navigation method.
For cleaning robots, while traversing the indoor floor to be cleaned, some cleaning robots design navigation routes that traverse each region by dividing regions according to preset length and width dimensions, so as to complete the cleaning work while moving. Other cleaning robots design navigation routes that traverse room regions by dividing regions room by room. For a robot that cleans the floor in the former manner, when a door is open the cleaning robot tends to move out of a room and start cleaning other rooms before that room has been fully cleaned. This is because, when dividing regions, the cleaning robot assigns priority to adjacent cleaning regions along a preset direction, which leaves areas in the room that require supplementary sweeping. For a robot that cleans the floor in the latter manner, when a door is open the cleaning robot tends to misjudge the extent of a single room, and likewise moves out of a room to clean other rooms before that room has been fully cleaned. This is because the cleaning robot mistakes the door for a passage within the room and divides the room incorrectly, which also leaves areas in the room that require supplementary sweeping. When the cleaning robot leaves too many supplementary sweeping areas, those areas must be cleaned one by one afterwards, which reduces the working efficiency of the cleaning robot. To reduce the number of supplementary sweeping areas left by the cleaning robot during cleaning, the present application further provides a method for dividing cleaning areas, which aims to identify the position of a physical door, particularly a physical door in an open state, relative to the cleaning robot, so that the physical door and the position it occupies are taken into account when dividing cleaning areas, thereby reducing the supplementary sweeping areas in a room and improving single-pass cleaning efficiency.
Referring to FIG. 10, FIG. 10 is a schematic flowchart of the method for dividing cleaning areas of the present application in a specific embodiment. The method S2 for dividing cleaning areas is applied to a cleaning robot and can be executed by the cleaning robot. The cleaning robot includes a processing device, a measuring device, a camera device, and the like. The processing device is an electronic device capable of numerical operations, logical operations, and data analysis, including but not limited to a CPU, GPU, or FPGA, together with volatile memory for temporarily storing intermediate data generated during operations and non-volatile memory for storing a program that can execute the method. The cleaning robot includes a measuring device and a camera device. The camera device includes, but is not limited to, a fisheye camera module or a wide-angle (or non-wide-angle) camera module. The measuring device may be installed on the body side of the cleaning robot and may be, for example, a scanning laser or a TOF sensor. The scanning laser includes an angle sensing device and a distance measuring sensor; the angle sensing device obtains the angle information corresponding to the distance information measured by the distance measuring sensor, and the distance between the obstacle measurement point at the current angle of the scanning laser and the distance measuring sensor is measured by laser or infrared. A scanning laser is a laser whose direction, starting point of propagation, or pattern changes over time relative to a fixed frame of reference. Based on the principle of laser ranging, the scanning laser emits through a rotatable optical component (such as a laser transmitter) to form a two-dimensional scanning surface, thereby realizing area scanning and contour measurement functions. The ranging principle of a scanning laser is as follows: the laser transmitter emits a laser pulse wave; when the wave hits an object, part of the energy returns; when the laser receiver receives the returned wave and the energy of the returned wave is sufficient to trigger a threshold, the scanning laser calculates its distance to the object. The scanning laser continuously emits laser pulse waves, which strike a mirror rotating at high speed and are emitted in all directions, thereby forming a scan of a two-dimensional area. Such a two-dimensional scan can, for example, realize the following two functions: 1) within the scanning range of the scanning laser, protection areas of different shapes are set, and an alarm signal is issued when an object enters such an area; 2) within the scanning range, the scanning laser outputs the distance of each obstacle measurement point, and from this distance information the outline of an object, its coordinate position, and the like can be calculated.
The TOF measuring device is based on TOF technology. TOF technology is one of the optical, non-contact three-dimensional depth measurement and perception methods: light pulses are continuously sent to the target, a sensor receives the light returning from the object, and the target distance is obtained by detecting the flight (round-trip) time of these transmitted and received light pulses. The illumination unit of a TOF device modulates the light at high frequency before emitting it, generally using an LED or a laser (including laser diodes and VCSELs (Vertical Cavity Surface Emitting Lasers)) to emit high-performance pulsed light. In the embodiments of the present application, a laser is used to emit the high-performance pulsed light. The pulse frequency can reach about 100 MHz, and infrared light is mainly used. TOF measuring devices apply two classes of principles. 1) A method based on an optical shutter: its main implementation is to emit a pulsed light wave and use the optical shutter to quickly and accurately obtain the time difference t of the light wave reflected back from the illuminated three-dimensional object; since the speed of light c is known, as long as the time difference between the emitted light and the received light is known, the one-way distance is given by d = t/2 · c. 2) A method based on continuous-wave intensity modulation: its main implementation is to emit a beam of illuminating light and measure the distance using the phase change between the emitted light wave signal and the reflected light wave signal. The wavelength of the illumination module is generally in the infrared band, and high-frequency modulation is required. The TOF photosensitive module is similar to an ordinary mobile phone camera module and is composed of a chip, a lens, a circuit board, and other components. Each pixel of the TOF photosensitive chip records the specific phase of the round-trip light wave between the camera and the object; a data processing unit extracts the phase difference and calculates the depth information by formula. The TOF measuring device is small in size and can directly output the depth data of the detected object, and because its depth calculation result is not affected by the grayscale and features of the object's surface, it can perform three-dimensional detection very accurately.
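The optical-shutter relation d = t/2 · c above can be checked with a short numeric example (the function name is illustrative):

```python
# Minimal numeric illustration of the optical-shutter TOF relation
# d = t/2 * c: half the round-trip time multiplied by the speed of light.
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_time_s):
    """Distance to the object from the round-trip time of the light pulse."""
    return round_trip_time_s / 2.0 * C

# A ~20 ns round trip corresponds to roughly 3 m.
print(round(tof_distance(20e-9), 3))  # 2.998
```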
Here, the method S2 for dividing cleaning areas includes steps S21 to S23 as shown in FIG. 10, which are used to obtain the position of a physical door in the area where the cleaning robot is located according to the measuring device and the camera device of the cleaning robot, and to constrain the cleaning range of the cleaning robot according to the physical door and its position information.
In step S21, the measuring device is caused to measure the position information of obstacles in the area where the cleaning robot is located relative to the cleaning robot, and the position information occupied by candidate doors in the area is determined. The area is, for example, a room. An obstacle may be any physical object in the room that can reflect the measurement medium. Using the measuring device mentioned in any of the above examples, the position information of an obstacle relative to the measuring device can be measured to obtain the contour information of the obstacle, and the contour information is used to determine the candidate doors in the area and the position information they occupy. The position information includes deflection angle information and corresponding distance information; together, the distance information and deflection angle information are referred to as the position information of the obstacle relative to the cleaning robot, or simply the position information of the obstacle.
Referring to FIG. 11, FIG. 11 is a schematic flowchart of determining the position information occupied by candidate doors in an area in a specific embodiment of the present application. That is, in some embodiments, the step of determining the position information occupied by candidate doors in the area in step S21 includes step S211 and step S212 shown in FIG. 11.
In step S211, the processing device may obtain a scan contour and the position information it occupies by measuring the position information of each obstacle measurement point in the area.
Here, the measuring device mentioned in any of the above examples is used to traversely measure obstacles in a two-dimensional or three-dimensional plane within the area, so as to obtain a scan contour composed of obstacle measurement points in that plane. An obstacle measurement point is the reflection point on an obstacle at which the measurement medium emitted by the ranging sensor is reflected, where the measurement medium is, for example, a laser beam, an LED light beam, or an infrared beam. The resulting scan contour is a lattice matrix composed of the position information of the obstacle measurement points, where the position information includes the distance information and deflection angle information of each obstacle measurement point relative to the measuring device, or simply the position information of the obstacle measurement point. The scan contour of the obstacles is constructed from the two-dimensional or three-dimensional array formed by the measured position information of the obstacle measurement points.
For an area array of position information, step S211 includes: fitting the traveling plane of the cleaning robot based on the area array of position information of the obstacle measurement points measured by the measuring device, and determining the scan contour on the traveling plane and the position information it occupies. Taking the case where the measuring device is a TOF measuring device that includes a laser sensor array as an example, the position information area array is measured by the laser sensor array.
To measure the obstacles around the cleaning robot, the measuring device is installed on the body side close to the traveling plane, for example on the side of the cleaning robot's body. Therefore, the acquired area array of obstacle measurement point position information may contain the position information of measurement points on various obstacles, such as the ground, objects placed on it, and objects suspended in the air. Given the installation position of the measuring device, the measured obstacle measurement points usually include the traveling plane of the cleaning robot, such as the ground. The plane formed by obstacle measurement points is determined by plane fitting, the resulting plane is taken as the traveling plane, and then, according to the determined traveling plane, the scan contour of objects placed on the traveling plane and the position information it occupies are determined. For example, the position information of a number of obstacle measurement points is randomly selected from the position information area array, and a plane is selected by plane fitting such that the number of obstacle measurement points constituting that plane is the largest; the obstacle measurement points in the position information area array that lie on the selected plane are taken as the obstacle measurement points on the traveling plane of the cleaning robot. Then, according to the position information of each pixel in the position information area array, the position information of the pixels located above the traveling plane is projected onto the traveling plane, thereby obtaining the scan contour on the traveling plane and the position information it occupies. For example, please refer to FIG. 3 and FIG. 4, where FIG. 3 only schematically shows the position information area array obtained according to the installation position of the measuring device in the cleaning robot, and FIG. 4 is a schematic diagram of the projection of stool feet onto the ground determined based on the position information area array of FIG. 3. According to the aforementioned fitting and projection method, and as shown in FIGS. 3 and 4, after the processing device projects the position information from the height of the stool feet down to the ground onto the traveling plane according to the obtained position information area array, block-shaped projections corresponding to the stool feet are obtained.
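The fit-and-project step above can be sketched as follows. This is a simplified illustration under an assumption the patent does not make: that the traveling plane is roughly horizontal in the sensor frame, so the dominant plane can be found by histogramming the z coordinate instead of a full random plane fit. Names and thresholds are illustrative.

```python
# Sketch: pick the dominant horizontal plane from measured 3D points
# (the z-bin containing the most points), then project everything above
# it straight down to obtain the ground-level contour, as with the
# stool-foot example of FIGS. 3 and 4.
from collections import Counter

def project_to_travel_plane(points, z_resolution=0.01):
    """points: list of (x, y, z). Returns (floor_z, projected 2D points)."""
    bins = Counter(round(z / z_resolution) for _, _, z in points)
    floor_z = bins.most_common(1)[0][0] * z_resolution
    projected = [(x, y) for x, y, z in points if z > floor_z + z_resolution]
    return floor_z, projected

# 100 floor samples at z = 0 plus a vertical "stool leg" above (0.5, 0.5).
pts = [(x * 0.1, y * 0.1, 0.0) for x in range(10) for y in range(10)]
pts += [(0.5, 0.5, h * 0.1) for h in range(1, 5)]
floor_z, proj = project_to_travel_plane(pts)
print(floor_z, len(proj))  # 0.0 4
```

The leg's four samples all project onto the same ground location, giving the block-shaped projection described above.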
For a line array of position information, step S211 further includes: determining the scan contour on the traveling plane and the position information it occupies based on the line array of position information parallel to the traveling plane measured by the measuring device. Taking the case where the measuring device is a scanning laser as an example, the line array of position information is measured by the scanning laser.
Here, the scanning laser may be installed at the top middle, top edge, or body side of the cleaning robot. The laser emission direction of the scanning laser may be parallel to the traveling plane, and the scanning laser performs a rotating scan through 360 degrees at the position of the cleaning robot. The angle of each obstacle measurement point relative to the robot is obtained through the angle sensing device of the scanning laser, and the distance between each obstacle measurement point and the cleaning robot is measured through the distance measuring sensing device of the scanning laser (a laser or infrared ranging device), thereby obtaining a line array of position information parallel to the traveling plane. Since the line array of position information is parallel to the traveling plane, the scan contour on the traveling plane and the position information it occupies can be determined directly from the line array. Taking the mobile robot being a cleaning robot as an example, since the distance between the scanning laser and the ground is comparable to the height of the cleaning robot, the line array of position information obtained by the scanning laser can represent the position information of obstacles on the ground that hinder the movement of the cleaning robot.
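The (angle, distance) line array delivered by such a 360-degree scan can be converted to contour points on the traveling plane with the usual polar-to-Cartesian relation. A minimal sketch (the reading format is an assumption):

```python
# Illustrative conversion of the scanning laser's polar line array
# (angle in degrees, distance in meters) into Cartesian contour points
# in the robot-centered frame on the travel plane.
import math

def polar_line_array_to_points(readings):
    """readings: iterable of (angle_deg, distance_m) from a 360° scan."""
    return [(d * math.cos(math.radians(a)), d * math.sin(math.radians(a)))
            for a, d in readings]

scan = [(0.0, 1.0), (90.0, 2.0), (180.0, 1.0)]
pts = polar_line_array_to_points(scan)
print([(round(x, 3), round(y, 3)) for x, y in pts])
# [(1.0, 0.0), (0.0, 2.0), (-1.0, 0.0)]
```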
In some practical applications, the range of the measuring device can reach, for example, 8 meters, whereas the camera device usually cannot capture a clear image at that distance. To match the two devices, in some embodiments the processing device causes the measuring device to measure the position information of obstacles relative to the cleaning robot within the field of view of the camera device, so that the camera device captures images containing the obstacles measured by the measuring device. For example, the processing device screens the position information measured by the measuring device, i.e., it discards the position information of obstacle measurement points in areas beyond the imaging range of the camera device, so that the position information of obstacles relative to the cleaning robot within the field of view of the camera device is obtained from the remaining valid position information. In other words, the scan contour and the position information it occupies are obtained from the valid position information. In other embodiments, the processing device causes the measuring device to acquire position information within a preset distance, where the preset distance is a fixed value. For example, the preset distance is determined according to typical indoor floor areas to ensure that the measuring device can acquire the position information of the obstacles in a room, and the scan contour and the position information it occupies are obtained according to the acquired position information.
After the scan contour and the position information it occupies are obtained, the processing device can determine the candidate recognition objects depicted by the scan contour and their position information using combinations of feature lines, feature points, and the like. In some embodiments, the processing device also linearizes the lattice information constituting the scan contour using a linearization algorithm to obtain a scan contour described by long and short lines, where examples of the linearization algorithm include dilation and erosion algorithms. Referring to FIG. 5 and FIG. 6, FIG. 5 only schematically shows a top view of the scan contour measured by the measuring device and projected onto the traveling plane, and FIG. 6 shows a top view of the scan contour of FIG. 5 after linearization, projected onto the traveling plane. In FIG. 5, the originally acquired scan contour contains the contour part B1-B2, formed by obstacle measurement points whose spacing is less than a preset threshold, and the contour parts B2-B3 and B4-B5, formed by obstacle measurement points whose spacing is greater than the preset threshold. Correspondingly, as can be seen from FIG. 6, the scan contour processed by the linearization algorithm contains the contour part A1-A2, formed by a continuous long line, and the contour parts A2-A3 and A4-A5, formed by discontinuous short lines.
Extending the examples shown in FIGS. 5 and 6 to a more general scan contour, a scan contour can be composed of continuous parts and discontinuous parts. In some embodiments, the condition pre1 for constituting a continuous part includes at least one or a combination of the following: 1) a contour part formed by obstacle measurement points in the scan contour whose spacing between adjacent points is less than a preset length threshold and whose number is greater than a preset number threshold, for example B1-B2 shown in FIG. 5; 2) a contour part formed by a continuous line in the scan contour whose line length is greater than a preset length threshold, for example A1-A2 shown in FIG. 6; 3) among the obstacle measurement points on a contour part determined based on 1) and/or 2), the position information of each point satisfies a preset continuous-change condition, where the continuous-change condition includes: the difference between the distance information of adjacent obstacle measurement points is less than a preset distance mutation threshold. For example, the contour part B4-B5 shown in FIG. 5 and the contour part A4-A5 shown in FIG. 6 do not constitute continuous parts. Here, the aforementioned complete scan contour is composed of discontinuous parts and continuous parts; therefore, the discontinuous parts and continuous parts can be regarded as being in a logical "or" relationship. For example, the contour parts B2-B3 and B4-B5 in FIG. 5 are discontinuous parts, and the contour parts A2-A3 and A4-A5 in FIG. 6 are discontinuous parts.
In other embodiments, the condition pre2 for constituting a discontinuous portion includes at least one or a combination of the following: 1) a contour portion in which the spacing between adjacent obstacle measurement points exceeds a preset length threshold and whose obstacle measurement points at both ends connect to continuous portions, such as B2-B3 and B4-B5 in Fig. 5; 2) a contour portion formed by at least one continuous short line in the scan contour whose length is smaller than the preset length threshold, such as A2-A3 and A4-A5 in Fig. 6. Here, the aforementioned complete scan contour is composed of discontinuous portions and continuous portions; the discontinuous and continuous portions can therefore be regarded as being in a logical "OR" relationship. For example, the contour portions B2-B3 and B4-B5 in Fig. 5 are discontinuous portions, as are the contour portions A2-A3 and A4-A5 in Fig. 6.
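The spacing-based split between conditions pre1 and pre2 above can be sketched as follows. The threshold values, minimum point count, and function names are illustrative assumptions, not values from the disclosed embodiments:

```python
import math

def segment_contour(points, gap_thresh=0.3, min_count=3):
    """Split an ordered list of (x, y) obstacle measurement points into
    segments, labelling each 'continuous' or 'discontinuous' by the
    spacing between adjacent points (a sketch of conditions pre1/pre2)."""
    segments, current = [], [points[0]]
    for prev, cur in zip(points, points[1:]):
        if math.dist(prev, cur) < gap_thresh:
            current.append(cur)
        else:
            # spacing exceeds the threshold: close the current run and
            # record the jump itself as a discontinuous portion (a gap)
            if len(current) >= min_count:
                segments.append(('continuous', current))
            else:
                segments.append(('discontinuous', current))
            segments.append(('discontinuous', [prev, cur]))
            current = [cur]
    segments.append(('continuous' if len(current) >= min_count
                     else 'discontinuous', current))
    return segments

# A dense wall run (like B1-B2) followed by a wide jump (like B2-B3)
pts = [(0, 0), (0.1, 0), (0.2, 0), (0.3, 0), (1.5, 0), (1.6, 0), (1.7, 0)]
segs = segment_contour(pts)
```

The jump from (0.3, 0) to (1.5, 0) becomes its own discontinuous portion, mirroring how a doorway gap sits between two wall segments.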
To identify physical doors in the open state using position information and images, so that the cleaning robot finishes cleaning one room before cleaning the next and thereby reduces the supplementary-sweep area, step S21 includes step S212: determining the position information occupied by each candidate door according to the discontinuous portions on the scan contour. Here, the candidate doors mainly serve to provide further screening and confirmation for physical doors in the open state.
Here, the processing device segments the scan contour at the boundaries of the discontinuous portions, obtaining contour portions composed of continuous portions and contour portions composed of discontinuous portions. In some embodiments, each discontinuous portion is taken as a candidate recognition object, and the position information occupied by each candidate door is determined from the position information of the obstacle measurement points in that discontinuous portion. In other examples, combinations of preset feature lines, feature points, and the like are used to take a discontinuous portion as a separate candidate recognition object, and the position information occupied by the corresponding candidate door is determined from the position information of the obstacle measurement points in that discontinuous portion. It should be understood that the processing device may also segment the scan contour at the boundaries of the continuous portions; this should be regarded as the same as, or similar to, segmenting at the boundaries of the discontinuous portions.
A discontinuous portion of the scan contour may form a gap corresponding to a physical object. For example, the physical object may be a door: when the door is open, it connects the two spatial regions inside and outside the room; when closed, it separates them. In fact, depending on the positional relationship between the cleaning robot and the physical objects in the room, the shapes of those objects, and so on, a gap formed on the scan contour may also be caused by the clearance between two physical objects or by the shape of a single physical object. For example, a gap in the scan contour may arise from the space between two wardrobes or between a wardrobe and a wall; as another example, a gap may arise from the space between table legs. Therefore, the obtained gaps need further screening and identification.
To improve the efficiency of identifying candidate doors, in some examples the gaps on the obtained scan contour and the position information of the continuous portions forming both ends of each gap are analyzed further, so that unsuitable gaps can be screened out. To this end, a qualifying gap is restricted to one formed adjoining a continuous portion; isolated gaps not attached to any continuous portion, such as the gaps formed between table legs or stool legs, do not belong to the gaps of candidate doors, so the physical objects corresponding to these isolated gaps are not candidate doors and must be filtered out. In addition, gaps that are too small or too large should likewise not belong to the gaps of candidate doors. Based on the above, step S2121 may be performed.
In step S2121, the gaps formed by the discontinuous portions are screened according to preset screening conditions, and the gaps passing the screening are determined to belong to candidate doors. The screening conditions include: the gap lies on the extension line of the continuous portion on at least one adjacent side, and/or a preset gap-width threshold.
In some examples, the screening condition includes that a gap lies on the extension line of the continuous portion on at least one adjacent side; such a gap is one corresponding to a candidate door. Take a gap corresponding to a physical door as an example: since a physical door is generally mounted in a wall, at least one side of the wall framing the door lies on the extension line of the continuous portion adjacent to the gap, so the gap formed when the physical door is open is a gap corresponding to a candidate door. By contrast, the gap corresponding to two legs of a stool generally stands alone in physical space; the two stool legs do not lie on the extension line of any continuous portion, so the gap is isolated. The objects corresponding to such isolated gaps are excluded from the candidate doors, and the corresponding gaps are screened out.
In still other examples, the screening condition includes a preset gap-width threshold, which may be a single value or a value interval. For example, the width between the frames of a door is generally between 60 cm and 120 cm, so this parameter can also serve as a condition for screening candidate doors: if the width of a gap falls within the preset gap-width threshold (for example, 60 cm to 120 cm), the gap is one corresponding to a candidate door. The processing device computes the gap width from the position information of the obstacle measurement points forming the gap and screens out gaps according to this condition; a gap that is too large or too small is not a gap corresponding to a candidate door.
In other examples, the screening conditions require both that the gap lies on the extension line of the continuous portion on at least one adjacent side and that the width of the gap falls within the preset gap-width threshold range. The processing device determines, according to these conditions, the candidate door corresponding to each gap. In other words, a gap on the scan contour whose two sides both fail to lie on the extension line of the adjacent continuous portions, or whose width falls outside the preset gap-width threshold range, is determined to be a gap that needs to be screened out.
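The screening of step S2121 can be sketched as a predicate over gap endpoints. The 0.6–1.2 m width interval follows the 60–120 cm figure in the text; the line representation (normalized `a*x + b*y + c = 0`), tolerance, and function names are illustrative assumptions:

```python
import math

def is_candidate_door(gap, neighbor_segments, w_min=0.6, w_max=1.2,
                      line_tol=0.1):
    """gap: ((x1, y1), (x2, y2)) endpoints of a discontinuous portion.
    neighbor_segments: fitted lines (a, b, c) with a*x + b*y + c = 0 and
    ||(a, b)|| = 1 for the adjacent continuous portions.
    A gap qualifies if its width lies in [w_min, w_max] and at least one
    endpoint lies on the extension line of an adjacent continuous part."""
    (x1, y1), (x2, y2) = gap
    width = math.hypot(x2 - x1, y2 - y1)
    if not (w_min <= width <= w_max):
        return False
    # point-to-line distance test against each neighbouring wall line
    return any(abs(a * x + b * y + c) < line_tol
               for (a, b, c) in neighbor_segments
               for (x, y) in gap)

# A wall running along y = 0, expressed as 0*x + 1*y + 0 = 0
wall = [(0.0, 1.0, 0.0)]
```

A 0.9 m gap on the wall line passes; a 2 m gap, or a 0.9 m gap floating off the wall line (such as between stool legs), is screened out.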
To ensure that the candidate doors are accurately identified, referring to Fig. 10, the method further includes step S22: according to the determined position information occupied by a candidate door, causing the camera device to acquire an image containing the candidate door, and determining that the candidate door is a physical door.
Here, the mobile robot includes at least one camera device. The camera device captures, at the robot's current position, the physical objects within its field of view and projects them onto the robot's travel plane to obtain a projection image. For example, the mobile robot may contain one camera device arranged on its top, shoulder, or back, with its main optical axis perpendicular to the robot's travel plane. As another example, the mobile robot may contain multiple camera devices, one of which has its main optical axis perpendicular to the travel plane. As yet another example, the camera device of the cleaning robot may be embedded in the side or top of the body with its main optical axis at a non-perpendicular tilt angle to the travel plane, for example a tilt angle between 0° and 60°.
In some embodiments, the main optical axis of the camera device is perpendicular to the travel plane, so the plane of the two-dimensional image captured by the camera is parallel to the travel plane. Please refer to Fig. 7, which shows the mobile robot and a physical object a in the corresponding physical space when the robot captures a projection image containing object a. The main optical axis of at least one camera device of the mobile robot in Fig. 7 is perpendicular to the robot's travel plane. When the camera captures a projection image, the position D1 at which the captured object a is projected into the projection image M1 and the position D2 at which the same object a is projected onto the travel plane M2 subtend the same angle relative to the robot's position D. By analogy, the position of a physical object in the projection image is used to represent the position at which that object is projected onto the robot's travel plane, and the angle of the object's position in the projection image relative to the robot's movement direction is used to characterize the angle, relative to the movement direction, of the object's projection onto the travel plane.
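Under the perpendicular-axis geometry of Fig. 7, the bearing of an image point relative to the robot's movement direction can be read directly from its pixel offset from the image centre. The following small sketch assumes the image centre corresponds to the robot's position D and the movement direction points up the image (decreasing row index) — both are assumptions for illustration:

```python
import math

def bearing_in_image(px, py, cx, cy):
    """Angle in degrees of image point (px, py) relative to the robot's
    movement direction, assuming the camera's main optical axis is
    perpendicular to the travel plane, the image centre (cx, cy) is the
    robot position D, and the movement direction is 'up' in the image."""
    dx, dy = px - cx, cy - py   # flip the row axis so +dy means forward
    return math.degrees(math.atan2(dx, dy))

# A point straight ahead of the robot has bearing 0; a point up and to
# the right of centre has a positive bearing.
```

By the equal-angle property of Fig. 7, this image-space bearing equals the bearing of the object's projection on the travel plane.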
Here, the processing device causes the measuring device to measure the position information, relative to the cleaning robot, of obstacles within the camera's field of view, and causes the camera device to capture an image of the candidate door projected onto the cleaning robot's travel plane. The position of the candidate door in that image represents the position at which the candidate door is projected onto the robot's travel plane, and the angle of the candidate door's position in the image relative to the robot's movement direction characterizes the angle, relative to the movement direction, of the candidate door's projection onto the travel plane.
In other embodiments, the cleaning robot further includes a moving device. When the position information of the candidate door measured by the measuring device lies outside the camera's field of view, the processing device controls the moving device according to the camera's imaging parameters — that is, it controls the cleaning robot's movement according to the obtained position information of the candidate door so as to capture an image containing the candidate door. The imaging parameters include the field-of-view range, the zoom interval, and so on. For example, where the camera's main optical axis is perpendicular to the travel plane, the processing device controls the moving device to move in the direction indicated by the angle information of the candidate door provided by the measuring device, and causes the camera to capture an image of the candidate door projected onto the cleaning robot's travel plane. As another example, where the camera's main optical axis forms the aforementioned tilt angle with the travel plane, the processing device controls the moving device to move in the direction indicated by the angle information of the candidate door provided by the measuring device, and causes the camera to capture an image containing the candidate door. Here, the moving device of the cleaning robot may include a walking mechanism and a walking drive mechanism, where the walking mechanism may be arranged at the bottom of the robot body and the walking drive mechanism is built into the robot body. The walking mechanism may, for example, include a combination of two straight-travel wheels and at least one auxiliary steering wheel, the two straight-travel wheels being arranged on opposite sides of the bottom of the robot body and driven independently by two corresponding walking drive mechanisms: the left straight-travel wheel by the left drive mechanism and the right straight-travel wheel by the right drive mechanism. The universal wheel or straight-travel wheel may have a biased drop-down suspension, fastened in a movable manner — for example rotatably mounted on the robot body — and receiving a spring bias directed downward and away from the robot body. The spring bias allows the universal wheel or straight-travel wheel to maintain contact and traction with the ground with a certain ground force. In practical applications, when the at least one auxiliary steering wheel is not engaged, the two straight-travel wheels are mainly used for moving forward and backward; when the at least one auxiliary steering wheel engages and cooperates with the two straight-travel wheels, movements such as steering and rotation can be achieved. The walking drive mechanism may include a drive motor and a control circuit controlling the drive motor; the drive motor can drive the walking wheels of the walking mechanism to move. In a specific implementation, the drive motor may be, for example, a reversible drive motor, and a transmission mechanism may further be arranged between the drive motor and the axle of the walking wheel. The walking drive mechanism may be detachably mounted on the robot body for convenient disassembly, assembly, and maintenance.
In still other embodiments, step S22 includes steps S221 and S222. In step S221, the image region within the corresponding angle range in the image is determined according to the angle range in the position information occupied by the candidate door. In step S222, feature recognition is performed on that image region to determine that the candidate door is a physical door. In this embodiment, the main optical axis of the camera is perpendicular to the travel plane and, referring to Fig. 3 and its related description, the angle range of the candidate door in the image characterizes the angle range of the projection, onto the robot's travel plane, of the physical object corresponding to the candidate door; the angle range in the position information of the candidate door measured by the measuring device is used to determine the image region within the corresponding angle range in the image. Step S221 further includes steps S2211 and S2212.
In step S2211, at least one angle range is determined based on the position information of the two ends of the candidate door; in step S2212, the image region used to identify whether the candidate door is a physical door is determined from the image according to the determined angle range.
In some embodiments, according to the position information of the two ends of the candidate door, one angle range containing the position information of both ends is determined — that is, the angle range covers the entire gap of the candidate door — and the region within this angle range serves as the image region for identifying whether the candidate door is a physical door. For example, please refer to Fig. 8, a schematic diagram of a scene application in a specific embodiment of this application. In Fig. 8, the two ends of the candidate door 81 form angles of 10 degrees and 25 degrees with the movement direction of the cleaning robot 82, so the region within the angle range of 10 to 25 degrees is selected as the image region for identifying whether the candidate door is a physical door. In other embodiments, a small single-ended angle range is selected for each of the two ends of the candidate door — that is, two small angle ranges about the two ends are selected — and these serve as the image regions for identifying whether the candidate door is a physical door. For example, please refer to Fig. 9, a schematic diagram of a scene application in a specific embodiment of this application. In Fig. 9, the two ends of the candidate door 91 form angles of 10 degrees and 25 degrees with the movement direction of the cleaning robot 92. At the end forming a 10-degree angle with the movement direction of the cleaning robot 92, a first angle range of 9 to 11 degrees relative to the movement direction is selected; at the other end, forming a 25-degree angle, a second angle range of 24 to 26 degrees is selected. The first and second angle ranges are selected as the image regions for identifying whether the candidate door 91 is a physical door.
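For step S2212, a measured angle range can be mapped onto a band of image columns. The sketch below assumes a simple linear angle-to-pixel model with an illustrative 90° horizontal field of view and 640-pixel image width — these values and the function name are assumptions, not taken from the embodiments:

```python
def angle_range_to_columns(theta_lo, theta_hi, img_width=640, hfov=90.0):
    """Map an angle range (degrees relative to the movement direction,
    0 degrees at the image centre) to a half-open [col_lo, col_hi)
    column interval, assuming angle maps linearly to pixel column."""
    px_per_deg = img_width / hfov
    centre = img_width / 2
    col_lo = int(centre + theta_lo * px_per_deg)
    col_hi = int(centre + theta_hi * px_per_deg)
    # clamp to the image bounds
    return max(0, col_lo), min(img_width, col_hi)

# The 10-to-25-degree range from the Fig. 8 example:
cols = angle_range_to_columns(10, 25)
```

Feature recognition then runs only on this column band rather than on the full image, which is the efficiency gain the step aims at.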
In some examples, given the angular relationship between the camera's main optical axis and the cleaning robot's travel plane, the projection of a physical door's frame in the image exhibits a vanishing-point characteristic; if a candidate door obtained by the measuring device can be identified as a physical door, the selected image region will contain feature lines matching this characteristic. Take the case where the main optical axis is perpendicular to the travel plane as an example: in practical applications the cleaning robot is generally low, so the camera views the door from below. When the candidate door is a physical door to be identified, the camera is closer to the lower part of the door and farther from its upper part. Owing to the perspective relationship of the image, a physical object closer to the camera appears larger in the captured image while one farther away appears smaller, so multiple feature lines within the angle range of the candidate door will converge toward a single vanishing point. To this end, step S222 may include step S2221, by which the processing device determines whether the candidate door is a physical door. In step S2221, at least two feature lines representing directions perpendicular to the travel plane are identified in the image, and the candidate door is determined to be a physical door based on the identified feature lines. Here, according to the position information occupied by the candidate door, recognition is performed on the image region within the angle range related to that position information; if the straight lines on which at least three feature lines lie are identified in the image region as intersecting at one point, those at least three feature lines are determined to represent directions perpendicular to the travel plane, and the candidate door is then determined to be a physical door based on the identified feature lines.
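The convergence test of step S2221 can be sketched by intersecting detected line segments pairwise and checking that all intersection points cluster near one vanishing point. The `a*x + b*y = c` line representation, tolerance, and example lines are illustrative assumptions:

```python
def intersect(l1, l2):
    """Intersection of lines a*x + b*y = c given as (a, b, c) tuples."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None                      # parallel lines never meet
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

def converge_at_point(lines, tol=5.0):
    """True if every pair of lines intersects within tol of the first
    pair's intersection -- the vanishing-point check of step S2221."""
    pts = [intersect(p, q) for i, p in enumerate(lines)
           for q in lines[i + 1:]]
    if any(p is None for p in pts):
        return False
    x0, y0 = pts[0]
    return all(abs(x - x0) <= tol and abs(y - y0) <= tol for x, y in pts)

# Three "door frame" feature lines all passing through (100, 0):
# x = 100, y = x - 100, and y = -x + 100
frame = [(1, 0, 100), (1, -1, 100), (1, 1, 100)]
```

Lines that are parallel, or that intersect at scattered points, fail the test and the corresponding candidate door is rejected.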
In other embodiments, the candidate door may also be determined to be a physical door based on preset, known feature information of physical doors. The feature information of a physical door may be image features of the door — features capable of identifying a physical door in an image, for example contour features of the door. The image features include preset graphical features corresponding to physical doors, or image features obtained via image-processing algorithms. The image-processing algorithms include but are not limited to at least one of the following: grayscale processing, sharpening, contour extraction, corner extraction, line extraction, and image-processing algorithms obtained through machine learning. The latter include, but are not limited to, neural-network algorithms, clustering algorithms, and so on. A preset image-recognition algorithm is used to construct a mapping relationship between candidate doors in the image and known physical doors, so as to determine that a candidate door is a physical door. For example, the program stored in the memory contains the network structure and connection scheme of a neural-network model. In some embodiments, the neural-network model may be a convolutional neural network whose structure includes an input layer, at least one hidden layer, and at least one output layer. The input layer receives the captured image or a preprocessed image; the hidden layer contains a convolution layer and an activation-function layer, and may even contain at least one of a normalization layer, a pooling layer, and a fusion layer; the output layer outputs an image labeled with object-class tags. The connection scheme is determined by the connection relationships between the layers of the neural-network model — for example, front-to-back layer connections set up for data transmission, connections to the previous layer's data set according to the convolution-kernel size in each hidden layer, full connections, and so on. The neural-network model classifies each object recognized in the image. The feature information corresponding to a physical door may include two feature lines perpendicular to the cleaning robot's travel plane with the distance between them falling within a preset width-threshold range; that is, the image-recognition algorithm is used to construct the mapping relationship between candidate doors and physical doors in the image, thereby determining that a candidate door is a physical door.
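The geometric door check at the end of the paragraph — two near-vertical feature lines separated by a distance within a preset width range — can be sketched directly. The line representation (x-position plus deviation-from-vertical angle), the thresholds, and the sample detections are all illustrative assumptions:

```python
def looks_like_door(lines, w_min=60, w_max=120, vert_tol=5.0):
    """lines: list of (x_position, angle_from_vertical_deg) pairs for
    detected feature lines.  Returns True if two near-vertical lines
    are separated by a distance inside the door-width interval."""
    vertical = sorted(x for x, ang in lines if abs(ang) <= vert_tol)
    # any pair of vertical lines spaced like a door frame?
    return any(w_min <= b - a <= w_max
               for i, a in enumerate(vertical) for b in vertical[i + 1:])

# Two vertical frame lines 90 units apart, plus one slanted distractor
detections = [(100, 1.0), (190, -2.0), (140, 30.0)]
```

The slanted line is ignored, and the two vertical lines 90 units apart satisfy the door-width condition.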
In some embodiments, the method S2 for dividing cleaning regions further includes a step of marking the determined physical door and its position information in the map used for setting cleaning routes. In some embodiments, the map is a grid map; a mapping relationship between the unit grid size and the unit size of physical space is predetermined, and the obtained physical door and its position information are marked at the corresponding grid positions of the map. For example, a textual description, an image identifier, or a number corresponding to the physical door can be marked in the map. The textual description may describe the name of the physical door, for example "door"; the image identifier may be an icon corresponding to the actual appearance of the door; the number may be a preset numeric label for the door, for example "001". In some examples, the cleaning robot designs a navigation route traversing a predetermined cleaning region based on that region, and determines a cleaning route convenient for cleaning according to the marking information of the physical doors located within the cleaning region on the map.
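A minimal sketch of marking a physical door into a grid map follows. The 5 cm cell size, map origin, and the "001" label reuse the text's conventions where available; everything else (data layout, function names) is an assumption:

```python
def world_to_grid(x, y, cell=0.05, origin=(0.0, 0.0)):
    """Map a world coordinate (metres) to a grid-cell index using a
    predetermined unit-grid-to-physical-space mapping (5 cm per cell)."""
    ox, oy = origin
    return round((x - ox) / cell), round((y - oy) / cell)

def mark_door(grid_marks, door_endpoints, label='001'):
    """Mark every grid cell along the door's span with the door label."""
    (x1, y1), (x2, y2) = door_endpoints
    (gx1, gy1), (gx2, gy2) = world_to_grid(x1, y1), world_to_grid(x2, y2)
    steps = max(abs(gx2 - gx1), abs(gy2 - gy1), 1)
    for i in range(steps + 1):           # linear interpolation over cells
        gx = gx1 + round(i * (gx2 - gx1) / steps)
        gy = gy1 + round(i * (gy2 - gy1) / steps)
        grid_marks[(gx, gy)] = label
    return grid_marks

# A 1 m door spanning from (1.0, 2.0) to (2.0, 2.0) in world coordinates
marks = mark_door({}, ((1.0, 2.0), (2.0, 2.0)))
```

Route planning can then look up `marks` to find door cells along a candidate cleaning route.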
Using the map marked with the marking information of the physical objects, the method for dividing a cleaning area further includes step S23: dividing the cleaning area of the cleaning robot according to the physical door and its position information, so as to constrain the walking range of the cleaning robot. In some embodiments, a virtual wall is set at the physical door, and the cleaning area of the cleaning robot is divided according to the virtual wall and the area in which the cleaning robot is located. In some embodiments, the cleaning area is a room area determined based on the physical door. For example, each room area is enclosed by the virtual wall together with physical walls, so that multiple room areas can be determined from the set virtual walls and the measured physical walls, and the cleaning area is then divided within the area in which the cleaning robot is located. The cleaning robot may traverse the cleaning area in preset cleaning units, each cleaning unit covering nine grid cells: each time, the next nine grid cells to be cleaned are planned for the cleaning robot, and after those nine grid cells have been cleaned, the next cleaning unit is planned. When a planned cleaning unit cannot reach nine grid cells because of an obstacle (such as a wall or a cabinet), the obstacle serves as the cut-off point, and the grid cells not blocked by the obstacle are taken as the cleaning range that the cleaning robot traverses next. For example, when the next planned cleaning area can only reach six grid cells due to a wall, those six grid cells are taken as the next cleaning area to be traversed, and so on, until the cleaning robot has traversed the entire cleaning area in which it is currently located.
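The 3x3 cleaning-unit planning described above can be sketched as follows. The function name and the 0/1 grid encoding are assumptions for illustration; the patent does not prescribe a specific implementation.

```python
# Plans one cleaning unit: nominally nine grid cells, but cells blocked by
# an obstacle (wall, cabinet) are cut off and only the free cells remain.

FREE, OBSTACLE = 0, 1

def plan_cleaning_unit(grid, top, left):
    """Return the free cells of the 3x3 block whose corner is (top, left)."""
    cells = []
    for r in range(top, top + 3):
        for c in range(left, left + 3):
            in_bounds = 0 <= r < len(grid) and 0 <= c < len(grid[0])
            if in_bounds and grid[r][c] == FREE:
                cells.append((r, c))
    return cells

# A wall occupies the right-hand column of this block, so only six of the
# nine cells can be planned -- matching the six-cell example in the text.
grid = [
    [0, 0, 1],
    [0, 0, 1],
    [0, 0, 1],
]
unit = plan_cleaning_unit(grid, 0, 0)
print(len(unit))  # 6
```

After the robot finishes these cells, the same routine would be called again with the corner of the next planned unit.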
In other embodiments, the cleaning area is an area divided according to a preset area range and the position information occupied by the physical doors located within that range. For example, the cleaning area of the cleaning robot is divided according to the virtual walls and the area in which the cleaning robot is located, and the walking range of the cleaning robot is constrained accordingly. The preset area range is, for example, the user's home, which may include four areas: a living room, a bedroom, a kitchen, and a bathroom, each having a physical door. After the position information occupied by each physical object is obtained through the measuring device and the camera device, a virtual wall is set at the position corresponding to each physical door, and the combination of a virtual wall and the physical walls connected to it encloses an independent area. The cleaning area of the cleaning robot is then divided according to the virtual walls and the area in which the cleaning robot is located; for example, the area of the user's home is divided into four cleaning areas according to the virtual walls, namely the living room, the bedroom, the kitchen, and the bathroom. Traversal cleaning is then performed within each cleaning area in a preset traversal manner.
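Once virtual walls are placed at the doors, each room becomes a connected component of free cells, which a flood fill can label. The sketch below shows one plausible realization of this division step under that assumption; it is not the patent's exact algorithm.

```python
# Labels each enclosed free region of a grid with a distinct room id, after
# virtual walls at the doors have been written into the grid as WALL cells.

from collections import deque

WALL = 1  # physical wall or virtual wall set at a door

def label_rooms(grid):
    """Label each enclosed free region with a distinct room id (1, 2, ...)."""
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    room = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == WALL or labels[r][c]:
                continue
            room += 1
            queue = deque([(r, c)])
            labels[r][c] = room
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if (0 <= ny < rows and 0 <= nx < cols
                            and grid[ny][nx] != WALL and not labels[ny][nx]):
                        labels[ny][nx] = room
                        queue.append((ny, nx))
    return room, labels

# Two rooms separated by a wall column; the doorway is closed by a virtual wall.
grid = [
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
]
count, labels = label_rooms(grid)
print(count)  # 2
```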
With the method of dividing a cleaning area of the present application, the angle and distance of obstacles relative to the cleaning robot within its area can be measured by a distance-measuring sensing device and an angle-sensing device, or by a TOF measuring device, so as to accurately determine the position information of a candidate door within the area. The camera device then acquires an image containing the candidate door, whereby the candidate door is confirmed as a physical door, and the cleaning area of the cleaning robot is divided according to the physical door and its position information so as to constrain the walking range of the cleaning robot. Since the present application divides the cleaning area directly according to the physical door once relatively accurate position information about that door has been obtained, the cleaning area can be divided accurately and reasonably, in line with how users habitually divide their rooms, which improves the human-machine interaction of a cleaning robot running this method of dividing a cleaning area.
Referring to FIG. 12, FIG. 12 is a schematic diagram of the composition of the navigation system of the mobile robot of the present application in a specific embodiment. The navigation system 30 of the mobile robot includes a measuring device 31, a camera device 32, and a processing device 33. The mobile robot includes, but is not limited to, a family companion mobile robot, a cleaning robot, a patrol mobile robot, a window-cleaning robot, and the like.
The measuring device 31 is provided on the mobile robot and is used to measure the position information of obstacles relative to the mobile robot within the area in which it is located. In some embodiments, the measuring device 31 may be mounted on (e.g., embedded in) the body side of the mobile robot, and may be, for example, a scanning laser or a TOF sensor. The scanning laser includes an angle-sensing device and a ranging sensor; the angle-sensing device provides the angle information corresponding to the distance measured by the ranging sensor, and the distance from the ranging sensor to the obstacle measurement point at the scanning laser's current angle is measured by laser or infrared light. A scanning laser is a laser whose direction, origin of propagation, or pattern changes over time relative to a fixed reference frame. Based on the principle of laser ranging, the scanning laser sweeps out a two-dimensional scanning plane through a rotatable optical component (a laser emitter), thereby realizing area scanning and contour measurement. Its ranging principle is as follows: the laser emitter sends out a laser pulse; when the pulse hits an object, part of the energy is reflected back; when the laser receiver receives the returned pulse and its energy is sufficient to trigger a threshold, the scanning laser calculates its distance to the object. The scanning laser emits laser pulses continuously; the pulses strike a mirror rotating at high speed and are reflected in all directions, forming a scan of a two-dimensional region. Such a two-dimensional scan can, for example, realize the following two functions: 1) protection zones of different shapes can be set within the scanning range of the scanning laser, and an alarm signal is emitted when an object enters such a zone; 2) within the scanning range, the scanning laser outputs the distance of every obstacle measurement point, and from this distance information the outline contour and coordinate position of an object can be calculated.
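The two calculations above can be sketched briefly: the pulse's round-trip time gives the distance via d = c·t/2, and each (angle, distance) reading places an obstacle measurement point in robot-centered coordinates. Function names are illustrative, not from the patent.

```python
# Scanning-laser ranging: round-trip pulse time to distance, and conversion
# of a polar (angle, distance) reading to a point on the travel plane.

import math

C = 299_792_458.0  # speed of light, m/s

def pulse_distance(round_trip_s):
    """Distance to the object from a pulse's round-trip time: d = c * t / 2."""
    return C * round_trip_s / 2.0

def polar_to_xy(angle_rad, distance_m):
    """Place an obstacle measurement point in robot-centered coordinates."""
    return (distance_m * math.cos(angle_rad), distance_m * math.sin(angle_rad))

# A pulse returning after 20 ns corresponds to an object about 3 m away.
d = pulse_distance(20e-9)
print(round(d, 3))          # 2.998
print(polar_to_xy(0.0, d))  # the point straight ahead of the sensor
```

Collecting such points over a full rotation yields the two-dimensional contour the text describes.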
The TOF measuring device 31 is based on TOF technology. TOF technology is an optical, non-contact means of three-dimensional depth sensing: light pulses are sent continuously toward a target, a sensor receives the light returned from the object, and the target distance is obtained by detecting the flight (round-trip) time of the emitted and received light pulses. The illumination unit of a TOF device modulates the light at high frequency before emission, generally using an LED or a laser (including laser diodes and VCSELs (Vertical Cavity Surface Emitting Lasers)) to emit high-performance pulsed light; in the embodiments of the present application, a laser is used to emit the high-performance pulsed light. The pulse rate can reach about 100 MHz, and infrared light is mainly used. The TOF measuring device 31 operates on one of the following two principles. 1) A method based on an optical shutter: a pulsed light wave is emitted, and the time difference t of the light wave reflected back from the three-dimensional object is obtained quickly and accurately through the optical shutter; since the speed of light c is known, the distance follows from the round-trip time as d = c·t/2. 2) A method based on continuous-wave intensity modulation: a beam of illuminating light is emitted, and the distance is measured from the phase shift between the emitted light-wave signal and the reflected light-wave signal. The wavelength of the illumination module is generally in the infrared band, and high-frequency modulation is required. The TOF light-sensing module is similar to an ordinary mobile-phone camera module and consists of a chip, a lens, a circuit board, and other components; each pixel of the TOF chip records the phase of the light wave traveling between the camera and the object, a data-processing unit extracts the phase difference, and the depth information is calculated by formula. The TOF measuring device 31 is small in size and can directly output the depth data of the detected object; because its depth calculation is not affected by the grayscale or surface features of the object, it can perform three-dimensional detection very accurately.
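A sketch of the two TOF depth formulas implied above: the shutter method uses d = c·t/2, and the continuous-wave method recovers distance from the phase shift as d = c·φ/(4π·f), where f is the modulation frequency. These are standard TOF relations assumed for illustration, not quoted from the patent.

```python
# Two TOF depth calculations: optical-shutter (time based) and
# continuous-wave (phase based).

import math

C = 299_792_458.0  # speed of light, m/s

def depth_from_time(t_s):
    """Optical-shutter method: round-trip time t to distance, d = c*t/2."""
    return C * t_s / 2.0

def depth_from_phase(phi_rad, mod_freq_hz):
    """Continuous-wave method: phase shift to distance (within one period)."""
    return C * phi_rad / (4.0 * math.pi * mod_freq_hz)

# At 100 MHz modulation the unambiguous range c / (2*f) is about 1.5 m,
# so a pi/2 phase shift corresponds to roughly 0.37 m.
print(round(depth_from_phase(math.pi / 2, 100e6), 3))  # 0.375
print(round(depth_from_time(10e-9), 3))                # 1.499
```

Note that the phase-based reading is only unambiguous within one modulation period, which is why the modulation frequency bounds the usable range.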
The camera device 32 is provided on the mobile robot and is used to acquire an image containing the candidate recognition object. The camera device 32 includes, but is not limited to, a fisheye camera module or a wide-angle (or non-wide-angle) camera module. Here, the mobile robot includes at least one camera device 32. The camera device 32 captures the physical objects within its field of view at the mobile robot's location and projects them onto the traveling plane of the mobile robot to obtain a projected image. For example, the mobile robot contains one camera device arranged on the top, shoulder, or back of the mobile robot with its main optical axis perpendicular to the traveling plane of the mobile robot. As another example, the mobile robot contains multiple camera devices 32, one of which has its main optical axis perpendicular to the traveling plane of the mobile robot. The projected image formed by projecting the image captured by a camera device 32 arranged in this way onto the traveling plane of the mobile robot is equivalent to the vertical projection of the captured image onto that plane; for example, the camera device 32 is embedded in the mobile robot with its main optical axis perpendicular to the traveling plane of the mobile robot.
The processing device 33 is connected to the measuring device 31 and the camera device 32. The processing device 33 is an electronic device capable of numerical operations, logical operations, and data analysis, including but not limited to a CPU, GPU, or FPGA, together with volatile memory for temporarily storing intermediate data generated during operations. The processing device 33 is used to run at least one program to execute the navigation method of the mobile robot. For that navigation method, refer to FIG. 1 and the related description of FIG. 1, which will not be repeated here.
Referring to FIG. 13, FIG. 13 is a schematic diagram of the composition of the mobile robot of the present application in a specific embodiment. The mobile robot 40 includes a measuring device 41, a camera device 42, a first processing device 43, a moving device 44, and a second processing device 45.
The measuring device 41 is provided on the mobile robot and is used to measure the position information of obstacles relative to the mobile robot within the area in which it is located. In some embodiments, the measuring device 41 may be mounted on (e.g., embedded in) the body side of the mobile robot, and may be, for example, a scanning laser or a TOF sensor.
The camera device 42 is provided on the mobile robot and is used to acquire an image containing the candidate recognition object. The camera device 42 includes, but is not limited to, a fisheye camera module or a wide-angle (or non-wide-angle) camera module. Here, the mobile robot includes at least one camera device 42. The camera device 42 captures the physical objects within its field of view at the mobile robot's location and projects them onto the traveling plane of the mobile robot to obtain a projected image. For example, the mobile robot contains one camera device arranged on the top, shoulder, or back of the mobile robot with its main optical axis perpendicular to the traveling plane of the mobile robot. As another example, the mobile robot contains multiple camera devices 42, one of which has its main optical axis perpendicular to the traveling plane of the mobile robot. The projected image formed by projecting the image captured by a camera device 42 arranged in this way onto the traveling plane of the mobile robot is equivalent to the vertical projection of the captured image onto that plane; for example, the camera device 42 is embedded in the mobile robot with its main optical axis perpendicular to the traveling plane of the mobile robot.
The first processing device 43 is connected to the measuring device 41 and the camera device 42. The first processing device 43 is an electronic device capable of numerical operations, logical operations, and data analysis, including but not limited to a CPU, GPU, or FPGA, together with volatile memory for temporarily storing intermediate data generated during operations. The first processing device 43 is used to run at least one program to execute the navigation method of the mobile robot so as to generate a navigation route. For that navigation method, refer to FIG. 1 and the related description of FIG. 1, which will not be repeated here.
The moving device 44 is provided on the mobile robot and is used to adjust the position and posture of the mobile robot in a controlled manner. The moving device 44 may include a walking mechanism and a walking drive mechanism, wherein the walking mechanism may be arranged at the bottom of the robot body and the walking drive mechanism is built into the robot body. The walking mechanism may, for example, include a combination of two straight traveling wheels and at least one auxiliary steering wheel, the two straight traveling wheels being arranged on opposite sides of the bottom of the robot body and independently driven by two corresponding walking drive mechanisms, i.e., the left straight traveling wheel is driven by the left walking drive mechanism and the right straight traveling wheel by the right walking drive mechanism. The universal steering wheel or the straight traveling wheels may have a biased drop-type suspension system, fastened in a movable manner, for example rotatably mounted on the robot body, and receiving a spring bias directed downward and away from the robot body. The spring bias allows the universal steering wheel or the straight traveling wheels to maintain contact and traction with the ground with a certain grounding force. In practical applications, when the at least one auxiliary steering wheel is not engaged, the two straight traveling wheels are mainly used for moving forward and backward; when the at least one auxiliary steering wheel is engaged and cooperates with the two straight traveling wheels, movements such as steering and rotation can be realized. The walking drive mechanism may include a drive motor, by which the traveling wheels of the walking mechanism can be driven to move. In a specific implementation, the drive motor may, for example, be a reversible drive motor, and a speed-change mechanism may further be provided between the drive motor and the axle of the traveling wheel. The walking drive mechanism may be detachably mounted on the robot body for convenient disassembly and maintenance.
The second processing device 45 is connected to the first processing device 43 and the moving device 44 and is used to run at least one program so as to control the moving device 44, based on the navigation route provided by the first processing device 43, to adjust its position and posture and move autonomously along the navigation route. The second processing device 45 is, for example, a control circuit that controls the operation of the drive motor of the moving device 44. After receiving the navigation route sent by the first processing device 43, the second processing device 45 sends drive commands to the drive motor to control the moving device to adjust position and posture and, according to the preset grid map, to move a number of unit grids, so that the mobile robot moves along the navigation route.
Referring to FIG. 14, FIG. 14 is a schematic diagram of the composition of the system for dividing a cleaning area of the present application in a specific embodiment. The system 50 for dividing a cleaning area is used for a cleaning robot, and the system 50 includes a measuring device 51, a camera device 52, and a processing device 53.
The measuring device 51 is provided on the cleaning robot and is used to measure the position information of obstacles relative to the cleaning robot within the area in which it is located. In some embodiments, the measuring device 51 may be mounted on (e.g., embedded in) the body side of the cleaning robot, and may be, for example, a scanning laser or a TOF sensor.
The camera device 52 is provided on the cleaning robot and is used to acquire an image containing a candidate door. The camera device 52 includes, but is not limited to, a fisheye camera module or a wide-angle (or non-wide-angle) camera module. Here, the cleaning robot includes at least one camera device 52. The camera device 52 captures the physical objects within its field of view at the cleaning robot's location and projects them onto the traveling plane of the cleaning robot to obtain a projected image. For example, the cleaning robot contains one camera device 52 arranged on the top, shoulder, or back of the cleaning robot with its main optical axis perpendicular to the traveling plane of the cleaning robot. As another example, the cleaning robot contains multiple camera devices 52, one of which has its main optical axis perpendicular to the traveling plane of the cleaning robot. The projected image formed by projecting the image captured by a camera device 52 arranged in this way onto the traveling plane of the cleaning robot is equivalent to the vertical projection of the captured image onto that plane; for example, the camera device 52 is embedded in the cleaning robot with its main optical axis perpendicular to the traveling plane of the cleaning robot.
The processing device 53 is connected to the measuring device 51 and the camera device 52. The processing device 53 is an electronic device capable of numerical operations, logical operations, and data analysis, including but not limited to a CPU, GPU, or FPGA, together with volatile memory for temporarily storing intermediate data generated during operations. The processing device 53 is used to run at least one program to execute the method of dividing a cleaning area. For that method, refer to FIG. 10 and the related description of FIG. 10, which will not be repeated here.
Referring to FIG. 15, FIG. 15 is a schematic diagram of the composition of the cleaning robot of the present application in a specific embodiment. The cleaning robot 60 includes a measuring device 61, a camera device 62, a first processing device 63, a moving device 64, a cleaning device 65, and a second processing device 66.
The measuring device 61 is provided on the cleaning robot and is used to measure the position information of obstacles relative to the cleaning robot within the area in which it is located. In some embodiments, the measuring device 61 may be mounted on (e.g., embedded in) the body side of the cleaning robot, and may be, for example, a scanning laser or a TOF sensor.
The camera device 62 is provided on the cleaning robot and is used to acquire an image containing the candidate recognition object. The camera device 62 includes, but is not limited to, a fisheye camera module or a wide-angle (or non-wide-angle) camera module. Here, the cleaning robot includes at least one camera device 62. The camera device 62 captures the physical objects within its field of view at the cleaning robot's location and projects them onto the traveling plane of the cleaning robot to obtain a projected image. For example, the cleaning robot contains one camera device arranged on the top, shoulder, or back of the cleaning robot with its main optical axis perpendicular to the traveling plane of the cleaning robot. As another example, the cleaning robot contains multiple camera devices 62, one of which has its main optical axis perpendicular to the traveling plane of the cleaning robot. The projected image formed by projecting the image captured by a camera device 62 arranged in this way onto the traveling plane of the cleaning robot is equivalent to the vertical projection of the captured image onto that plane; for example, the camera device 62 is embedded in the cleaning robot with its main optical axis perpendicular to the traveling plane of the cleaning robot.
The first processing device 63 is connected to the measuring device 61 and the camera device 62. The first processing device 63 is an electronic device capable of numerical operations, logical operations, and data analysis, including but not limited to a CPU, GPU, or FPGA, together with volatile memory for temporarily storing intermediate data generated during operations. The first processing device 63 is used to run at least one program to execute the method of dividing a cleaning area and to generate a navigation route using the obtained cleaning areas. For that method, refer to FIG. 10 and the related description of FIG. 10, which will not be repeated here.
The moving device 64 is provided on the cleaning robot and is used to adjust the position and posture of the cleaning robot in a controlled manner. The moving device 64 may include a walking mechanism and a walking drive mechanism, wherein the walking mechanism may be arranged at the bottom of the robot body and the walking drive mechanism is built into the robot body. The walking mechanism may, for example, include a combination of two straight traveling wheels and at least one auxiliary steering wheel, the two straight traveling wheels being arranged on opposite sides of the bottom of the robot body and independently driven by two corresponding walking drive mechanisms, i.e., the left straight traveling wheel is driven by the left walking drive mechanism and the right straight traveling wheel by the right walking drive mechanism. The universal steering wheel or the straight traveling wheels may have a biased drop-type suspension system, fastened in a movable manner, for example rotatably mounted on the robot body, and receiving a spring bias directed downward and away from the robot body. The spring bias allows the universal steering wheel or the straight traveling wheels to maintain contact and traction with the ground with a certain grounding force. In practical applications, when the at least one auxiliary steering wheel is not engaged, the two straight traveling wheels are mainly used for moving forward and backward; when the at least one auxiliary steering wheel is engaged and cooperates with the two straight traveling wheels, movements such as steering and rotation can be realized. The walking drive mechanism may include a drive motor, by which the traveling wheels of the walking mechanism can be driven to move. In a specific implementation, the drive motor may, for example, be a reversible drive motor, and a speed-change mechanism may further be provided between the drive motor and the axle of the traveling wheel. The walking drive mechanism may be detachably mounted on the robot body for convenient disassembly and maintenance.
The cleaning device 65 may include at least a sweeping assembly and a dust-suction assembly. The sweeping assembly may include cleaning side brushes located at the bottom of the housing of the cleaning robot and a side-brush motor for controlling the cleaning side brushes, wherein the number of cleaning side brushes may be two, arranged symmetrically on opposite sides of the rear end of the housing; the cleaning side brushes may be rotating side brushes that rotate under the control of the side-brush motor. The dust-suction assembly may include a dust-collecting chamber and a vacuum unit, wherein the dust-collecting chamber is built into the housing, the air outlet of the vacuum unit communicates with the dust-collecting chamber, and the air inlet of the vacuum unit is arranged at the bottom of the housing.
The second processing device 66 is connected to the first processing device 63 and controls the cleaning device 65 and the moving device 64 respectively; it is used to run at least one program so as to control the moving device 64, based on the navigation route provided by the first processing device 63, to adjust position and posture and move autonomously along the navigation route, and to control the cleaning device 65 to perform cleaning operations. After receiving the navigation route sent by the first processing device 63, the second processing device 66 sends drive commands to the drive motor of the moving device 64 to control the moving device to adjust position and posture and, according to the preset grid map, to move a number of unit grids so that the cleaning robot moves along the navigation route. While the cleaning robot is moving, the second processing device 66 sends control commands to the side-brush motor so that it drives the cleaning side brushes to rotate, and controls the vacuum unit to start working.
Referring to FIG. 16, FIG. 16 is a schematic diagram of the composition of a data processing device of the present application in a specific embodiment.
The data processing device 70 is used in a mobile robot and includes a data interface 71, a storage unit 72, and a processing unit 73.
The data interface 71 is used to connect the camera device and the measuring device of the mobile robot. The camera device captures physical objects within its field of view at the location of the mobile robot and projects them onto the traveling plane of the mobile robot to obtain a projected image. For example, the mobile robot includes one camera device arranged on the top, shoulder, or back of the mobile robot, with its main optical axis perpendicular to the traveling plane of the mobile robot. As another example, the mobile robot includes multiple camera devices, the main optical axis of one of which is perpendicular to the traveling plane of the mobile robot.
The storage unit 72 is used to store at least one program.
The processing unit 73 is connected to the storage unit 72 and the data interface 71, and is used to obtain, via the data interface 71, the position information provided by the measuring device and the images captured by the camera device, and to execute the aforementioned navigation method or the method of dividing a cleaning area. For the navigation method, refer to FIG. 1 and its related description; for the method of dividing a cleaning area, refer to FIG. 10 and its related description; details are not repeated here.
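The role of the data interface and processing unit just described can be sketched as a small pipeline: pull the latest measurements and camera frame through the interface, then hand both to the navigation routine. This is a hypothetical sketch; `DataInterface`, `navigate_once`, and the lambda callbacks are illustrative names, not from the application.

```python
# Minimal sketch (assumed names) of how the processing unit 73 ties the data
# interface, measuring device, and camera device together.

class DataInterface:
    """Stand-in for data interface 71: wraps the two sensor callbacks."""
    def __init__(self, measure_fn, capture_fn):
        self._measure, self._capture = measure_fn, capture_fn
    def read_positions(self):
        return self._measure()   # obstacle positions from the measuring device
    def read_image(self):
        return self._capture()   # image from the camera device

def navigate_once(iface, navigation_method):
    positions = iface.read_positions()
    image = iface.read_image()
    return navigation_method(positions, image)  # e.g. the method of FIG. 1

iface = DataInterface(lambda: [(1.0, 0.5)], lambda: "frame-0")
route = navigate_once(iface, lambda pos, img: ["start", *pos, "goal"])
print(route)  # ['start', (1.0, 0.5), 'goal']
```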
In another embodiment of the present application, a computer-readable storage medium is also disclosed. The computer-readable storage medium stores at least one program which, when invoked, executes the navigation method or the method of dividing a cleaning area. For the navigation method, refer to FIG. 1 and its related description; for the method of dividing a cleaning area, refer to FIG. 10 and its related description; details are not repeated here.
It should also be noted that, from the description of the above embodiments, those skilled in the art can clearly understand that part or all of the present application can be implemented by software in combination with a necessary general-purpose hardware platform. Based on this understanding, the storage medium stores at least one program which, when invoked, executes any of the aforementioned navigation methods.
Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product may include one or more machine-readable media on which machine-executable instructions are stored; when executed by one or more machines such as a computer, a computer network, or other electronic devices, these instructions cause the one or more machines to perform operations according to the embodiments of the present application, for example, the steps of the robot positioning method. Machine-readable media may include, but are not limited to, floppy disks, optical disks, CD-ROMs (compact disc read-only memory), magneto-optical disks, ROM (read-only memory), RAM (random access memory), EPROM (erasable programmable read-only memory), EEPROM (electrically erasable programmable read-only memory), magnetic or optical cards, flash memory, or other types of media/machine-readable media suitable for storing machine-executable instructions. The storage medium may be located in the robot or in a third-party server, for example a server providing an application store. No limitation is placed on the specific application store, such as the Xiaomi App Store, Huawei App Store, or Apple App Store.
The present application can be used in numerous general-purpose or special-purpose computing system environments or configurations, for example: personal computers, server computers, handheld or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments including any of the above systems or devices, and the like.
The present application may be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types. The present application can also be practiced in distributed computing environments, in which tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules may be located in local and remote computer storage media, including storage devices.
The above embodiments merely illustrate the principles and effects of the present application and are not intended to limit it. Anyone familiar with this technology may modify or change the above embodiments without departing from the spirit and scope of the present application. Therefore, all equivalent modifications or changes made by persons of ordinary skill in the art without departing from the spirit and technical ideas disclosed in the present application shall still be covered by the claims of the present application.

Claims (41)

  1. A navigation method for a mobile robot, wherein the mobile robot comprises a measuring device and a camera device, the method comprising the following steps:
    causing the measuring device to measure position information of obstacles relative to the mobile robot in the area where the mobile robot is located, and determining position information occupied by candidate recognition objects in the area;
    causing the camera device, according to the determined position information occupied by a candidate recognition object, to acquire an image containing the candidate recognition object, and determining physical object information corresponding to the candidate recognition object;
    determining a navigation route of the mobile robot in the area according to the physical object information and its position information.
  2. The navigation method for a mobile robot according to claim 1, wherein the step of determining the position information occupied by candidate recognition objects in the area comprises:
    measuring position information of obstacle measurement points in the area to obtain a scan contour and the position information it occupies;
    dividing the scan contour into a plurality of candidate recognition objects according to discontinuous portions on the scan contour, and determining the position information occupied by each candidate recognition object.
  3. The navigation method for a mobile robot according to claim 2, wherein the step of measuring position information of obstacle measurement points in the area to obtain a scan contour and its occupied position information comprises:
    fitting the traveling plane of the mobile robot based on a surface array of position information of obstacle measurement points measured by the measuring device, and determining the scan contour on the traveling plane and the position information it occupies; or
    determining the scan contour on the traveling plane and the position information it occupies based on a line array of position information parallel to the traveling plane measured by the measuring device.
  4. The navigation method for a mobile robot according to claim 2, wherein the step of dividing the scan contour into a plurality of candidate recognition objects based on discontinuous portions on the scan contour comprises:
    determining, based on a gap formed by a discontinuous portion on the scan contour, the corresponding candidate recognition object as a first candidate recognition object containing the gap;
    determining a continuous portion separated by discontinuous portions on the scan contour as a second candidate recognition object that obstructs the movement of the mobile robot.
  5. The navigation method for a mobile robot according to claim 4, wherein the step of determining, based on a gap formed by a discontinuous portion on the scan contour, the corresponding candidate recognition object as a first candidate recognition object containing the gap comprises:
    screening the formed gaps according to preset screening conditions, wherein the screening conditions include: the gap lies on the line along which a continuous portion on at least one adjacent side is located, and/or a preset gap-width threshold; and
    determining, based on the screened gaps, the corresponding candidate recognition object as a first candidate recognition object containing the gap.
  6. The navigation method for a mobile robot according to claim 1, wherein the step of causing the measuring device to measure position information of obstacles relative to the mobile robot in the area where the mobile robot is located comprises:
    causing the measuring device to measure position information of obstacles relative to the mobile robot within the field of view of the camera device.
  7. The navigation method for a mobile robot according to claim 1, wherein the step of causing the camera device to acquire an image containing the candidate recognition object according to the determined position information occupied by the candidate recognition object comprises:
    causing the camera device to capture an image of the candidate recognition object projected onto the traveling plane of the mobile robot; or
    controlling the mobile robot to move according to the obtained position information occupied by the candidate recognition object, and causing the camera device to capture an image containing the corresponding candidate recognition object.
  8. The navigation method for a mobile robot according to claim 1, wherein the step of determining the physical object information corresponding to the candidate recognition object comprises:
    determining an image region within a corresponding angle range in the image according to the angle range in the position information occupied by the candidate recognition object;
    performing feature recognition on the image region to determine the physical object information corresponding to the candidate recognition object.
  9. The navigation method for a mobile robot according to claim 8, wherein, if the candidate recognition objects include a first candidate recognition object with a gap, the step of determining an image region within a corresponding angle range in the image according to the angle range in the position information occupied by the candidate recognition object correspondingly comprises:
    determining at least one angle range based on position information of the two ends of the candidate recognition object;
    determining, from the image according to the determined angle range, an image region used to identify the physical object information of the corresponding first candidate recognition object.
  10. The navigation method for a mobile robot according to claim 1 or 9, wherein the candidate recognition objects include a first candidate recognition object with a gap; correspondingly, the step of determining the physical object information corresponding to the candidate recognition object comprises:
    identifying, in the image according to the position information occupied by the first candidate recognition object, at least two characteristic lines representing directions perpendicular to the traveling plane;
    determining, based on the identified characteristic lines, that the first candidate recognition object is physical object information representing a door.
  11. The navigation method for a mobile robot according to claim 1, wherein the step of determining the physical object information corresponding to the candidate recognition object comprises:
    identifying the physical object information of the candidate recognition object in the image based on preset known feature information of multiple physical objects;
    constructing, using a preset image recognition algorithm, a mapping relationship between the candidate recognition object in the image and known information of multiple physical objects, to determine the physical object information corresponding to the candidate recognition object.
  12. The navigation method for a mobile robot according to claim 1, further comprising: marking the determined physical object information and its position information in a map used to set the navigation route.
  13. The navigation method for a mobile robot according to claim 1, wherein the mobile robot is a cleaning robot; the step of determining the navigation route of the mobile robot in the area according to the physical object information and its position information comprises: dividing a cleaning area of the mobile robot according to the physical object information and the area where the mobile robot is located, and designing a navigation route within the walking area.
  14. The navigation method for a mobile robot according to claim 13, wherein the cleaning area includes any one of the following: a room area determined based on the physical object information; an area divided according to a preset area range and the position information occupied by physical object information within the area range.
  15. The navigation method for a mobile robot according to claim 14, wherein, when the determined physical object information includes a physical door, the method further comprises the step of setting a virtual wall at the position information corresponding to the physical door, so as to divide the cleaning area of the mobile robot according to the virtual wall and the area where the mobile robot is located, and to design a navigation route within the walking area.
  16. A method for dividing a cleaning area, for a cleaning robot, wherein the cleaning robot comprises a measuring device and a camera device, the method comprising the following steps:
    causing the measuring device to measure position information of obstacles relative to the cleaning robot in the area where the cleaning robot is located, and determining position information occupied by candidate doors in the area;
    causing the camera device, according to the determined position information occupied by a candidate door, to acquire an image containing the candidate door, and determining that the candidate door is a physical door;
    dividing the cleaning area of the cleaning robot according to the physical door and its position information, to constrain the walking range of the cleaning robot.
  17. The method for dividing a cleaning area according to claim 16, wherein the step of determining the position information occupied by candidate doors in the area comprises:
    measuring position information of obstacle measurement points in the area to obtain a scan contour and the position information it occupies;
    determining the position information occupied by each candidate door according to discontinuous portions on the scan contour.
  18. The method for dividing a cleaning area according to claim 17, wherein the step of measuring position information of obstacle measurement points in the area to obtain a scan contour and its occupied position information comprises:
    fitting the traveling plane of the cleaning robot based on a surface array of position information of obstacle measurement points measured by the measuring device, and determining the scan contour on the traveling plane and the position information it occupies;
    determining the scan contour on the traveling plane and the position information it occupies based on a line array of position information parallel to the traveling plane measured by the measuring device.
  19. The method for dividing a cleaning area according to claim 17, wherein the step of determining the position information occupied by each candidate door based on discontinuous portions on the scan contour comprises:
    screening the gaps formed by the discontinuous portions according to preset screening conditions, and determining that the screened gaps belong to candidate doors, wherein the screening conditions include: the gap lies on the line along which a continuous portion on at least one adjacent side is located, and/or a preset gap-width threshold.
  20. The method for dividing a cleaning area according to claim 16, wherein the step of causing the measuring device to measure position information of obstacles relative to the cleaning robot in the area where the cleaning robot is located comprises:
    causing the measuring device to measure position information of obstacles relative to the cleaning robot within the field of view of the camera device.
  21. The method for dividing a cleaning area according to claim 16, wherein the step of causing the camera device to acquire an image containing the candidate door according to the determined position information occupied by the candidate door comprises:
    causing the camera device to capture an image of the candidate door projected onto the traveling plane of the cleaning robot; or
    controlling the cleaning robot to move according to the obtained position information occupied by the candidate door, and causing the camera device to capture an image containing the corresponding candidate door.
  22. The method for dividing a cleaning area according to claim 16, wherein the step of determining that the candidate door is a physical door comprises:
    determining an image region within a corresponding angle range in the image according to the angle range in the position information occupied by the candidate door;
    performing feature recognition on the image region to determine that the candidate door is a physical door.
  23. The method for dividing a cleaning area according to claim 22, wherein the step of determining an image region within a corresponding angle range in the image according to the angle range in the position information occupied by the candidate door comprises:
    determining at least one angle range based on position information of the two ends of the candidate door;
    determining, from the image according to the determined angle range, an image region used to identify whether the candidate door is a physical door.
  24. The method for dividing a cleaning area according to claim 16 or 23, wherein the step of determining that the candidate door is a physical door comprises: identifying, in the image, at least two characteristic lines representing directions perpendicular to the traveling plane, and determining, based on the identified characteristic lines, that the candidate door is a physical door.
  25. The method for dividing a cleaning area according to claim 16, further comprising: marking the determined physical door and its position information in a map used to set a cleaning route.
  26. The method for dividing a cleaning area according to claim 16, wherein the step of dividing the cleaning area of the cleaning robot according to the physical door and its position information comprises: setting a virtual wall at the physical door; and dividing the cleaning area of the cleaning robot according to the virtual wall and the area where the cleaning robot is located.
  27. The method for dividing a cleaning area according to claim 16 or 26, wherein the cleaning area includes any one of the following: a room area determined based on the physical door; an area divided according to a preset area range and the position information occupied by physical doors within the area range.
  28. A navigation system for a mobile robot, comprising:
    a measuring device, provided on the mobile robot, for measuring position information of obstacles relative to the mobile robot in the area where the mobile robot is located;
    a camera device, provided on the mobile robot, for acquiring an image containing the candidate recognition object;
    a processing device, connected to the measuring device and the camera device, for running at least one program to execute the navigation method according to any one of claims 1-15.
  29. The navigation system for a mobile robot according to claim 28, wherein the camera device is embedded in the mobile robot, and its main optical axis is perpendicular to the traveling plane of the mobile robot.
  30. The navigation system for a mobile robot according to claim 28, wherein the measuring device is embedded in the body side of the mobile robot, and the measuring device comprises: a distance-measuring sensing device and an angle-sensing device, or a TOF measuring device.
  31. A mobile robot, comprising:
    a measuring device, provided on the mobile robot, for measuring position information of obstacles relative to the mobile robot in the area where the mobile robot is located;
    a camera device, provided on the mobile robot, for acquiring an image containing the candidate recognition object;
    a first processing device, connected to the measuring device and the camera device, for running at least one program to execute the navigation method according to any one of claims 1-15 so as to generate a navigation route;
    a moving device, provided on the mobile robot, for adjusting the position and posture of the mobile robot in a controlled manner;
    a second processing device, connected to the first processing device and the moving device, for running at least one program to control, based on the navigation route provided by the first processing device, the moving device to adjust position and posture so as to move autonomously along the navigation route.
  32. The mobile robot according to claim 31, wherein the camera device is embedded in the mobile robot, and its main optical axis is perpendicular to the traveling plane of the mobile robot.
  33. The mobile robot according to claim 31, wherein the measuring device is embedded in the body side of the mobile robot, and the measuring device comprises: a distance-measuring sensing device and an angle-sensing device, or a TOF measuring device.
  34. A system for dividing a cleaning area, for a cleaning robot, comprising:
    a measuring device, provided on the cleaning robot, for measuring position information of obstacles relative to the cleaning robot in the area where the cleaning robot is located;
    a camera device, provided on the cleaning robot, for acquiring an image containing the candidate door;
    a processing device, connected to the measuring device and the camera device, for running at least one program to execute the method for dividing a cleaning area according to any one of claims 16-27, so as to set a navigation route within the generated cleaning area.
  35. The system for dividing a cleaning area according to claim 34, wherein the camera device is embedded in the cleaning robot, and its main optical axis is perpendicular to the traveling plane of the cleaning robot.
  36. The system for dividing a cleaning area according to claim 34, wherein the measuring device is embedded in the body side of the cleaning robot, and the measuring device comprises: a distance-measuring sensing device and an angle-sensing device, or a TOF measuring device.
  37. A cleaning robot, comprising:
    a measuring device, provided on the cleaning robot, for measuring position information of obstacles relative to the cleaning robot in the area where the cleaning robot is located;
    a camera device, provided on the cleaning robot, for acquiring an image containing the candidate recognition object;
    a first processing device, connected to the measuring device and the camera device, for running at least one program to execute the method for dividing a cleaning area according to any one of claims 16-27 and to generate a navigation route using the obtained cleaning area;
    a moving device, provided on the cleaning robot, for adjusting the position and posture of the cleaning robot in a controlled manner;
    a cleaning device, provided on the cleaning robot, for cleaning the traveling plane passed over while the cleaning robot moves;
    a second processing device, connected to the first processing device and controlling the cleaning device and the moving device respectively, for running at least one program to control, based on the navigation route provided by the first processing device, the moving device to adjust position and posture so as to move autonomously along the navigation route, and to control the cleaning device to perform cleaning operations.
  38. The cleaning robot according to claim 37, wherein the camera device is embedded in the cleaning robot, and its main optical axis is perpendicular to the traveling plane of the cleaning robot.
  39. 根据权利要求37所述的清洁机器人,其特征在于,所述测量装置嵌设于所述清洁机器人的体侧,所述测量装置包括:测距传感装置和角度传感装置,或者TOF测量装置。The cleaning robot according to claim 37, wherein the measuring device is embedded on the side of the cleaning robot, and the measuring device comprises: a distance measuring sensor device and an angle sensor device, or a TOF measuring device .
  40. 一种数据处理装置,用于移动机器人,其特征在于,包括:A data processing device for a mobile robot, characterized in that it comprises:
    数据接口,用于连接所述移动机器人的摄像装置和测量装置;A data interface for connecting the camera device and the measuring device of the mobile robot;
    存储单元,用于存储至少一程序;Storage unit for storing at least one program;
    处理单元,与所述存储单元和数据接口相连,用于藉由所述数据接口获取所述测量装置所提供的位置信息,以及获取所述摄像装置拍摄的图像,以及用于执行所述至少一程序以执行如权利要求1-15中任一所述的导航方法;或者执行如权利要求16-27中任一所述的划分清洁区域的方法。The processing unit is connected to the storage unit and the data interface, and is used to obtain the position information provided by the measuring device through the data interface, and to obtain the image taken by the camera device, and to execute the at least one The program is to execute the navigation method according to any one of claims 1-15; or execute the method for dividing a clean area according to any one of claims 16-27.
  41. 一种计算机可读的存储介质,其特征在于,存储至少一种程序,所述至少一种程序在被调用时执行如权利要求1-15中任一所述的导航方法;或者执行如权利要求16-27中任一所述的划分清洁区域的方法。A computer-readable storage medium, characterized in that it stores at least one program, and when called, the at least one program executes the navigation method according to any one of claims 1-15; or executes as claimed The method of dividing a clean area as described in any of 16-27.
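The architecture of claim 37 splits responsibilities between a route-planning "first processing device" (measurements and images in, navigation route out) and a motion-and-cleaning "second processing device". A minimal sketch of that split follows; it is an illustration only, and every name and constant in it (`FirstProcessor`, `route_from_readings`, the 0.5 m row spacing, the 0.3 m obstacle margin, the 5.0 m default sweep) is a hypothetical choice, not taken from the patent.

```python
import math
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class ObstacleReading:
    """One range/bearing measurement from the measuring device (cf. claim 39)."""
    angle: float      # bearing relative to the robot's heading, in radians
    distance: float   # range to the obstacle, in metres

class FirstProcessor:
    """Turns obstacle measurements into a navigation route (cf. claim 37)."""

    def route_from_readings(self, pose: Point,
                            readings: List[ObstacleReading]) -> List[Point]:
        # Convert polar readings into map coordinates around the robot's pose.
        obstacles = [(pose[0] + r.distance * math.cos(r.angle),
                      pose[1] + r.distance * math.sin(r.angle))
                     for r in readings]
        # Clip the sweep so every waypoint stops 0.3 m short of the nearest
        # obstacle ahead; sweep 5.0 m by default when nothing is detected.
        x_limit = min((x for x, _ in obstacles), default=pose[0] + 5.0) - 0.3
        # Simple boustrophedon (back-and-forth) route: four rows, 0.5 m apart.
        return [(min(x, x_limit), pose[1] + row * 0.5)
                for row in range(4)
                for x in (pose[0], pose[0] + 5.0)]

class SecondProcessor:
    """Follows the route and drives the cleaning device (cf. claim 37)."""

    def __init__(self) -> None:
        self.visited: List[Point] = []

    def follow(self, route: List[Point]) -> int:
        for waypoint in route:
            self.visited.append(waypoint)  # stand-in for motor/clean commands
        return len(self.visited)           # number of waypoints cleaned
```

With an obstacle reported 2.0 m dead ahead, `route_from_readings((0.0, 0.0), [ObstacleReading(0.0, 2.0)])` yields eight waypoints whose far edge is clipped to 1.7 m, which the second processor then consumes one by one.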
PCT/CN2019/078963 2019-03-21 2019-03-21 Method and system for navigating and dividing cleaning region, mobile robot, and cleaning robot WO2020186493A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2019/078963 WO2020186493A1 (en) 2019-03-21 2019-03-21 Method and system for navigating and dividing cleaning region, mobile robot, and cleaning robot
CN201980060807.5A CN112867424B (en) 2019-03-21 2019-03-21 Navigation and cleaning area dividing method and system, and moving and cleaning robot
CN202210292338.3A CN114947652A (en) 2019-03-21 2019-03-21 Navigation and cleaning area dividing method and system, and moving and cleaning robot


Publications (1)

Publication Number Publication Date
WO2020186493A1 (en)

Family

ID=72519531


Country Status (2)

Country Link
CN (2) CN114947652A (en)
WO (1) WO2020186493A1 (en)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114903375B (en) * 2022-05-13 2024-10-15 美智纵横科技有限责任公司 Obstacle positioning method and device and sports equipment
CN115267825A (en) * 2022-06-24 2022-11-01 奥比中光科技集团股份有限公司 Obstacle avoidance and navigation method and device of sweeper based on TOF sensor and storage medium
CN114847809B (en) * 2022-07-07 2022-09-20 深圳市云鼠科技开发有限公司 Environment exploration method and device for cleaning robot, cleaning robot and medium
CN115796846B (en) * 2023-01-31 2023-05-26 北京中海兴达建设有限公司 Equipment cleaning service recommendation method, device, equipment and readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110119116A (en) * 2010-04-26 2011-11-02 엘지전자 주식회사 Robot cleaner, remote monitoring system, and monitoring method using robot cleaner
CN207965645U (en) * 2017-12-25 2018-10-12 北京工业大学 A kind of robot autonomous navigation system
CN108885453A (en) * 2015-11-11 2018-11-23 罗伯特有限责任公司 The division of map for robot navigation
CN108968825A (en) * 2018-08-17 2018-12-11 苏州领贝智能科技有限公司 A kind of sweeping robot and robot sweep the floor method
CN109008779A (en) * 2017-06-12 2018-12-18 德国福维克控股公司 Automatically the system that the vehicle to advance in the environment and the door in environment are constituted
CN109443368A (en) * 2019-01-14 2019-03-08 轻客小觅智能科技(北京)有限公司 Air navigation aid, device, robot and the storage medium of unmanned machine people

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3234630B2 (en) * 1992-05-15 2001-12-04 株式会社東芝 Cleaning robot
JP2007232474A (en) * 2006-02-28 2007-09-13 Takaoka Electric Mfg Co Ltd Grid-pattern projection type surface profile measuring apparatus
KR102158695B1 (en) * 2014-02-12 2020-10-23 엘지전자 주식회사 robot cleaner and a control method of the same
CN105865438A (en) * 2015-01-22 2016-08-17 青岛通产软件科技有限公司 Autonomous precise positioning system based on machine vision for indoor mobile robots
CN106383518A (en) * 2016-09-29 2017-02-08 国网重庆市电力公司电力科学研究院 Multi-sensor tunnel robot obstacle avoidance control system and method
CN106863305B (en) * 2017-03-29 2019-12-17 赵博皓 Floor sweeping robot room map creating method and device
CN107741234B (en) * 2017-10-11 2021-10-19 深圳勇艺达机器人有限公司 Off-line map construction and positioning method based on vision
CN208541244U (en) * 2018-03-20 2019-02-26 珊口(上海)智能科技有限公司 Calibration system and mobile robot
CN108958250A (en) * 2018-07-13 2018-12-07 华南理工大学 Multisensor mobile platform and navigation and barrier-avoiding method based on known map


Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114504273A (en) * 2020-11-16 2022-05-17 科沃斯机器人股份有限公司 Robot control method and device
CN112363513A (en) * 2020-11-25 2021-02-12 珠海市一微半导体有限公司 Obstacle classification and obstacle avoidance control method based on depth information
CN112462780A (en) * 2020-11-30 2021-03-09 深圳市杉川致行科技有限公司 Sweeping control method and device, sweeping robot and computer readable storage medium
CN112462780B (en) * 2020-11-30 2024-05-21 深圳市杉川致行科技有限公司 Sweeping control method and device, sweeping robot and computer readable storage medium
CN113469000A (en) * 2021-06-23 2021-10-01 追觅创新科技(苏州)有限公司 Regional map processing method and device, storage medium and electronic device
CN114265397A (en) * 2021-11-16 2022-04-01 深圳市普渡科技有限公司 Interaction method and device for mobile robot, mobile robot and storage medium
CN114265397B (en) * 2021-11-16 2024-01-16 深圳市普渡科技有限公司 Interaction method and device of mobile robot, mobile robot and storage medium
CN114348201A (en) * 2021-12-31 2022-04-15 国信中船(青岛)海洋科技有限公司 Intelligent cleaning system for cabin wall of culture cabin of culture ship
CN114348201B (en) * 2021-12-31 2024-05-03 国信中船(青岛)海洋科技有限公司 Intelligent cleaning system for cabin walls of aquaculture engineering ship
CN114654482A (en) * 2022-04-26 2022-06-24 北京市商汤科技开发有限公司 Control method for mobile robot, device, equipment and storage medium
CN114916864A (en) * 2022-05-30 2022-08-19 美智纵横科技有限责任公司 Control method and device of sweeper, readable storage medium and sweeper
CN115444325A (en) * 2022-07-21 2022-12-09 深圳银星智能集团股份有限公司 Secondary cleaning method, device, cleaning robot and storage medium

Also Published As

Publication number Publication date
CN114947652A (en) 2022-08-30
CN112867424A (en) 2021-05-28
CN112867424B (en) 2022-05-06

Similar Documents

Publication Publication Date Title
WO2020186493A1 (en) Method and system for navigating and dividing cleaning region, mobile robot, and cleaning robot
US11927450B2 (en) Methods for finding the perimeter of a place using observed coordinates
CN109998429B (en) Mobile cleaning robot artificial intelligence for context awareness
US10545497B1 (en) Control method and device for mobile robot, mobile robot
US11669086B2 (en) Mobile robot cleaning system
EP3104194B1 (en) Robot positioning system
JP5946147B2 (en) Movable human interface robot
JP5963372B2 (en) How to make a mobile robot follow people
US9329598B2 (en) Simultaneous localization and mapping for a mobile robot
KR102398330B1 (en) Moving robot and controlling method thereof
WO2021146862A1 (en) Indoor positioning method for mobile device, mobile device and control system
CN110801180A (en) Operation method and device of cleaning robot
US11561102B1 (en) Discovering and plotting the boundary of an enclosure
WO2022027611A1 (en) Positioning method and map construction method for mobile robot, and mobile robot
TWI739255B (en) Mobile robot
CN112034837A (en) Method for determining working environment of mobile robot, control system and storage medium
TWI771960B (en) Indoor positioning and searching object method for intelligent unmanned vehicle system
WANG 2D Mapping Solutions for Low Cost Mobile Robot
KR20220121483A (en) Method of intelligently generating map and mobile robot thereof
TW202344863A (en) Method for establishing semantic distance map and related mobile device
Sujan et al. MOBILE ROBOT LOCALIZATION AND MAPPING USING SPACE INVARIANT TRANSFORMS

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19920442

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 07/02/2022)

122 Ep: pct application non-entry in european phase

Ref document number: 19920442

Country of ref document: EP

Kind code of ref document: A1