CN104112124A - Image identification based indoor positioning method and device - Google Patents
- Publication number
- CN104112124A (application number CN201410335949.7A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention discloses an indoor positioning method based on image recognition. The method includes: acquiring a picture of a position identifier photographed by a user at the current location, and preprocessing the identifier picture; matching the preprocessed identifier picture against the identifier information of each identifier in an identifier information base to recognize the identifier picture; and determining, from the position information of the recognition result in the current map system, the position at which the identifier picture was photographed, and presenting that position in the map system. The invention further discloses an indoor positioning device based on image recognition. With the method and device, positioning can be achieved in most indoor areas with small workload, strong universality, and high practical value.
Description
Technical Field
The invention relates to the technical field of geographic positioning, in particular to an indoor positioning method and device based on image recognition.
Background
As daily life becomes increasingly diverse, people carry out more and more activities in large indoor public places: shopping in malls and supermarkets, attending meetings in large business hotels, holding exhibitions in large convention venues, visiting theme museums, and so on. Because such indoor venues are laid out differently and cover large areas, it is sometimes difficult for users to determine their current position.
Take a large shopping mall as an example. A user who wants to locate himself in the mall traditionally consults a floor plan posted inside the mall. The plan shows the distribution of shops on that floor, and sometimes marks the plan's own location with a small red dot or star; when no such marker is present, the customer can only compare the surrounding shops with the plan to infer the current position.
Such plans have various inconveniences. The number of floor plans that can be posted in a mall is limited, so a customer's need to query position information anytime and anywhere cannot be met. A plan may become soiled, and if the soiled plan is not replaced promptly, the accuracy of the position information it provides degrades. In addition, once the shops on a floor change, the original plan becomes invalid and a new one must be produced.
Constrained by these factors, the traditional indoor positioning approach cannot provide customers with an accurate and convenient service, while current general-purpose positioning technologies also fail to meet indoor positioning needs in practice. GPS, the most widely deployed positioning technology, cannot be used indoors because of signal propagation problems, and its limited accuracy cannot deliver positioning within 10 meters. Wi-Fi and iBeacon indoor positioning both perform well under experimental conditions, but in practical application various sources of environmental interference degrade their accuracy, and both require infrastructure to be deployed in advance; as a result, these two technologies remain at the experimental stage, some distance from practical use.
Thus, the inability to query position information anytime and anywhere is the most significant shortcoming of current indoor positioning technology.
Disclosure of Invention
In view of this, embodiments of the present invention provide an indoor positioning method and device based on image recognition, which use an intelligent terminal and its camera to achieve accurate positioning in most indoor areas, with small workload, strong universality, and high practical value.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
an embodiment of the invention provides an indoor positioning method based on image recognition, which comprises the following steps:
acquiring a picture of a position identifier photographed by a user at the current location, and preprocessing the identifier picture;
matching the preprocessed identifier picture against the identifier information of each identifier in an identifier information base to recognize the identifier picture;
and determining, from the position information of the recognition result in the current map system, the position at which the identifier picture was photographed, and presenting that position on the map.
In the above scheme, preprocessing the identifier picture includes: cropping and graying the identifier picture.
In the above scheme, the method further comprises: performing scale-invariant feature transform (Sift) feature vector extraction on the preprocessed identifier picture.
In the above scheme, the identifier information is stored in the identifier information base as pictures or as Sift feature vectors.
In the above scheme, matching the preprocessed identifier picture against the identifier information of each identifier in the base to recognize it includes:
calculating the number of matching points between the Sift feature vector of the identifier picture and the Sift feature vector of each identifier's information in the base;
and taking the identifier with the largest number of matching points to the identifier picture as the recognition result.
In the above scheme, the identifier information base contains identifier information for multiple shooting angles of each identifier;
correspondingly, the calculation becomes: for each identifier, summing the matching points between the Sift feature vector of the identifier picture and the Sift feature vectors of that identifier's information at its various shooting angles; the recognition result is then the identifier with the highest total number of matching points.
In the above scheme, calculating the per-identifier sums of matching points over the shooting angles includes:
acquiring from the user the angle at which the identifier picture was shot;
determining, from that angle, the order in which the identifier picture is matched against the identifier information of the various shooting angles in the base;
and calculating the per-identifier sums of matching points over the shooting angles in that order.
An embodiment of the invention also provides an indoor positioning device based on image recognition, comprising an image preprocessing unit, an image matching unit, and a position determining unit, wherein:
the image preprocessing unit is configured to acquire a picture of a position identifier photographed by a user at the current location, and to preprocess the identifier picture;
the image matching unit is configured to match the preprocessed identifier picture against the identifier information of each identifier in an identifier information base to recognize the identifier picture;
and the position determining unit is configured to determine, from the position information of the recognition result in the current map system, the position at which the identifier picture was photographed, and to present that position on the map.
In the above scheme, the preprocessing by the image preprocessing unit includes: cropping and graying the identifier picture.
In the above scheme, the device further includes a feature vector extraction unit configured to perform scale-invariant feature transform (Sift) feature vector extraction on the preprocessed identifier picture.
In the above scheme, the matching by the image matching unit includes:
calculating the number of matching points between the Sift feature vector of the identifier picture and the Sift feature vector of each identifier's information in the base;
and taking the identifier with the largest number of matching points to the identifier picture as the recognition result.
In the above scheme, when the identifier information base contains identifier information for multiple shooting angles of each identifier,
the calculation by the image matching unit becomes: for each identifier, summing the matching points between the Sift feature vector of the identifier picture and the Sift feature vectors of that identifier's information at its various shooting angles; the recognition result is then the identifier with the highest total number of matching points.
In the above scheme, the image matching unit calculates these sums by:
acquiring from the user the angle at which the identifier picture was shot;
determining, from that angle, the order in which the identifier picture is matched against the identifier information of the various shooting angles in the base;
and calculating the per-identifier sums of matching points over the shooting angles in that order.
With the indoor positioning method and device based on image recognition provided by the embodiments of the invention, a picture of a position identifier photographed by the user at the current location is first acquired and preprocessed; the preprocessed picture is then matched against the identifier information of each identifier in an identifier information base to recognize it; and finally the position at which the picture was photographed is determined from the position information of the recognition result in the current map system and presented on the map. The user's position is thus determined from the identifier information captured by the user's terminal together with an electronic map, without relying on paper floor plans, so customers are never unable to locate themselves because a floor plan is soiled. When the indoor layout changes, only the positions of the various identifiers in the electronic map need to be updated, greatly reducing the labor of replacing floor plans. The scheme therefore features small workload, strong universality, and high practical value.
Drawings
Fig. 1 is a schematic flow chart of an indoor positioning method based on image recognition according to an embodiment of the present invention;
Fig. 2 shows identifier information captured at multiple shooting angles of the same identifier according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the design structure of an identifier information base according to an embodiment of the present invention;
Fig. 4 is a schematic flow chart of an indoor positioning method based on image recognition according to a second embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an indoor positioning device based on image recognition according to an embodiment of the present invention.
Detailed Description
In the embodiment of the invention, a picture of a position identifier photographed by the user at the current location is first acquired and preprocessed; the preprocessed picture is then matched against the identifier information of each identifier in an identifier information base to recognize it; finally, the position at which the picture was photographed is determined from the position information of the recognition result in the current map system and presented on the map.
The current map system is an electronic, complete topographic map of the place where the user currently is. For example, when the user is in a shopping mall, supermarket, hotel, exhibition hall, museum, or library, the current map system contains the complete topographic map of that venue together with the layered maps of each of its floors.
Here, preprocessing the identifier picture includes cropping and graying it; after preprocessing, the method further comprises performing scale-invariant feature transform (Sift) feature vector extraction on the preprocessed picture; correspondingly, the identifier information is stored in the base as pictures or as Sift feature vectors. Identifier information is information stored in advance for recognizing identifier pictures: it may be pictures relevant to such recognition, or the Sift feature vectors of those pictures. For example, when the method is applied to positioning in a shopping mall, the picture taken by the user may be a shop logo, and the identifier information may be prestored pictures of that logo taken from several shooting angles, or the Sift feature vectors extracted from those pictures.
Specifically, matching the preprocessed identifier picture against the identifier information of each identifier in the base to recognize it includes: calculating the number of matching points between the Sift feature vector of the identifier picture and the Sift feature vector of each identifier's information in the base, and taking the identifier with the largest number of matching points as the recognition result.
When the base contains identifier information for multiple shooting angles of each identifier, the calculation becomes: for each identifier, summing the matching points between the Sift feature vector of the identifier picture and the Sift feature vectors of that identifier's information at its various shooting angles; the recognition result is then the identifier with the highest total number of matching points.
Specifically, these sums are calculated by: acquiring from the user the angle at which the identifier picture was shot; determining, from that angle, the order in which the picture is matched against the identifier information of the various shooting angles in the base; and calculating the per-identifier sums of matching points over the shooting angles in that order.
The following describes in detail a technical solution implementation of the embodiment of the present invention with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a schematic flow chart of an indoor positioning method based on image recognition in an embodiment of the present invention; as shown in Fig. 1, the method includes the following steps:
Step 101: acquiring a picture of a position identifier photographed by the user at the current location, and preprocessing the identifier picture;
here, preprocessing the identifier picture includes cropping and graying it.
Besides the position identifier itself, the picture taken by the user contains background. The background provides no information about the identifier and introduces additional interference that harms the recognition result. To reduce this interference as far as possible, the picture is cropped to remove the parts unrelated to the identifier, so that processing concentrates on the position mark.
In addition, position identifiers differ mainly in shape rather than color, so the picture is converted to grayscale. This avoids spending effort on low-value color features and focuses subsequent processing on the shape features of the identifier, making it more targeted and efficient; the grayed picture also carries less data, which smooths picture transmission.
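The crop-then-gray preprocessing described above can be sketched in a few lines of Python. The image representation (a list of rows of RGB tuples) and the function names are illustrative, and the grayscale conversion uses the standard ITU-R BT.601 luminance weights, a common choice the patent does not itself specify:

```python
def crop(pixels, top, left, height, width):
    """Keep only the region of the photo that contains the position sign."""
    return [row[left:left + width] for row in pixels[top:top + height]]

def to_gray(pixels):
    """Convert RGB pixels to 8-bit grayscale (ITU-R BT.601 weights)."""
    return [[int(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in pixels]

# A toy 2x2 image: red, green on the first row; blue, white on the second.
img = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (255, 255, 255)]]
gray = to_gray(crop(img, 0, 0, 2, 2))
print(gray)  # [[76, 149], [29, 255]]
```

Collapsing three color channels to one both focuses later matching on shape and shrinks the data to be transmitted, matching the two motivations given above.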
After preprocessing, the method further comprises extracting the Sift feature vector of the preprocessed identifier picture.
In feature-based image matching, the primary task is to extract and represent features that characterize the image content well and are stable. A Sift feature vector contains the information of many feature points in the picture and effectively represents its content. Moreover, Sift features are invariant to scale, rotation, and illumination, so they cope well with varying shooting distance, shooting angle, and lighting during photographing in a mall; because of this scene adaptability, Sift features are chosen for image matching in the embodiment.
Step 102: matching the preprocessed identifier picture against the identifier information of each identifier in the identifier information base to recognize the identifier picture;
specifically, the number of matching points between the Sift feature vector of the identifier picture and the Sift feature vector of each identifier's information in the base is calculated, and the identifier with the largest number of matching points is taken as the recognition result.
Recognition is determined by the degree of match between the identifier picture and the identifier information in the base. After the Sift feature vectors of both have been generated, the Euclidean distance between keypoint feature vectors serves as the similarity measure between keypoints in the two images. Specifically, for a given keypoint of the identifier picture, the two keypoints of the identifier information with the smallest Euclidean distances to it are found; if the nearest distance divided by the second-nearest distance is below a given ratio threshold, the pair is counted as a matching point. The more matching points there are, the better the identifier picture matches the identifier information and the more likely the two depict the same position identifier. The photographed picture is therefore matched against the information of every identifier in the base, and the identifier with the most matching points is taken as the recognition result, thereby recognizing the photographed identifier.
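The Euclidean-distance ratio test described above can be sketched as follows. The descriptors here are toy 2-D points rather than real 128-dimensional Sift vectors, and the 0.8 ratio threshold is an illustrative value, not one fixed by the patent:

```python
import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def count_matching_points(query_descs, template_descs, ratio=0.8):
    """Count keypoint pairs that pass the ratio test: a query descriptor
    matches when its nearest template descriptor is clearly closer than
    the second-nearest one."""
    matches = 0
    for q in query_descs:
        dists = sorted(euclidean(q, t) for t in template_descs)
        if len(dists) >= 2 and dists[0] < ratio * dists[1]:
            matches += 1
    return matches

# The first query descriptor has one obvious nearest neighbour; the second
# is equidistant from two template descriptors and is rejected as ambiguous.
query = [(0.0, 0.0), (5.0, 5.0)]
template = [(0.1, 0.0), (9.0, 9.0), (1.0, 9.0)]
print(count_matching_points(query, template))  # 1
```

The ratio test rejects ambiguous correspondences rather than merely distant ones, which is why it is preferred over a plain distance cutoff.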
In practical application, the pictures taken by users will not all be shot from the same angle, so when the identifier information base is designed, each identifier must include information shot from several angles. As shown in Fig. 2, the base may include, for each identifier: a front view 2-a of the position identifier, a 30-degree left view 2-b and a 30-degree right view 2-c, and a 45-degree right view 2-d and a 45-degree left view 2-e. Fig. 3 is a schematic diagram of the design structure of an identifier information base according to an embodiment of the present invention.
When the base contains identifier information for multiple shooting angles of each identifier, the calculation becomes: for each identifier, sum the matching points between the Sift feature vector of the identifier picture and the Sift feature vectors of that identifier's information at its various shooting angles, and take the identifier with the highest total as the recognition result.
Taking Fig. 2 as an example, during recognition the matching points between the photographed picture and each identifier's five pieces of information are summed, and the identifier with the highest sum is taken as the recognition result.
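The sum-over-views decision rule of the paragraph above can be sketched as follows; the shop names and per-view match counts are hypothetical:

```python
# Hypothetical match counts of one photographed sign against the five stored
# views (front, left 30, right 30, right 45, left 45) of each identifier.
match_counts = {
    "shop_A": [12, 8, 30, 5, 3],
    "shop_B": [40, 22, 18, 9, 7],
    "shop_C": [6, 4, 5, 2, 1],
}

def recognize(counts):
    """Return the identifier whose per-view match counts sum highest."""
    return max(counts, key=lambda name: sum(counts[name]))

print(recognize(match_counts))  # shop_B (total 96 vs 58 and 18)
```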
In the embodiment of the invention, the identifier information may be stored in the base as pictures or as Sift feature vectors.
Extracting a picture's Sift feature vector has a certain time cost. If the identifier picture were matched directly against the information pictures in the base, then besides extracting the Sift features of the photographed picture, the Sift feature vectors of the information pictures would also have to be extracted; when the base holds many templates, that extraction takes so long that users would not accept it.
Since the identifier information of each identifier in the base does not change, neither do the corresponding Sift feature vectors. The objects stored in the base can therefore be changed from the original information pictures to their feature vectors, so that at recognition time only the photographed picture's Sift feature vector needs to be extracted, greatly reducing the time cost.
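The precompute-and-store idea can be sketched as follows. `extract_descriptors` is a trivial stand-in for the expensive Sift extraction that the scheme avoids repeating at query time; the class and all names are illustrative, not from the patent:

```python
def extract_descriptors(picture):
    """Stand-in for Sift extraction: pretend each pixel is a 1-D descriptor."""
    return [(float(p),) for p in picture]

class IdentifierBase:
    """Stores precomputed feature vectors instead of raw template pictures,
    so recognizing a query requires only one extraction: the query's own."""
    def __init__(self, templates):
        # One-time extraction cost, paid once when the base is built offline.
        self.vectors = {name: extract_descriptors(pic)
                        for name, pic in templates.items()}

    def lookup(self, name):
        return self.vectors[name]

base = IdentifierBase({"shop_A": [10, 20], "shop_B": [30]})
print(base.lookup("shop_A"))  # [(10.0,), (20.0,)]
```

Because the template vectors never change, the dictionary can equally be serialized to disk and loaded at startup; only the query picture is processed per request.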
Since matching the identifier information in the base one by one still incurs considerable time overhead, in the embodiment of the invention the per-identifier sums of matching points over the shooting angles may be computed with a pruning strategy, specifically:
first, acquire from the user the angle at which the identifier picture was shot; then, determine from that angle the order in which the picture is matched against the identifier information of the various shooting angles in the base; finally, calculate the per-identifier sums of matching points over the shooting angles in that order.
Still taking fig. 2 as an example, before matching, the following processing is completed:
a. firstly, acquiring the angle of a shooting identifier input by a user;
here, the angles may be divided into a front view, a left view, a right view, etc., and the embodiment of the present invention takes an input front view as an example;
b. matching the captured identification picture with the front-view identification information of all the identifications in the identification information base, sorting the identifications in descending order of matching points, and recording the top n% as set A; here, the size of n may be determined according to the actual situation; for example, the top 1/2 of the identifications may be selected as set A, in which case n takes the value 50;
c. matching the captured identification picture with the left-view identification information of all the identifications in set A, sorting the identifications in set A in descending order of matching points, and recording the top m% as set B; here, the size of m may likewise be determined according to the actual situation, and m may be the same as or different from n; for example, the top 1/2 may again be selected as set B, in which case m also takes the value 50;
d. matching the captured identification picture with the right-view identification information of all the identifications in set B, and counting the matching points;
e. counting, for every identification, the sum of its template matching points with the captured identification picture, and selecting the identification with the highest sum as the recognition result;
here, identification information that did not participate in matching contributes a matching-point count of 0.
Practical operation shows that, after the pruning strategy is adopted, the time overhead of recognizing the identification picture is greatly reduced, while the requirements on recognition accuracy and real-time performance are still met.
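The coarse-to-fine pruning of steps a-e can be sketched as below. `match_count` is a stub returning precomputed matching-point counts (real counts would come from SIFT matching), and the view order, the keep fraction and all scores are illustrative assumptions, not values from the embodiment.

```python
# Pruning sketch: match the front views of all identifications, keep the
# top fraction, match their left views, keep the top fraction again, match
# the survivors' right views, then pick the identification with the highest
# total. Identifications pruned before a view contribute 0 for that view.

SCORES = {  # illustrative matching-point counts, (query, template) -> count
    ("q", "a_front"): 30, ("q", "b_front"): 20,
    ("q", "c_front"): 10, ("q", "d_front"): 5,
    ("q", "a_left"): 8, ("q", "b_left"): 25,
    ("q", "b_right"): 10,
}

def match_count(query, template):
    return SCORES.get((query, template), 0)

def prune_match(query, library, views=("front", "left", "right"), keep=0.5):
    totals = {name: 0 for name in library}
    candidates = list(library)
    for i, view in enumerate(views):
        scores = {n: match_count(query, library[n][view]) for n in candidates}
        for n, s in scores.items():
            totals[n] += s
        if i < len(views) - 1:  # prune: keep only the top `keep` fraction
            ranked = sorted(scores, key=scores.get, reverse=True)
            candidates = ranked[:max(1, int(len(ranked) * keep))]
    return max(totals, key=totals.get)

library = {n: {v: f"{n}_{v}" for v in ("front", "left", "right")}
           for n in "abcd"}
```

With these scores, identification "a" leads after the front views but "b" overtakes it once the left and right views are counted, illustrating why the totals, not a single view, decide the result.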
Step 103: determining, according to the position information of the recognition result in the current map system, the position of the captured position identifier in the map system, and presenting it on the map.
After the captured identification picture is recognized, the coordinate position of the identification is queried in the existing indoor electronic map system and displayed on the indoor electronic map.
Here, the way the current position is presented on the map is not limited to a specific implementation; for example, the position of the position identifier may be shown directly in the map system as a small red dot or a small red star, or a specific textual description of the position may be given.
The indoor positioning method based on image recognition is suitable for large shared public indoor venues such as shopping malls, supermarkets, hotels, exhibition halls, museums and libraries; the position identifier may be any marker whose position in the venue is relatively fixed. For example, when the embodiment of the present invention is applied to a shopping mall, the captured identification picture may be a shop logo in the mall; when applied to a hotel, it may be a room's door number; when applied to a museum, it may be a stand number or an exhibit. The positioning approach provided by the embodiment of the present invention can also be applied outdoors, for example: the themed buildings in a theme park serve as identification information, and a user can determine his or her position by photographing the building ahead.
By using the indoor positioning method based on image recognition, the positioning function can be realized simply by establishing the identification information base and the corresponding positions of the identifications in the current map system, and having the user photograph the current scene. When the layout of the place changes, only the correspondence between the identification information in the base and the identification positions in the map system needs to be updated. For example, when a new merchant moves into a shopping mall, only the new merchant's logo information needs to be added to the identification information base, and the corresponding position of that identification added to the map system.
Because position identifiers recur across places of the same kind, a separate identification information base need not be built for every place when the base is established. Taking the most representative case, shopping malls, as an example: logos repeat heavily among large malls, and logos of many brands appear in every mall, so duplicate logo information bases need not be built and stored repeatedly.
The implementation of the image recognition-based indoor positioning method of the present invention is further described in detail below in conjunction with a specific scenario. Fig. 4 is a schematic flow chart of an indoor positioning method based on image recognition in an embodiment of the present invention, where the embodiment of the present invention takes a mall as an example, and includes the following steps:
step 400: firstly, establishing a logo library covering all the shop logos in the mall;
in practical application, a logo library must be provided for each mall; meanwhile, the logos present on a given floor of a mall are only a subset of that library, and matching is performed against this subset only. The structure of the logo library is shown in Table 1 and Table 2, where Table 1 gives the correspondence between malls and logo libraries in the embodiment of the present invention, and Table 2 gives the brand information contained on each floor of each mall.
Location | Mall ID |
Feature vector library 1 | 1 |
Feature vector library 2 | 2 |
Feature vector library 3 | 3 |
Universal feature vector library | 4 |
Universal feature vector library | 5 |
TABLE 1
Mall ID | Floor | Brand |
1 | Floor1 | AVENIR |
1 | Floor1 | CASIO |
1 | Floor1 | DIVUM |
1 | Floor1 | Elizabeth Arden |
1 | Floor1 | KOSS |
1 | Floor1 | MIDO |
1 | Floor1 | sasa |
1 | Floor1 | TISSOT |
1 | Floor1 | ZA |
1 | Floor1 | Dislocation of repose |
1 | Floor1 | Herborist |
1 | Floor1 | Ganjiakou gold |
1 | Floor1 | Haoya |
1 | Floor1 | Jade-mixing edge |
1 | Floor1 | Hougily world watch center |
1 | Floor1 | Huateng wine |
1 | Floor1 | Huiwen Xiang |
1 | Floor1 | Jiayi tea |
1 | Floor1 | Gold elephant jewelry |
TABLE 2
In Table 1, each mall is given its corresponding logo library, and the storage information of the library is held in the Location attribute. The malls with IDs 4 and 5 have no logo feature-vector library of their own and instead use the universal feature-vector library. The reason is that logos repeat heavily among the large malls of a city, and logos of many brands appear in every mall, so there is no need to build an excessive number of duplicate logo libraries.
In Table 2, the logo set C owned by a floor can be queried through the mall ID and the floor. During logo recognition, providing the mall number and the floor is enough to obtain the set C of that floor's logos from the logo library and match against it.
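Tables 1 and 2 amount to two relational tables, and the set C for a floor is a single query. The sketch below uses illustrative table and column names (the embodiment only names a Location table and a mall table) and a handful of rows:

```python
import sqlite3

# Location maps a mall ID to its (possibly shared) feature-vector library;
# mall lists the brands on each floor. Names and rows are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE location (mall_id INTEGER PRIMARY KEY, library TEXT)")
conn.execute("CREATE TABLE mall (mall_id INTEGER, floor TEXT, brand TEXT)")
conn.executemany("INSERT INTO location VALUES (?, ?)", [
    (1, "feature_vector_library_1"),
    (4, "universal_feature_vector_library"),
    (5, "universal_feature_vector_library"),
])
conn.executemany("INSERT INTO mall VALUES (?, ?, ?)", [
    (1, "Floor1", "CASIO"), (1, "Floor1", "TISSOT"), (1, "Floor2", "MIDO"),
])

def logo_set(mall_id, floor):
    # Set C: the logos to match against for this mall and floor.
    rows = conn.execute("SELECT brand FROM mall WHERE mall_id = ? AND floor = ?",
                        (mall_id, floor))
    return {brand for (brand,) in rows}
```

Note that two mall IDs may point at the same library row, which is how the shared universal feature-vector library avoids duplicate storage.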
Step 401: acquiring a shop logo picture of a current position shot by a user;
step 402: preprocessing the captured shop logo picture;
wherein, the preprocessing of the logo picture comprises the following steps: and performing picture cutting and graying processing on the logo picture.
Step 403: carrying out Sift feature vector extraction on the logo picture;
step 404: matching the logo picture with the information of each logo in a logo library, and identifying the logo picture;
the logo information can be stored in the logo library in a picture or Sift characteristic vector mode.
Specifically, the matching points between the Sift feature vector of the logo picture and the Sift feature vector of each logo in the logo library are calculated respectively, and the recognition result of the logo picture is determined according to the logo with the largest number of matching points.
When the logo library contains logo information of the different shops at multiple shooting angles, calculating the matching points between the Sift feature vector of the logo picture and the Sift feature vector of each shop's logo information comprises: calculating, for each shop in the logo library, the sum of the matching points between the Sift feature vector of the logo picture and the Sift feature vectors of that shop's logo information at the various shooting angles; and determining the recognition result of the logo picture comprises: determining the recognition result according to the shop with the highest total number of matching points.
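The sum-then-select step just described can be sketched as follows, with illustrative matching-point counts standing in for real SIFT match results:

```python
# For each shop, the score is the sum of matching points over its templates
# at all shooting angles; the recognition result is the shop with the
# highest total.

def best_shop(per_angle_matches):
    # per_angle_matches: {shop: {angle: matching_point_count}}
    totals = {shop: sum(counts.values())
              for shop, counts in per_angle_matches.items()}
    return max(totals, key=totals.get)

result = best_shop({
    "shop_a": {"front": 30, "left": 8, "right": 2},    # total 40
    "shop_b": {"front": 20, "left": 25, "right": 10},  # total 55
})
```

Angles that were never matched simply do not appear in a shop's inner dict, which is equivalent to counting them as 0.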
Here, a pruning strategy may be adopted for processing, specifically:
firstly, inputting the shooting angle of the logo picture; then, determining the order in which the logo picture is matched against the logo information of the various shooting angles in the logo library according to the input angle; and finally, calculating in that order the sum of the matching points between the Sift feature vector of the logo picture and the Sift feature vectors of the logo information at the various shooting angles of each shop logo in the logo library.
Still taking fig. 2 as an example, calculating, for each shop logo in the logo library, the sum of the matching points between the Sift feature vector of the logo picture and the Sift feature vectors of that logo's information at multiple shooting angles includes:
a. acquiring the angle of a shooting identifier input by a user;
here, the angles may be divided into a front view, a left view, a right view, etc., and the embodiment of the present invention takes an input front view as an example;
b. matching the captured logo picture with the front views of all the shop logos in the logo library, sorting the shop logos in descending order of matching points, and recording the top n% as set A; here, the size of n may be determined according to the actual situation; for example, the top 1/2 of the shop logos may be selected as set A, in which case n takes the value 50;
c. matching the captured logo picture with the left views of all the shop logos in set A, sorting the shop logos in set A in descending order of matching points, and recording the top m% as set B; here, the size of m may likewise be determined according to the actual situation, and may be the same as or different from n; for example, the top 1/2 may again be selected as set B, in which case m also takes the value 50;
d. matching the captured logo picture with the right views of all the shop logos in set B, and counting the matching points;
e. counting, for each shop logo, the sum of the matching points between the captured logo picture and the logo information at its various shooting angles, and selecting the shop with the highest sum as the recognition result;
here, logo information that did not participate in matching contributes a matching-point count of 0.
Step 405: determining, according to the position information of the recognition result in the current map system, the position of the captured shop logo in the mall, and presenting it on the map.
After the captured logo picture is recognized, the coordinate position of the shop is queried in the existing mall indoor electronic map system and displayed on the mall's indoor electronic map.
Here, the way the current position is presented on the map is not limited to a specific implementation; for example, the position of the position identifier may be shown directly in the map system as a small red dot or a small red star, or a specific textual description of the position may be given.
An embodiment of the present invention further provides an indoor positioning device based on image recognition, as shown in fig. 5, the device includes: an image preprocessing unit 51, an image matching unit 52, a position determining unit 53; wherein,
the image preprocessing unit 51 is configured to obtain a current position identification picture taken by a user, and preprocess the identification picture;
the image preprocessing unit 51 preprocesses the identification picture, including: and performing picture cutting and graying processing on the identification picture.
Besides the current position identifier, the picture taken by the user contains background regions. The background provides no information about the current position identifier and introduces additional interference that adversely affects the recognition result. To reduce this interference as much as possible, the captured identification picture is cropped to remove the parts unrelated to the current position identifier, so that processing focuses on the position mark itself.
In addition, because position identifiers differ mainly in shape rather than in color, the picture can be grayed to avoid processing low-value color features, letting subsequent processing concentrate on the shape features of the position identifier, which is both more reasonable and more efficient; moreover, the grayed picture carries less data, making picture transmission smoother.
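The two preprocessing steps can be sketched with plain lists; a real implementation would more likely use an image library such as OpenCV or Pillow. The luminance weights are the standard ITU-R BT.601 coefficients, an assumption since the embodiment does not specify a graying formula.

```python
# Preprocessing sketch: crop away the background around the position
# identifier, then convert RGB pixels to grayscale.

def crop(image, top, left, height, width):
    # Keep only the rectangle that contains the position identifier.
    return [row[left:left + width] for row in image[top:top + height]]

def to_gray(image):
    # ITU-R BT.601 luminance: 0.299 R + 0.587 G + 0.114 B per pixel.
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in image]
```

Cropping shrinks the region SIFT must search, and graying collapses three channels to one, which is why both steps also help the later transmission and matching stages.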
The device further comprises a feature vector extraction unit 54, configured to perform scale invariant feature transformation Sift feature vector extraction on the preprocessed identification picture.
In feature-based image matching, the primary task is to extract and represent features that characterize the image content and are relatively stable. A Sift feature vector encodes information about many feature points in a picture and can effectively represent its content. Moreover, Sift features are invariant to scale, rotation and illumination, and so cope well with variations in shooting distance, shooting angle and lighting inside a mall; given this scene adaptability, Sift features are chosen for image matching.
The apparatus further comprises an identification information base 55 and a map system 56, wherein the identification information base 55 can store the identification information in the form of pictures or Sift feature vectors.
The image matching unit 52 is configured to match the preprocessed identification picture with identification information of each identification in an identification information base, and identify the identification picture;
specifically, the image matching unit 52 matching the preprocessed identification picture with the identification information of each identification in the identification information base and recognizing the identification picture includes: the image matching unit 52 calculating, for each identification in the base, the matching points between the Sift feature vector of the identification picture and the Sift feature vector of that identification's information, and determining the recognition result of the identification picture according to the identification with the largest number of matching points.
Which identification the identification picture corresponds to is determined by its degree of matching with the identification information in the base. After the Sift feature vectors of the identification picture and of the identification information are generated, the Euclidean distance between the Sift feature vectors of key points is used as the similarity measure for key points in the two images. Specifically, for the Sift feature vector of a key point in the identification picture, the two key points of the identification information nearest in Euclidean distance are found; if the nearest distance divided by the second-nearest distance is less than a certain ratio threshold, the pair is accepted as a matching point. The more matching points, the higher the degree of matching between the identification picture and the identification information, and the more likely the two belong to the same position identifier. The captured identification picture is therefore matched against the identification information of all the identifications in the base, and the identification with the most matching points is taken as the recognition result, thereby recognizing the captured identification.
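The distance-ratio acceptance rule just described (often called Lowe's ratio test) can be sketched as follows. The 0.8 threshold is an illustrative value, and the descriptors are tiny 2-D stand-ins for real 128-dimensional SIFT descriptors.

```python
import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def match_points(query_desc, template_desc, ratio=0.8):
    # For each query keypoint descriptor, find the nearest and
    # second-nearest template descriptors; accept the pair as a matching
    # point only when nearest / second_nearest < ratio.
    matches = 0
    for q in query_desc:
        dists = sorted(euclidean(q, t) for t in template_desc)
        if len(dists) >= 2 and dists[1] > 0 and dists[0] / dists[1] < ratio:
            matches += 1
    return matches
```

A query descriptor equidistant from two template descriptors is rejected (ratio 1.0), which is exactly the ambiguous case the threshold is meant to filter out.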
In practical application, the identification pictures taken by users will not all be shot from the same angle, so the identification information base must be designed to contain identification information of the different identifications at multiple shooting angles. When it does, the image matching unit 52 calculating the matching points between the Sift feature vector of the identification picture and the Sift feature vector of each identification's information in the base includes: calculating, for each identification, the sum of the matching points between the Sift feature vector of the identification picture and the Sift feature vectors of that identification's information at the various shooting angles; and determining the recognition result of the identification picture includes: determining the recognition result according to the identification with the highest total number of matching points.
Specifically, the image matching unit 52 calculates the sum of the matching points of the Sift feature vector of the identification picture and the Sift feature vectors of the identification information at multiple shooting angles of each identification in the identification information base, respectively, and includes:
acquiring the angle of the shot identification picture input by a user;
here, the angles may be divided into a front view, a left view, a right view, etc., and the embodiment of the present invention takes an input front view as an example;
determining the sequence of matching the identification picture with the identification information of various shooting angles in the identification information base according to the shooting angles input by the user;
and calculating, in the determined order, the sum of the matching points between the Sift feature vector of the identification picture and the Sift feature vectors of the identification information at the various shooting angles of each identification in the identification information base.
And the position determining unit 53 is configured to determine, according to the position information of the recognition result in the current map system, a position where the shot position identifier is located in the map system, and present the position on the map.
After the image matching unit 52 recognizes the captured identification picture, the position determining unit 53 queries the coordinate position of the shop in the existing mall indoor map system and presents it on the indoor map.
Here, the presenting of the current position on the map by the position determining unit 53 is not limited to a specific implementation manner, for example, the position where the position identifier is located may be directly displayed by a small red dot or a small red star in the map system, or a specific position description may be given.
The functions implemented by the processing units in the image-recognition-based indoor positioning device shown in fig. 5 can be understood with reference to the description of the indoor positioning method above. Those skilled in the art will appreciate that the functions of these processing units may be implemented by programs running on various types of processors, and that the storage unit may be implemented by various memories or storage media. For example, one typical implementation is: the intelligent terminal is responsible for preprocessing the captured identification picture, namely cropping and graying, and transmits the gray picture to a background server for processing; the background server can be built with Apache + MySQL + PHP, where the MySQL database stores the Location table and the mall table.
In the embodiments provided in the present invention, it should be understood that the disclosed method and apparatus may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the communication connections between the components shown or discussed may be through interfaces, indirect couplings or communication connections of devices or units, and may be wired, wireless or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: various media that can store program codes, such as a removable Memory device, a Read-Only Memory (ROM), a magnetic disk, or an optical disk.
Alternatively, the integrated unit according to the embodiment of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional unit and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.
The image-recognition-based indoor positioning method and apparatus described above are merely examples of embodiments of the present invention and are not limiting; related indoor positioning methods and apparatuses also fall within the scope of the present invention.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.
Claims (13)
1. An indoor positioning method based on image recognition is characterized by comprising the following steps:
acquiring a current position identification picture shot by a user, and preprocessing the identification picture;
matching the preprocessed identification picture with identification information of each identification in an identification information base, and identifying the identification picture;
and determining the position of the shot position mark in the map system according to the position information of the recognition result in the current map system, and presenting the shot position mark on the map.
2. The method of claim 1, wherein the pre-processing the identification picture comprises: and performing picture cutting and graying processing on the identification picture.
3. The method of claim 1, further comprising: and performing scale invariant feature transformation Sift feature vector extraction on the preprocessed identification picture.
4. The method according to claim 1, wherein the identification information is stored in the identification information base in a picture or a Sift feature vector.
5. The method according to claim 1, wherein the matching the preprocessed identification picture with the identification information of each identification in the identification information base, and the identifying the identification picture comprises:
respectively calculating the matching points of the Sift characteristic vector of the identification picture and the Sift characteristic vector of the identification information of each identification in the identification information base;
and determining the identification result of the identification picture according to the identification with the maximum matching point number with the identification picture.
6. The method according to claim 4, wherein the identification information base comprises identification information of the identifications at different shooting angles;
correspondingly, the calculating the matching points of the Sift feature vector of the identification picture and the Sift feature vector of the identification information of each identification in the identification information base respectively comprises: respectively calculating the sum of the matching points of the Sift characteristic vector of the identification picture and the Sift characteristic vectors of the identification information under various shooting angles of each identification in the identification information base;
the determining the identification result of the identification picture according to the identification with the maximum number of matching points with the identification picture comprises the following steps: and determining the identification result of the identification picture according to the identification with the highest total matching point number with the identification picture.
7. The method of claim 6, wherein the step of respectively calculating the sum of the matching points of the Sift feature vector of the identification picture and the Sift feature vectors of the identification information at the plurality of shooting angles of each identification in the identification information base comprises the steps of:
acquiring the angle of the shot identification picture input by a user;
determining the sequence of matching the identification picture with the identification information of various shooting angles in the identification information base according to the shooting angles input by the user;
and respectively calculating, in the determined order, the sum of the matching points between the Sift characteristic vector of the identification picture and the Sift characteristic vectors of the identification information under various shooting angles of each identification in the identification information base.
8. An indoor positioning device based on image recognition, characterized in that the device comprises: the device comprises an image preprocessing unit, an image matching unit and a position determining unit; wherein,
the image preprocessing unit is used for acquiring a current position identification picture shot by a user and preprocessing the identification picture;
the image matching unit is used for matching the preprocessed identification picture with the identification information of each identification in the identification information base and identifying the identification picture;
and the position determining unit is used for determining the position of the shot position mark in the map system according to the position information of the recognition result in the current map system and presenting the shot position mark on the map.
9. The apparatus of claim 8, wherein the image preprocessing unit preprocesses the identification picture comprising: and performing picture cutting and graying processing on the identification picture.
10. The apparatus according to claim 8, further comprising a feature vector extraction unit, configured to perform scale invariant feature transform, Sift, feature vector extraction on the preprocessed identification picture.
11. The apparatus of claim 8, wherein the image matching unit matches the preprocessed identification picture with the identification information of each identification in the identification information base, and the identifying the identification picture comprises:
the image matching unit respectively calculates the matching points of the Sift characteristic vector of the identification picture and the Sift characteristic vector of the identification information of each identification in the identification information base;
and determining the identification result of the identification picture according to the identification with the maximum matching point number with the identification picture.
12. The device according to claim 11, wherein, when the identification information base contains identification information of a plurality of shooting angles for each identification,
the respectively calculating, by the image matching unit, of the number of matching points between the Sift feature vector of the identification picture and the Sift feature vector of the identification information of each identification in the identification information base comprises: respectively calculating the sum of the numbers of matching points between the Sift feature vector of the identification picture and the Sift feature vectors of the identification information at the plurality of shooting angles of each identification in the identification information base;
and the determining of the recognition result of the identification picture according to the identification having the largest number of matching points with the identification picture comprises: determining the recognition result of the identification picture according to the identification having the largest total number of matching points with the identification picture.
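The claim-12 multi-angle step reduces to summing per-angle match counts per identification and taking the largest total. In this sketch the per-angle counts are assumed inputs (in a real system they would come from a Sift matcher such as the one in claim 11); the dictionary layout and names are illustrative.

```python
def total_matches(per_angle_counts):
    """per_angle_counts: {identification: {angle: match_count}} -> totals."""
    return {name: sum(counts.values()) for name, counts in per_angle_counts.items()}

def recognize(per_angle_counts):
    """Pick the identification with the highest total match count."""
    totals = total_matches(per_angle_counts)
    return max(totals, key=totals.get)

counts = {"entrance_sign": {"0deg": 12, "30deg": 9, "60deg": 4},
          "elevator_sign": {"0deg": 6, "30deg": 7, "60deg": 5}}
print(recognize(counts))  # entrance_sign (total 25 vs 18)
```

Summing over angles makes the decision robust to the user photographing the sign from a viewpoint that matches no single stored picture well.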
13. The device according to claim 12, wherein the respectively calculating, by the image matching unit, of the sum of the numbers of matching points between the Sift feature vector of the identification picture and the Sift feature vectors of the identification information at the plurality of shooting angles of each identification in the identification information base comprises:
acquiring the shooting angle of the identification picture as input by the user;
determining, according to the shooting angle input by the user, the order in which the identification picture is matched against the identification information of the various shooting angles in the identification information base;
and, in that order, respectively calculating the sum of the numbers of matching points between the Sift feature vectors of the identification picture and the Sift feature vectors of the identification information at the various shooting angles of each identification in the identification information base.
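The ordering step of claim 13 can be sketched by sorting the stored shooting angles by their distance to the angle the user reports, so the most promising angle is matched first. The patent only requires an order derived from the user's angle; the circular-distance sort below is one straightforward choice, not the patent's stated formula.

```python
def match_order(stored_angles, user_angle):
    """Sort stored shooting angles (degrees) by circular distance to the
    user-reported angle, nearest first."""
    def circ_dist(a):
        d = abs(a - user_angle) % 360
        return min(d, 360 - d)   # wrap around: 350 vs 10 are 20 deg apart
    return sorted(stored_angles, key=circ_dist)

print(match_order([0, 90, 180, 270], 100))  # [90, 180, 0, 270]
```

Matching nearest angles first lets an implementation stop early (the "pruning" mentioned in the description) once a clearly winning identification emerges, instead of always matching every stored angle.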
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410335949.7A CN104112124A (en) | 2014-07-15 | 2014-07-15 | Image identification based indoor positioning method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410335949.7A CN104112124A (en) | 2014-07-15 | 2014-07-15 | Image identification based indoor positioning method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN104112124A true CN104112124A (en) | 2014-10-22 |
Family
ID=51708909
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410335949.7A Pending CN104112124A (en) | 2014-07-15 | 2014-07-15 | Image identification based indoor positioning method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104112124A (en) |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105184881A (en) * | 2015-08-28 | 2015-12-23 | 宇龙计算机通信科技(深圳)有限公司 | Method, apparatus, server and system for identifying user identity |
CN105517679A (en) * | 2015-03-25 | 2016-04-20 | 北京旷视科技有限公司 | User location determination |
CN106153047A (en) * | 2016-08-15 | 2016-11-23 | 广东欧珀移动通信有限公司 | A kind of indoor orientation method, device and terminal |
CN106650202A (en) * | 2016-09-18 | 2017-05-10 | 中国科学院计算技术研究所 | Date-driven indoor area layout prediction method and system |
CN106683195A (en) * | 2016-12-30 | 2017-05-17 | 上海网罗电子科技有限公司 | AR scene rendering method based on indoor location |
CN107144857A (en) * | 2017-05-17 | 2017-09-08 | 深圳市伊特利网络科技有限公司 | Assisted location method and system |
CN107341829A (en) * | 2017-06-27 | 2017-11-10 | 歌尔科技有限公司 | The localization method and device of virtual reality interactive component |
CN107566975A (en) * | 2017-09-05 | 2018-01-09 | 合肥工业大学 | A kind of location aware method based on Solr correlations |
CN107957859A (en) * | 2017-12-12 | 2018-04-24 | 歌尔科技有限公司 | Image presentation method and virtual reality device for virtual reality child teaching |
CN108170822A (en) * | 2018-01-04 | 2018-06-15 | 维沃移动通信有限公司 | The sorting technique and mobile terminal of a kind of photo |
CN108734734A (en) * | 2018-05-18 | 2018-11-02 | 中国科学院光电研究院 | Indoor orientation method and system |
CN109982239A (en) * | 2019-03-07 | 2019-07-05 | 福建工程学院 | Store floor positioning system and method based on machine vision |
CN110470295A (en) * | 2018-05-09 | 2019-11-19 | 北京智慧图科技有限责任公司 | A kind of indoor walking navigation and method based on AR positioning |
CN111429385A (en) * | 2020-06-10 | 2020-07-17 | 北京云迹科技有限公司 | Map generation method, device and equipment |
CN111723232A (en) * | 2020-07-06 | 2020-09-29 | 许广明 | Method and device for positioning and identifying building through Internet of things |
CN114485605A (en) * | 2020-10-23 | 2022-05-13 | 丰田自动车株式会社 | Position specifying method and position specifying system |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101655369A (en) * | 2008-08-22 | 2010-02-24 | 环达电脑(上海)有限公司 | System and method of realizing positioning navigation by using image recognition technology |
US20110090221A1 (en) * | 2009-10-20 | 2011-04-21 | Robert Bosch Gmbh | 3d navigation methods using nonphotorealistic (npr) 3d maps |
CN102629329A (en) * | 2012-02-28 | 2012-08-08 | 北京工业大学 | Personnel indoor positioning method based on adaptive SIFI (scale invariant feature transform) algorithm |
CN103198491A (en) * | 2013-01-31 | 2013-07-10 | 北京工业大学 | Indoor visual positioning method |
CN103424113A (en) * | 2013-08-01 | 2013-12-04 | 毛蔚青 | Indoor positioning and navigating method of mobile terminal based on image recognition technology |
- 2014-07-15 CN CN201410335949.7A patent/CN104112124A/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101655369A (en) * | 2008-08-22 | 2010-02-24 | 环达电脑(上海)有限公司 | System and method of realizing positioning navigation by using image recognition technology |
US20110090221A1 (en) * | 2009-10-20 | 2011-04-21 | Robert Bosch Gmbh | 3d navigation methods using nonphotorealistic (npr) 3d maps |
CN102629329A (en) * | 2012-02-28 | 2012-08-08 | 北京工业大学 | Personnel indoor positioning method based on adaptive SIFI (scale invariant feature transform) algorithm |
CN103198491A (en) * | 2013-01-31 | 2013-07-10 | 北京工业大学 | Indoor visual positioning method |
CN103424113A (en) * | 2013-08-01 | 2013-12-04 | 毛蔚青 | Indoor positioning and navigating method of mobile terminal based on image recognition technology |
Non-Patent Citations (1)
Title |
---|
DAI XIAOHONG: "Research on Digital Image Processing and Recognition Based on Machine Vision", 31 March 2012 * |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105517679A (en) * | 2015-03-25 | 2016-04-20 | 北京旷视科技有限公司 | User location determination |
US10657669B2 (en) | 2015-03-25 | 2020-05-19 | Beijing Kuangshi Technology Co., Ltd. | Determination of a geographical location of a user |
CN105184881A (en) * | 2015-08-28 | 2015-12-23 | 宇龙计算机通信科技(深圳)有限公司 | Method, apparatus, server and system for identifying user identity |
CN106153047A (en) * | 2016-08-15 | 2016-11-23 | 广东欧珀移动通信有限公司 | A kind of indoor orientation method, device and terminal |
CN106650202A (en) * | 2016-09-18 | 2017-05-10 | 中国科学院计算技术研究所 | Date-driven indoor area layout prediction method and system |
CN106650202B (en) * | 2016-09-18 | 2019-03-12 | 中国科学院计算技术研究所 | A kind of the room area layout prediction technique and system of data-driven |
CN106683195A (en) * | 2016-12-30 | 2017-05-17 | 上海网罗电子科技有限公司 | AR scene rendering method based on indoor location |
CN107144857A (en) * | 2017-05-17 | 2017-09-08 | 深圳市伊特利网络科技有限公司 | Assisted location method and system |
CN107341829A (en) * | 2017-06-27 | 2017-11-10 | 歌尔科技有限公司 | The localization method and device of virtual reality interactive component |
CN107566975A (en) * | 2017-09-05 | 2018-01-09 | 合肥工业大学 | A kind of location aware method based on Solr correlations |
CN107957859A (en) * | 2017-12-12 | 2018-04-24 | 歌尔科技有限公司 | Image presentation method and virtual reality device for virtual reality child teaching |
CN108170822A (en) * | 2018-01-04 | 2018-06-15 | 维沃移动通信有限公司 | The sorting technique and mobile terminal of a kind of photo |
CN110470295B (en) * | 2018-05-09 | 2022-09-30 | 北京智慧图科技有限责任公司 | Indoor walking navigation system and method based on AR positioning |
CN110470295A (en) * | 2018-05-09 | 2019-11-19 | 北京智慧图科技有限责任公司 | A kind of indoor walking navigation and method based on AR positioning |
CN108734734A (en) * | 2018-05-18 | 2018-11-02 | 中国科学院光电研究院 | Indoor orientation method and system |
CN109982239A (en) * | 2019-03-07 | 2019-07-05 | 福建工程学院 | Store floor positioning system and method based on machine vision |
CN111429385A (en) * | 2020-06-10 | 2020-07-17 | 北京云迹科技有限公司 | Map generation method, device and equipment |
CN111429385B (en) * | 2020-06-10 | 2021-01-08 | 北京云迹科技有限公司 | Map generation method, device and equipment |
CN111723232A (en) * | 2020-07-06 | 2020-09-29 | 许广明 | Method and device for positioning and identifying building through Internet of things |
CN111723232B (en) * | 2020-07-06 | 2023-09-08 | 深圳高速公路集团数字科技有限公司 | Method and device for positioning and identifying building through Internet of things |
CN114485605A (en) * | 2020-10-23 | 2022-05-13 | 丰田自动车株式会社 | Position specifying method and position specifying system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104112124A (en) | Image identification based indoor positioning method and device | |
CN105338479B (en) | Information processing method and device based on places | |
CN110645986B (en) | Positioning method and device, terminal and storage medium | |
Chen et al. | City-scale landmark identification on mobile devices | |
CN102388392B (en) | Pattern recognition device | |
CN103530881B (en) | Be applicable to the Outdoor Augmented Reality no marks point Tracing Registration method of mobile terminal | |
US10909369B2 (en) | Imaging system and method for object detection and localization | |
EP3398164B1 (en) | System for generating 3d images for image recognition based positioning | |
CN104936283A (en) | Indoor positioning method, server and system | |
US10127667B2 (en) | Image-based object location system and process | |
US11775788B2 (en) | Arbitrary visual features as fiducial elements | |
CN103761539B (en) | Indoor locating method based on environment characteristic objects | |
CN105117399B (en) | Image searching method and device | |
CN104748738A (en) | Indoor positioning navigation method and system | |
KR101738443B1 (en) | Method, apparatus, and system for screening augmented reality content | |
CN104657389A (en) | Positioning method, system and mobile terminal | |
CN111323024A (en) | Positioning method and device, equipment and storage medium | |
CN103955889B (en) | Drawing-type-work reviewing method based on augmented reality technology | |
CN109446929A (en) | A kind of simple picture identifying system based on augmented reality | |
US9980098B2 (en) | Feature selection for image based location determination | |
WO2018158495A1 (en) | Method and system of providing information pertaining to objects within premises | |
CN108287893A (en) | Self-help tour guide system and method based on digital image recognition auxiliary positioning | |
CN112215964A (en) | Scene navigation method and device based on AR | |
TW201823929A (en) | Method and system for remote management of virtual message for a moving object | |
JP5931646B2 (en) | Image processing device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20141022 |
|