CN114708382A - Three-dimensional modeling method, device, storage medium and equipment based on augmented reality
- Publication number: CN114708382A
- Application number: CN202210264954.8A
- Authority: CN (China)
- Prior art keywords: target object, point cloud, model, real scene, stroke
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/23—Clustering techniques
- G06T11/203—Drawing of straight lines or curves
- G06T7/11—Region-based segmentation
Abstract
The invention discloses a three-dimensional modeling method, device, storage medium and equipment based on augmented reality. The three-dimensional modeling method comprises the following steps: generating a real scene point cloud from a real scene image acquired by an augmented reality device, wherein the real scene image contains an image of a target object; constructing a target object rough model from the acquired user hand-drawing action information and the real scene point cloud; segmenting a target object point cloud from the real scene point cloud according to the target object rough model; and adjusting the target object rough model using the target object point cloud to obtain a target object fine model. The method makes full use of the scale and detail information of the target object contained in the scene image and can produce a highly detailed model even when the user draws only simple freehand strokes, reducing learning cost and improving user comfort.
Description
Technical Field
The invention belongs to the technical field of computer graphics, and particularly relates to a three-dimensional modeling method based on augmented reality, a three-dimensional modeling device, a computer readable storage medium and computer equipment.
Background
Three-dimensional modeling is an important research topic in the fields of graphics and human-computer interaction, and three-dimensional models are widely used in games, manufacturing and other fields. Traditional models are usually created on a computer screen through modeling software such as 3ds Max in a manual, interactive manner, which demands highly skilled technicians and is time-consuming and labor-intensive. With the development of VR (Virtual Reality) and AR (Augmented Reality) hardware and software, users can now work directly in a three-dimensional environment; products such as Google Tilt Brush allow immersive free-form creation in three dimensions, making the modeling process simpler and more intuitive. AR is a technology that superimposes and fuses virtual information with the real world. It draws on a wide range of technical means such as multi-modal data, three-dimensional modeling, intelligent interaction and sensing: computer-generated virtual information such as images and three-dimensional models is constructed, simulated and then superimposed onto the real scene, where the two kinds of information complement each other and thereby augment the real scene. Overlaying virtual objects on the real scene in an AR environment greatly improves the user's three-dimensional spatial perception and facilitates three-dimensional interactive operation. Meanwhile, AR provides an online data acquisition capability that seamlessly combines the data acquisition and modeling processes, so that modeling no longer has to wait until data has been acquired offline.
The advantage of sketch-based modeling technology is that a user can create a model of an object, whether or not it exists in real life, with a simple sketch, achieving a simple and efficient modeling experience. Modeling from a two-dimensional sketch generally relies on two-dimensional screen-space interaction: two-dimensional strokes are created with a mouse, tablet or similar device, and a three-dimensional model is then obtained by inferring three-dimensional information from prior assumptions. Although this interaction mode is precise, the inferred three-dimensional information is inaccurate because a two-dimensional sketch lacks depth information, and the interaction itself is unintuitive and often ambiguous. The advantage of creating a model with a three-dimensional sketch is that three-dimensional strokes can be drawn directly, making the interaction intuitive and concrete. Existing modeling methods based on three-dimensional sketches, however, require complicated and accurate three-dimensional strokes, and creating precise strokes is a difficult task: drawing in mid-air by hand often yields inaccurate strokes due to jitter. Most current work therefore proposes auxiliary equipment such as tablet computers and digital pens to improve interaction accuracy.
The prior art thus requires fine three-dimensional strokes to be drawn before a good model can be created, while introducing additional hardware to improve interaction accuracy imposes a learning cost and reduces comfort. Moreover, none of the current methods exploits scene data, even though scene data is important information for modeling: it implicitly carries the general shape and scale of the objects in it.
Disclosure of Invention
(I) Technical problem to be solved by the invention
The technical problem solved by the invention is as follows: how to effectively utilize scene point cloud data to improve the accuracy of a three-dimensional model.
(II) Technical solution adopted by the invention
An augmented reality based three-dimensional modeling method, the three-dimensional modeling method comprising:
generating a real scene point cloud according to a real scene image acquired by augmented reality equipment, wherein the real scene image comprises an image of a target object;
constructing a target object rough model according to the acquired user hand-drawing action information and the real scene point cloud;
segmenting a target object point cloud from the real scene point cloud according to the target object rough model;
and adjusting the rough model of the target object by using the point cloud of the target object to obtain a fine model of the target object.
Preferably, the user hand-drawing action information includes a plurality of outline stroke points and a plurality of track stroke points generated in real time from the recognized user drawing actions, and the method for generating the target object rough model from the acquired user hand-drawing action information and the real scene point cloud includes:
when an outline stroke point is generated, retrieving an outline matching point from the real scene point cloud and snapping the outline stroke point to it, the snapped points forming an outline stroke;
when a track stroke point is generated, retrieving a track matching point from the real scene point cloud and snapping the track stroke point to it, the snapped points forming a track stroke;
and generating the target object rough model from the outline stroke and the track stroke.
Preferably, when an outline stroke point is generated, the method for retrieving the outline matching point from the real scene point cloud includes the following steps:
screening a candidate point set from the real scene point cloud, wherein the distance between each point in the candidate point set and the outline stroke point is smaller than a preset value;
judging whether the candidate point set contains edge points;
if so, taking the edge point closest to the outline stroke point as the outline matching point;
and if not, taking the point in the candidate point set closest to the outline stroke point as the outline matching point.
Preferably, the method for segmenting the target object point cloud from the real scene point cloud according to the target object rough model includes the following steps:
determining a candidate spatial region from the target object rough model, the target object lying within the candidate spatial region;
clustering the point cloud in the candidate spatial region to form a plurality of point cloud clusters;
taking the point cloud clusters containing more than a preset number of points as candidate point cloud clusters;
calculating the spatial distance between the track stroke and each candidate point cloud cluster, wherein the spatial distance is the sum, over the stroke points of the track stroke, of each point's closest distance to the candidate point cloud cluster;
and taking the candidate point cloud cluster with the smallest spatial distance as the target object point cloud.
Preferably, the method for optimizing the target object rough model using the target object point cloud to obtain the target object fine model includes the following steps:
predicting the model type of the target object rough model;
parameterizing the target object rough model according to the model type to obtain initial parameters;
constructing a distance objective function from the initial parameters and the target object point cloud;
minimizing the distance objective function with a least squares method to obtain optimized parameters;
and constructing the target object fine model from the optimized parameters.
Preferably, the method for predicting the model type of the target object rough model includes the following steps:
inputting the formed outline stroke and track stroke respectively into a pre-trained prediction model to predict the outline stroke type and the track stroke type;
and predicting the model type of the target object rough model from the outline stroke type and the track stroke type.
Preferably, the three-dimensional modeling method further includes:
optimizing the shape of the outline stroke according to the predicted outline stroke type.
The application also discloses a three-dimensional modeling device based on augmented reality, the three-dimensional modeling device comprising:
a preprocessing unit for generating a real scene point cloud from a real scene image acquired by an augmented reality device, wherein the real scene image contains an image of a target object;
a model construction unit for constructing a target object rough model from the acquired user hand-drawing action information and the real scene point cloud;
a point cloud segmentation unit for segmenting a target object point cloud from the real scene point cloud according to the target object rough model;
and a model optimization unit for adjusting the target object rough model using the target object point cloud to obtain a target object fine model.
The application also discloses a computer-readable storage medium storing an augmented reality-based three-dimensional modeling program which, when executed by a processor, implements the augmented reality-based three-dimensional modeling method.
The application also discloses a computer device comprising a computer-readable storage medium, a processor, and an augmented reality-based three-dimensional modeling program stored in the computer-readable storage medium, the program implementing the augmented reality-based three-dimensional modeling method when executed by the processor.
(III) Advantageous effects
The invention discloses a three-dimensional modeling method based on augmented reality, which has the following technical effect compared with traditional modeling methods:
the method makes full use of the scale and detail information of the target object contained in the scene image and can produce a highly detailed model even when the user draws only simple freehand strokes, reducing learning cost and improving user comfort.
Drawings
Fig. 1 is a flowchart of a three-dimensional modeling method based on augmented reality according to a first embodiment of the present invention;
FIG. 2 is a process diagram of a three-dimensional modeling method based on augmented reality according to a first embodiment of the present invention;
FIG. 3 is a schematic diagram of generating a swept model according to the first embodiment of the present invention;
FIG. 4 is a schematic diagram of outline stroke optimization according to a first embodiment of the present invention;
FIG. 5 is a diagram illustrating a model type prediction process according to a first embodiment of the present invention;
FIG. 6 is a schematic diagram of the modeling process for different single-part objects according to the first embodiment of the invention;
FIG. 7 is a schematic diagram of a modeling process of a multi-component object according to a first embodiment of the invention;
fig. 8 is a schematic block diagram of an augmented reality-based three-dimensional modeling apparatus according to a second embodiment of the present invention;
fig. 9 is a schematic diagram of a computer device according to a fourth embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Before describing the embodiments of the present application in detail, its inventive concept is briefly described. In the existing three-dimensional sketch model construction process, auxiliary equipment such as a tablet computer and a digital pen is often needed to obtain accurate interactive strokes, which inevitably increases learning cost and reduces comfort. The augmented reality-based three-dimensional modeling method of the present application therefore first converts a scene image containing the target object into a real scene point cloud, then constructs a target object rough model from the user's hand-drawing actions, further segments the target object point cloud from the real scene point cloud, and finally optimizes and adjusts the rough model with the target object point cloud to obtain a target object fine model. The method makes full use of the scale and detail information of the target object contained in the scene image and can produce a highly detailed model even when the user draws only simple freehand strokes, reducing learning cost and improving user comfort.
As shown in fig. 1, the augmented reality-based three-dimensional modeling method of the first embodiment includes the following steps:
step S10: generating a real scene point cloud according to a real scene image acquired by augmented reality equipment, wherein the real scene image comprises an image of a target object;
step S20: constructing a target object rough model according to the acquired user hand-drawing action information and the real scene point cloud;
step S30: segmenting a target object point cloud from the real scene point cloud according to the target object rough model;
step S40: optimizing the target object rough model using the target object point cloud to obtain a target object fine model.
In step S10, the first embodiment uses the augmented reality device HoloLens 2 to acquire a depth map and a color map of the real scene, i.e., the real scene image, and the point cloud data reconstructed from the depth map is displayed on the head-mounted display in real time. Each time a user acquisition instruction is detected, such as the user's thumb and forefinger touching twice in quick succession, one frame of colored point cloud is reconstructed from the current depth image and color image. A preset registration method then merges the current frame point cloud with the historical point cloud to reduce the registration error caused by pose estimation in the HoloLens 2. Finally, a density-based denoising algorithm is applied to the acquired point cloud data to obtain a cleaner real scene point cloud, as shown in Fig. 2(a). A sketch of the merge-and-denoise step follows.
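A minimal sketch of this step in Python, assuming the Open3D library; the HoloLens capture and gesture recognition are outside its scope, ICP stands in for the embodiment's unspecified preset registration method, and all parameter values are illustrative:

```python
import numpy as np
import open3d as o3d

def merge_frame(scene_pcd, frame_pcd, voxel_size=0.01):
    """Register a newly captured frame against the accumulated scene cloud
    and merge it; ICP refines the headset pose estimate to reduce the
    registration error."""
    if len(scene_pcd.points) == 0:
        return frame_pcd
    reg = o3d.pipelines.registration.registration_icp(
        frame_pcd, scene_pcd, 0.05, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    frame_pcd.transform(reg.transformation)
    return (scene_pcd + frame_pcd).voxel_down_sample(voxel_size)

def denoise(pcd, nb_neighbors=20, std_ratio=2.0):
    """Density-based denoising: statistical outlier removal drops points
    whose mean neighbor distance deviates from the global statistics."""
    clean, _ = pcd.remove_statistical_outlier(nb_neighbors, std_ratio)
    return clean
```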
Further, to improve the accuracy and comfort of model construction, the first embodiment applies the RANdom SAmple Consensus (RANSAC) algorithm to detect planar features in the real scene point cloud and extract planes, and applies the Canny algorithm to extract edges from the depth map of the real scene. Each point of the real scene point cloud is then labeled according to the extraction result as an edge point or a non-edge point, and these labels remain unchanged during point cloud registration. A sketch of both steps follows.
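A sketch under the same assumptions (Open3D for RANSAC plane segmentation, OpenCV for Canny); the thresholds and iteration counts are illustrative, not taken from the embodiment:

```python
import cv2
import open3d as o3d

def extract_planes(pcd, dist=0.01, min_inliers=500, max_planes=5):
    """Iteratively extract planar regions from the scene cloud with RANSAC."""
    planes, rest = [], pcd
    for _ in range(max_planes):
        if len(rest.points) < min_inliers:
            break
        model, idx = rest.segment_plane(distance_threshold=dist,
                                        ransac_n=3, num_iterations=1000)
        if len(idx) < min_inliers:
            break
        planes.append((model, rest.select_by_index(idx)))  # (a,b,c,d), inliers
        rest = rest.select_by_index(idx, invert=True)
    return planes, rest

def edge_mask(depth_u8, low=50, high=150):
    """Canny on the 8-bit depth map; each back-projected point inherits the
    per-pixel edge / non-edge label."""
    return cv2.Canny(depth_u8, low, high) > 0
```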
Further, in step S20, the three-dimensional model is constructed with a sweep-based modeling method: creating a three-dimensional swept model requires drawing an outline stroke and a track stroke, where the outline stroke lies on a plane in space and is swept along the track stroke to construct the three-dimensional swept model, and each stroke consists of a series of three-dimensional vertices. That is, the user hand-drawing action information of the first embodiment includes a plurality of outline stroke points and a plurality of track stroke points generated in real time from the recognized user drawing actions.
Specifically, step S20 includes: when an outline stroke point is generated, retrieving an outline matching point from the real scene point cloud and snapping the outline stroke point to it, the snapped points forming the outline stroke; when a track stroke point is generated, retrieving a track matching point from the real scene point cloud and snapping the track stroke point to it, the snapped points forming the track stroke; and generating the target object rough model from the outline stroke and the track stroke, as shown in Fig. 2(b).
When an outline stroke point is generated, the outline matching point is retrieved from the real scene point cloud as follows: when a user drawing action is recognized and converted into an outline stroke point, the real scene point cloud is searched and a candidate point set is screened out, where the distance between each point in the candidate point set and the outline stroke point is smaller than a preset value. It is then judged whether the candidate point set contains edge points. If edge points exist, the edge point closest to the outline stroke point is taken as the outline matching point (when only one edge point exists, it is used directly); if not, the point in the candidate point set closest to the outline stroke point is taken as the outline matching point. Track matching points are retrieved and snapped in the same way, so the process is not repeated here. A sketch of the retrieval step follows.
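A sketch of the matching-point retrieval, assuming NumPy/SciPy; the search radius stands in for the embodiment's unspecified preset value:

```python
import numpy as np
from scipy.spatial import cKDTree

def find_outline_match(stroke_pt, scene_pts, is_edge, tree, radius=0.03):
    """Return the outline matching point for one stroke point, or None.

    scene_pts: (N, 3) real scene point cloud; is_edge: (N,) bool labels from
    the depth-map Canny pass; tree: cKDTree built once over scene_pts."""
    cand = np.asarray(tree.query_ball_point(stroke_pt, r=radius), dtype=int)
    if cand.size == 0:
        return None                       # no candidate: keep the raw point
    edges = cand[is_edge[cand]]
    pool = edges if edges.size else cand  # prefer edge points when present
    d = np.linalg.norm(scene_pts[pool] - stroke_pt, axis=1)
    return scene_pts[pool[np.argmin(d)]]  # nearest point in the pool
```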
While the outline stroke is being drawn, the plane P on which it lies is fitted with a least squares algorithm to serve as the supporting plane, and the stroke points are projected onto P. If P is close to a plane F extracted from the scene point cloud, F is selected as the supporting plane instead. Because the outline stroke lies on a plane in space, finding this supporting plane improves the interaction experience and reduces the user's operating difficulty. A minimal plane-fitting sketch follows.
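A minimal least-squares plane fit via SVD, assuming NumPy; the comparison of P against the extracted scene planes F is omitted:

```python
import numpy as np

def fit_support_plane(stroke_pts):
    """Least-squares plane through the outline stroke points: the normal is
    the right singular vector with the smallest singular value."""
    c = stroke_pts.mean(axis=0)
    _, _, vt = np.linalg.svd(stroke_pts - c)
    return c, vt[-1]                       # point on plane, unit normal

def project_onto_plane(pts, c, n):
    """Project stroke points onto the fitted supporting plane."""
    return pts - np.outer((pts - c) @ n, n)
```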
Further, to improve interaction convenience, after an outline stroke is formed it is input into a pre-trained prediction model, its stroke type is predicted, and the stroke shape is optimized according to the predicted type to obtain the result the user intended, which is fed back to the user in real time. The prediction model uses an LSTM network; the predicted and optimized stroke types include straight lines, circles, rectangles and free curves. Straight lines, circles and rectangles are optimized with a least squares algorithm, while free curves are smoothed with a cubic B-spline algorithm to reduce the influence of jitter; specific optimization results are shown in Fig. 4. A sketch of such a classifier follows.
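A minimal sketch of the stroke-type classifier, assuming PyTorch; the embodiment specifies only that an LSTM network is used, so the layer sizes, input encoding and training procedure here are assumptions:

```python
import torch
import torch.nn as nn

class StrokeTypeLSTM(nn.Module):
    """LSTM classifier over the four stroke types named above
    (straight line, circle, rectangle, free curve)."""
    def __init__(self, in_dim=3, hidden=64, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, strokes):        # strokes: (batch, seq_len, 3) vertices
        _, (h, _) = self.lstm(strokes) # h: (num_layers, batch, hidden)
        return self.head(h[-1])        # per-stroke class logits
```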
After the outline stroke and the track stroke are obtained, the outline stroke is swept along the track stroke to form the target object rough model, as shown in Fig. 3. Note that the sweep may be performed after the complete track stroke is formed, or incrementally as each section or point of the track stroke is created. A simplified sweep sketch follows.
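A simplified sweep sketch, assuming NumPy; it shows a purely translational sweep, whereas the embodiment may also orient the section along the track direction:

```python
import numpy as np

def sweep(outline, track):
    """Translational sweep: instance the planar outline stroke at every
    track stroke point; stitching consecutive rings yields the surface of
    the rough model.

    outline: (M, 3) outline stroke vertices; track: (K, 3) track vertices.
    Returns (K, M, 3) vertex rings."""
    section = outline - track[0]           # outline relative to sweep start
    return np.stack([section + t for t in track])
```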
Further, in order to optimize the target object rough model created by the user, the point cloud associated with it must be segmented from the real scene point cloud. To this end, the embodiment uses the segmentation semantics implicit in the target object rough model to assist the point cloud segmentation process, as shown in Fig. 2(c).
Specifically, the method for segmenting the target object point cloud from the real scene point cloud according to the target object rough model in step S30 includes the following steps:
step S31: and determining a candidate space region according to the rough model of the target object, wherein the target object is in the candidate space region. The rough model of the target object created by the user has determined the approximate spatial extent of the target object, which is denoted as the candidate spatial region. Preferably, the candidate spatial region is chosen to be 1.5 times the axial bounding box of the coarse model of the target object.
Step S32: clustering the point cloud in the candidate spatial region to form a plurality of point cloud clusters. Specifically, the point cloud data in the candidate spatial region is segmented with a clustering algorithm that considers both Euclidean distance and surface-point continuity; for cylinder and cuboid objects, a constraint that point normals be perpendicular to the main axis direction is added to obtain better results. The clustering yields a number of point cloud clusters.
Step S33: taking the point cloud clusters containing more than a preset number of points as candidate point cloud clusters. The preset number is set according to the actual situation; for example, the three clusters with the largest point counts may be selected as candidates.
Step S34: calculating the spatial distance between the track stroke and each candidate point cloud cluster, where the spatial distance is the sum, over the stroke points of the track stroke, of each point's closest distance to the candidate cluster.
Step S35: taking the candidate point cloud cluster with the smallest spatial distance as the target object point cloud.
In particular, since the track stroke is always snapped onto the surface of the object point cloud, the candidate point cloud cluster with the smallest spatial distance to the track stroke is selected as the segmentation result. The spatial distance from the track stroke to a candidate point cloud cluster is computed by summing, over all stroke points of the track stroke, each point's closest distance to the cluster, where the closest distance of a stroke point is the minimum of its distances to all points in the cluster. A sketch of this selection step follows.
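A sketch of the cluster-selection step, assuming Open3D and SciPy; DBSCAN stands in for the embodiment's Euclidean-distance-and-continuity clustering, and the numeric parameters are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def segment_target(region_pcd, track_pts, min_cluster=200, eps=0.02):
    """Select the target cluster inside the candidate spatial region:
    cluster, drop small clusters, then pick the cluster closest to the
    track stroke (which is snapped onto the target surface)."""
    labels = np.asarray(region_pcd.cluster_dbscan(eps=eps, min_points=10))
    pts = np.asarray(region_pcd.points)
    best, best_d = None, np.inf
    for lbl in set(labels) - {-1}:               # label -1 marks noise
        cluster = pts[labels == lbl]
        if len(cluster) < min_cluster:           # keep only large clusters
            continue
        d, _ = cKDTree(cluster).query(track_pts) # closest dist per stroke pt
        if d.sum() < best_d:                     # smallest spatial distance
            best, best_d = cluster, d.sum()
    return best
```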
In the prior art, clustering algorithms and region-growing algorithms based on point cloud density are generally used to segment the whole scene; they usually produce only a rough partition into several parts, rarely yield an accurate segmentation result, and require the user to designate the target manually. The segmentation method of step S30 in this embodiment is designed specifically for swept-model objects and essentially uses the user's semantic information to assist the process: the semantic constraint of the target object category allows the target point cloud to be segmented more accurately, while the position information of the initial strokes speeds up segmentation and selects the target object, so the user need not designate the target point cloud manually.
Further, in step S40, the method for optimizing the target object rough model using the target object point cloud to obtain the target object fine model includes the following steps:
step S41: and predicting to obtain the model type of the target object rough model.
Specifically, the formed outline stroke and track stroke are input respectively into a pre-trained prediction model, which predicts the outline stroke type and the track stroke type; the model type of the target object rough model is then predicted from these two stroke types. The prediction model uses an LSTM; as shown in Fig. 5, different combinations of outline stroke type and track stroke type yield different model types.
Step S42: parameterizing the target object rough model according to the model type to obtain initial parameters.
Parameters that characterize the model type are determined from the model type of the target object rough model, and the rough model is converted into those parameters. For example, a cylindrical surface can be parameterized by an axis direction n (a unit vector), a point q on the axis, and a radius r; that is, the three initial parameters n, q and r characterize the surface.
Step S43: constructing a distance objective function from the initial parameters and the target object point cloud.
Step S44: minimizing the distance objective function with a least squares method to obtain optimized parameters.
Step S45: constructing the target object fine model from the optimized parameters.
The distance objective function represents the distance from the target object point cloud to the surface of the target object rough model. Minimizing it means adjusting the values of the initial parameters so that the point cloud is, as a whole, closest to the model surface; the adjusted parameters are the optimized parameters, and the target object fine model is finally constructed from them, e.g., the optimized cylindrical surface is rebuilt from the optimized n, q and r.
The optimization process for each type of target object rough model is described below. Let the target object point cloud segmented from the real scene point cloud in the previous step be $P = \{p_1, p_2, \ldots, p_N\}$.
(1) Cylindrical surface. The cylindrical surface is parameterized by an axis direction n (a unit vector), a point q on the axis, and a radius r. Combining the target object point cloud, the constructed distance objective function is

$$E_{cyl}(n, q, r) = \sum_{i=1}^{N} \left( \left\| \overrightarrow{p_i q} \times n \right\| - r \right)^2,$$

where $\overrightarrow{p_i q}$ denotes the vector from $p_i$ to $q$. The function is built on the fact that every point on the cylindrical surface lies at distance r from the axis. A fitting sketch follows.
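A fitting sketch for this objective, assuming NumPy/SciPy; `least_squares` performs the minimization described in step S44, starting from the rough-model parameters:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_cylinder(P, n0, q0, r0):
    """Fit (n, q, r) to the segmented target point cloud P (N, 3) by
    least-squares minimization of E_cyl; (n0, q0, r0) come from the
    rough model."""
    def residuals(x):
        n, q, r = x[0:3], x[3:6], x[6]
        n = n / np.linalg.norm(n)       # keep the axis direction unit length
        # ||(q - p_i) x n|| is the distance of p_i to the axis line (q, n)
        return np.linalg.norm(np.cross(q - P, n), axis=1) - r
    x = least_squares(residuals, np.r_[n0, q0, r0]).x
    return x[0:3] / np.linalg.norm(x[0:3]), x[3:6], x[6]
```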
(2) Conical surface. The cone surface is parameterized by its apex a, axis direction n (a unit vector) and half angle θ. For the cone, the constructed distance objective function, representing the distance from the point cloud to the cone surface, is

$$E_{cone}(a, n, \theta) = \sum_{i=1}^{N} \left( \left\| (p_i - a) \times n \right\| \cos\theta - \left( (p_i - a) \cdot n \right) \sin\theta \right)^2,$$

where each residual is the distance of $p_i$ to the cone surface expressed through its radial and axial coordinates relative to the apex.
(3) Cuboid. The cuboid is parameterized by three mutually orthogonal unit vectors W, H and L, the corresponding scalar extents w, h and l, and the center point c. From these parameters the set of six planes of the cuboid, T = {t1, t2, t3, t4, t5, t6}, can be reconstructed, and the distance objective function fitted to the cuboid, representing the distance from the point cloud to the cuboid surface, is

$$E_{box} = \sum_{i=1}^{N} \min_{k} \operatorname{distance}(p_i, t_k)^2 + \Delta\gamma, \qquad \Delta\gamma = (W \cdot H)^2 + (H \cdot L)^2 + (W \cdot L)^2,$$

where $\operatorname{distance}(p_i, t_k)$ denotes the distance from point $p_i$ to plane $t_k$, and the penalty term Δγ keeps the vectors W, H and L mutually orthogonal.
(4) Generalized cylinder. The generalized cylinder is represented as a whole composed of a number of circular slices whose centers all lie on the axis, so the optimization target reduces to fitting several coaxial three-dimensional circles. The distance objective function takes the form

$$E_{gc} = \sum_{i=1}^{K} \sum_{p \in S_i} \left( \left\| (p - c_i) \times n \right\| - r_i \right)^2 + \alpha \sum_{i=2}^{K} (r_i - r_{i-1})^2 + \gamma \sum_{i=1}^{K} \left\| (c_i - c_1) \times n \right\|^2,$$

where K is the number of circular slices, c_i and r_i are the center and radius of the i-th slice, n is the common axis direction, the α term controls the smoothness of the radius variation across slices, and the γ term keeps the slice centers on the same axis. For the optimization of the i-th circle, a point set S_i on that circle is first extracted from the target object point cloud P as the points whose distance to the plane of circle i is below a threshold, taken as 0.005 in the first embodiment. A slice-extraction sketch follows.
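A sketch of the slice extraction and per-slice radius estimate, assuming NumPy; the coupled optimization over all slices with the α and γ terms is omitted:

```python
import numpy as np

def slice_point_sets(P, centers, n, thresh=0.005):
    """For each circular slice i, extract the target points S_i lying within
    `thresh` of the slice plane (the embodiment's 0.005 threshold).

    P: (N, 3) target point cloud; centers: (K, 3) slice centers; n: (3,)
    unit axis direction. Returns a list of K point arrays."""
    z = (P - centers[:, None, :]) @ n     # (K, N) signed plane distances
    return [P[np.abs(zi) < thresh] for zi in z]

def slice_radius(S, c, n):
    """Closed-form least-squares radius of one coaxial slice: the mean
    distance of its points to the axis line (c, n)."""
    n = n / np.linalg.norm(n)
    return np.linalg.norm(np.cross(c - S, n), axis=1).mean()
```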
Thus, by minimizing each of the above distance objective functions, the corresponding optimized parameters are obtained, from which the optimized model, i.e., the target object fine model, is constructed.
The augmented reality-based three-dimensional modeling method described above does not require the user to draw fine strokes: it combines simple stroke drawing with scene data captured on site and adopts a three-dimensional interaction mode, making model creation simpler. At the same time, by relying on augmented reality it avoids the depth ambiguity inherent in a 2D sketch drawn on a two-dimensional screen.
Illustratively, FIG. 6 shows the three-dimensional modeling process for different single-part objects, and FIG. 7 shows the process for a multi-component object.
Further, as shown in fig. 8, the second embodiment discloses a three-dimensional modeling device based on augmented reality, comprising a preprocessing unit 100, a model construction unit 200, a point cloud segmentation unit 300 and a model optimization unit 400. The preprocessing unit 100 generates a real scene point cloud from a real scene image acquired by an augmented reality device, the real scene image containing an image of the target object; the model construction unit 200 constructs a target object rough model from the acquired user hand-drawing action information and the real scene point cloud; the point cloud segmentation unit 300 segments a target object point cloud from the real scene point cloud according to the target object rough model; and the model optimization unit 400 adjusts the target object rough model using the target object point cloud to obtain a target object fine model. For the specific working processes of these units, refer to the description of the first embodiment, which is not repeated here.
The third embodiment further discloses a computer-readable storage medium storing an augmented reality-based three-dimensional modeling program which, when executed by a processor, implements the augmented reality-based three-dimensional modeling method.
The fourth embodiment of the present application also discloses a computer device. At the hardware level, as shown in fig. 9, the device includes a processor 12, an internal bus 13, a network interface 14 and a computer-readable storage medium 11. The processor 12 reads the corresponding computer program from the computer-readable storage medium and then runs it, forming a request processing apparatus at the logical level. Of course, besides a software implementation, the embodiments in this specification do not exclude other implementations, such as logic devices or combinations of software and hardware; that is, the execution subject of the processing flow is not limited to logic units and may also be hardware or logic devices. The computer-readable storage medium 11 stores an augmented reality-based three-dimensional modeling program which, when executed by a processor, implements the augmented reality-based three-dimensional modeling method described above.
Computer-readable storage media, including both volatile and non-volatile, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer-readable storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage, quantum memory, graphene-based storage media or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
Although a few embodiments of the present invention have been shown and described, it would be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents, and that such changes and modifications are intended to be within the scope of the invention.
Claims (10)
1. An augmented reality-based three-dimensional modeling method, characterized in that the three-dimensional modeling method comprises:
generating a real scene point cloud according to a real scene image acquired by augmented reality equipment, wherein the real scene image comprises an image of a target object;
constructing a target object rough model according to the acquired user hand-drawing action information and the real scene point cloud;
segmenting a target object point cloud from the real scene point cloud according to the target object rough model;
and adjusting the rough model of the target object by using the point cloud of the target object to obtain a fine model of the target object.
2. The augmented reality-based three-dimensional modeling method according to claim 1, wherein the user hand-drawing action information includes a plurality of outline stroke points and a plurality of track stroke points generated in real time from the recognized user drawing actions, and the method for generating the target object rough model from the acquired user hand-drawing action information and the real scene point cloud comprises:
when an outline stroke point is generated, retrieving an outline matching point from the real scene point cloud and snapping the outline stroke point to it, the snapped points forming an outline stroke;
when a track stroke point is generated, retrieving a track matching point from the real scene point cloud and snapping the track stroke point to it, the snapped points forming a track stroke;
and generating the target object rough model from the outline stroke and the track stroke.
3. The augmented reality-based three-dimensional modeling method according to claim 2, wherein the method of retrieving the outline matching point from the real scene point cloud when an outline stroke point is generated comprises:
screening a candidate point set from the real scene point cloud, wherein the distance between each point in the candidate point set and the outline stroke point is smaller than a preset value;
judging whether the candidate point set contains edge points;
if so, taking the edge point closest to the outline stroke point as the outline matching point;
and if not, taking the point in the candidate point set closest to the outline stroke point as the outline matching point.
4. The augmented reality-based three-dimensional modeling method according to claim 2, wherein the method of segmenting the target object point cloud from the real scene point cloud according to the target object rough model comprises:
determining a candidate spatial region from the target object rough model, the target object lying within the candidate spatial region;
clustering the point cloud in the candidate spatial region to form a plurality of point cloud clusters;
taking the point cloud clusters containing more than a preset number of points as candidate point cloud clusters;
calculating the spatial distance between the track stroke and each candidate point cloud cluster, wherein the spatial distance is the sum, over the stroke points of the track stroke, of each point's closest distance to the candidate point cloud cluster;
and taking the candidate point cloud cluster with the smallest spatial distance as the target object point cloud.
5. The augmented reality-based three-dimensional modeling method according to claim 4, wherein the method of optimizing the target object rough model using the target object point cloud to obtain the target object fine model comprises:
predicting the model type of the target object rough model;
parameterizing the target object rough model according to the model type to obtain initial parameters;
constructing a distance objective function from the initial parameters and the target object point cloud;
minimizing the distance objective function with a least squares method to obtain optimized parameters;
and constructing the target object fine model from the optimized parameters.
6. The augmented reality-based three-dimensional modeling method according to claim 5, wherein the method of predicting the model type of the target object rough model comprises:
inputting the formed outline stroke and track stroke respectively into a pre-trained prediction model to predict the outline stroke type and the track stroke type;
and predicting the model type of the target object rough model from the outline stroke type and the track stroke type.
7. The augmented reality-based three-dimensional modeling method of claim 6, further comprising:
optimizing the shape of the outline stroke according to the predicted outline stroke type.
8. An augmented reality-based three-dimensional modeling apparatus, comprising:
a preprocessing unit for generating a real scene point cloud from a real scene image acquired by an augmented reality device, wherein the real scene image contains an image of a target object;
a model construction unit for constructing a target object rough model from the acquired user hand-drawing action information and the real scene point cloud;
a point cloud segmentation unit for segmenting a target object point cloud from the real scene point cloud according to the target object rough model;
and a model optimization unit for adjusting the target object rough model using the target object point cloud to obtain a target object fine model.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium stores an augmented reality-based three-dimensional modeling program, which when executed by a processor implements the augmented reality-based three-dimensional modeling method of any one of claims 1 to 7.
10. A computer device, characterized in that the computer device comprises a computer readable storage medium, a processor and an augmented reality-based three-dimensional modeling program stored in the computer readable storage medium, the augmented reality-based three-dimensional modeling program implementing the augmented reality-based three-dimensional modeling method of any one of claims 1 to 7 when executed by the processor.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210264954.8A | 2022-03-17 | 2022-03-17 | Three-dimensional modeling method, device, storage medium and equipment based on augmented reality |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN114708382A | 2022-07-05 |
Family ID: 82167999
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210264954.8A (CN114708382A, pending) | Three-dimensional modeling method, device, storage medium and equipment based on augmented reality | 2022-03-17 | 2022-03-17 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN114708382A (en) |
Cited By (2)

| Publication number | Priority date | Publication date | Title |
|---|---|---|---|
| CN115349967A | 2022-08-19 | 2022-11-18 | Display method, display device, electronic equipment and computer readable storage medium |
| CN118097066A | 2024-04-26 | 2024-05-28 | Generation method and system for automatically constructing 3D model |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |