
CN111476912B - Image matching method and system

Info

Publication number
CN111476912B
CN111476912B
Authority
CN
China
Prior art keywords
image
constraint
clothing
matching
plane
Prior art date
Legal status
Active
Application number
CN202010594350.0A
Other languages
Chinese (zh)
Other versions
CN111476912A (en)
Inventor
李小波
杜超
李昆仑
Current Assignee
Hengxin Shambala Culture Co ltd
Original Assignee
Hengxin Shambala Culture Co ltd
Priority date
Filing date
Publication date
Application filed by Hengxin Shambala Culture Co., Ltd.
Priority to CN202010594350.0A
Publication of CN111476912A
Application granted
Publication of CN111476912B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/06 Buying, selling or leasing transactions
    • G06Q30/0601 Electronic shopping [e-shopping]
    • G06Q30/0641 Shopping interfaces
    • G06Q30/0643 Graphical representation of items or shoppers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/04 Indexing scheme for image data processing or generation, in general, involving 3D image data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/16 Cloth
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2004 Aligning objects, relative positioning of parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Development Economics (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses an image matching method and system. The image matching method specifically comprises the following steps: acquiring a clothing image and a mannequin image, and preprocessing them; carrying out mesh reconstruction on the preprocessed clothing image and mannequin image; in the reconstructed mesh, performing a first matching of the clothing image and the mannequin image; adjusting and constraining the clothing image to complete a second matching of the clothing image and the mannequin image; and rendering and outputting the clothing image and the mannequin image after the second matching. The image matching method and system can finely correct a clothing image against a mannequin in the same posture, achieving a relatively realistic effect.

Description

Image matching method and system
Technical Field
The present application relates to the field of image processing, and in particular, to an image matching method and system.
Background
In the prior art, virtual fitting is increasingly popular. In a virtual fitting product, after various garments and accessories are photographed to generate images, the garment images need to be transformed and then fitted onto a model body to achieve the effect of trying on the clothes. To this end, a manual adjustment method is generally used, in which image trimming and deformation are performed with image processing software; this is inefficient, lacks processing logic targeted at human bodies and garments, and offers a low degree of automation. Meanwhile, the existing clothing matching display systems can only show the wearing effect of a single garment; if several garments are worn together, the relative constraint relationship among them becomes problematic, for example the upper and lower garments intersect each other or are in incorrect states of tightening and expansion, so a constraint algorithm between the upper and lower garments is needed to correct the display effect.
Therefore, a faster and more accurate method for matching the garment with the mannequin image is needed, so that the garment in the clothing image fits the mannequin image perfectly.
Disclosure of Invention
The purpose of the application is to provide an image matching method and an image matching system that can finely correct and match a clothing image with a mannequin in the same posture, achieving a realistic effect and overcoming the site requirement and cost problems of virtual fitting photography.
In order to achieve the above object, the present application provides an image matching method, which specifically includes the following steps:
acquiring a clothing image and a mannequin image, and preprocessing the clothing image and the mannequin image; carrying out grid reconstruction on the preprocessed clothing image and the mannequin image; in the reconstructed grid, performing first matching of the clothing image and the mannequin image; adjusting and constraining the clothing image to complete the second matching of the clothing image and the mannequin image; and rendering and outputting the clothing image and the mannequin image after the second matching.
As in the above method, the preprocessing of the images specifically comprises performing matting processing on the clothing image and the mannequin image, and storing the matted images in an image format with a transparent channel.
As described above, the adjusting and constraining of the clothing image is specifically a constraint adjustment of the garment image located at the lower layer: if the upper garment image covers the lower garment image, the constraint adjustment is performed on the lower garment image at the lower layer; if the lower garment image covers the upper garment image, the constraint adjustment is performed on the upper garment image at the lower layer.
As above, the constraint adjustment of the garment specifically includes the following sub-steps: determining a constraint plane; determining a constraint area according to a constraint plane; and carrying out constraint adjustment on the constraint area.
As above, if the upper garment image covers the lower garment image, the waist opening feature points on both sides of the waist of the upper garment image are taken as basic vertices, the contour vertices on both sides of the lower garment waist closest to the basic vertices are found, and these four vertices, the basic vertices and the waist contour vertices on both sides, are connected to form a constraint plane; if the lower garment image covers the upper garment image, the waist opening feature points on both sides of the waist of the lower garment are taken as basic vertices, the contour vertices on both sides of the upper garment waist closest to the basic vertices are found, and these four vertices are connected to form a constraint plane.
As above, before the constraint adjustment of the constraint area, an expansion process is further performed on the garment located at the lower layer.
As above, the constraint areas comprise a first constraint area, a second constraint area and a third constraint area; the first constraint area extends from the constraint plane upward by ten percent of the overall garment height; the second constraint area extends from the constraint plane downward by ten percent of the overall garment height; and the third constraint area extends from the second constraint area to the end of the upper or lower garment image.
As above, the constraint adjusting of the constraint region specifically includes the following sub-steps: determining a constraint curve; slowly constraining the first constraint area according to the constraint curve; performing gentle constraint on the second constraint area; and carrying out cutting constraint on the third constraint area.
An image matching system, comprising: the device comprises a preprocessing unit, a grid reconstruction unit, a first matching unit, a second matching unit and an output unit; the preprocessing unit is used for preprocessing the image; the grid reconstruction unit is used for carrying out grid reconstruction on the preprocessed clothing image and the mannequin image; the first matching unit is used for performing first matching on the clothing image and the mannequin image in the reconstructed grid; the second matching unit is used for adjusting and constraining the clothing image to complete the second matching of the clothing and the mannequin; and the output unit is used for outputting the clothing image and the mannequin image which are matched for the second time.
As above, the second matching unit specifically includes the following sub-modules, a constrained plane determining module, a constrained region determining module, and a constraint adjusting module; the constraint plane determining module is used for determining a constraint plane;
the constrained region determining module is connected with the constrained plane determining module and used for determining a constrained region according to the constrained plane; and the constraint adjusting module is connected with the constraint area determining module and is used for carrying out constraint adjustment on the constraint area.
The application has the following beneficial effects:
(1) The image matching method and system can finely correct a clothing image against a mannequin in the same posture, achieving a relatively realistic effect.
(2) The image matching method and system can automatically match the clothing image with the mannequin image without manual adjustment, reducing personnel and time costs, making it possible to wear and combine several garments, and greatly improving the usability of the display system.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments described in the present application, and those skilled in the art can obtain other drawings from them.
FIG. 1 is a flow chart of a method of image matching provided according to an embodiment of the present application;
FIG. 2 is an internal block diagram of an image matching system provided in accordance with an embodiment of the present application;
FIG. 3 is a diagram of the internal sub-modules of an image matching system provided according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The application relates to a method and system for image fitting correction. According to the present application, a clothing image and a mannequin in the same posture can be finely corrected and matched, achieving a realistic effect and overcoming the site requirement and cost problems of virtual fitting photography.
Fig. 1 shows a flowchart of an image matching method provided by the present application, which specifically includes the following steps:
step S110: and acquiring a clothing image and a mannequin image, and preprocessing the clothing image and the mannequin image.
The mannequin image and the clothing image are both images pre-stored in the system, and the clothing image comprises an upper clothing image and a lower clothing image.
The preprocessing of the images comprises performing matting processing on the clothing and mannequin images, removing useless information such as the background so that only the garment or the mannequin remains, and saving the matted image as a PNG image with a transparent channel.
Preferably, the matting process can refer to matting techniques in the prior art.
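For illustration only, a minimal Python sketch of how the matted result could be saved with a transparent channel, assuming a binary matte has already been produced by any prior-art matting technique; the file names are hypothetical placeholders:

    import cv2

    # Load the photographed garment image and a binary matte
    # (255 = garment, 0 = background); both file names are placeholders.
    image = cv2.imread("garment.jpg", cv2.IMREAD_COLOR)          # H x W x 3 (BGR)
    mask = cv2.imread("garment_mask.png", cv2.IMREAD_GRAYSCALE)  # H x W

    # Attach the matte as an alpha channel so the background becomes transparent.
    bgra = cv2.cvtColor(image, cv2.COLOR_BGR2BGRA)
    bgra[:, :, 3] = mask

    # PNG preserves the transparent channel, as the method requires.
    cv2.imwrite("garment_rgba.png", bgra)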
Step S120: and carrying out grid reconstruction on the preprocessed clothing image and the mannequin image.
Wherein step S120 specifically includes the following steps:
step S1201: and carrying out information calibration on the clothing image and the mannequin image.
The information calibration of the garment image specifically comprises calibrating, on the garment image, the garment type (so that the garment can be distinguished as an upper or lower garment image), the coordinates of the feature points of the opening parts of the upper and lower garments, and the coordinates of the external contours of the upper and lower garments. Preferably, the calibration takes the upper left corner of the garment image as the origin of coordinates.
In particular, the garment types may include, among others: short-sleeved clothing, long-sleeved clothing, short-sleeved coats, long-sleeved coats, vests, slings, shorts, trousers, skirts, short-sleeved one-piece dresses, long-sleeved one-piece dresses, vest dresses, slings one-piece dresses, and the like.
The opening modes of a garment are divided into: neck opening, waist opening, limb openings, etc.
In the garment image, a number of points are marked at the opening positions of the garment as feature points, and the pixel positions of these feature points are the feature point coordinates of the garment opening parts. For example, if 2 points are set as feature points at the opening of the left cuff of a short-sleeved garment, the pixel positions of these two points in the image are the feature point coordinates of that opening part.
The garment external contour coordinates are a number of points set on the edges of the upper and lower garments; the pixel positions of these points of the garment external contour in the garment image are the garment external contour coordinates.
The information calibration of the mannequin image comprises calibrating the coordinate information of the human skeleton nodes on the mannequin image and obtaining the external contour coordinates of the human body from the coordinate information of the skeleton nodes. Preferably, the coordinate origin is the upper left corner of the mannequin image.
The nodes of the human skeleton may comprise: the lower abdomen, the navel, the chest, the middle of the two shoulders, and the throat; the shoulders, elbows and wrists; and the thigh roots, knees and ankles, etc.
The pixel positions of a number of points on the human body external contour in the mannequin image are the human body external contour coordinates.
Preferably, the clothing image and the mannequin image after information calibration are stored with a JSON (JavaScript Object Notation) configuration file, wherein the configuration file of the clothing image is further labeled with the positions of the human skeleton nodes corresponding to the garment when it was photographed, so that the mannequin image is suitable for matching with the clothing image.
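For illustration, a hypothetical sketch of what such a configuration file could look like; the patent does not specify the layout, so every field name and value below is an assumption:

    {
      "garment_type": "short_sleeve_top",
      "image_size": { "width": 1024, "height": 1536 },
      "opening_feature_points": {
        "left_cuff": [[212, 640], [236, 668]],
        "collar": [[498, 120], [560, 118]]
      },
      "outer_contour": [[310, 95], [352, 88], [420, 92]],
      "skeleton_nodes_at_capture": {
        "left_shoulder": [330, 210],
        "right_shoulder": [700, 212]
      }
    }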
Step S1202: and carrying out grid reconstruction on the clothing image and the mannequin image after information calibration.
Specifically, the garment image and the mannequin image after information calibration and a Json configuration file are loaded into a three-dimensional engine to reconstruct a mesh, wherein the mesh is a triangular mesh.
Specifically, the mesh reconstruction includes a garment image-based triangular mesh reconstruction (including a triangular mesh reconstruction of an upper garment image and a lower garment image) and a mannequin image-based triangular mesh reconstruction, wherein the garment image-based (or mannequin image-based) triangular mesh reconstruction specifically includes the following steps:
step D1: information is acquired.
If triangular mesh reconstruction of the garment image is performed, the obtained information comprises the garment type, the feature point coordinates of the garment opening parts, the coordinates of the external contour of the upper or lower garment, the coordinate information of the human skeleton nodes, and the width and height of the garment image. Because the sizes of the garment image and the mannequin image may be inconsistent, after the information is obtained, the upper or lower garment image is subjected to a displacement-and-scaling transformation relative to the mannequin image, taking the coordinate information of the human skeleton nodes or other information as a reference, so that the upper or lower garment image basically matches the human skeleton nodes.
If triangular mesh reconstruction of the mannequin image is performed, the obtained information comprises the coordinate information of the human skeleton nodes, the external contour coordinates of the human body, and the width and height of the mannequin image.
Step D2: and determining the basic unit of the image according to the acquisition information.
If triangular mesh reconstruction of the upper and lower garment images is performed, taking the upper garment image as an example, a number of points on the outer contour of the upper garment are connected to form a closed curve of the garment contour, and the width and height of the closed curve are obtained and denoted a and b respectively.
Preferably, one thirtieth of the sum of the width and height of the closed curve of the upper garment contour can be taken as the basic unit for generating the interior vertices within the garment contour, namely

    u = (a + b) / 30

The basic unit is determined from the width and height of the garment contour closed curve; taking one thirtieth of their sum as the base length is only a preferred embodiment, and the specific value should be determined according to the actual situation, which is not limited here.
The basic unit of the lower garment image can be determined by the same method, and the specific steps are not repeated here.
If triangular mesh reconstruction of the mannequin image is performed, the basic unit is determined by the same calculation: a number of points on the external contour of the human body are connected to form a closed curve of the human body contour, the width and height of the closed curve are obtained and denoted a' and b' respectively, and one thirtieth of their sum is taken as the basic unit for determining the interior vertices within the human body contour, namely

    u' = (a' + b') / 30
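A minimal sketch of this computation; the function and variable names are ours, and the 1/30 factor is the preferred value stated above:

    def basic_unit(contour_points):
        """Compute the vertex-generation unit from a closed contour.

        contour_points: list of (x, y) pixel coordinates on the outer contour.
        Returns one thirtieth of the sum of the bounding width and height.
        """
        xs = [p[0] for p in contour_points]
        ys = [p[1] for p in contour_points]
        a = max(xs) - min(xs)  # width of the contour's closed curve
        b = max(ys) - min(ys)  # height of the contour's closed curve
        return (a + b) / 30.0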
Step D3: and generating the vertexes in the outer contour according to the basic unit of the image, and obtaining a vertex set.
The method comprises the steps of determining the positions of original vertexes before vertex filling, wherein the vertex set is a set of an upper and a lower garment outline (or human body outline) vertexes and a plurality of internally generated vertexes, and the position of the upper left corner of a garment image (or a mannequin image) is the position of the original vertexes.
Further, according to the rule that the unit length is based on the distance between two points, firstly, the original vertex position of the clothing image (or the mannequin image) is taken as the basis, and the distance between the original vertex position and the original vertex position in the transverse direction and the longitudinal direction is taken as the basis
Figure 819862DEST_PATH_IMAGE005
(or
Figure 907904DEST_PATH_IMAGE006
) Is taken as the next vertex (horizontal/vertical second vertex), and then the distance from the horizontal/vertical second vertex is taken as the horizontal/vertical second vertex based on the horizontal/vertical second vertex respectively
Figure 315882DEST_PATH_IMAGE005
(or
Figure 311520DEST_PATH_IMAGE006
) The point of (a) is used as the next vertex (the horizontal/vertical third vertex), and the distance (the horizontal/vertical third vertex) is searched
Figure 901770DEST_PATH_IMAGE005
(or
Figure 301659DEST_PATH_IMAGE006
) Until the filling of all the vertexes in the transverse and longitudinal directions is completed on the clothing image (or the mannequin image).
Further, excluding the vertexes outside the garment external contour (or the human body external contour), and selecting the vertexes inside the external contour, thereby obtaining a set of vertexes inside the garment or the human body external contour.
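The two operations just described, filling a regular lattice of candidate vertices one basic unit apart and keeping only those inside the contour, could look as follows; the even-odd ray-casting test and all names are illustrative choices of ours:

    def fill_vertices(width, height, u):
        """Lattice of candidate vertices spaced u apart, starting at the
        original vertex in the image's upper left corner (0, 0)."""
        return [(x, y)
                for y in range(0, height, max(1, int(u)))
                for x in range(0, width, max(1, int(u)))]

    def inside(point, contour):
        """Standard even-odd ray-casting point-in-polygon test."""
        x, y = point
        hits = False
        n = len(contour)
        for i in range(n):
            (x1, y1), (x2, y2) = contour[i], contour[(i + 1) % n]
            if (y1 > y) != (y2 > y):
                if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                    hits = not hits
        return hits

    # Interior vertex set: lattice points inside the garment (or human body)
    # contour, later merged with the contour vertices themselves.
    def interior_vertices(width, height, u, contour):
        return [p for p in fill_vertices(width, height, u) if inside(p, contour)]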
Step D4: and carrying out instantiation generation of the grid.
Before the mesh instantiation generation, the method also comprises the determination of a vertex connecting line and UV coordinate information.
NET library, connecting the garment external contour coordinate with a plurality of vertexes in the garment external contour, and converting the vertex coordinate into a planar UV coordinate according to the position information of the vertexes in the space.
And further submitting the obtained vertex set, the vertex connecting line data and the UV coordinate information to a three-dimensional engine so as to generate mesh instantiation. At this point, the generation of the grid clothing image and the grid mannequin image can be instantiated.
Step S130: and in the reconstructed grid, performing first matching of the clothing image and the mannequin image.
Specifically, the first matching of the clothing image and the mannequin image further includes determining whether the garment can fit the mannequin; if the garment can fit, that is, wrap the body in the mannequin image, step S140 is executed.
Specifically, step S130 includes the following sub-steps:
step Q1: a list of openings of the garment is determined in the grid garment image.
Specifically, since the mesh garment image is generated based on the garment image, and since the garment external contour coordinates and the coordinates of the opening feature points have been calibrated on the garment image, the pixel point corresponding to each vertex of the mesh can be determined; that is, the position of each vertex is known, so each vertex of the mesh can be identified as either an opening vertex or an external contour vertex.
Thus, each vertex in the mesh garment image is traversed to determine whether it belongs to the opening vertices or the external contour vertices.
Further, a list of line segments at each opening of the garment is obtained from the vertex data. The opening line segment list comprises a number of opening line segments, each obtained by connecting the vertices at an opening.
Illustratively, if there are 3 vertices at the opening of the left cuff in the garment image, an opening line segment is obtained by connecting those 3 vertices; similarly, if there are 2 vertices at the opening of the collar, another opening line segment is obtained by connecting those 2 vertices.
Step Q2: a list of line segments wrapping the body is thus determined from the list of openings.
Furthermore, because the list of the opening line segments is obtained, the remaining line segments on the external contour of the garment can be reversely obtained by taking the 2 opening line segments as the reference, the corresponding vertexes on the remaining contour are connected to form a plurality of contour line segments, and the plurality of contour line segments are collected to obtain the list of the contour line segments needing to wrap the body.
Step Q3: and judging whether the clothing can cover the body.
Specifically, the lists of opening and contour line segments and the vertices of the human external contour are placed in the same coordinate system for an enclosure comparison; since the configuration file of the clothing image also marks the positions of the human skeleton nodes corresponding to the garment, the set of line segments that should cover the human body but lie within the human body contour can be obtained. If the garment is able to cover the body, step S140 is executed; otherwise step Q4 is executed.
Step Q4: generating a deformation handle, performing fitting correction on the garment and the mannequin, and completing the first matching of the garment image and the mannequin image.
Specifically, step Q4 includes the following sub-steps:
step Q410: and (5) fitting the opening part of the garment.
Step Q420: fitting and correcting clothes wrapping the body.
Step Q410 and step Q420 are performed simultaneously. Step Q410 specifically includes the following substeps:
step Q4101: deformed handles are generated on the segments of the unwrapped body.
Specifically, a set of contour line segments within the human body contour is traversed, the midpoint location of each line segment is found, and a deformed handle is generated at that location.
Specifically, the deformation handle is a substantially circular area for representing the coverage area of the deformation, and the area covers a plurality of grid vertexes.
Step Q4102: and searching the vertex of the human body external contour which is closest to the line segment of the non-wrapped body.
And locking the line segment which is not wrapped by the body on the clothing image, and acquiring the vertex position of the human body external contour closest to the line segment.
Step Q4103: and performing displacement of the line segment without wrapping the body according to the vertex to complete the first matching of the clothing image and the mannequin image.
Because the vertex of the human body external contour closest to the line segment without wrapping the body is obtained, the deformation handle is partially displaced towards the vertical direction inside the human body to drive the grid vertex to displace, so that the displacement of the line segment wrapping the body is completed, and the line segment is attached to the human body external contour.
Specifically, the displacement of the deformed handle is to displace part of all covered grid vertexes in the deformed handle towards the vertical direction in the human body, so that the edge pixels are prevented from being excessively stretched.
Preferably, before the deformation handle is displaced, parameters controlling the displacement of the deformation handle are preset; the parameters are specifically set by an operator.
The parameters specifically include the position of the center point of the deformation handle, the weight of the influence range, and the attenuation curve of the weight from the center point of the deformation handle to its periphery. The displacement of the deformation handle is controlled according to the set parameters.
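A compact sketch of such a handle-driven displacement; the cosine falloff stands in for the attenuation curve, and all names and parameters are illustrative assumptions:

    import math

    def apply_handle(vertices, center, radius, offset):
        """Displace mesh vertices covered by a circular deformation handle.

        vertices: list of [x, y] mesh vertex positions (modified in place).
        center:   (x, y) center point of the handle.
        radius:   influence radius of the handle.
        offset:   (dx, dy) displacement applied at the center.
        The weight attenuates smoothly from 1 at the center to 0 at the rim.
        """
        for v in vertices:
            d = math.hypot(v[0] - center[0], v[1] - center[1])
            if d < radius:
                w = 0.5 * (1.0 + math.cos(math.pi * d / radius))  # cosine falloff
                v[0] += offset[0] * w
                v[1] += offset[1] * w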
Through the above steps, the parts of the garment that should wrap the body can be made to fit completely against the corresponding parts of the mannequin.
The opening parts of the garment can likewise be fitted by displacing deformation handles; step Q420 specifically includes the following sub-steps:
step Q4201: creating a key point at the site of the opening.
Wherein, a plurality of key points can be set at the opening position, preferably, 2 key points can be set at two ends of the opening position, and the opening position can be a neck or waist opening.
Step Q4202: and deformation handles are respectively arranged at key points.
Taking the collar as an example, two control handles are respectively created at two key points of the collar, or taking the cuffs and the lower hem thereof as examples, key vertexes at two ends of the opening direction are searched, and then the deformation handles are generated at the end points.
Step Q4203: the vertex of the human body external contour closest to the opening position is searched, and the handle is displaced.
Still taking the neckline as an example, searching the nearest vertex of the external contour of the human body in the direction of 90 degrees of the chest bone point direction, and if the vertex is searched, performing corresponding displacement of the deformed handle. Or taking the cuff and the lap thereof as an example, the vertex of the external contour of the human body closest to the opening part is transversely searched, so that the handle is displaced and deformed to the outside of the contour point of the transversely closest mannequin, and the correction and the fitting are completed.
By the above method, the matching of the upper and lower garment images with the mannequin image is corrected by fitting, and the first matching of the garment image with the mannequin image, that is, the first matching of the upper and lower garments with the mannequin, is completed.
Step S140: and adjusting and constraining the clothing image to complete the second matching of the clothing and the mannequin.
After the first matching between the clothing image and the mannequin image is completed, the intersection area of the upper and lower garment images in the clothing image has not yet been fused, so the garment on the lower layer needs to be constrained and adjusted. Two situations can occur: if the upper garment image covers the lower garment image, the constraint adjustment is performed on the lower garment image at the lower layer; if the lower garment image covers the upper garment image, the constraint adjustment is performed on the upper garment image at the lower layer.
The adjusting and restricting of the clothing image specifically comprises the following substeps:
step T1: a constraint plane is determined.
Specifically, if the upper garment image covers the lower garment image, the waist opening feature points on both sides of the waist of the upper garment image are taken as basic vertices, the lower garment waist contour vertices closest to the basic vertices are found, and these four vertices, the basic vertices and the waist contour vertices, are connected to form a constraint plane.
If the lower garment image covers the upper garment image, the waist opening feature points on both sides of the waist of the lower garment are taken as basic vertices, the upper garment waist contour vertices on both sides closest to the basic vertices are found, and these four vertices, the basic vertices and the waist contour vertices on both sides, are connected to form a constraint plane.
Step T2: and determining a constraint area according to the constraint plane.
The constraint areas specifically comprise a first constraint area, which extends from the constraint plane upward by ten percent of the overall garment height; a second constraint area, which extends from the constraint plane downward by ten percent of the overall garment height; and a third constraint area, which extends from the second constraint area to the end of the upper or lower garment image.
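A sketch of how mesh vertices could be assigned to these three bands; the names and the y-down image-axis convention are our assumptions:

    def constraint_region(vy, plane_y, garment_height):
        """Classify a mesh vertex by its vertical position.

        vy:             y-coordinate of the vertex (image axis pointing down).
        plane_y:        y-coordinate of the constraint plane.
        garment_height: overall height of the garment.
        Returns 1, 2, 3 for the first/second/third constraint area, or 0
        if the vertex lies above the first area and is unconstrained.
        """
        band = 0.10 * garment_height  # ten percent of the overall height
        if plane_y - band <= vy < plane_y:
            return 1  # from the plane up to 10% above it
        if plane_y <= vy < plane_y + band:
            return 2  # from the plane down to 10% below it
        if vy >= plane_y + band:
            return 3  # from the second area to the garment end
        return 0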
Step T3: and carrying out constraint adjustment on the constraint area.
Before the constraint adjustment is performed on the constraint area, an expansion process is further applied to the garment on the lower layer. Taking an upper garment image that needs adjustment as an example, the upper garment image is expanded outward against the opposing garment, where the outward expansion amplitude is determined by the elasticity coefficient of the garment.
Specifically, the elasticity coefficient of the garment is an original attribute set when the garment image is entered, and each elasticity coefficient corresponds to an expansion curve. The expansion area runs from the highest position of the lower garment image to the opening of the upper garment image, and the expansion amplitude decreases from the opening of the upper garment image toward the highest position of the lower garment image. That is, each mesh node in the area receives, according to its longitudinal and transverse position, an expansion force directed from the center toward the two sides; the force is read from the expansion curve according to the node's position, and the node is displaced outward by this force, so that the garment expands.
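An illustrative sketch of this expansion; the linear force profile merely stands in for the garment's expansion curve, which the patent ties to the entered elasticity coefficient, and all names are ours:

    def expand(nodes, center_x, top_y, opening_y, curve):
        """Push mesh nodes outward from the center according to an expansion curve.

        nodes:     list of [x, y] mesh node positions (modified in place).
        center_x:  x-coordinate of the garment's vertical center line.
        top_y:     y of the highest position of the lower garment image.
        opening_y: y of the opening of the upper garment image (opening_y < top_y
                   with the image axis pointing down).
        curve:     maps a position in [0, 1] (1 at the opening, 0 at top_y)
                   to an expansion force in pixels.
        """
        for n in nodes:
            if opening_y <= n[1] <= top_y:
                t = (top_y - n[1]) / (top_y - opening_y)  # 1 at opening, 0 at top
                force = curve(t)
                n[0] += force if n[0] >= center_x else -force  # center -> sides

    # Example: a simple linear expansion curve with a 12 px maximum amplitude.
    expand_curve = lambda t: 12.0 * t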
The step T3 specifically includes the following sub-steps:
step T310: a constraint curve is determined.
The constraint curve is a smooth curve derived from the elasticity coefficient of the garment; it represents the magnitude of the tightening deformation as the garment approaches the constrained opening position.
Step T320: and slowly constraining the first constraint area according to the constraint curve.
Adjusting the first constraint area specifically means performing a slow constraint on it. Slow constraint means constraining the area gradually according to the constraint curve corresponding to the elasticity coefficient of the garment. During the slow constraint, the vertex coordinates in the first constraint area shift correspondingly; the shifted vertex coordinate P′ is given by the expression [presented as an image in the original], wherein P1 denotes the position coordinate of a vertex of the first constraint area to the left of the constraint-plane center point P0, Pt denotes the left vertex coordinate of the trouser opening, Pj denotes the left vertex coordinate of the jacket waist, s2 denotes the transverse component of the distance from the vertex P1 to the constraint-plane midpoint P0, P0.x denotes the x-coordinate of the constraint-plane midpoint, Pj.x denotes the x-coordinate of the jacket waist left vertex, and d denotes the degree of constraint attenuation in the longitudinal direction of the constraint curve.
Each garment elasticity coefficient corresponds to its own constraint curve, which serves as a smooth transition at the edge. The ratio of the longitudinal component s3 to the length s1 of the whole image is substituted as the abscissa of the constraint curve to obtain its ordinate, which represents the longitudinal constraint attenuation degree d of the constraint curve.
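Since the original expression survives only as an image, the sketch below shows only how the attenuation degree d might be read off a sampled constraint curve according to the abscissa/ordinate rule just described; the sampling and linear interpolation are our assumptions:

    import bisect

    def attenuation(curve_xs, curve_ys, s3, s1):
        """Read the longitudinal attenuation degree d off a sampled constraint curve.

        curve_xs, curve_ys: sampled abscissas/ordinates of the constraint curve
                            associated with the garment's elasticity coefficient.
        s3: longitudinal component of the vertex's distance to the plane midpoint.
        s1: length of the whole image.
        The abscissa is the ratio s3 / s1, as described above.
        """
        x = min(max(s3 / s1, curve_xs[0]), curve_xs[-1])
        i = bisect.bisect_left(curve_xs, x)
        if i == 0:
            return curve_ys[0]
        # Linear interpolation between the two nearest samples.
        x0, x1 = curve_xs[i - 1], curve_xs[i]
        y0, y1 = curve_ys[i - 1], curve_ys[i]
        return y0 + (y1 - y0) * (x - x0) / (x1 - x0)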
Step T330: and performing smooth constraint on the second constraint area.
Wherein, because the clothing image is gridded integrally, the grid which exceeds 10 percent of the constraint plane is constrained, in particular to the second constraint area, and the second constraint area is wrapped into the clothing of the lower body integrally.
Specifically, the method for the second constraint area is to perform gradual folding constraint on the second constraint area. The gentle furling and restraining is to be specific to inwardly restrain the clothes, and the restraining strength is enhanced along with the reduction of the height of the clothes. Wherein, when the clothes in the second constraint area are constrained inwards, the vertex coordinates are shifted, and the shifted vertex coordinates
Figure 122427DEST_PATH_IMAGE018
The concrete expression is as follows:
Figure 881435DEST_PATH_IMAGE019
wherein,
Figure 89563DEST_PATH_IMAGE020
indicating that the second constrained region is located at the central point of the constrained plane
Figure 58656DEST_PATH_IMAGE021
The position coordinates of a certain vertex on the left side,
Figure 400644DEST_PATH_IMAGE022
the left vertex coordinate of the opening of the trousers,
Figure 279738DEST_PATH_IMAGE023
Representing the waist left vertex coordinates of the jacket, s4 representing the height of the entire image,
Figure 924346DEST_PATH_IMAGE024
representing vertices
Figure 505369DEST_PATH_IMAGE025
Is determined by the x-coordinate of (c),
Figure 995256DEST_PATH_IMAGE026
representing the center point of a plane of constraint
Figure 587912DEST_PATH_IMAGE027
S2 denotes the vertex
Figure 13208DEST_PATH_IMAGE028
Distance constraint plane midpoint
Figure 691314DEST_PATH_IMAGE029
The transverse component of (a) is,
Figure 250471DEST_PATH_IMAGE030
representing the midpoint of a plane of constraint
Figure 825197DEST_PATH_IMAGE031
Is determined by the x-coordinate of (c),
Figure 546028DEST_PATH_IMAGE032
y-coordinate representing the left vertex of the waist of the jacket, wherein
Figure 711430DEST_PATH_IMAGE033
Represents (-150, 0).
Step T340: and carrying out cutting constraint on the third constraint area.
The third constraint area is specifically constrained by cutting, for example, all vertex coordinates can be reset to be a vertex inside the lower garment, so that the purpose of cutting is achieved, and the garment is prevented from being exposed. The specific algorithm can refer to the prior art, and is not described herein in detail.
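A minimal sketch of the cutting constraint under the example given above; all names are ours:

    def cut_region(vertices, region_of, anchor_inside):
        """Collapse all third-region vertices onto a vertex inside the lower
        garment, so the covered part of the garment cannot show through.

        vertices:      list of [x, y] mesh vertex positions (modified in place).
        region_of:     callable returning the constraint-region index of a vertex.
        anchor_inside: (x, y) of a vertex known to lie inside the lower garment.
        """
        for v in vertices:
            if region_of(v) == 3:
                v[0], v[1] = anchor_inside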
Step S150: and outputting the clothing image and the mannequin image after the second matching.
Before outputting, the clothing image after the second matching is further detected.
Specifically, the detection of the clothing image after the second matching consists in detecting the color saturation of the constraint area of the clothing image, where the color saturation value S_i of each pixel point in the constraint area is given by the expression [presented as an image in the original], wherein C_i is the color value of the constraint-area image at pixel point i, d_hi is the distance between pixel point h and pixel point i, r_i is the blur-ring radius of pixel point i, and w_i is the weight of pixel point i.
If the color saturation values S_i of a specified number of pixel points are greater than a specified threshold, the synthesis engine is used for output; otherwise, the process exits.
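A sketch of this final gate; the per-pixel saturation values S_i are assumed to be computed already, and the parameter names are illustrative:

    def passes_detection(saturations, threshold, required_count):
        """Decide whether the constrained garment image goes to the synthesis
        engine: enough pixels must exceed the saturation threshold.

        saturations:    iterable of per-pixel color saturation values S_i
                        over the constraint area.
        threshold:      the specified saturation threshold.
        required_count: the specified number of pixels that must exceed it.
        """
        return sum(1 for s in saturations if s > threshold) >= required_count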
The finally deformed garment image thus achieves a constraint effect close to nature and is output by the synthesis engine.
The present application further provides an image matching system, as shown in fig. 2, the image matching system includes a preprocessing unit 201, a mesh reconstruction unit 202, a first matching unit 203, a second matching unit 204, and an output unit 205.
Wherein the preprocessing unit 201 is used for preprocessing the clothing and the mannequin images.
The mesh reconstruction unit 202 is connected to the preprocessing unit 201, and is configured to perform mesh reconstruction on the preprocessed clothing image and the mannequin image.
Specifically, the mesh reconstruction unit 202 specifically includes a basic unit determination module, a vertex selection module, and a mesh generation module.
And the basic unit determining module is used for acquiring information and determining the basic unit of the internal vertex according to the acquired information.
And the vertex selection module is connected with the basic unit determination module and used for selecting the external contour vertex and the internal generation vertex according to the basic unit to obtain a vertex set.
The grid generation module is connected with the vertex selection module and used for carrying out instantiation generation on the grid.
The first matching unit 203 is connected to the mesh reconstruction unit 202, and is configured to perform first matching between the clothing image and the mannequin image in the reconstructed mesh.
The second matching unit 204 is connected with the first matching unit 203 and is used for adjusting and constraining the clothing image to complete the second matching of the clothing and the mannequin.
As shown in fig. 3, the second matching unit 204 specifically includes the following sub-modules, a constrained plane determining module 301, a constrained region determining module 302, and a constraint adjusting module 303.
The constraint plane determination module 301 is used to determine a constraint plane.
The constrained region determining module 302 is connected to the constrained plane determining module 301, and is configured to determine a constrained region according to the constrained plane.
The constraint adjusting module 303 is connected to the constraint region determining module 302, and is configured to perform constraint adjustment on the constraint region.
The output unit 205 is connected to the second matching unit 204, and is configured to output the clothing image and the mannequin image after the second matching.
The application has the following beneficial effects:
(1) The image matching method and system can finely correct a clothing image against a mannequin in the same posture, achieving a relatively realistic effect.
(2) The image matching method and system can automatically match the clothing image with the mannequin image without manual adjustment, reducing personnel and time costs, making it possible to wear and combine several garments, and greatly improving the usability of the display system.
Although the present application has been described with reference to examples, which are intended to be illustrative only and not to be limiting of the application, changes, additions and/or deletions may be made to the embodiments without departing from the scope of the application.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. An image matching method is characterized by specifically comprising the following steps:
acquiring a clothing image and a mannequin image, and preprocessing the clothing image and the mannequin image;
carrying out grid reconstruction on the preprocessed clothing image and the mannequin image;
in the reconstructed grid, performing first matching of the clothing image and the mannequin image;
adjusting and constraining the clothing image to complete the second matching of the clothing image and the mannequin image;
rendering and outputting the clothing image and the mannequin image after the second matching;
the step of adjusting and constraining the clothing image comprises the steps of slowly constraining a constraint area according to a constraint curve corresponding to the elastic coefficient of the clothing, wherein in the process of slowly constraining, vertex coordinates in the constraint area correspondingly shift, and the shifted vertex coordinates
Figure 331486DEST_PATH_IMAGE001
The concrete expression is as follows:
Figure 428755DEST_PATH_IMAGE002
Figure 261582DEST_PATH_IMAGE003
representing the center point of the first constrained region in the constrained plane
Figure 356577DEST_PATH_IMAGE004
The position coordinates of a certain vertex on the left side,
Figure 291166DEST_PATH_IMAGE005
the left vertex coordinate of the opening of the trousers,
Figure 192126DEST_PATH_IMAGE006
Representing the coordinates of the left vertex of the waist of the jacket, and s2 representing the vertex
Figure 817142DEST_PATH_IMAGE003
Distance constrained plane center point
Figure 410935DEST_PATH_IMAGE004
The transverse component of (a) is,
Figure 550929DEST_PATH_IMAGE007
representing the center point of a plane of constraint
Figure 458842DEST_PATH_IMAGE004
Is determined by the x-coordinate of (c),
Figure 16994DEST_PATH_IMAGE008
the x-coordinate representing the left vertex of the waist of the jacket,
Figure 516108DEST_PATH_IMAGE009
representing the degree of longitudinal constraint attenuation of the constraint curve;
before outputting, detecting the matched clothing image for the second time, wherein each image isColor saturation value of a pixel
Figure 346661DEST_PATH_IMAGE010
The concrete expression is as follows:
Figure 855003DEST_PATH_IMAGE011
wherein,
Figure 251349DEST_PATH_IMAGE012
is a pixel pointiThe color value of the image of the restricted area in which the image is located,
Figure 124627DEST_PATH_IMAGE013
is a pixel pointhAnd pixel pointiThe distance of (a) to (b),
Figure 987016DEST_PATH_IMAGE014
representing by pixel pointsiThe radius of the fuzzy ring of (a),
Figure 33470DEST_PATH_IMAGE015
representing pixel pointsiThe weight of (c);
wherein if the color saturation value of the specified number of pixel points
Figure 549902DEST_PATH_IMAGE016
If the value is larger than the specified threshold value, the synthesis engine is used for outputting, otherwise, the process exits.
2. The image matching method as claimed in claim 1, wherein the garment image comprises an upper garment image and a lower garment image, the preprocessing of the images comprises performing matting processing on the garment image and the mannequin image, and the matted images are stored in an image format with a transparent channel.
3. The image matching method according to claim 1, wherein the adjustment constraint of the clothing image is specifically a constraint adjustment of the garment image located at the lower layer: if the upper garment image covers the lower garment image, the constraint adjustment is performed on the lower garment image at the lower layer; if the lower garment image covers the upper garment image, the constraint adjustment is performed on the upper garment image at the lower layer.
4. The image matching method of claim 3, wherein the adjustment constraint of the clothing image specifically comprises the following sub-steps:
determining a constraint plane;
determining a constraint area according to a constraint plane;
and carrying out constraint adjustment on the constraint area.
5. The image matching method according to claim 4, wherein if the upper garment image covers the lower garment image, the waist opening feature points on both sides of the waist of the upper garment image are taken as basic vertices, the contour vertices on both sides of the lower garment waist closest to the basic vertices are found, and these four vertices are connected to form a constraint plane;
if the lower garment image covers the upper garment image, the waist opening feature points on both sides of the waist of the lower garment are taken as basic vertices, the contour vertices on both sides of the upper garment waist closest to the basic vertices are found, and these four vertices are connected to form a constraint plane.
6. The image matching method of claim 4, wherein before performing the constraint adjustment on the constraint area, further comprising performing an expansion process on the underlying garment.
7. The image matching method according to claim 5, wherein the constraint areas include a first constraint area, a second constraint area and a third constraint area;
wherein the first constraint area extends from the constraint plane upward by ten percent of the overall garment height;
the second constraint area extends from the constraint plane downward by ten percent of the overall garment height;
and the third constraint area extends from the second constraint area to the end of the upper or lower garment image.
8. The image matching method as claimed in claim 7, wherein the constraint adjustment of the constraint region specifically comprises the sub-steps of:
determining a constraint curve;
slowly constraining the first constraint area according to the constraint curve;
performing gentle constraint on the second constraint area;
and carrying out cutting constraint on the third constraint area.
9. An image matching system, characterized by specifically comprising: the device comprises a preprocessing unit, a grid reconstruction unit, a first matching unit, a second matching unit and an output unit;
the preprocessing unit is used for preprocessing the image;
the grid reconstruction unit is used for carrying out grid reconstruction on the preprocessed clothing image and the mannequin image;
the first matching unit is used for performing first matching on the clothing image and the mannequin image in the reconstructed grid;
the second matching unit is used for adjusting and constraining the clothing image to complete the second matching of the clothing and the mannequin;
the output unit is used for outputting the clothing image and the mannequin image which are matched for the second time;
the step of adjusting and constraining the clothing image comprises the steps of slowly constraining a constraint area according to a constraint curve corresponding to the elastic coefficient of the clothing, wherein in the process of slowly constraining, vertex coordinates in the constraint area correspondingly shift, and the shifted vertex coordinates
Figure 594081DEST_PATH_IMAGE001
The concrete expression is as follows:
Figure 461543DEST_PATH_IMAGE017
Figure 796840DEST_PATH_IMAGE003
representing the center point of the first constrained region in the constrained plane
Figure 371041DEST_PATH_IMAGE004
The position coordinates of a certain vertex on the left side,
Figure 382860DEST_PATH_IMAGE005
the left vertex coordinate of the opening of the trousers,
Figure 472038DEST_PATH_IMAGE006
Representing the coordinates of the left vertex of the waist of the jacket, and s2 representing the vertex
Figure 860294DEST_PATH_IMAGE003
Distance constrained plane center point
Figure 289002DEST_PATH_IMAGE004
The transverse component of (a) is,
Figure 488033DEST_PATH_IMAGE007
representing the center point of a plane of constraint
Figure 64508DEST_PATH_IMAGE004
Is determined by the x-coordinate of (c),
Figure 194138DEST_PATH_IMAGE008
the x-coordinate representing the left vertex of the waist of the jacket,
Figure 539669DEST_PATH_IMAGE009
representing the degree of longitudinal constraint attenuation of the constraint curve;
wherein, the transfusion is carried outBefore the step of obtaining the image, the step of detecting and processing the clothing image after the second matching is further included, and the color saturation value of each pixel point
Figure 158869DEST_PATH_IMAGE018
The concrete expression is as follows:
Figure 160323DEST_PATH_IMAGE019
wherein,
Figure 641114DEST_PATH_IMAGE012
is a pixel pointiThe color value of the image of the restricted area in which the image is located,
Figure 841151DEST_PATH_IMAGE013
is a pixel pointhAnd pixel pointiThe distance of (a) to (b),
Figure 834515DEST_PATH_IMAGE014
representing by pixel pointsiThe radius of the fuzzy ring of (a),
Figure 120003DEST_PATH_IMAGE015
representing pixel pointsiThe weight of (c);
wherein if the color saturation value of the specified number of pixel points
Figure 653752DEST_PATH_IMAGE016
If the value is larger than the specified threshold value, the synthesis engine is used for outputting, otherwise, the process exits.
10. The image matching system of claim 9, wherein the second matching unit specifically includes the following sub-modules: a constraint plane determining module, a constraint region determining module and a constraint adjusting module;
the constraint plane determining module is used for determining a constraint plane;
the constrained region determining module is connected with the constrained plane determining module and used for determining a constrained region according to the constrained plane;
and the constraint adjusting module is connected with the constraint area determining module and is used for carrying out constraint adjustment on the constraint area.
CN202010594350.0A 2020-06-28 2020-06-28 Image matching method and system Active CN111476912B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010594350.0A CN111476912B (en) 2020-06-28 2020-06-28 Image matching method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010594350.0A CN111476912B (en) 2020-06-28 2020-06-28 Image matching method and system

Publications (2)

Publication Number Publication Date
CN111476912A CN111476912A (en) 2020-07-31
CN111476912B true CN111476912B (en) 2020-09-29

Family

ID=71764015

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010594350.0A Active CN111476912B (en) 2020-06-28 2020-06-28 Image matching method and system

Country Status (1)

Country Link
CN (1) CN111476912B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112734856A (en) * 2021-01-05 2021-04-30 恒信东方文化股份有限公司 Method and system for determining shooting angle of clothes

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102750439A (en) * 2012-05-24 2012-10-24 深圳市美丽同盟科技有限公司 Method and device of figure tracking of clothes
CN105354876A (en) * 2015-10-20 2016-02-24 何家颖 Mobile terminal based real-time 3D fitting method
CN110706359A (en) * 2019-09-30 2020-01-17 恒信东方文化股份有限公司 Image fitting correction method and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB201102794D0 (en) * 2011-02-17 2011-03-30 Metail Ltd Online retail system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102750439A (en) * 2012-05-24 2012-10-24 深圳市美丽同盟科技有限公司 Method and device of figure tracking of clothes
CN105354876A (en) * 2015-10-20 2016-02-24 何家颖 Mobile terminal based real-time 3D fitting method
CN110706359A (en) * 2019-09-30 2020-01-17 恒信东方文化股份有限公司 Image fitting correction method and system

Also Published As

Publication number Publication date
CN111476912A (en) 2020-07-31

Similar Documents

Publication Publication Date Title
US10529127B2 (en) System and method for simulating realistic clothing
US5850222A (en) Method and system for displaying a graphic image of a person modeling a garment
CN104036532B (en) Based on the three-dimensional production method of clothing to the seamless mapping of two-dimentional clothing popularity
JP6290153B2 (en) Method and system for generating a three-dimensional model of an object
CN111291431A (en) Method for producing three-dimensional full-forming knitting pattern
CN108986159A (en) A kind of method and apparatus that three-dimensional (3 D) manikin is rebuild and measured
US20130170715A1 (en) Garment modeling simulation system and process
CN111476912B (en) Image matching method and system
KR102033161B1 (en) method for transferring garment draping between avatars
KR101072944B1 (en) System for creating 3d human body model and method therefor
JP2004501432A (en) Clothes design method by 3D digital
US9695529B2 (en) Knitted outer covering and a method and system for making three-dimensional patterns for the same
JP2003511576A (en) Method and apparatus for simulating and representing dressing of a tailor's dummy
CN112529670B (en) Virtual fitting method
CN112270731A (en) Dress fitting method and device
CN110706359A (en) Image fitting correction method and system
JP6153377B2 (en) Clothing design equipment
US20180061141A1 (en) Method and System for Progressive Drape Update on Avatar Morph for Virtual Fitting
JP6501684B2 (en) Design equipment for apparel products
CN110838182B (en) Method and system for attaching image to mannequin
de Malefette et al. PerfectDart: Automatic Dart Design for Garment Fitting
Glascoe et al. Relationships between rigs and humanoid and coveroid landmarks
NAKAYAMA et al. 3D Distance Field-Based Apparel Modeling
TWI783908B (en) Integrated production system and method of wearing garments
Li et al. A novel method for making a one-piece tight-fitting garment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant