
CN108537126B - Face image processing method - Google Patents

Face image processing method Download PDF

Info

Publication number
CN108537126B
Authority
CN
China
Prior art keywords
image
face
customer
image processing
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810205659.9A
Other languages
Chinese (zh)
Other versions
CN108537126A (en)
Inventor
陈东岳 (Chen Dongyue)
陈秋生 (Chen Qiusheng)
贾同 (Jia Tong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeastern University China
Original Assignee
Northeastern University China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeastern University China
Priority to CN201810205659.9A
Publication of CN108537126A
Application granted
Publication of CN108537126B
Legal status: Active (current)
Anticipated expiration: (date not listed)



Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/06 Buying, selling or leasing transactions
    • G06Q30/0601 Electronic shopping [e-shopping]
    • G06Q30/0641 Shopping interfaces
    • G06Q30/0643 Graphical representation of items or shoppers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/40 Filling a planar surface by adding surface attributes, e.g. colour or texture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60 Editing figures and text; Combining figures or text

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Finance (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Accounting & Taxation (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a face image processing system and method, the system comprising: a model image storage module for storing model images; a face image acquisition module for acquiring face images; an image transmission module for transmitting the face images to the image processing module; an image processing module for performing face image synthesis; and an image display module for displaying the acquired face image, the recommended model images and the final composite image. The method mainly comprises detecting the face regions in an input image and a reference image, extracting facial feature points, and triangulating the two face images according to the same rule. The system and method allow the image acquisition module to be separated from the image processing module, making it easy to choose the best acquisition position while letting the customer see the acquired face image in real time and select a suitable facial expression and the best photographing position.

Description

Face image processing method
Technical Field
The present invention relates to image processing systems and methods, and in particular to a system and method for processing face images.
Background
Image processing techniques are used in many fields, including medicine, the military and manufacturing. Applying image processing to face image recognition makes it easier to acquire relevant information in many of these fields and thus to make more accurate judgments.
In many situations, such as hair salons, a customer may wish to get an intuitive impression of whether a hairstyle suits him or her before the haircut, so as to make a better choice. In addition, face image synthesis methods have broad application prospects in privacy protection, virtual fitting, entertainment and leisure, and the like.
The main disadvantages of the prior art are:
Firstly, if the brightness, contrast and tone of the images to be synthesized are inconsistent, the composite image looks unrealistic, so no useful information can be obtained from it.
Secondly, when applied to the barbershop scenario, the hairstyle can occlude parts of the face, so the customer's facial features cannot be seamlessly fitted to the model's hairstyle; the resulting image looks unnatural and the customer experience is poor.
Thirdly, the prior art cannot change the face contour of the composite image. Put simply, if the model has a round face but the customer has a square face, existing synthesis techniques can only produce a composite image that still has the round face.
In published papers and practical applications, some approaches can replace the facial features of the reference image, the final effect being that the composite image carries the customer's facial features, but the three problems above remain.
A 3D-model-based approach (3D morphable models, 3DMM) is adopted in "Face Swapping under Large Pose Variations" by Lin Y., Wang S., Lin Q. et al. (IEEE International Conference on Multimedia and Expo, 2012). Because the technique is immature, only part of the facial features can be generated, so the final synthesis does not look real; moreover, 3D-model-based methods take a large amount of time and cannot meet most application scenarios.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a face image processing system and method, so that face image synthesis achieves higher realism and offers customers a more intuitive and accurate simulation of the result.
The technical scheme of the invention is realized as follows:
a face image processing system comprising: a model image storage module for storing model images; a face image acquisition module for acquiring face images; an image transmission module for transmitting the face images to the image processing module; an image processing module for performing face image synthesis; and an image display module for displaying the acquired face image, the recommended model images and the final composite image.
Preferably, the system further comprises a data analysis unit for analyzing the customer's data, the main data analyzed comprising the customer's age, sex and suitable hairstyles, preparing the data needed for the customer to choose a hairstyle that suits them.
Preferably, the system further comprises a visual user interface including buttons for taking pictures and selecting images, allowing the user to photograph the customer and select pictures at the appropriate time.
Preferably, the visual user interface further comprises an option button for recommending hairstyles according to data obtained by analyzing the customer's appearance.
Preferably, the visual user interface further comprises an option button for the customer to view the hairstyle models stored in the database; when the customer wishes to try another hairstyle after seeing the recommended one, this option allows manually selecting a favorite hairstyle model from the database and using the composition button to generate an image with the same hairstyle as the model.
Preferably, the image transmission module comprises a network unit for wireless data transmission, and the network unit allows a user to remotely control the camera in the same local area network in a wireless manner.
Preferably, the image processing module includes an image preprocessing unit for preprocessing the collected customer image, where the preprocessing includes image graying, histogram equalization, and filtering.
Preferably, the image processing module further comprises a feature extraction unit for detecting the customer's face region and extracting the facial feature points, the facial feature points being a series of predefined points that reflect facial features, mainly distributed along the contours of the facial features (eyebrows, eyes, nose and mouth).
Preferably, the image processing module further comprises an image synthesis unit for synthesizing the acquired customer image with the hairstyle model image, so that the facial features and face shape of the composite image match the customer while the hairstyle matches the model's.
A face image processing method, applicable to any of the systems in the technical scheme above, comprises the following steps:
S1, detecting the face regions in the input image and the reference image respectively, and extracting facial feature points;
S2, triangulating the two face images respectively according to the same rule, based on the facial feature points extracted in S1;
S3, calculating the affine transformations between corresponding triangles obtained by the triangulation, from the input image to the reference image, and filling the triangles in the reference image with color to obtain an intermediate image;
S4, extracting the face region of interest (ROI) from the intermediate image of S3;
S5, making a mask image of the face ROI, the mask image being used to deal with the unnatural look of a composite caused by color differences between the input image and the reference image;
S6, completing the color correction of the composite image through the face mask image, so that the colors of the composite image transition smoothly, improving realism.
The invention has the beneficial effects that:
1. The system framework can separate the image acquisition module from the image processing module, making it easy to choose the best acquisition position, while letting the customer see the acquired face image in real time and select a suitable facial expression and the best photographing position.
2. The remote wireless transmission mode greatly increases spatial freedom. The image acquisition device is small and can be used as a handheld device; the customer can hold it, and as long as the acquisition device and the image processing module are on the same LAN, the system transmits pictures between them in real time, so the photographer and the customer can view the picture simultaneously and adjust the shooting angle in real time.
3. The visual interface with function options lets a customer manually select a favorite hairstyle model from the database and generate an image with the same hairstyle as the model using the synthesis button, greatly improving operability and offering the customer more diverse choices.
Drawings
FIG. 1 is a schematic diagram of the system of the present invention;
FIG. 2 is a diagram of a user terminal application scenario for the system of the present invention;
FIG. 3 is a schematic diagram of face detection and face feature point extraction according to an embodiment of the present invention;
FIG. 4 is a first schematic diagram of image triangulation in an embodiment of the invention;
FIG. 5 is a second schematic diagram of image triangulation in an embodiment of the invention;
FIG. 6 is a face mask image produced in an embodiment of the present invention;
FIG. 7 is a flow chart of an image synthesis algorithm according to the present invention;
FIG. 8 is a flow chart of an embodiment of an image synthesis algorithm according to the present invention.
Detailed Description
The invention is explained in more detail below with reference to the figures and examples:
as shown in fig. 1, a face image processing system includes:
the model image storage module is used for storing a model image;
the face image acquisition module is used for acquiring a face image;
the image transmission module is used for transmitting the face image to the image processing module;
the image processing module is used for carrying out face image synthesis;
and the image display module is used for displaying the acquired face image, the recommended model image and the final composite image.
Further, the system also comprises a data analysis unit for analyzing the customer's data, the main data analyzed comprising the customer's age, sex and suitable hairstyles, preparing the data needed for the customer to choose a hairstyle that suits them.
Further, the system also comprises a visual user interface, which includes buttons for taking pictures and selecting images (allowing the user to photograph the customer and select pictures at the appropriate time), an option button for recommending hairstyles according to data obtained by analyzing the customer's appearance, and an option button for viewing the hairstyle models stored in the database; when the customer wishes to try other hairstyles after seeing the recommendation, the latter option allows manually selecting a favorite hairstyle model from the database and generating an image with the same hairstyle as the model using the synthesis button.
Further, the image transmission module comprises a network unit for wireless data transmission, and the network unit allows a user to remotely control the camera in the same local area network in a wireless mode.
Further, the image processing module comprises an image preprocessing unit for preprocessing the collected customer image, wherein the preprocessing comprises image graying, histogram equalization and filtering operation.
Furthermore, the image processing module also comprises a feature extraction unit for detecting the customer's face region and extracting the facial feature points, where the feature points are a series of predefined points that reflect facial features, mainly distributed along the contours of the facial features.
Further, the image processing module also comprises an image synthesis unit for synthesizing the acquired customer image with the hairstyle model image; the final effect is that the facial features and face shape of the composite image match the customer while the hairstyle matches the model's. This is the core task of the system.
In this embodiment, as shown in fig. 1 and 2, the face image acquisition module acquires an image when a customer enters the store, capturing a face image accurate enough for hairstyle selection; the image transmission module transmits the acquired image wirelessly between the acquisition module and the image processing module; the image processing module performs several core system functions, including analysis of the customer's appearance data, model hairstyle recommendation and image synthesis; and the image display module displays the acquired face image, the synthesis progress, the composite image and the user interface.

Specifically, the image acquisition module and the image processing module communicate by remote wireless transmission. The image processing module occupies a fixed amount of space, stays stable in the absence of external force, and is suited to a fixed position; the image processing module and the image display module are connected by wire, so the two are placed in the same position. Because the image processing module is too large to be used as mobile equipment, the image acquisition module can be used separately from it, which makes it easy to choose the best acquisition position while letting the customer see the acquired face image in real time and select a suitable facial expression and the best photographing position. Since the customer cannot attend to both triggering the photograph and holding the pose, the actual photographing action is triggered at the image processing module by another person, and the scene being captured is displayed there in real time.

The remote wireless transmission mode greatly increases spatial freedom: the acquisition device is small enough to be handheld, the customer can hold it, and as long as the acquisition device and the image processing module are on the same local area network, the system transmits pictures between them in real time, so the photographer and the customer can view the picture simultaneously and adjust the shooting angle on the fly.

Once the customer's image has been captured by the acquisition module and transmitted wirelessly to the processing module, the image processing module first analyzes the customer's personal information to obtain, from the customer's basic information, data for recommending more suitable hairstyles. The image display module then shows the recommended hairstyles, and the image synthesis module composites the selected hairstyle's model image with the transmitted customer image and displays the result on the display module.
On the one hand, this function saves the customer time, because it runs fully automatically and requires no manual input of information; on the other hand, the customer can intuitively judge how well a hairstyle suits them, improving the consumption experience.
As shown in fig. 2, the customer can use a handheld device (a smartphone is recommended in the present invention) to collect information through its front camera; the captured picture is displayed on the handheld device's screen in real time and is simultaneously received, over the wireless link, on the display screen of the image display module. The photographing action is controlled by the operator: when a suitable picture is framed, pressing the Esc key on the keyboard captures it. The captured customer image is received by the image processing module over the wireless link; the customer's appearance features are analyzed first and matched against all model images in the storage module. In this embodiment the 4 best-fitting hairstyles are selected, the composite results are displayed on the screen, and the customer can also manually select any hairstyle model in the library for image synthesis.
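A minimal sketch of this capture flow, using OpenCV in Python. The camera index, window name and output file name are illustrative assumptions; in the patent the frames arrive over the wireless link rather than from a local camera.

```python
import cv2

cap = cv2.VideoCapture(0)                  # front camera (assumed index 0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("live preview", frame)      # mirrored on the display module
    if cv2.waitKey(1) & 0xFF == 27:        # Esc key triggers the capture
        cv2.imwrite("input_image.jpg", frame)
        break
cap.release()
cv2.destroyAllWindows()
```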
The image processing module in this embodiment includes a feature extraction unit for detecting the customer's face region and extracting the facial feature points. As shown in figs. 3, 4 and 5, the facial feature points are a series of predefined points that reflect facial features, mainly distributed along the contours of the facial features. The feature extraction unit provides the specific position of the customer's face region and the specific distribution of the facial feature points; both are essential preparatory data for face data analysis and face image synthesis.
As shown in fig. 7, a face image processing method, applicable to any system described in the above embodiments, comprises the following steps: S1, detecting the face regions in the input image and the reference image respectively, and extracting facial feature points; S2, triangulating the two face images respectively according to the same rule, based on the facial feature points extracted in S1; S3, calculating the affine transformations between corresponding triangles obtained by the triangulation, from the input image to the reference image, and filling the triangles in the reference image with color to obtain an intermediate image; S4, extracting the face region of interest (ROI) from the intermediate image of S3; S5, making a mask image of the face ROI, used to deal with the unnatural look of a composite caused by color differences between the input image and the reference image; S6, completing the color correction of the composite image through the face mask image so that its colors transition smoothly, improving realism.
More specific method embodiments, as shown in FIG. 8:
Step 101 is the acquisition of the customer's face image; the acquired image is called the input image. The input image provides the facial features and face contour information for the final composite image.
Step 102 is a preprocessing operation of the input image, mainly including various filtering processes, aiming at improving the image quality.
Step 103 is face detection, the face detection algorithm detects a face by using a HAAR-like feature algorithm provided by the existing computer vision library OPENCV, a return value of the face detection algorithm is a square area containing a face area, and a specific mathematical expression is coordinates of a vertex at the upper left corner of the square and the length and width of a rectangle.
Step 104 is face feature point detection, and requires detecting key points of a face to determine the position of a face feature. By using the improved ASM method proposed by Kazemi, Valid, Josephine Sullivan et al In the paper "One Millisecon Face Alignment with An end Of the Regression Trees" (In IEEE, Conference on Computer Vision and Pattern Recognition (CVPR), 2014), 68 personal Face feature points are obtained which outline the eyebrow, eye, nose, mouth and Face contour Of the Face, and FIG. 3 shows the Face detection and feature point detection results Of the image.
Step 105, aligning the face of the input image with the face of the reference image, including scaling and rotating operations, in order to keep the sizes and angles of the faces of the two images consistent, selecting a vector between the 40 th and 43 th feature points as a basic rotating vector, solving an included angle between the vectors of the two images, rotating the input image to keep the angles of the faces of the two images consistent, and then selecting a vector between the 1 st and 17 th feature points as a basic scaling vector to scale the input image. After the faces are aligned, the sizes and angles of the faces of the input image and the reference image can be kept consistent, and after the input image is rotated, the face characteristic points of the input image are changed. This is important for the subsequent work.
Step 106 is triangulation of the input image.
The triangulation is performed on the input image and the reference image separately, after first adding several feature points. For the input image, 3 feature points are added, expressed mathematically by the following formulas:
[Equations defining the three added input-image feature points are rendered as images in the original and are not reproduced here.]
for the reference image, 7 feature points are added:
[Equations defining three of the added reference-image feature points are rendered as images in the original and are not reproduced here.]
there are also 4 feature points that are the coordinates of the four corners of the reference image.
wherein the first quantity (rendered as an image in the original) denotes the coordinates of the 69th feature point of the input image, a pair of values giving the desired x and y coordinates of the point,
and the second denotes the coordinates of the i-th feature point of the reference image; α is a freely selectable coefficient, set here to 1.2. Visually, the three added feature points lie between the eyes and above the two eyebrows of the face. In addition, to keep the face shape after the face swap consistent with the input image, the facial feature points of the reference image are changed, expressed mathematically as:
[Equation (7), rendered as an image in the original and not reproduced here, gives the changed reference feature points for i ∈ (2, 3, …, 17).]
The reference image is triangulated first, as shown in fig. 5; the triangulation is performed twice, on the reference image before and after the feature point change. After triangulation, two sets of mutually corresponding triangular patches are obtained on the two images.
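A sketch of the triangulation with OpenCV's cv2.Subdiv2D. Returning vertex-index triples is one way to apply "the same rule" to both point sets: the triples computed on one image can be reused on the other, so the triangles correspond one to one.

```python
import cv2

def triangulate(points, size):
    """Delaunay-triangulate a point set; return triangles as index triples."""
    h, w = size
    subdiv = cv2.Subdiv2D((0, 0, w, h))
    for p in points:
        subdiv.insert((float(p[0]), float(p[1])))
    index = {(int(p[0]), int(p[1])): i for i, p in enumerate(points)}
    triangles = []
    for x1, y1, x2, y2, x3, y3 in subdiv.getTriangleList():
        tri = [(int(x1), int(y1)), (int(x2), int(y2)), (int(x3), int(y3))]
        if all(v in index for v in tri):   # skip triangles touching the border
            triangles.append(tuple(index[v] for v in tri))
    return triangles
```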
An affine transformation is calculated that maps each triangle vertex of the image before the feature point change onto the corresponding triangle vertex of the image after the change.
The affine transformation includes:
a) rotation (linear transformation)
b) translation (vector addition)
c) scaling (linear transformation)
Three pairs of corresponding points determine an affine transformation, which is usually represented by a 2 × 3 matrix:

A = [ a00  a01 ]    B = [ b00 ]    M = [ A  B ]
    [ a10  a11 ]        [ b10 ]

The matrices A and B transform a two-dimensional vector X = [x, y]^T, so the transformation can also be expressed in the following form:

T = A · [x, y]^T + B

or

T = M · [x, y, 1]^T (10)

T is the vector obtained by applying the affine transformation M to the vector X. In step 112 the vectors X and T are known, and the transformation matrix M is solved for. According to T = M · [x, y, 1]^T, selecting 3 corresponding points in the two images yields six equations, from which all values of the matrix M are solved.
Then, using the computed affine transformations, all pixels inside each triangle of the reference image with unchanged feature points are warped into the reference image with changed feature points. This step yields an image whose facial features are unchanged but whose face shape is identical to the input image; it is called the second-level reference image. The feature points of the second-level reference image are the changed feature points of the reference image.
The second step is to triangulate the input image; the triangulation rule is shown in fig. 4. Because only the facial features of the input image are needed, the whole input image need not be triangulated. As can be seen from figs. 4 and 5, the face triangulations of the input image and the second-level reference image are consistent, so their triangles are in one-to-one correspondence. The affine calculation above is repeated, this time mapping the three vertices of each triangle in the input image into the second-level reference image. After this step, an image is obtained whose face shape and facial features are consistent with the input image while the remaining parts are consistent with the reference image; it is called the third-level reference image, and its feature points coincide with those of the second-level reference image.
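A sketch of the per-triangle warp used to build both the second-level and third-level images; cv2.getAffineTransform solves the 2 × 3 matrix M from the three vertex pairs (the six equations above), and a fill mask restricts copying to the triangle interior.

```python
import cv2
import numpy as np

def warp_triangle(src, dst, t_src, t_dst):
    """Warp the t_src triangle of src onto the t_dst triangle of dst."""
    r1 = cv2.boundingRect(np.float32([t_src]))
    r2 = cv2.boundingRect(np.float32([t_dst]))
    # Triangle coordinates relative to their bounding rectangles.
    t1 = np.float32([(p[0] - r1[0], p[1] - r1[1]) for p in t_src])
    t2 = np.float32([(p[0] - r2[0], p[1] - r2[1]) for p in t_dst])
    M = cv2.getAffineTransform(t1, t2)     # solved from 3 point pairs
    patch = src[r1[1]:r1[1] + r1[3], r1[0]:r1[0] + r1[2]]
    warped = cv2.warpAffine(patch, M, (r2[2], r2[3]),
                            flags=cv2.INTER_LINEAR,
                            borderMode=cv2.BORDER_REFLECT_101)
    mask = np.zeros((r2[3], r2[2]), dtype=np.uint8)
    cv2.fillConvexPoly(mask, np.int32(t2), 255)   # triangle interior only
    roi = dst[r2[1]:r2[1] + r2[3], r2[0]:r2[0] + r2[2]]
    roi[mask > 0] = warped[mask > 0]
```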
Steps 107-111 process the reference image; their implementation is consistent with steps 101-105 above, and step 112 has been described in detail above.
After step 112, an intermediate image corresponding to the third-level reference image can be obtained.
In order to implement all functions automatically, the face ROI is extracted in step 113 based on the detected feature points. Here, a convex hull consisting of 7 feature points is selected as the ROI; the seven points are given by the following formulas:
[The formulas for the seven ROI points are rendered as images in the original and are not reproduced here.]
wherein the quantity shown (rendered as an image in the original) denotes the coordinate of the first feature point of the ROI, and β is a manually selectable parameter, set here to 0.05. The ROI, which contains most of the facial features, is extracted from the third-level reference image. Color correction is very important to the algorithm: the realism of the face-swapping result is closely tied to how well the colors of the ROI fit those of the reference image. The ROI is then fitted into the second-level reference image, and to achieve a seamless color transition between the ROI and the second-level reference image, a function provided by the OpenCV library is used to blend the face colors seamlessly, making the image look more vivid and natural.
Besides calculating the face ROI from the feature points, the invention also provides a method for making a face mask image.
As shown in fig. 6, the mask image is another way of extracting the face ROI. It is a binary image containing only black and white, in which the white part corresponds to the face ROI and the black part is the discarded information. For each hairstyle model stored in the library, only one mask image needs to be made, and it can be called repeatedly in subsequent runs.
The method for making the mask image is simple and easy to understand: points on the boundary of the white region are selected with the left mouse button, so that the selected points enclose the model's facial features without destroying the hairstyle information. The number and positions of the selected points can be controlled freely.
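A minimal sketch of such a mask-making tool; the window and file names are illustrative assumptions.

```python
import cv2
import numpy as np

points = []   # boundary points of the white region, collected by left-clicks

def on_mouse(event, x, y, flags, param):
    if event == cv2.EVENT_LBUTTONDOWN:
        points.append((x, y))

model = cv2.imread("model_image.jpg")      # hypothetical file name
cv2.namedWindow("pick boundary")
cv2.setMouseCallback("pick boundary", on_mouse)
cv2.imshow("pick boundary", model)
cv2.waitKey(0)                             # click points, then press any key

mask = np.zeros(model.shape[:2], dtype=np.uint8)
cv2.fillPoly(mask, [np.int32(points)], 255)  # white = face ROI, black = discarded
cv2.imwrite("model_mask.png", mask)
```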
Step 114 is the facial feature exchange, i.e. fitting the extracted ROI into the second-level reference image. One issue must be addressed here: where to attach the ROI. Since the geometry of every face differs, an inaccurately chosen ROI position shifts the facial features, and manually choosing a suitable position for every face swap wastes a great deal of time. First, the minimum rectangle enclosing the ROI is created and the coordinates of its center point, denoted C1, are obtained. Then, in the second-level reference image, the minimum rectangle enclosing the feature points corresponding to the ROI is built and the coordinates of its center point, denoted C2, are obtained. When placing the ROI, making C1 coincide with C2 gives a better result.
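A sketch of the C1/C2 computation under the rule just described; roi_pts and ref_pts are hypothetical names for the ROI's hull points and the corresponding feature points in the second-level reference image.

```python
import cv2
import numpy as np

def paste_centers(roi_pts, ref_pts):
    """Centers of the minimum rectangles around the ROI (C1) and around the
    matching feature points in the second-level reference image (C2);
    pasting so that C1 coincides with C2 avoids shifted facial features."""
    x1, y1, w1, h1 = cv2.boundingRect(np.int32(roi_pts))
    x2, y2, w2, h2 = cv2.boundingRect(np.int32(ref_pts))
    return (x1 + w1 // 2, y1 + h1 // 2), (x2 + w2 // 2, y2 + h2 // 2)
```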
The above is only a preferred embodiment of the present invention, but the scope of the invention is not limited to it. Any system or method that processes face images to simulate and match a customer's hairstyle according to the design concept of the present invention falls within its scope, and any substitution or change that a person skilled in the art makes to the technical solution and concept of the invention within its technical scope shall likewise be covered by the scope of the present invention.

Claims (6)

1. A face image processing method, wherein the method is applied to a face image processing system, and the system comprises: a model image storage module for storing model images; a face image acquisition module for acquiring face images; an image transmission module for transmitting the face images to the image processing module; an image processing module for performing face image synthesis; and an image display module for displaying the acquired face image, the recommended model images and the final composite image; the system further comprises a visual user interface including buttons for taking pictures and selecting images, allowing the user to photograph the customer and select pictures at the appropriate time; the visual user interface further comprises an option button for recommending hairstyles according to data obtained from analysis of the customer's appearance; the visual user interface further comprises an option button for the customer to view the hairstyle models stored in the database, the option allowing the user, when wishing to try another hairstyle after seeing the recommended one, to manually select a favorite hairstyle model from the database and to generate an image with the same hairstyle as the model using the composition button;
the face image processing method comprises the following steps:
s1, detecting the face regions in the input image and the reference image respectively, and extracting facial feature points, obtaining 68 facial feature points of the input image;
s2, triangulating the two face images respectively according to the same rule, based on the facial feature points extracted in S1;
s3, calculating the affine transformations between corresponding triangles obtained by the triangulation, from the input image to the reference image, and filling the triangles in the reference image with color to obtain an intermediate image;
s4, extracting the face region of interest (ROI) from the intermediate image of S3;
s5, making a mask image of the face ROI, the mask image being used to deal with the unnatural look of a composite caused by color differences between the input image and the reference image;
s6, completing the color correction of the composite image through the face mask image, so that the colors of the composite image transition smoothly, improving realism;
the method for obtaining the intermediate image comprises the following steps of calculating affine transformation from an input image to a reference image through the triangulation and obtaining corresponding triangles, and performing color filling on the triangles in the reference image to obtain the intermediate image, wherein the affine transformation comprises the following steps:
triangulating the input image and the reference image, adding 3 feature points based on the 68 facial feature points of the input image, expressed mathematically by the following formulas:
[The formulas are rendered as images in the original and are not reproduced here.]
wherein the quantity shown (rendered as an image in the original) denotes the coordinates of the 69th feature point of the input image, a pair of values giving the x and y coordinates of the point; α is 1.2.
2. The face image processing method according to claim 1, characterized in that: the system further comprises a data analysis unit for analyzing the customer's data, the main data analyzed comprising the customer's age, sex and the hairstyles that suit the customer, preparing the data for the customer to choose a suitable hairstyle.
3. The face image processing method according to claim 1, characterized in that: the image transmission module comprises a network unit for wireless data transmission, and the network unit allows a user to remotely control the camera in the same local area network in a wireless mode.
4. The face image processing method according to claim 1, characterized in that: the image processing module comprises an image preprocessing unit used for preprocessing the collected customer images, wherein the preprocessing comprises image graying, histogram equalization and filtering operation.
5. The face image processing method according to claim 1, characterized in that: the image processing module further comprises a feature extraction unit for detecting the customer's face region and extracting the facial feature points, the facial feature points being a series of predefined points that reflect facial features, mainly distributed along the contours of the facial features.
6. The face image processing method according to claim 1, characterized in that: the image processing module further comprises an image synthesis unit for synthesizing the acquired customer image with the hairstyle model image, so that the facial features and face shape of the composite image match the customer while the hairstyle matches the model's.
CN201810205659.9A 2018-03-13 2018-03-13 Face image processing method Active CN108537126B (en)

Priority Applications (1)

CN201810205659.9A (CN108537126B): priority date 2018-03-13, filing date 2018-03-13, Face image processing method

Applications Claiming Priority (1)

CN201810205659.9A (CN108537126B): priority date 2018-03-13, filing date 2018-03-13, Face image processing method

Publications (2)

Publication Number Publication Date
CN108537126A CN108537126A (en) 2018-09-14
CN108537126B 2021-03-23

Family

Family ID: 63484557

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810205659.9A Active CN108537126B (en) 2018-03-13 2018-03-13 Face image processing method

Country Status (1)

Country Link
CN (1) CN108537126B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11074733B2 (en) 2019-03-15 2021-07-27 Neocortext, Inc. Face-swapping apparatus and method
CN112102146B (en) * 2019-06-18 2023-11-03 北京陌陌信息技术有限公司 Face image processing method, device, equipment and computer storage medium
CN110555812A (en) * 2019-07-24 2019-12-10 广州视源电子科技股份有限公司 image adjusting method and device and computer equipment
CN110543826A (en) * 2019-08-06 2019-12-06 尚尚珍宝(北京)网络科技有限公司 Image processing method and device for virtual wearing of wearable product
CN110503599B (en) * 2019-08-16 2022-12-13 郑州阿帕斯科技有限公司 Image processing method and device
CN110610456A (en) * 2019-09-27 2019-12-24 上海依图网络科技有限公司 Imaging system and video processing method
CN112769937B (en) * 2021-01-12 2021-09-03 济源职业技术学院 Medical treatment solid waste supervisory systems
CN113807313A (en) * 2021-10-08 2021-12-17 合肥安达创展科技股份有限公司 AI platform analysis system based on Dlib face recognition
CN116228763B (en) * 2023-05-08 2023-07-21 成都睿瞳科技有限责任公司 Image processing method and system for eyeglass printing

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7450740B2 (en) * 2005-09-28 2008-11-11 Facedouble, Inc. Image classification and information retrieval over wireless digital networks and the internet
JP4706849B2 (en) * 2006-03-23 2011-06-22 花王株式会社 Method for forming hairstyle simulation image
CN103646416A (en) * 2013-12-18 2014-03-19 中国科学院计算技术研究所 Three-dimensional cartoon face texture generation method and device
CN105045968B (en) * 2015-06-30 2019-02-12 青岛理工大学 Hair style design method and system
CN105117445A (en) * 2015-08-13 2015-12-02 北京建新宏业科技有限公司 Automatic hairstyle matching method, device and system
CN105354411A (en) * 2015-10-19 2016-02-24 百度在线网络技术(北京)有限公司 Information processing method and apparatus
CN107784134A (en) * 2016-08-24 2018-03-09 南京乐朋电子科技有限公司 A kind of virtual hair style simulation system
CN107741974A (en) * 2017-10-09 2018-02-27 武汉轻工大学 Aid in hairdressing method

Also Published As

Publication number Publication date
CN108537126A (en) 2018-09-14

Similar Documents

Publication Publication Date Title
CN108537126B (en) Face image processing method
KR102241153B1 (en) Method, apparatus, and system generating 3d avartar from 2d image
CN105404392B (en) Virtual method of wearing and system based on monocular cam
CN108305312B (en) Method and device for generating 3D virtual image
CN105556508B (en) The devices, systems, and methods of virtual mirror
CN109690617A (en) System and method for digital vanity mirror
US9959453B2 (en) Methods and systems for three-dimensional rendering of a virtual augmented replica of a product image merged with a model image of a human-body feature
US6633289B1 (en) Method and a device for displaying at least part of the human body with a modified appearance thereof
JP2019510297A (en) Virtual try-on to the user's true human body model
WO2021143282A1 (en) Three-dimensional facial model generation method and apparatus, computer device and storage medium
CN108460398B (en) Image processing method and device and cloud processing equipment
JP2010507854A (en) Method and apparatus for virtual simulation of video image sequence
JP2004094917A (en) Virtual makeup device and method therefor
CN113610612B (en) 3D virtual fitting method, system and storage medium
CN110291560A (en) The method that three-dimensional for founder indicates
CN107680166A (en) A kind of method and apparatus of intelligent creation
CN108664884A (en) A kind of virtually examination cosmetic method and device
WO2020104990A1 (en) Virtually trying cloths & accessories on body model
CN111586428A (en) Cosmetic live broadcast system and method with virtual character makeup function
JP2018195996A (en) Image projection apparatus, image projection method, and image projection program
CN113298956A (en) Image processing method, nail beautifying method and device, and terminal equipment
JPH10240908A (en) Video composing method
CN115019401B (en) Prop generation method and system based on image matching
JP2024503596A (en) Volumetric video from image source
CN114399811A (en) Adjusting method, adjusting device, intelligent fitting mirror system and medium

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant