CN105933589B - A kind of image processing method and terminal - Google Patents
A kind of image processing method and terminal
- Publication number
- CN105933589B (application CN201610503757.1A)
- Authority
- CN
- China
- Prior art keywords
- area
- preview image
- terminal
- focusing area
- camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/2624—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of whole input images, e.g. splitscreen
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/57—Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/95—Computational photography systems, e.g. light-field imaging systems
- H04N23/951—Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computing Systems (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
An embodiment of the invention discloses an image processing method, comprising: determining a focusing area and a non-focusing area of a preview image captured by a camera of a terminal; determining, by using a laser ranging sensor of the terminal, the distance between the spatial position indicated by each pixel point in the preview image and the camera to obtain N distance values, where the number of pixel points in the preview image is N and N is a positive integer; and blurring the non-focusing area according to the N distance values on the premise of keeping the focusing area clear. An embodiment of the invention also provides a terminal. The embodiments of the present invention can simplify the operation process of achieving background blurring, and because the non-focusing area is blurred according to distance, the resulting background-blurred image is more lifelike.
Description
Technical Field
The present invention relates to the field of electronic devices, and in particular, to an image processing method and a terminal.
Background
With the rapid development of information technology, terminals (such as mobile phones, tablet computers, and the like) are used more and more frequently, and more and more functions are integrated into them. Photographing has become an important selling point for every mobile phone manufacturer; how to improve the photographing effect and provide more differentiated functions has become the focus of competition among manufacturers.
At present, a terminal with a single camera cannot achieve both a clear close shot and a blurred long shot. Dual-camera phones, which are popular on the market, mainly aim at this effect: the two cameras respectively view areas A and B, and the difference in distance between areas A and B is estimated from the two views. If the user manually selects area A as the focusing area, the image of area A is kept clear and the corresponding area B is blurred, and the degree of blurring generally has to be controlled by the user through a slider. This approach achieves a clear close shot and a blurred long shot, but on one hand it is not intelligent enough, because the user has to select the focusing area manually and the selection of the focusing point itself is not accurate; on the other hand, only the relative distance between A and B is estimated, rather than the distance from area A or B to the lens, so the error is large.
Disclosure of Invention
The embodiment of the invention provides an image processing method and a terminal, aiming at simplifying the operation process of realizing background blurring and ensuring that the background blurring image effect is more vivid.
A first aspect of an embodiment of the present invention provides an image processing method, including:
determining a focusing area and a non-focusing area of a preview image shot by a camera of a terminal;
determining the distance between the space position indicated by each pixel point in the preview image and the camera by using a laser ranging sensor of the terminal to obtain N distance values, wherein the number of the pixel points in the preview image is N, and N is a positive integer;
and on the premise of keeping the focus area clear, fuzzifying the non-focus area according to the N distance values.
A second aspect of an embodiment of the present invention provides a terminal, including:
a first determining unit, configured to determine a focusing area and a non-focusing area of a preview image shot by a camera of the terminal;
a second determining unit, configured to determine, by using a laser ranging sensor of the terminal, a distance between a spatial position indicated by each pixel point in the preview image and the camera to obtain N distance values, where the number of pixel points in the preview image is N, and N is a positive integer;
and the processing unit is used for blurring the non-focusing area determined by the first determining unit according to the N distance values determined by the second determining unit on the premise of keeping the focusing area clear.
A third aspect of an embodiment of the present invention provides a terminal, including:
a processor and a memory;
wherein the processor is configured to call the executable program code in the memory, and perform part or all of the steps of the first aspect.
The embodiment of the invention has the following beneficial effects:
it can be seen that, according to the embodiment of the present invention, a focused region and a non-focused region of a preview image captured by a camera of a terminal are determined, a laser ranging sensor of the terminal is used to determine a distance between a spatial position indicated by each pixel point in the preview image and the camera, so as to obtain N distance values, the number of the pixel points in the preview image is N, which is a positive integer, and the non-focused region is fuzzified according to the N distance values on the premise of keeping the focused region clear. Therefore, after the focusing area and the non-focusing area are determined, the non-focusing area can be fuzzified according to the distance value between the terminal and each pixel point, so that the operation process of background blurring in the prior art is simplified, and the background blurring is performed on the non-focusing area according to the distance, so that the obtained background blurring image effect is more vivid.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flow chart of a first embodiment of an image processing method disclosed in the embodiment of the present invention;
FIG. 1a is a schematic diagram of independent area division of a preview image according to an embodiment of the present invention;
FIG. 1b is a schematic diagram of a laser ranging sensor according to an embodiment of the present disclosure;
FIG. 1c is a diagram illustrating a ranging plane of the laser ranging sensor of FIG. 1b according to an embodiment of the present disclosure;
FIG. 2 is a flowchart illustrating a second embodiment of an image processing method according to an embodiment of the present invention;
fig. 3a is a schematic structural diagram of a first embodiment of a terminal according to an embodiment of the present invention;
fig. 3b is a schematic structural diagram of a first determining unit of the terminal depicted in fig. 3a according to an embodiment of the present invention;
fig. 3c is a schematic structural diagram of a processing unit of the terminal depicted in fig. 3a according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a second embodiment of a terminal disclosed in the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of the invention and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The terminal described in the embodiments of the present invention may include a smart phone (such as an Android phone, an iOS phone, a Windows phone, etc.), a tablet computer, a palmtop computer, a notebook computer, a mobile internet device (MID, Mobile Internet Device), or a wearable device. These are merely examples and the list is not exhaustive; the terminal includes, but is not limited to, the devices listed above.
It should be noted that the principle of the laser ranging sensor is as follows: the sensor emits modulated near-infrared light, which is reflected when it encounters an object, and the distance between the photographed scene and the lens is measured by calculating the time difference or phase difference between emission and reflection. For example, a laser diode emits a laser pulse toward a target; the pulse is reflected by the target and scattered in all directions, and part of the scattered light returns to the sensor's receiver, where it is collected by an optical system and imaged on an avalanche photodiode. The avalanche photodiode is an optical sensor with internal amplification, so it can detect extremely weak optical signals; by recording and processing the time from the emission of the optical pulse to its return, the distance to the target can be determined.
Optionally, in the above manner, the distance between the spatial position indicated by each pixel point in the preview image and the camera may be measured, where each pixel point in the preview image corresponds to a certain position of a certain object in the shooting scene, and that position is referred to as the spatial position indicated by the pixel point. Assuming that the preview image contains N pixel points, N distance values can be obtained in this way, where each distance value represents the distance between the spatial position indicated by a pixel point and the camera, and N is a positive integer.
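As a rough illustration of the time-of-flight principle described above, the following Python sketch converts the emission-to-return time difference into a distance and builds a per-pixel distance map; the time values and image size are hypothetical and only serve to show the arithmetic.

```python
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(round_trip_time):
    """Distance to the target from the round-trip time of the laser pulse."""
    return SPEED_OF_LIGHT * round_trip_time / 2.0

# Hypothetical per-pixel round-trip times for a small preview image (H x W = N pixels).
round_trip_times = np.full((4, 6), 6.0e-9)   # background at roughly 0.9 m
round_trip_times[1:3, 2:5] = 2.0e-9          # a closer object at roughly 0.3 m

distance_map = tof_distance(round_trip_times)  # N distance values, one per pixel
print(distance_map)
```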
Fig. 1 is a flowchart illustrating an image processing method according to a first embodiment of the present invention. The image processing method described in the present embodiment includes the steps of:
101. and determining a focusing area and a non-focusing area of a preview image shot by a camera of the terminal.
Wherein, the camera of terminal can be at least one among visible light camera, infrared camera or the ultraviolet camera etc..
Optionally, a target area in a preview image captured by a camera of the terminal may be selected, the target area is used as a focusing area, and an area other than the focusing area is used as a non-focusing area, where the preview image includes a plurality of independent areas, and the target area is at least one of the independent areas. As shown in fig. 1a, the preview image is divided into 9 independent areas (independent areas formed by intersecting dotted lines in the figure), and the target area may be any one of the 9 independent areas of the preview image. Of course, the preview image may be divided into 2, 3, 4, etc. independent areas, which are not listed here.
Further optionally, the selecting the target area in the preview image captured by the camera of the terminal may include the following steps:
1) receiving a selection instruction;
2) taking an independent area indicated by the selection instruction in a preview image shot by a camera of the terminal as a focusing area, and taking an area outside the focusing area as a non-focusing area;
in step 1, a user may select at least one independent area from the plurality of independent areas, and after determining to select the at least one independent area, a selection instruction is generated, and in step 2, the at least one independent area may be used as a focusing area. For example, a selection instruction of the user for at least one of the 9 independent areas may be received, assuming that the independent area i is one of the 9 independent areas, and after the independent area is selected, the independent area i is used as a focusing area, and an area other than the focusing area is used as a non-focusing area.
Optionally, the laser ranging sensor has N receiving areas built in, each of which is independent and capable of receiving external laser energy. Meanwhile, the design of the two lenses ensures that the laser ranging sensor can receive N regional distance signals and that the N receiving areas of the sensor are consistent with the N regions into which the preview image of the camera is pre-divided. For example, as shown in fig. 1b and fig. 1c: in fig. 1b, when the camera is turned on, the shooting scene forms a preview image that is divided into 9 regions, i.e. the shooting scene is divided into 9 regions, and the laser ranging sensor can detect the distance value between each of the 9 regions and the camera. Specifically, the laser ranging sensor emits modulated near-infrared light (indicated by the dotted lines emitted from the laser ranging sensor in fig. 1b), which is reflected by objects in the shooting scene and received again by the sensor, and the distance of the photographed object is obtained by calculating the time difference or phase difference between emission and reflection. Assuming that the first detection unit detects a distance value first, that distance value is fed back to the terminal. In fig. 1c, a preview image is obtained from the shooting scene of the camera, and the detection regions of the laser ranging sensor (P1, P2, P3, P4, P5, P6, P7, P8 and P9) correspond one-to-one to the 9 regions into which the preview image is divided; the distance between the P1 detection region (corresponding to the first detection unit) and the camera is taken as the distance between the corresponding region in the preview image and the camera. In this way, a distance value between each detection region and the camera can be obtained, i.e. the distance between the spatial position of each of the N pre-divided regions in the preview image and the camera can be determined. The region corresponding to the minimum distance value can then be regarded as the focusing area. Of course, the distance between the spatial position of each pixel point in the preview image and the camera can also be measured by the laser ranging sensor.
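A minimal sketch of the region mapping described above, assuming a preview image pre-divided into a 3x3 grid and one averaged distance value per detection region P1-P9; the distance values are illustrative assumptions, and the region with the minimum distance is taken as the focusing area.

```python
import numpy as np

# Hypothetical averaged distances (in metres) reported by the 9 detection
# regions P1..P9 of the laser ranging sensor, laid out as the 3x3 grid that
# matches the pre-divided preview image.
region_distances = np.array([
    [2.4, 2.5, 2.6],
    [1.9, 0.8, 2.1],   # the centre region (P5) sees the nearest object
    [2.2, 2.3, 2.4],
])

# The focusing area is the pre-divided region with the minimum distance value.
focus_idx = np.unravel_index(np.argmin(region_distances), region_distances.shape)
print("focusing region (row, col):", focus_idx)               # -> (1, 1), i.e. P5
print("distance of focusing region:", region_distances[focus_idx])
```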
Further, selecting the target area in the preview image captured by the camera of the terminal may be:
and taking the designated area in the preview image as a focusing area, and taking the area outside the focusing area as a non-focusing area. For example, a certain region in the preview screen may be designated as a designated region in advance, and then a region other than the designated region in the preview image is an unfocused region.
Optionally, step 101 may further include the steps of:
3) performing target detection on a preview image shot by a camera of the terminal;
4) and taking the area where the detected target is located as a focusing area.
The target in step 3) may be a person, a car, a cat, a dog, etc., which is not limited here. Target detection may use an infrared sensor for temperature detection, or an image recognition algorithm. For example, when an infrared camera is used for shooting, the target in the preview image can be identified according to the distribution of different temperatures.
Further, the step 3 may include the steps of:
31) performing binarization processing on a preview image shot by a camera of the terminal to obtain a binarization preview image;
32) extracting the outline of the binary preview image;
33) and carrying out image recognition on the contour so as to recognize the target in the contour.
In the above steps, the preview image captured by the camera is binarized, and the binarized preview image obtained after binarization has only two gray levels, i.e. each pixel value is either 0 or 255. The threshold for binarization may be the average brightness of the preview image or the average brightness of the focusing area; optionally, several pixel values may be sampled from each of the plurality of independent areas, the mean of all sampled pixel values is calculated, and that mean is used as the binarization threshold.
Optionally, when the preview image is a color image, a luminance component image of the preview image may be extracted, and the luminance component image is subject to target detection according to steps 31, 32, and 33, and first, binarization processing is performed, then, contour extraction is performed, and finally, image recognition is performed.
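Steps 31)-33) could be sketched with OpenCV roughly as follows. The mean-based threshold and the minimum contour area are assumptions for illustration, and the recognition step is left as a placeholder, since the embodiment does not fix a particular recognition algorithm.

```python
import cv2
import numpy as np

def detect_target_regions(preview_bgr: np.ndarray) -> list:
    """Binarize the preview image, extract contours, and return candidate
    bounding boxes for recognition (steps 31-33)."""
    gray = cv2.cvtColor(preview_bgr, cv2.COLOR_BGR2GRAY)

    # Step 31: binarization, using the mean brightness as the threshold.
    threshold = float(gray.mean())
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)

    # Step 32: contour extraction on the binarized preview image.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    # Step 33: pass each contour region to a recognizer (person, car, cat, ...).
    # A real implementation would run an image-recognition model here; this
    # sketch only returns the bounding boxes of sufficiently large contours.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 100]

# Usage with a synthetic preview image containing one bright object.
preview = np.zeros((120, 160, 3), dtype=np.uint8)
cv2.rectangle(preview, (60, 40), (110, 90), (200, 200, 200), -1)
print(detect_target_regions(preview))
```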
Optionally, when the camera of the terminal captures an image, the image may be focused, and the focus area generated during the focusing process may be used as the focusing area.
102. And determining the distance between the space position indicated by each pixel point in the preview image and the camera by using a laser ranging sensor of the terminal to obtain N distance values, wherein the number of the pixel points in the preview image is N, and N is a positive integer.
103. And on the premise of keeping the focus area clear, fuzzifying the non-focus area according to the N distance values.
Optionally, the focusing area may be kept clear while the non-focusing area is blurred according to the N distance values, so that the resulting image is clear in the focusing area and blurred in the non-focusing area. In the embodiment of the present invention, the blurring may mainly use a Gaussian blur algorithm; of course, other algorithms may also be used, which is not specifically limited here.
It should be noted that keeping the focusing area clear can be understood as performing no processing on the focusing area, performing some image enhancement on it, or applying some beautifying effect to it. The image enhancement may include: histogram equalization, gray-scale stretching, white balance processing, color temperature adjustment, image restoration, and the like, which are not limited here. In short, keeping the focusing area clear means ensuring that the sharpness of the focusing area is not lower than that of the unprocessed focusing area.
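A small sketch of what "keeping the focusing area clear" can mean in practice, assuming OpenCV: either leave the focusing area untouched, or optionally apply histogram equalization on its luminance channel as one of the enhancement options listed above.

```python
import cv2
import numpy as np

def keep_focus_clear(focus_region_bgr: np.ndarray, enhance: bool = False) -> np.ndarray:
    """Return the focusing area unchanged, or optionally enhanced so that its
    sharpness is not lower than that of the unprocessed region."""
    if not enhance:
        return focus_region_bgr                     # leave the focusing area untouched
    # One possible enhancement: histogram equalization on the luminance channel.
    ycrcb = cv2.cvtColor(focus_region_bgr, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
```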
Further optionally, under the condition that the focused area is kept clear, the non-focused area may be subjected to blurring processing, which specifically may be:
firstly, determining the mean value of all distance values corresponding to a focusing area to obtain a first mean value;
secondly, respectively determining the mean value of all distance values corresponding to each independent area in the non-focusing area to obtain a plurality of second mean values;
then, determining a fuzzy coefficient of each independent area in the non-focusing area according to the first average value and the plurality of second average values;
and finally, blurring each independent area according to the blurring coefficient of each independent area.
Specifically, the mean of the distance values corresponding to each of the plurality of independent areas is calculated, and the ratio of the mean distance of each independent area in the non-focusing area to the mean distance of the focusing area is used as its blur coefficient. Assuming that the mean distance of the focusing area is A and the mean distance of an independent area is B, the blur coefficient is B/A; the larger B is, the larger the blur coefficient and the higher the degree of blurring, and the smaller B is, the smaller the blur coefficient and the lower the degree of blurring.
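A sketch of the region-wise blurring described above; the mapping of the blur coefficient B/A to the Gaussian kernel's sigma is an assumption for illustration, since the embodiment does not prescribe a specific mapping.

```python
import cv2
import numpy as np

def blur_by_region(image: np.ndarray, regions: dict, distance_map: np.ndarray,
                   focus_key: str) -> np.ndarray:
    """Blur each non-focusing region with a strength proportional to its blur
    coefficient B/A, where A is the mean distance of the focusing area and B
    the mean distance of that region.  `regions` maps a region name to a
    (y0, y1, x0, x1) slice of the preview image."""
    out = image.copy()
    fy0, fy1, fx0, fx1 = regions[focus_key]
    focus_mean = distance_map[fy0:fy1, fx0:fx1].mean()        # first mean value A

    for name, (y0, y1, x0, x1) in regions.items():
        if name == focus_key:
            continue                                          # keep the focusing area clear
        region_mean = distance_map[y0:y1, x0:x1].mean()       # a second mean value B
        coeff = region_mean / focus_mean                      # blur coefficient B/A
        sigma = max(coeff, 0.1)                               # assumed mapping of B/A to sigma
        patch = np.ascontiguousarray(image[y0:y1, x0:x1])
        out[y0:y1, x0:x1] = cv2.GaussianBlur(patch, (0, 0), sigma)
    return out
```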
Optionally, of course, the focusing area may also be subjected to blurring processing, and the non-focusing area is kept clear, so that the effects of clear long-range view and fuzzy short-range view can be obtained.
It can be seen that, according to the embodiment of the present invention, a focused region and a non-focused region of a preview image captured by a camera of a terminal are determined, a laser ranging sensor of the terminal is used to determine a distance between a spatial position indicated by each pixel point in the preview image and the camera, so as to obtain N distance values, the number of the pixel points in the preview image is N, which is a positive integer, and the non-focused region is fuzzified according to the N distance values on the premise of keeping the focused region clear. Therefore, after the focusing area and the non-focusing area are determined, the non-focusing area can be fuzzified according to the distance value between the terminal and each pixel point, so that the operation process of background blurring in the prior art is simplified, and the background blurring is performed on the non-focusing area according to the distance, so that the obtained background blurring image effect is more vivid.
In accordance with the foregoing embodiments, please refer to fig. 2, which is a flowchart illustrating an image processing method according to a second embodiment of the present invention. The image processing method described in the present embodiment includes the steps of:
201. and determining a focusing area and a non-focusing area of a preview image shot by a camera of the terminal.
202. And determining the distance between the space position indicated by each pixel point in the preview image and the camera by using a laser ranging sensor of the terminal to obtain N distance values, wherein the number of the pixel points in the preview image is N, and N is a positive integer.
203. And determining an average distance value according to M distance values, wherein the M distance values are the distance values between all or part of pixel points in the focusing area and the camera, and M is a positive integer and is smaller than N.
Optionally, an average operation may be performed on all distance values in the M distance values, assuming that the focusing region includes J pixel points, where the M distance values may be distance values between some pixel points in the focusing region and the camera, and then M is smaller than J, and certainly, the M distance values may also be distance values between all pixel points in the focusing region and the camera, and then M is equal to J. It can be seen that M is equal to or less than J, which is less than N.
204. And calculating the difference between the distance value corresponding to each pixel point in the non-focusing area and the average distance value to obtain a plurality of difference values.
Optionally, the difference between the distance value corresponding to each pixel point in the non-focusing area and the average distance value may be calculated; the resulting difference may be greater than 0, equal to 0, or smaller than 0, and the different differences may be used as the basis for how to blur the pixel points they correspond to.
205. And performing fuzzification processing on the non-focusing area according to the plurality of difference values.
Step 205 may include two different fuzzification processing manners, which are specifically as follows:
in the first blurring processing method, the absolute values of the plurality of difference values are calculated to obtain a plurality of absolute values, and the non-focus area is blurred according to the plurality of absolute values, where a large absolute value results in a large blurring degree, and a small absolute value results in a small blurring degree.
Further, the plurality of absolute values may of course be divided into multiple levels from small to large. If the levels include A, B, and C, the Gaussian coefficient corresponding to level A is a, that corresponding to level B is b, and that corresponding to level C is c; the absolute values in level A are smaller than those in level B, and the absolute values in level B are smaller than those in level C, so the degrees of blurring, from small to large, satisfy a < b < c. Each pixel point whose absolute value falls in level A is blurred with Gaussian coefficient a, each pixel point in level B with Gaussian coefficient b, and each pixel point in level C with Gaussian coefficient c.
In the second fuzzification processing manner, the plurality of difference values may be evaluated: when a difference value is less than or equal to 0, no blurring is performed on the corresponding pixel point, because the spatial position corresponding to that pixel point is considered close-range and should stay clear. If the difference is greater than 0, the spatial position corresponding to the pixel point is considered long-range and is blurred, where the larger the difference, the higher the degree of blurring, and the smaller the difference, the lower the degree of blurring. Of course, the second manner may also adopt the hierarchical blurring of the first manner, namely: dividing the differences greater than 0 into multiple levels and blurring each level with the corresponding Gaussian coefficient.
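The level-based blurring of the second manner could be sketched as follows; the level boundaries and the Gaussian sigmas (playing the role of the coefficients a < b < c) are illustrative assumptions.

```python
import cv2
import numpy as np

def blur_by_difference(image: np.ndarray, distance_map: np.ndarray,
                       focus_mask: np.ndarray) -> np.ndarray:
    """Blur pixels whose distance exceeds the focusing area's average distance,
    with a blur strength that grows with the difference (levels A < B < C)."""
    avg_focus = distance_map[focus_mask].mean()     # average distance value of the focusing area
    diff = distance_map - avg_focus                 # one difference per pixel

    # Assumed level boundaries (metres) and their Gaussian sigmas a < b < c.
    levels = [(0.0, 0.5, 1.0), (0.5, 1.5, 2.5), (1.5, np.inf, 5.0)]

    out = image.copy()
    for low, high, sigma in levels:
        mask = (~focus_mask) & (diff > low) & (diff <= high)
        blurred = cv2.GaussianBlur(image, (0, 0), sigma)
        out[mask] = blurred[mask]
    # Pixels with diff <= 0 are treated as close-range and left unblurred.
    return out
```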
The specific description of other steps in the embodiment described in fig. 2 may refer to the specific description of the image processing method described in fig. 1, and is not repeated here.
The following is an apparatus for implementing the image processing method described in fig. 1 or fig. 2, and specifically includes the following:
please refer to fig. 3a, which is a schematic structural diagram of a terminal according to a first embodiment of the present invention. The terminal described in this embodiment includes a first determining unit 301, a second determining unit 302, and a processing unit 303, as follows:
a first determining unit 301, configured to determine a focused region and a non-focused region of a preview image captured by a camera of a terminal;
a second determining unit 302, configured to determine, by using a laser ranging sensor of the terminal, a distance between a spatial position indicated by each pixel point in the preview image and the camera to obtain N distance values, where the number of pixel points in the preview image is N, and N is a positive integer;
a processing unit 303, configured to perform blurring processing on the non-focus area determined by the first determining unit 301 according to the N distance values determined by the second determining unit 302 on the premise that the focus area is kept clear.
Optionally, the first determining unit 301 is specifically configured to:
selecting a target area in a preview image shot by a camera of a terminal, taking the target area as a focusing area, and taking an area outside the focusing area as a non-focusing area, wherein the preview image comprises a plurality of independent areas, and the target area is at least one of the independent areas.
Optionally, the first determining unit 301 is further specifically configured to:
receiving a selection instruction, taking an independent area indicated by the selection instruction in a preview image shot by a camera of a terminal as a focusing area, and taking an area outside the focusing area as a non-focusing area;
or,
and taking the designated area in the preview image as a focusing area, and taking the area outside the focusing area as a non-focusing area.
Optionally, as shown in fig. 3b, the first determining unit 301 of the terminal depicted in fig. 3a includes:
the detection module 3011 is configured to perform target detection on a preview image captured by a camera of the terminal;
a first determining module 3012, configured to use the area where the detected target is located as a focusing area.
Further, the detection module 3011 includes:
a binarization processing module (not shown in the figure) for performing binarization processing on a preview image shot by a camera of the terminal to obtain a binarization preview image;
an extraction module (not shown in the figure) for extracting the outline of the binary preview image;
and the identification module (not shown) is used for carrying out image identification on the outline so as to identify the target in the outline.
Optionally, as shown in fig. 3c, the processing unit 303 of the terminal depicted in fig. 3a includes:
a second determining module 3031, configured to determine an average distance value according to M distance values, where the M distance values are distance values between all or some pixel points in the focus area and the camera, and M is a positive integer and is smaller than N;
a calculating module 3032, configured to calculate a difference between a distance value corresponding to each pixel point in the non-focusing region and the average distance value, so as to obtain a plurality of difference values;
a blurring processing module 3033, configured to perform blurring processing on the non-focus region according to the plurality of difference values.
It can be seen that, with the terminal described in the embodiment of the present invention, a focused region and a non-focused region of a preview image captured by a camera of the terminal can be determined, a laser ranging sensor of the terminal is used to determine a distance between a spatial position indicated by each pixel point in the preview image and the camera, so as to obtain N distance values, the number of the pixel points in the preview image is N, which is a positive integer, and the non-focused region is fuzzified according to the N distance values on the premise of keeping the focused region clear. Therefore, after the focusing area and the non-focusing area are determined, the non-focusing area can be fuzzified according to the distance value between the terminal and each pixel point, so that the operation process of background blurring in the prior art is simplified, and the background blurring is performed on the non-focusing area according to the distance, so that the obtained background blurring image effect is more vivid.
Fig. 4 is a schematic structural diagram of a terminal according to a second embodiment of the present invention. The terminal described in this embodiment includes: at least one input device 1000; at least one output device 2000; at least one processor 3000, e.g., a CPU; and a memory 4000, the input device 1000, the output device 2000, the processor 3000, and the memory 4000 being connected by a bus 5000.
The input device 1000 may be a touch panel, a physical button or a mouse, a fingerprint recognition module, and the like.
The output device 2000 may be a display screen.
The memory 4000 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 4000 is used for storing a set of program codes, and the input device 1000, the output device 2000 and the processor 3000 are used for calling the program codes stored in the memory 4000 to execute the following operations:
the processor 3000 is configured to:
determining a focusing area and a non-focusing area of a preview image shot by a camera of a terminal;
determining the distance between the space position indicated by each pixel point in the preview image and the camera by using a laser ranging sensor of the terminal to obtain N distance values, wherein the number of the pixel points in the preview image is N, and N is a positive integer;
and on the premise of keeping the focus area clear, fuzzifying the non-focus area according to the N distance values.
Optionally, the processor 3000 determines a focused region and a non-focused region of a preview image captured by a camera of the terminal, including:
selecting a target area in a preview image shot by a camera of a terminal, taking the target area as a focusing area, and taking an area outside the focusing area as a non-focusing area, wherein the preview image comprises a plurality of independent areas, and the target area is at least one of the independent areas.
Optionally, the processor 3000 selects a target area in a preview image captured by a camera of the terminal, where the target area includes:
receiving a selection instruction;
taking an independent area indicated by the selection instruction in a preview image shot by a camera of the terminal as a focusing area, and taking an area outside the focusing area as a non-focusing area;
or,
and taking the designated area in the preview image as a focusing area, and taking the area outside the focusing area as a non-focusing area.
Optionally, the processor 3000 determines a focusing area of a preview image captured by a camera of the terminal, including:
performing target detection on a preview image shot by a camera of the terminal;
and taking the area where the detected target is located as a focusing area.
Optionally, the processor 3000 performs target detection on a preview image captured by a camera of the terminal, where the target detection includes:
performing binarization processing on a preview image shot by a camera of a terminal to obtain a binarized preview image;
extracting the outline of the binaryzation preview image;
and carrying out image recognition on the contour so as to identify the target in the contour.
Optionally, the processor 3000 performs blurring processing on the out-of-focus area according to the N distance values, including:
determining an average distance value according to M distance values, wherein the M distance values are the distance values between all or part of pixel points in the focusing area and the camera, and M is a positive integer and is smaller than N;
calculating the difference between the distance value corresponding to each pixel point in the non-focusing area and the average distance value to obtain a plurality of difference values;
and performing fuzzification processing on the non-focusing area according to the plurality of difference values.
An embodiment of the present invention further provides a computer storage medium, where the computer storage medium may store a program, and the program includes some or all of the steps of any one of the image processing methods described in the above method embodiments when executed.
While the invention has been described in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a review of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the word "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus (device), or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein. A computer program stored/distributed on a suitable medium supplied together with or as part of other hardware, may also take other distributed forms, such as via the Internet or other wired or wireless telecommunication systems.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the invention has been described in conjunction with specific features and embodiments thereof, it will be evident that various modifications and combinations can be made thereto without departing from the spirit and scope of the invention. Accordingly, the specification and figures are merely exemplary of the invention as defined in the appended claims and are intended to cover any and all modifications, variations, combinations, or equivalents within the scope of the invention. It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (13)
1. An image processing method, comprising:
determining a focusing area and a non-focusing area of a preview image shot by a camera of a terminal;
determining the distance between the space position indicated by each pixel point in the preview image and the camera by using a laser ranging sensor of the terminal to obtain N distance values, wherein the number of the pixel points in the preview image is N, and N is a positive integer;
on the premise of keeping the focus area clear, fuzzifying the non-focus area according to the N distance values, wherein the fuzzification processing comprises the following steps: determining the mean value of all distance values corresponding to the focusing area to obtain a first mean value; respectively determining the mean value of all distance values corresponding to each independent area in the non-focusing area to obtain a plurality of second mean values; then, determining a fuzzy coefficient of each independent area in the non-focusing area according to the first average value and the plurality of second average values; and performing fuzzification processing on each independent area according to the fuzzy coefficient of each independent area.
2. The method of claim 1, wherein the determining the in-focus area and the out-of-focus area of the preview image captured by the camera of the terminal comprises:
selecting a target area in a preview image shot by a camera of a terminal, taking the target area as a focusing area, and taking an area outside the focusing area as a non-focusing area, wherein the preview image comprises a plurality of independent areas, and the target area is at least one of the independent areas.
3. The method according to claim 2, wherein selecting the target area in the preview image captured by the camera of the terminal comprises:
receiving a selection instruction;
taking an independent area indicated by the selection instruction in a preview image shot by a camera of the terminal as a focusing area, and taking an area outside the focusing area as a non-focusing area;
or,
and taking the designated area in the preview image as a focusing area, and taking the area outside the focusing area as a non-focusing area.
4. The method of claim 1, wherein the determining a focus area of a preview image taken by a camera of the terminal comprises:
performing target detection on a preview image shot by a camera of the terminal;
and taking the area where the detected target is located as a focusing area.
5. The method according to claim 4, wherein the object detection of the preview image taken by the camera of the terminal comprises:
performing binarization processing on a preview image shot by a camera of a terminal to obtain a binarized preview image;
extracting the outline of the binaryzation preview image;
and carrying out image recognition on the contour so as to identify the target in the contour.
6. The method according to any one of claims 1 to 5, wherein the blurring the out-of-focus region according to the N distance values further comprises:
determining an average distance value according to M distance values, wherein the M distance values are the distance values between all or part of pixel points in the focusing area and the camera, and M is a positive integer and is smaller than N;
calculating the difference between the distance value corresponding to each pixel point in the non-focusing area and the average distance value to obtain a plurality of difference values;
and performing fuzzification processing on the non-focusing area according to the plurality of difference values.
7. A terminal, comprising:
a first determining unit, configured to determine a focusing area and a non-focusing area of a preview image shot by a camera of the terminal;
a second determining unit, configured to determine, by using a laser ranging sensor of the terminal, a distance between a spatial position indicated by each pixel point in the preview image and the camera to obtain N distance values, where the number of pixel points in the preview image is N, and N is a positive integer;
a processing unit, configured to perform blurring processing on the non-focusing area determined by the first determining unit according to the N distance values determined by the second determining unit on the premise that the focusing area is kept clear, including: determining the mean value of all distance values corresponding to the focusing area to obtain a first mean value; respectively determining the mean value of all distance values corresponding to each independent area in the non-focusing area to obtain a plurality of second mean values; then, determining a fuzzy coefficient of each independent area in the non-focusing area according to the first average value and the plurality of second average values; and performing fuzzification processing on each independent area according to the fuzzy coefficient of each independent area.
8. The terminal according to claim 7, wherein the first determining unit is specifically configured to:
selecting a target area in a preview image shot by a camera of a terminal, taking the target area as a focusing area, and taking an area outside the focusing area as a non-focusing area, wherein the preview image comprises a plurality of independent areas, and the target area is at least one of the independent areas.
9. The terminal according to claim 8, wherein the first determining unit is further specifically configured to:
receiving a selection instruction, taking an independent area indicated by the selection instruction in a preview image shot by a camera of a terminal as a focusing area, and taking an area outside the focusing area as a non-focusing area;
or,
and taking the designated area in the preview image as a focusing area, and taking the area outside the focusing area as a non-focusing area.
10. The terminal according to claim 7, wherein the first determining unit comprises:
the detection module is used for carrying out target detection on a preview image shot by a camera of the terminal;
and the first determining module is used for taking the area where the detected target is located as a focusing area.
11. The terminal of claim 10, wherein the detection module comprises:
the binarization processing module is used for carrying out binarization processing on a preview image shot by a camera of the terminal to obtain a binarization preview image;
the extraction module is used for extracting the outline of the binaryzation preview image;
and the identification module is used for carrying out image identification on the outline so as to identify the target in the outline.
12. The terminal according to any of claims 7 to 11, wherein the processing unit further comprises:
a second determining module, configured to determine an average distance value according to M distance values, where the M distance values are distance values between all or some pixel points in the focus area and the camera, and M is a positive integer and is smaller than N;
the calculation module is used for calculating the difference between the distance value corresponding to each pixel point in the non-focusing area and the average distance value to obtain a plurality of difference values;
and the fuzzification processing module is used for fuzzifying the non-focusing area according to the plurality of difference values.
13. A terminal, comprising:
a processor and a memory; wherein the processor performs the method of any one of claims 1-6 by calling code or instructions in the memory.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610503757.1A CN105933589B (en) | 2016-06-28 | 2016-06-28 | A kind of image processing method and terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610503757.1A CN105933589B (en) | 2016-06-28 | 2016-06-28 | A kind of image processing method and terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105933589A CN105933589A (en) | 2016-09-07 |
CN105933589B true CN105933589B (en) | 2019-05-28 |
Family
ID=56828711
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610503757.1A Active CN105933589B (en) | 2016-06-28 | 2016-06-28 | A kind of image processing method and terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105933589B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110881103A (en) * | 2019-09-19 | 2020-03-13 | Oppo广东移动通信有限公司 | Focusing control method and device, electronic equipment and computer readable storage medium |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106485790A (en) * | 2016-09-30 | 2017-03-08 | 珠海市魅族科技有限公司 | Method and device that a kind of picture shows |
CN106454123B (en) * | 2016-11-25 | 2019-02-22 | 盐城丝凯文化传播有限公司 | A kind of method and mobile terminal of focusing of taking pictures |
CN106775238A (en) * | 2016-12-14 | 2017-05-31 | 深圳市金立通信设备有限公司 | A kind of photographic method and terminal |
CN106657782B (en) * | 2016-12-21 | 2020-02-18 | 努比亚技术有限公司 | Picture processing method and terminal |
CN106993091B (en) * | 2017-03-29 | 2020-05-12 | 维沃移动通信有限公司 | Image blurring method and mobile terminal |
CN107426493A (en) * | 2017-05-23 | 2017-12-01 | 深圳市金立通信设备有限公司 | A kind of image pickup method and terminal for blurring background |
CN108933890A (en) * | 2017-05-24 | 2018-12-04 | 中兴通讯股份有限公司 | A kind of background-blurring method, equipment and terminal |
CN107395965B (en) * | 2017-07-14 | 2019-11-29 | 维沃移动通信有限公司 | A kind of image processing method and mobile terminal |
CN107277372B (en) * | 2017-07-27 | 2021-04-23 | Oppo广东移动通信有限公司 | Focusing method, focusing device, computer readable storage medium and mobile terminal |
CN107295262B (en) * | 2017-07-28 | 2021-03-26 | 努比亚技术有限公司 | Image processing method, mobile terminal and computer storage medium |
CN107592466B (en) * | 2017-10-13 | 2020-04-24 | 维沃移动通信有限公司 | Photographing method and mobile terminal |
CN108174085A (en) * | 2017-12-19 | 2018-06-15 | 信利光电股份有限公司 | A kind of image pickup method of multi-cam, filming apparatus, mobile terminal and readable storage medium storing program for executing |
CN109696788B (en) * | 2019-01-08 | 2021-12-14 | 武汉精立电子技术有限公司 | Quick automatic focusing method based on display panel |
CN113126111B (en) * | 2019-12-30 | 2024-02-09 | Oppo广东移动通信有限公司 | Time-of-flight module and electronic device |
CN111182211B (en) * | 2019-12-31 | 2021-09-24 | 维沃移动通信有限公司 | Shooting method, image processing method and electronic equipment |
CN111246092B (en) * | 2020-01-16 | 2021-07-20 | Oppo广东移动通信有限公司 | Image processing method, image processing device, storage medium and electronic equipment |
CN113138387B (en) * | 2020-01-17 | 2024-03-08 | 北京小米移动软件有限公司 | Image acquisition method and device, mobile terminal and storage medium |
CN112733346B (en) * | 2020-12-31 | 2022-08-09 | 博迈科海洋工程股份有限公司 | Method for planning delightful area in electrical operation room |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101764925A (en) * | 2008-12-25 | 2010-06-30 | 华晶科技股份有限公司 | Simulation method for shallow field depth of digital image |
CN101933040A (en) * | 2007-06-06 | 2010-12-29 | 索尼株式会社 | Image processing device, image processing method, and image processing program |
CN105025226A (en) * | 2015-07-07 | 2015-11-04 | 广东欧珀移动通信有限公司 | Shooting control method and user terminal |
CN105227838A (en) * | 2015-09-28 | 2016-01-06 | 广东欧珀移动通信有限公司 | A kind of image processing method and mobile terminal |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101472064A (en) * | 2007-12-25 | 2009-07-01 | 鸿富锦精密工业(深圳)有限公司 | Filming system and method for processing scene depth |
- 2016-06-28: application CN201610503757.1A filed in China (CN); granted as patent CN105933589B, status Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101933040A (en) * | 2007-06-06 | 2010-12-29 | 索尼株式会社 | Image processing device, image processing method, and image processing program |
CN101764925A (en) * | 2008-12-25 | 2010-06-30 | 华晶科技股份有限公司 | Simulation method for shallow field depth of digital image |
CN105025226A (en) * | 2015-07-07 | 2015-11-04 | 广东欧珀移动通信有限公司 | Shooting control method and user terminal |
CN105227838A (en) * | 2015-09-28 | 2016-01-06 | 广东欧珀移动通信有限公司 | A kind of image processing method and mobile terminal |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110881103A (en) * | 2019-09-19 | 2020-03-13 | Oppo广东移动通信有限公司 | Focusing control method and device, electronic equipment and computer readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN105933589A (en) | 2016-09-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105933589B (en) | A kind of image processing method and terminal | |
US9031315B2 (en) | Information extraction method, information extraction device, program, registration device, and verification device | |
CN111523438B (en) | Living body identification method, terminal equipment and electronic equipment | |
JP5228439B2 (en) | Operation input device | |
US10839537B2 (en) | Depth maps generated from a single sensor | |
CN102843509B (en) | Image processing device and image processing method | |
US8538075B2 (en) | Classifying pixels for target tracking, apparatus and method | |
TWI540462B (en) | Gesture recognition method and electronic apparatus using the same | |
CN108733208A (en) | The I-goal of smart machine determines method and apparatus | |
JP6553624B2 (en) | Measurement equipment and system | |
US20110304746A1 (en) | Image capturing device, operator monitoring device, method for measuring distance to face, and program | |
KR20200081450A (en) | Biometric detection methods, devices and systems, electronic devices and storage media | |
CN112534474A (en) | Depth acquisition device, depth acquisition method, and program | |
CN111598065B (en) | Depth image acquisition method, living body identification method, apparatus, circuit, and medium | |
CN110572636B (en) | Camera contamination detection method and device, storage medium and electronic equipment | |
KR101796027B1 (en) | Method and computing device for gender recognition based on facial image | |
CN111008954A (en) | Information processing method and device, electronic equipment and storage medium | |
US8503723B2 (en) | Histogram-based object tracking apparatus and method | |
KR20080032746A (en) | Device and method for motion recognition | |
CN110677580A (en) | Shooting method, shooting device, storage medium and terminal | |
US9104937B2 (en) | Apparatus and method for recognizing image with increased image recognition rate | |
CN106101542B (en) | A kind of image processing method and terminal | |
JP2009193130A (en) | Vehicle surrounding monitoring device, vehicle, program for vehicle surrounding monitoring and vehicle surrounding monitoring method | |
CN113645404A (en) | Automatic focusing method, system, intelligent device, computer device and computer readable storage medium | |
KR101706674B1 (en) | Method and computing device for gender recognition based on long distance visible light image and thermal image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information |
Address after: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong Applicant after: OPPO Guangdong Mobile Communications Co., Ltd. Address before: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong Applicant before: Guangdong OPPO Mobile Communications Co., Ltd. |
|
CB02 | Change of applicant information | ||
GR01 | Patent grant | ||
GR01 | Patent grant |