CN109189289B - Method and device for generating icon based on screen capture image - Google Patents
- Publication number
- CN109189289B, CN201811020860.6A
- Authority
- CN
- China
- Prior art keywords
- face area
- face
- picture
- position information
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04817—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
Abstract
The application discloses a method and a device for generating an icon based on a screen-capture image, comprising the following steps: receiving a screen-capture instruction and capturing the image displayed on the current interface to obtain a screenshot image; sending the screenshot image to a face recognition server for face recognition; receiving the face recognition result returned by the face recognition server, the result comprising each person's name and face-region position information; cropping the face portion of the screenshot image as an icon according to the face-region position information; and displaying each person's name together with the corresponding icon. Through the icon cropped from the face portion of the screenshot and the name shown below it, the user can immediately identify the people in the picture, which improves the recognizability of the recognition result and improves the user experience.
Description
Technical Field
The present application relates to the field of information technology, and in particular to a method and an apparatus for generating an icon based on a screen-capture image.
Background
When a user watching television wants to learn about a person appearing in the current picture, the user can issue a screenshot instruction from the remote control (for example, by pressing a dedicated screenshot key, a key combination, or a voice key and asking "who is this person?"). After receiving the instruction, the television terminal performs the screenshot operation and then image recognition; information about the person (including name, biography, news, related videos, and the like) is retrieved from a database or the cloud and displayed on the television terminal. Operations such as saving or sharing the screenshot can naturally also be performed.
As shown in fig. 1, in the prior art the pictures (avatar images) in the information retrieved from the cloud come from the network, and when the screenshot contains many people there are correspondingly many retrieved avatar images. If the user does not know the people in the screenshot, the returned avatar images differ from the faces in the screenshot, so the user may be unable to link the people in the screenshot to the returned network pictures and often has to compare them carefully to work out who is who. The goal of letting the user identify the desired person at a glance is therefore not achieved: the recognition result has low recognizability, and the user experience is poor.
Disclosure of Invention
The embodiments of the application provide a method and a device for generating an icon based on a screenshot image. The picture the user wants to learn about is captured to obtain a screenshot image; face recognition is performed on the screenshot image; the face portion of the screenshot is cropped as an icon according to the face-region position information in the recognition result; and a one-to-one correspondence between each person's name and icon is established and displayed. Through the icon cropped from the face portion of the screenshot and the name below it, the user can immediately identify the people in the picture, which improves the recognizability of the recognition result and improves the user experience.
On the terminal side, a first aspect of the embodiments of the present application provides a method for generating an icon from a screenshot image, including:
receiving a screen-capture instruction and capturing the image displayed on the current interface to obtain a screenshot image;
sending the screenshot image to a face recognition server for face recognition;
receiving the face recognition result returned by the face recognition server, the result comprising each person's name and face-region position information;
cropping the face portion of the screenshot image as an icon according to the face-region position information;
and displaying each person's name together with the corresponding icon.
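As a minimal sketch, the cropping and pairing steps above can be expressed as follows. The function and data names (`generate_icons`, `recognition_results`) are illustrative stand-ins, not APIs defined by the application, and the recognition result is stubbed rather than obtained from a real face recognition server.

```python
# Hypothetical sketch: crop each recognized face region (x, y, w, h) out of
# the screenshot and pair it with the person's name. All names here are
# illustrative, not part of the patent text.

def generate_icons(screenshot, recognition_results):
    """Map each recognized person's name to an icon cropped from the
    screenshot, which is modeled as a 2D list of pixel values."""
    icons = {}
    for result in recognition_results:
        x, y, w, h = result["location"]
        # Keep rows y..y+h and, within them, columns x..x+w.
        icons[result["name"]] = [row[x:x + w] for row in screenshot[y:y + h]]
    return icons

# A 4x4 "screenshot" with one 2x2 face region whose top-left corner is (1, 1).
screenshot = [[0, 0, 0, 0],
              [0, 1, 2, 0],
              [0, 3, 4, 0],
              [0, 0, 0, 0]]
results = [{"name": "Person 1", "location": (1, 1, 2, 2)}]
icons = generate_icons(screenshot, results)
```

The display step then amounts to rendering each name/icon pair.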
Optionally, the method further includes: when the face portion of the screenshot image has not been cropped, and the face recognition result returned by the face recognition server contains an avatar picture corresponding to the person, using that avatar picture as the icon.
Optionally, the method further includes: when the face portion of the screenshot image has not been cropped, and the face recognition result returned by the face recognition server does not contain an avatar picture corresponding to the person, using a placeholder image as the icon.
Optionally, the method further includes: first displaying the avatar picture or the placeholder image as the icon, and then, once the face portion of the screenshot image has been cropped, displaying the cropped face picture as the icon.
On the terminal side, a second aspect of the embodiments of the present application provides an apparatus for generating an icon from a screenshot image, including:
a screen capture unit, configured to receive a screen-capture instruction and capture the image displayed on the current interface to obtain a screenshot image;
a sending unit, configured to send the screenshot image to a face recognition server for face recognition;
a receiving unit, configured to receive the face recognition result returned by the face recognition server, the result comprising each person's name and face-region position information;
an icon generation unit, configured to crop the face portion of the screenshot image as an icon according to the face-region position information;
a display unit, configured to display each person's name together with the corresponding icon.
Optionally, the icon generation unit is further configured to, when the face portion of the screenshot image has not been cropped and the face recognition result returned by the face recognition server contains an avatar picture corresponding to the person, use that avatar picture as the icon.
Optionally, the icon generation unit is further configured to, when the face portion of the screenshot image has not been cropped and the face recognition result returned by the face recognition server does not contain an avatar picture corresponding to the person, use a placeholder image as the icon.
Optionally, the icon generation unit is further configured to first display the avatar picture or the placeholder image as the icon, and then, once the face portion of the screenshot image has been cropped, display the cropped face picture as the icon.
A third aspect of the present application provides a computing device, which includes a memory and a processor, wherein the memory is used for storing program instructions, and the processor is used for calling the program instructions stored in the memory and executing any one of the methods provided by the first aspect according to the obtained program.
A fourth aspect of the present application provides a computer storage medium having stored thereon computer-executable instructions for causing a computer to perform any one of the methods provided by the first aspect above.
By capturing the picture the user wants to learn about to obtain a screenshot image, performing face recognition on the screenshot, cropping the face portion of the screenshot as an icon according to the face-region position information in the recognition result, and establishing and displaying a one-to-one correspondence between names and icons, the user can immediately identify the people in the picture through the icon of the face portion and the name below it, which improves the recognizability of the recognition result and improves the user experience.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a diagram illustrating icon display in an image recognition result according to the prior art;
FIG. 2 is a schematic illustration of an implementation environment to which the present application is directed;
FIG. 3 is a first flowchart of a method for generating an icon based on a screenshot image according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of an icon generated based on a screenshot image according to an embodiment of the present disclosure;
FIG. 5 is a second flowchart of a method for generating an icon based on a screenshot image according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a device for generating an icon based on a screenshot image according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a computing device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiments of the application provide a method and a device for generating an icon based on a screenshot image. The picture the user wants to learn about is captured to obtain a screenshot image; face recognition is performed on the screenshot image; the face portion of the screenshot is cropped as an icon according to the face-region position information in the recognition result; and a one-to-one correspondence between each person's name and icon is established and displayed. Through the icon cropped from the face portion of the screenshot and the name below it, the user can immediately identify the people in the picture, which improves the recognizability of the recognition result and improves the user experience.
The method and the device are based on the same inventive concept. Because the principles by which they solve the problem are similar, the implementations of the device and the method can refer to each other, and repeated descriptions are omitted.
Various embodiments of the present application will be described in detail below with reference to the accompanying drawings. It should be noted that the display sequence of the embodiment of the present application only represents the sequence of the embodiment, and does not represent the merits of the technical solutions provided by the embodiments.
FIG. 2 is a schematic diagram of an implementation environment to which the present invention relates. The implementation environment includes a display terminal 100. Using the screenshot processing method provided by the invention, the display terminal 100 can obtain a screenshot image of the current display interface.
The display terminal 100 includes, but is not limited to, a network device having a screen capture processing function, such as a smart television, a mobile phone, a tablet computer, a notebook computer, and a desktop computer, and in the embodiment of the present disclosure, a smart television is taken as an example for description.
Depending on requirements, the implementation environment further comprises a face recognition server 200. The display terminal 100 sends the screenshot image to the face recognition server 200 for face recognition; the face recognition server 200 feeds the recognition result back to the display terminal 100; and the display terminal 100 crops the face portion as an icon according to the face-region position information, establishes a one-to-one correspondence between person names and icons, and displays each icon with the corresponding name.
The implementation environment may also include a local server 300, configured to receive the screenshot image uploaded by the display terminal 100 and forward it to the face recognition server 200, which is a server that has a cooperation agreement with the local server 300. The face recognition server 200 performs face recognition on the received screenshot image and feeds the result back to the local server 300, which sends it on to the display terminal 100. The display terminal 100 then crops the face portion as an icon according to the face-region position information, establishes a one-to-one correspondence between person names and icons, and displays each icon with the corresponding name.
Example 1:
the present disclosure provides a method for generating an icon based on a screenshot image, the method being applied to a display terminal side, as shown in fig. 3, and including the following steps:
S301, receiving a screen-capture instruction and capturing the image displayed on the current interface to obtain a screenshot image.
The display terminal may be a display terminal in the implementation environment shown in fig. 2, for example a smart television or a smart-television set-top box. The current interface refers to the display interface of the smart television or the set-top box. The screen-capture instruction can be sent by a control device such as a remote control; on receiving it, the terminal device is triggered to perform the subsequent screenshot operation and obtain the currently displayed picture content.
It should be noted that the processing logic of the screen capture processing method of the present invention is not limited to being deployed in the display device; it may also be deployed in other machines, for example in another terminal device with sufficient computing power.
S302, the screenshot image is sent to a face recognition server for face recognition.
For example, the face recognition server extracts the face portion from the screenshot image using face detection technology and compares it against its own large collection of face data. When the similarity between a face in the screenshot and the face in some image in that collection exceeds a threshold (for example, 80%), the two faces are judged to belong to the same person.
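The threshold comparison can be sketched as follows. The patent does not specify the similarity metric, so cosine similarity between face feature vectors is assumed here purely for illustration; the function names are hypothetical.

```python
import math

# Illustrative only: the metric (cosine similarity over feature vectors)
# is an assumption, not specified by the patent.
THRESHOLD = 0.8  # the example threshold given in the description

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def is_same_person(face_vec, reference_vec, threshold=THRESHOLD):
    # Judged to be the same person when the similarity exceeds the threshold.
    return cosine_similarity(face_vec, reference_vec) > threshold

print(is_same_person([1.0, 0.9, 0.8], [1.0, 1.0, 1.0]))  # True: vectors are close
```

A real server would compute such feature vectors with a face recognition model rather than receive them directly.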
S303: receiving a face recognition result returned by a face recognition server, wherein the face recognition result comprises the name of a person and face area position information;
the name of the person is the name of the person identified in the face identification server and corresponding to the head portrait picture with the similarity of the screen shot image being larger than the threshold value.
Face detection accurately locates the position and size of each face in the screenshot image, yielding the face region. The face-region position information is the location (x, y, w, h) of the person's face within the screenshot, where x and y are the coordinates of the upper-left corner of the face region, w is its width, and h is its height.
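The (x, y, w, h) location converts directly into a crop box. The helper below is an illustrative assumption, not part of the patent text; the (left, top, right, bottom) box form is the one accepted by imaging libraries such as Pillow's `Image.crop`.

```python
# Hypothetical helper: turn the face-region location (x, y, w, h), with
# (x, y) the top-left corner, into a (left, top, right, bottom) crop box.

def location_to_box(x, y, w, h):
    """Return the crop box (left, top, right, bottom) for a face region."""
    return (x, y, x + w, y + h)

box = location_to_box(120, 45, 96, 96)  # → (120, 45, 216, 141)
```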
Besides the person's name and the face-region position information, the face recognition result returned by the face recognition server may also include an avatar picture, i.e., the picture in the face recognition server's own data whose similarity to the face in the screenshot exceeds the threshold.
S304: intercepting a face part of the screenshot image as an icon according to the face region position information;
and intercepting the face part of the person in the screen shot image according to the position information location (x, y, w, h), and taking the intercepted picture as an icon.
S305: and displaying the name of the person and the icon corresponding to the person.
When the display terminal fails to crop the face portion of the screenshot image (for example, because the position information was not obtained or the device's performance cannot keep up), it further checks whether the face recognition result returned by the face recognition server contains an avatar picture corresponding to the person. If it does, the avatar picture is used as the icon; if it does not (which may be caused by network problems and the like), a placeholder image with certain characteristics is used as the icon. The placeholder may be, for example, a female avatar, a male avatar, or a child avatar.
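The fallback order described above (cropped face, then server-returned avatar, then characteristic placeholder) can be sketched in a few lines; the function and argument names are illustrative, not from the patent.

```python
# Hypothetical sketch of the icon fallback order: prefer the face cropped
# from the screenshot, then the avatar picture returned by the server,
# and finally a characteristic placeholder image.

def choose_icon(cropped_face, avatar_picture, placeholder):
    """Return the best available icon; None marks an unavailable source."""
    if cropped_face is not None:
        return cropped_face
    if avatar_picture is not None:
        return avatar_picture
    return placeholder

print(choose_icon(None, "avatar.png", "female_placeholder.png"))  # avatar.png
```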
As a preferred embodiment, considering that cropping on the display terminal may be relatively slow, a characteristic placeholder image or the avatar picture returned by the face recognition server may first be displayed as the icon; once the display terminal has cropped the face portion of the screenshot image, the cropped picture is displayed as the icon instead.
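The progressive display just described can be modeled as a small state holder: show a placeholder immediately, let a server-returned avatar upgrade it, and let the cropped face win once it is ready. The `IconSlot` class is an illustrative sketch, not an API from the patent.

```python
# Hypothetical sketch of progressive icon display: placeholder first,
# upgraded to the server avatar if one arrives, and finally replaced by
# the face cropped from the screenshot.

class IconSlot:
    def __init__(self, placeholder):
        self.current = placeholder
        self.final = False  # becomes True once the cropped face is shown

    def server_avatar_arrived(self, avatar):
        # The avatar only replaces the placeholder, never the cropped face.
        if not self.final:
            self.current = avatar

    def crop_ready(self, cropped_face):
        # The cropped face always becomes the displayed icon.
        self.current = cropped_face
        self.final = True

slot = IconSlot("placeholder.png")
slot.server_avatar_arrived("avatar.png")
slot.crop_ready("cropped.png")
print(slot.current)  # cropped.png
```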
Fig. 4 shows an icon generated based on a screenshot image according to an embodiment of the present application. The screenshot image is sent to the face recognition server, which recognizes two people in it: person 1 and person 2. The position information of person 1's face in the screenshot is location1 and that of person 2's face is location2, and the one-to-one correspondence between the two people and their locations is:
name (I) | location |
Character 1 | location1(x1,y1,w1,h1) |
Character 2 | location2(x2,y2,w2,h2) |
The icon of person 1 is cropped from the face portion of the screenshot at location1 (x1, y1, w1, h1), and the icon of person 2 is cropped at location2 (x2, y2, w2, h2). Person 1's name and icon, and person 2's name and icon, are then displayed at the bottom of the display terminal. Presenting the icon cut from the screenshot together with the person's name lets the user most intuitively learn the name of the person of interest.
The picture the user wants to learn about is captured to obtain a screenshot image; face recognition is performed on the screenshot; the face portion is cropped as an icon according to the face-region position information in the recognition result; and a one-to-one correspondence between names and icons is established and displayed. The user can thus immediately identify the people in the picture through the icon of the face portion and the name below it, which improves the recognizability of the recognition result and improves the user experience.
Example 2
The present disclosure provides another method for generating an icon based on a screenshot image, as shown in fig. 5, including the steps of:
S501, receiving a screen-capture instruction and capturing the image displayed on the current interface to obtain a screenshot image.
This step is the same as step S301, and will not be described herein.
S502, sending the screenshot image to a local server, which forwards the screenshot image to a face recognition server;
the local server is a function of image transfer, the face recognition server is a server which has a cooperation agreement with the local server, and in this embodiment, the local server itself does not have the face recognition function, so that the screenshot image needs to be forwarded to a third-party server which has the face recognition function and has a cooperation agreement with the third-party server for face recognition operation.
S503, the face recognition server carries out face recognition on the received screenshot image;
step S503 is the same as step S302, and will not be described herein.
S504: the local server receives a face recognition result returned by the face recognition server, wherein the face recognition result comprises the name of a person and face area position information;
and the face recognition server sends the face recognition result to the local server, and then the local server sends the face recognition result to the display terminal.
S505: the local server sends the face recognition result to the display terminal, and the display terminal intercepts the face part of the screenshot image as an icon according to the face region position information;
s506: and displaying the name of the person and the icon corresponding to the person.
Step S505 is the same as step S304, and step S506 is the same as step S305, which are not described herein again.
When the display terminal fails to crop the face portion of the screenshot image (for example, because the position information could not be obtained or the device's performance cannot keep up), it further checks whether the face recognition result returned by the face recognition server contains an avatar picture corresponding to the person. If it does, the avatar picture is used as the icon; if it does not (which may be caused by network problems and the like), a placeholder image with certain characteristics is used as the icon; the placeholder may be a female avatar, a male avatar, a child avatar, and the like.
As a preferred embodiment, considering that cropping on the display terminal may be relatively slow, a characteristic placeholder image or the avatar picture returned by the face recognition server may first be displayed as the icon; once the display terminal has cropped the face portion of the screenshot image, the cropped picture is displayed as the icon instead.
The picture the user wants to learn about is captured to obtain a screenshot image; face recognition is performed on the screenshot; the face portion is cropped as an icon according to the face-region position information in the recognition result; and a one-to-one correspondence between names and icons is established and displayed. The user can thus immediately identify the people in the picture through the icon of the face portion and the name below it, which improves the recognizability of the recognition result and improves the user experience.
Example 3
On the display terminal side, the present disclosure provides an apparatus for generating an icon based on a screen shot image, as shown in fig. 6, including:
the screen capture unit 601, configured to receive a screen-capture instruction and capture the image displayed on the current interface to obtain a screenshot image;
the sending unit 602, configured to send the screenshot image to a face recognition server for face recognition;
the receiving unit 603, configured to receive the face recognition result returned by the face recognition server, the result comprising each person's name and face-region position information;
the icon generation unit 604, configured to crop the face portion of the screenshot image as an icon according to the face-region position information;
the display unit 605, configured to display each person's name together with the corresponding icon.
Optionally, the icon generation unit 604 is further configured to, when the face portion of the screenshot image has not been cropped and the face recognition result returned by the face recognition server contains an avatar picture corresponding to the person, use that avatar picture as the icon.
Optionally, the icon generation unit 604 is further configured to, when the face portion of the screenshot image has not been cropped and the face recognition result returned by the face recognition server does not contain an avatar picture corresponding to the person, use a placeholder image as the icon.
Optionally, the icon generation unit 604 is further configured to first display the avatar picture or the placeholder image as the icon, and then, once the face portion of the screenshot image has been cropped, display the cropped face picture as the icon.
It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation. In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the present application provides a computing device, which may specifically be a desktop computer, a portable computer, a smartphone, a tablet computer, a Personal Digital Assistant (PDA), or the like. Referring to fig. 7, the computing device may include a Central Processing Unit (CPU) 32 and a memory 31. The computing device may also include input and output devices (not shown): the input devices may include a keyboard, a mouse, a touch screen, and the like, and the output devices may include a display device such as a Liquid Crystal Display (LCD) or a Cathode Ray Tube (CRT).
The processor 32 is configured to call the program instructions stored in the memory 31 and, according to the obtained program instructions, execute any of the methods provided by the embodiments of the present application.
Embodiments of the present application provide a computer storage medium storing computer program instructions for the apparatus provided in the embodiments of the present application; the stored instructions include a program for executing any one of the methods provided in the embodiments of the present application.
The computer storage media may be any available media or data storage device that can be accessed by a computer, including, but not limited to, magnetic memory (e.g., floppy disks, hard disks, magnetic tape, magneto-optical disks (MOs), etc.), optical memory (e.g., CDs, DVDs, BDs, HVDs, etc.), and semiconductor memory (e.g., ROMs, EPROMs, EEPROMs, non-volatile memory (NAND FLASH), Solid State Disks (SSDs)), etc.
The methods provided by the embodiments of the present application can be applied to a terminal device or to a network device.
The terminal device may be a smart television or another type of terminal, for example User Equipment ("UE"), a Mobile Station ("MS"), or a Mobile Terminal. Optionally, the terminal may be capable of communicating with one or more core networks via a Radio Access Network (RAN). For example, the terminal may be a mobile phone (also referred to as a "cellular" phone) or a computer with mobile capabilities, and it may be a portable, pocket-sized, hand-held, computer-embedded, or vehicle-mounted mobile device.
A network-side device may be a server or a base station (e.g., an access point), that is, a device in the access network that communicates over the air interface, through one or more sectors, with wireless terminals. The base station may act as a router between a wireless terminal and the rest of the access network, which may include an Internet Protocol (IP) network, converting received air-interface frames into IP packets and vice versa. The base station may also coordinate attribute management for the air interface. For example, the base station may be a Base Transceiver Station (BTS) in GSM or CDMA, a NodeB in WCDMA, an evolved NodeB (eNB or e-NodeB) in LTE, or a gNB in a 5G system. The embodiments of the present application are not limited in this respect.
The above method flow may be implemented by a software program stored in a storage medium; when the stored software program is called, the above method steps are performed.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
Claims (7)
1. A method for generating an icon based on a screenshot image, the method comprising:
receiving a screen capturing instruction;
performing screen capture on an image displayed on a current interface to obtain a screenshot image;
sending the screenshot image to a face recognition server for face recognition;
receiving a face recognition result returned by the face recognition server, wherein the face recognition result comprises face area position information, a person name, and a head portrait picture corresponding to the person name, and the face area position information is used for intercepting a face area of the screenshot image to obtain a face area picture;
displaying the screenshot image, and correspondingly displaying the person name and the head portrait picture before the face area of the screenshot image is intercepted according to the face area position information to obtain a face area picture;
and after the face area of the screenshot image is intercepted according to the face area position information to obtain a face area picture, replacing the previously displayed head portrait picture with the face area picture so as to correspondingly display the person name and the face area picture.
2. The method of claim 1,
the person names comprise a first person name and a second person name; the face area position information comprises first face area position information and second face area position information, the head portrait picture corresponding to the person name comprises a first head portrait picture corresponding to the first person name and a second head portrait picture corresponding to the second person name, the first face area position information is used for intercepting a face area of the screenshot image according to the first face area position information to obtain a first face area picture, and the second face area position information is used for intercepting a face area of the screenshot image according to the second face area position information to obtain a second face area picture;
after the face area of the screenshot image is intercepted according to the face area position information to obtain a face area picture, the replacing the previously displayed head portrait picture with the face area picture comprises: after the face area of the screenshot image is intercepted according to the first face area position information to obtain a first face area picture, replacing the first head portrait picture previously displayed corresponding to the first person name with the first face area picture.
3. The method of claim 2, wherein the replacing the previously displayed avatar picture with the face region picture after the face region of the screenshot image is captured according to the face region location information further comprises:
and after the face area of the screenshot image is intercepted according to the second face area position information to obtain a second face area picture, replacing the second head portrait picture displayed corresponding to the second person name with the second face area picture.
4. The method of claim 1,
the person names comprise a first person name and a second person name; the face area position information comprises first face area position information and second face area position information, the head portrait picture corresponding to the person name comprises a first head portrait picture corresponding to the first person name and a second head portrait picture corresponding to the second person name, the first face area position information is used for intercepting a face area of the screenshot image according to the first face area position information to obtain a first face area picture, and the second face area position information is used for intercepting a face area of the screenshot image according to the second face area position information to obtain a second face area picture;
before the capturing the face area of the screenshot image according to the face area position information to obtain a face area picture, displaying the screenshot image, and correspondingly displaying the character name and the avatar picture, including:
displaying the screen shot image;
before the face area of the screenshot image is intercepted according to the first face area position information to obtain a first face area picture, correspondingly displaying the first person name and the first head portrait picture; and
and correspondingly displaying the second person name and the second head portrait picture before capturing the face area of the screen shot image according to the second face area position information to obtain a second face area picture.
5. The method of claim 1,
the face position information is location (x, y, w, h), wherein x and y are coordinate points of the upper left corner of the face region, w is the width of the face region, and h is the height of the face region.
6. A computing device, comprising:
a memory for storing program instructions;
a processor for calling program instructions stored in said memory to execute the method of any one of claims 1 to 5 in accordance with the obtained program.
7. A computer storage medium having stored thereon computer-executable instructions for causing a computer to perform the method of any one of claims 1 to 5.
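The flow recited in the claims above can be sketched in a few lines: claim 5 gives the face position as location(x, y, w, h), the face region is cut out of the screenshot using that box, and the cropped picture then replaces the previously displayed avatar icon (claim 1). The sketch below is a minimal, hypothetical illustration only: the `Face`, `crop_face_region`, and `icons_for` names are assumptions, not from the patent, and a toy list-of-lists grayscale image stands in for real image data.

```python
# Hypothetical sketch of the claimed flow; names and types are illustrative,
# not taken from the patent or any real implementation.
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

Image = List[List[int]]  # toy grayscale image: a list of pixel rows

@dataclass
class Face:
    name: str                              # person name from the recognition result
    avatar: Optional[Image]                # server-provided head portrait, may be absent
    location: Tuple[int, int, int, int]    # (x, y, w, h): top-left corner, width, height

def crop_face_region(screenshot: Image, location: Tuple[int, int, int, int]) -> Image:
    """Cut the face region out of the screenshot per claim 5's location(x, y, w, h)."""
    x, y, w, h = location
    return [row[x:x + w] for row in screenshot[y:y + h]]

def icons_for(screenshot: Image, faces: List[Face], placeholder: Image) -> Dict[str, Image]:
    """First show the avatar (or a placeholder when none was returned), then
    replace it with the face region cropped from the screenshot."""
    icons: Dict[str, Image] = {}
    for face in faces:
        # Initial icon: avatar from the recognition result, else the placeholder.
        icons[face.name] = face.avatar if face.avatar is not None else placeholder
        # Once the crop is available, the face-region picture replaces it.
        icons[face.name] = crop_face_region(screenshot, face.location)
    return icons
```

For real image data the same box semantics map directly onto a library crop call, e.g. Pillow's `Image.crop((x, y, x + w, y + h))`, which takes left, upper, right, and lower edges.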
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811020860.6A CN109189289B (en) | 2018-09-03 | 2018-09-03 | Method and device for generating icon based on screen capture image |
PCT/CN2019/104034 WO2020048425A1 (en) | 2018-09-03 | 2019-09-02 | Icon generating method and apparatus based on screenshot image, computing device, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811020860.6A CN109189289B (en) | 2018-09-03 | 2018-09-03 | Method and device for generating icon based on screen capture image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109189289A CN109189289A (en) | 2019-01-11 |
CN109189289B true CN109189289B (en) | 2021-12-24 |
Family
ID=64912090
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811020860.6A Active CN109189289B (en) | 2018-09-03 | 2018-09-03 | Method and device for generating icon based on screen capture image |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109189289B (en) |
WO (1) | WO2020048425A1 (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109189289B (en) * | 2018-09-03 | 2021-12-24 | 聚好看科技股份有限公司 | Method and device for generating icon based on screen capture image |
CN111698532B (en) * | 2019-03-15 | 2022-12-16 | 阿里巴巴集团控股有限公司 | Bullet screen information processing method and device |
CN113965540B (en) * | 2019-04-30 | 2023-03-21 | 创新先进技术有限公司 | Information sharing method, device and equipment |
CN111343512B (en) * | 2020-02-04 | 2023-01-10 | 聚好看科技股份有限公司 | Information acquisition method, display device and server |
CN113552977A (en) * | 2020-04-23 | 2021-10-26 | 阿里巴巴集团控股有限公司 | Data processing method and device, electronic equipment and computer storage medium |
WO2021238733A1 (en) * | 2020-05-25 | 2021-12-02 | 聚好看科技股份有限公司 | Display device and image recognition result display method |
WO2022078172A1 (en) * | 2020-10-16 | 2022-04-21 | 海信视像科技股份有限公司 | Display device and content display method |
CN112329851B (en) * | 2020-11-05 | 2024-08-06 | 腾讯科技(深圳)有限公司 | Icon detection method and device and computer readable storage medium |
US20220198861A1 (en) * | 2020-12-18 | 2022-06-23 | Sensormatic Electronics, LLC | Access control system screen capture facial detection and recognition |
CN115866292B (en) * | 2021-08-05 | 2024-08-20 | 聚好看科技股份有限公司 | Server, display device and screenshot identification method |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102667764A (en) * | 2009-08-07 | 2012-09-12 | 谷歌公司 | User interface for presenting search results for multiple regions of a visual query |
CN104184923A (en) * | 2014-08-27 | 2014-12-03 | 天津三星电子有限公司 | System and method used for retrieving figure information in video |
CN107105340A (en) * | 2017-03-21 | 2017-08-29 | 百度在线网络技术(北京)有限公司 | People information methods, devices and systems are shown in video based on artificial intelligence |
WO2018057272A1 (en) * | 2016-09-23 | 2018-03-29 | Apple Inc. | Avatar creation and editing |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4496264B2 (en) * | 2008-10-24 | 2010-07-07 | 株式会社東芝 | Electronic device and video display method |
EP2503545A1 (en) * | 2011-03-21 | 2012-09-26 | Sony Ericsson Mobile Communications AB | Arrangement and method relating to audio recognition |
CN106326823B (en) * | 2015-07-07 | 2020-06-30 | 北京神州泰岳软件股份有限公司 | Method and system for obtaining head portrait in picture |
CN106598998B (en) * | 2015-10-20 | 2020-10-27 | 北京安云世纪科技有限公司 | Information acquisition method and information acquisition device |
CN109189289B (en) * | 2018-09-03 | 2021-12-24 | 聚好看科技股份有限公司 | Method and device for generating icon based on screen capture image |
Also Published As
Publication number | Publication date |
---|---|
WO2020048425A1 (en) | 2020-03-12 |
CN109189289A (en) | 2019-01-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109189289B (en) | Method and device for generating icon based on screen capture image | |
CN111837425B (en) | Access method, access device and storage medium | |
CN109151597B (en) | Information display method and device | |
US10314098B2 (en) | Method and apparatus for connecting short-range wireless communication in terminal | |
US10728128B2 (en) | Apparatus and method for detecting counterfeit advertiser in wireless communication system | |
US10469488B2 (en) | Security verification method, apparatus, and system | |
WO2015169188A1 (en) | Method, apparatus, and system for loading webpage application program | |
CN110392306B (en) | Data processing method and equipment | |
US9052866B2 (en) | Method, apparatus and computer-readable medium for image registration and display | |
EP3787343A1 (en) | Method and device for recovering and establishing wireless backhaul link | |
US11564216B2 (en) | Information transmission method and device | |
CN111800794A (en) | Method and device for determining position of demodulation reference signal | |
CN109039994B (en) | Method and equipment for calculating asynchronous time difference between audio and video | |
CN104349169B (en) | A kind of image processing method and electronic equipment | |
CN110475369A (en) | A kind of business scheduling method, terminal and the network equipment | |
US11044766B2 (en) | Method for Wi-Fi connection and related products | |
CN107808106A (en) | A kind of session content methods of exhibiting and device with scene handoff functionality | |
CN105574453A (en) | Two-dimensional code processing method and mobile terminal | |
CN108924668B (en) | Picture loading and data providing method and device | |
CN114125739A (en) | Network switching method and device, electronic equipment and storage medium | |
EP4021117A1 (en) | Transmission bandwidth determination method and device | |
CN107181670B (en) | Picture processing method and device and storage medium | |
CN113205452A (en) | Image processing method and device, electronic equipment and storage medium | |
CN106797586B (en) | rate configuration method and device | |
CN112385253A (en) | Network state display method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 20240723 Address after: No. 399 Songling Road, Laoshan District, Qingdao City, Shandong Province (A6 3rd Floor) Patentee after: QINGDAO JUKANYUN TECHNOLOGY CO.,LTD. Country or region after: China Address before: 266100 Songling Road, Laoshan District, Qingdao, Shandong Province, No. 399 Patentee before: JUHAOKAN TECHNOLOGY Co.,Ltd. Country or region before: China |