
US20150071547A1 - Automated Selection Of Keeper Images From A Burst Photo Captured Set - Google Patents


Info

Publication number
US20150071547A1
US20150071547A1
Authority
US
United States
Prior art keywords
images
image
sequence
interest
keeper
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/021,857
Inventor
Brett Keating
Vincent Wong
Todd Sachs
Claus Molgaard
Michael Rousson
Elliott Harris
Justin TITI
Karl Hsu
Jeff Brasket
Marco Zuliani
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apple Inc
Original Assignee
Apple Inc
Application filed by Apple Inc
Priority to US14/021,857 (US20150071547A1)
Assigned to APPLE INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HARRIS, ELLIOTT, TITI, Justin, ROUSSON, MICHAEL, HSU, KARL, ZULIANI, Marco, BRASKET, Jeff, KEATING, BRETT, SACHS, TODD, WONG, VINCENT, MOLGAARD, CLAUS
Priority to KR1020167006182A (KR101731771B1)
Priority to PCT/US2014/052965 (WO2015034725A1)
Priority to CN201480049340.1A (CN105531988A)
Priority to EP14767189.5A (EP3044947B1)
Priority to AU2014315547A (AU2014315547A1)
Publication of US20150071547A1
Priority to US15/266,460 (US10523894B2)
Priority to AU2017261537A (AU2017261537B2)
Status: Abandoned

Classifications

    • G06K9/6267
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06F18/24 Classification techniques
    • G06K9/46
    • G06K9/6201
    • G06T7/11 Region-based segmentation
    • G06V20/49 Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
    • H04N21/8456 Structuring of content, e.g. decomposing content into time segments in the time domain
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/64 Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H04N5/147 Scene change detection
    • H04N5/77 Interface circuits between a recording apparatus and a television camera
    • G06K2009/4666
    • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H04N23/80 Camera processing pipelines; Components thereof

Definitions

  • This disclosure relates generally to the field of digital photography. More particularly, but not by way of limitation, this disclosure relates to techniques for selecting an image from a series of images taken during a burst photo capture mode.
  • As used herein, the burst photo capture mode refers generally to a camera setting which allows the user to capture multiple photographs in a short period of time. The multiple photographs are generally taken automatically after the user makes the selection and presses one button, and they are generally captured at a particular rate of speed. The rate of speed may be, for example, 10 photographs per second.
  • Typically, a user utilizes this mode for a specific reason. One such reason may be to capture an action shot, such as a child diving into a pool or blowing out birthday candles. In such instances, the user may desire to capture the event with multiple photographs that document the chronology of the event, i.e., before, during, and after a specific moment.
  • However, because of the fast rate at which such events occur, manually capturing the exact desired moments with individual button presses may be very difficult. Using the burst capture mode allows the user to capture a number of photographs in a short period of time and thus increase the chances that photographs of the exact desired moments are among the ones taken. In such action photo burst captures, the user may often decide, after the pictures are taken, to keep several photos, e.g., to show before, during, and after an event.
  • Another reason for which the user may decide to utilize the burst capture mode is to take portrait pictures of multiple people. This may happen when the user is taking a group photograph, and desires to have all of the people in the picture smiling, not blinking, and looking at the camera with an unobstructed line-of-sight view.
  • Although the burst capture mode can be very useful for action scenes, scenes with multiple people, or for providing the option of choosing the best from multiple photo captures, it is generally not used frequently because it tends to fill up memory storage space quickly. Moreover, the many pictures taken during a burst have to be reviewed by the user to select one or more keeper pictures, and that can be a time-consuming and tedious task.
  • In one embodiment, a method to receive and retain a sequence of images in an image set includes detecting whether each of the images in the sequence contains faces and whether the scene contains action. Using the detection, the images in the set may then be classified. In one embodiment, if one or more images are detected as containing primarily faces, the images may be classified as portraits. In some embodiments, if the scene is detected as containing action, the images may be classified as action images. At least one quality metric value is then determined for each of the obtained images; the quality metric value may include, for example, sharpness values for the images or, in other embodiments, blurriness metrics.
  • After quality metric values are determined and the images are classified, one or more images are selected as keeper images from the image set. The keeper images are selected, in one embodiment, based on the classification and at least one quality metric value. After the selection has been made, the one or more keeper images may be presented to a user.
  • In another embodiment, a method to pre-select keeper images from a burst capture set of images includes determining if detected faces in the image set are smiling or blinking. In one embodiment, a sharpness value for each face may also be calculated. In another embodiment, in order to determine if the scene contains action, a feature vector may be constructed from the images and used in a classifier. In some embodiments, if the images are classified as action, the sequence of images may be divided into two or more sections and one keeper image may be selected from each section.
  • In some implementations, selecting one or more keeper images from the sequence of received images comprises identifying a region of interest in the images and selecting one or more keeper images from the sequence based on the at least one quality metric value for the region of interest. Identifying a region of interest may include registering each two images in the sequence with respect to each other, comparing the registered images with each other, and identifying a region in the registered images where the differences between the registered images are larger than a specified threshold.
  • FIG. 1 shows, in flowchart form, an image burst capture operation in accordance with one embodiment.
  • FIG. 2 shows, in flowchart form, an image processing operation in accordance with one embodiment.
  • FIG. 3 shows, in flowchart form, a burst set classification operation in accordance with another embodiment.
  • FIG. 4 shows, in flowchart form, a keeper image selection operation for a portrait burst in accordance with one embodiment.
  • FIG. 5 shows, in flowchart form, a keeper image selection operation for an action burst in accordance with one embodiment.
  • FIG. 6 shows, in flowchart form, a keeper image selection operation in accordance with one embodiment.
  • FIG. 7 shows, in block diagram form, a multi-function electronic device in accordance with one embodiment.
  • This disclosure pertains to systems, methods, and computer readable media to automatically pre-select one or more images as keeper images from multiple images taken with a burst photo capture mode.
  • In one embodiment, a novel approach may be used to determine the reason the photographer used the burst capture mode. This may be done, for example, by analyzing the images to determine whether they contain primarily faces or whether they track some action in the scene. Based on the determined reason, the burst may then be categorized as action, portrait, or other.
  • After categorizing the burst, the approach may analyze the captured image set. Depending on the category selected, the approach may use different criteria to pre-select one or more images from the image set as keeper image(s). For a portrait burst, the approach may select the image with the most smiling, non-blinking faces. For an action burst, the operation may divide the image set into sections that each cover different stages of the action and select one keeper image from each of the sections. For a burst that is classified as other, the approach may identify a region of interest in the image set and select a keeper image that has higher quality metrics for the identified region of interest.
  • In one embodiment, the techniques used to pre-select the best one or more images may take advantage of some of the calculations made during normal processing of the images, such that no significant post-processing time is required. This means that computations made to pre-select keeper image(s) may not be noticeable to users, allowing the user to access the pre-selected images virtually instantaneously after the images are captured. In one embodiment, the calculations made during the processing of the images and the computations made to pre-select keeper images do not interfere with the burst capture frame rate. Thus, not only does the user not experience any noticeable delay between image capture and the presentation of pre-selected keeper images, there is also no interference with the normal burst capture frame rate.
  • Referring to FIG. 1, operation 100 begins when a burst capture operation is activated (block 105). This may be done, in one embodiment, by setting the camera mode to burst capture and pressing an image capture button. Upon activating the burst capture mode, the camera may start taking multiple photographs (block 110) and receiving image data for each photograph taken (block 115).
  • As used herein, the term “camera” refers to any electronic device that includes or incorporates digital image capture functionality. This includes, by way of example, stand-alone cameras (e.g., digital SLR cameras and ‘point-and-click’ cameras) as well as other electronic devices having embedded camera capabilities. Examples of this latter type include, but are not limited to, mobile phones, tablet and notebook computer systems, and digital media player devices.
  • The photographs are generally taken in a short period of time at a particular rate of speed, and the number of pictures taken in a burst can vary in different embodiments. In one embodiment, the user may hold down the image capture button until finished taking pictures; the number of pictures taken then depends on the image capture rate, which may be, for example, 6, 8, or 10 pictures per second. In one embodiment, the user may be able to select the rate of capture. The user may also have the ability to select the number of pictures taken from a range of options, for example, 100, 200, or 500 photographs. In one embodiment, special image buffer memory may be used to retain the captured images; in another embodiment, general purpose memory may be used.
  • As image data is received for each photograph, the data may be processed (block 120). This occurs, in one embodiment, in real time such that the user does not notice any significant delay between capturing the images and viewing them. In general, only a limited amount of time may be available for processing the images. For example, in an embodiment in which images are captured at a rate of 10 images per second, there may be 100 milliseconds available to receive and process each image and to conduct an analysis to pre-select keeper images. Most of the processing time is generally needed for encoding, storing the image, and maintaining, in one embodiment, an interactive user interface which shows burst capture progress. Thus, the time available for performing an analysis to pre-select keeper images may be very limited. In one embodiment, the real-time data collection and processing takes no more than 35-55% of the total time available; for a burst captured at a rate of 10 images per second, that translates to 35-55 milliseconds per image for data collection, processing, and analysis. The embodiments described in this specification are generally able to meet these time constraints.
  • Referring to FIG. 2, processing each image received in operation 120 may begin by dividing the image into smaller regions (e.g., blocks, tiles, or bands) (block 200) to make the multiple calculations performed on the image faster and more efficient. In one embodiment, the blocks are 32×32 pixels; in another embodiment, they are 16×16. Other variations are also possible; alternatively, the entirety of the image may be selected as one region. To make processing more efficient, the image may also be scaled down, as is well known in the art.
  • After the image has been divided into smaller blocks, the blocks may be processed to determine image quality metrics in accordance with image content and/or motion sensor data (e.g., gyro and accelerometer sensors). These techniques may be used separately or combined together, depending on the particular use case and/or system resources. Image quality metrics may be associated with each image directly (e.g., stored with the image as metadata) or indirectly (e.g., through a separate index or database file).
  • In one embodiment, the first step in processing the image and determining quality metrics may involve creating a color histogram of the image in the UV color space (block 205). The color histogram may be a two-dimensional histogram with the U-value as one dimension and the V-value as the other. The UV space may be divided into multiple regions, having Ui and Vi as the dimensions for the ith region; for example, U1 may contain any U-value between 0 and 7. If a color is found which falls within Ui and Vi, a “bin” corresponding to (Ui, Vi) may be incremented. The sizes of the bins may be uniform, or they may be adjusted so that regions where color combinations are more common are represented by more bins.
  • After the color histogram has been created, a quality measure indicative of the image's sharpness may be calculated (block 210). Sharpness measures may be obtained or determined from, for example, a camera's auto-focus (AF) and/or auto-exposure (AE) systems. In one embodiment, sharpness measures may be determined by calculating the sum of adjacent pixel differences. Other methods of determining sharpness are also possible. For the purposes of this disclosure, the sharper an image is judged to be, the higher its corresponding rank (e.g., quality metric value).
  • Next, a wavelet transform may be calculated for each block (block 215) to compress the image, thus making further calculations more efficient. In one embodiment, the wavelet transform may be a Haar transform. Calculating a Haar wavelet transform is well known in the art and thus not discussed in detail here.
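
For illustration, the following is a minimal sketch of a single level of a 2-D Haar transform applied to one block; the function name and the unnormalized average/difference convention are assumptions, not taken from the patent text.

```python
import numpy as np

def haar2d_level(block: np.ndarray) -> np.ndarray:
    """One level of an (unnormalized) 2-D Haar transform: rows, then columns."""
    def haar1d(x: np.ndarray) -> np.ndarray:
        avg = (x[..., 0::2] + x[..., 1::2]) / 2.0   # low-pass half: pairwise averages
        diff = (x[..., 0::2] - x[..., 1::2]) / 2.0  # high-pass half: pairwise differences
        return np.concatenate([avg, diff], axis=-1)

    out = haar1d(block.astype(np.float64))            # transform each row
    return haar1d(out.swapaxes(0, 1)).swapaxes(0, 1)  # transform each column

# Example: transform one 32x32 block. The high-frequency half of the
# coefficients carries the detail that blur estimation can later examine.
coefficients = haar2d_level(np.random.rand(32, 32))
```
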
  • From the calculated wavelet transforms, the amount of blur present in the image may be derived (block 220). Other approaches are also possible; one approach to determining the amount of blur present in an image is discussed in U.S. patent application Ser. No. 13/911,873, entitled “Reference Frame Selection for Still Image Stabilization,” incorporated herein by reference in its entirety.
  • Next, a determination is made as to whether the image is too blurry to use (block 225). This is done, in one embodiment, by comparing the amount of blur present in the image with a predetermined threshold: if the amount of blur is above the threshold or, in some embodiments, if another calculated quality measure is below a different threshold value, the image may be determined to be too blurry. Threshold values may be static or predetermined (obtained, for example, from program memory during camera start-up) or dynamic (determined, for example, based on image statistics). In one implementation, the image may be regarded as too blurry to use if one or more of its quality measures is significantly smaller than the maximum quality metric value of the image set; in another implementation, the image may be regarded as too blurry to use if its quality metric is smaller than the maximum quality metric value of the image set multiplied by a ratio (e.g., a ratio of between 0.6 and 0.9).
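
A minimal sketch of this rejection rule follows; the function name, the combination of the two tests, and the default ratio of 0.75 (within the 0.6-0.9 range mentioned above) are illustrative assumptions.

```python
def too_blurry(blur: float, blur_threshold: float,
               quality: float, set_max_quality: float,
               ratio: float = 0.75) -> bool:
    """Reject an image whose blur exceeds a threshold, or whose quality
    metric falls below a fraction of the best metric in the burst set.
    The 0.75 default is illustrative; the text suggests 0.6 to 0.9."""
    return blur > blur_threshold or quality < ratio * set_max_quality
```
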
  • If the image is too blurry to use (YES prong of block 225), it may be discarded or otherwise removed from further consideration, and a check may be made to determine if at least one more received image remains to be processed (block 260). If the image is not too blurry to use (NO prong of block 225), two one-dimensional signatures may be calculated for the image (block 230).
  • The signatures may be functions of the vertical and horizontal projections of the image. In one embodiment, the signatures are vertical and horizontal sums of pixel values.
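
A minimal sketch of such signatures, assuming a single-channel (luma) image as input:

```python
import numpy as np

def signatures(luma: np.ndarray):
    """Two 1-D signatures of a grayscale image (block 230): sums of pixel
    values projected onto the horizontal and vertical axes."""
    horizontal = luma.sum(axis=0)  # one value per column
    vertical = luma.sum(axis=1)    # one value per row
    return horizontal, vertical
```
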
  • The next step in processing the image may be determining whether or not the image contains faces. Face recognition techniques are well known in the art and thus not discussed in this specification. Using a face recognition algorithm, the operation may detect if there are faces in the image (block 235). If no faces are detected in the image (NO prong of block 235), the image may be retained (block 255), whereafter a check can be made to determine if all of the images from the set have been received (block 260) and, if so, the operation continues to block 305 of FIG. 3 to classify the image set. If at least one more image remains to be received (the YES prong of block 260), however, the operation may obtain the next image and continue to block 200 to process it.
  • If faces are detected in the image (YES prong of block 235), the operation may move to block 240 to determine the size and location of each face. The location of each face may refer to the blocks of pixels that make up the face in the image, and the size may refer to the size of those blocks. The operation may also determine if each face is smiling and if its eyes are open or blinking (block 245).
  • Next, a sharpness value may be calculated for each of the faces detected in the image (block 250). As discussed above, there are a variety of known procedures for calculating image sharpness values. Using one of these known procedures, the operation may calculate a separate sharpness value over each block of pixels detected as representing a face.
  • The operation then moves to block 255 to retain the image along with its processing data and continues to block 260 to determine if there are more images in the image set to process. If there are more images, the operation moves back to block 200 to repeat the process for the next image. If, however, there are no other images in the image set, the operation moves to block 305 of FIG. 3 to classify the image set.
  • In one embodiment, a ratio between the sharpness metric value of the sharpest image (i.e., identified in accordance with block 210) and that of each of the other captured images may be determined. Those images for which this ratio is less than some specified value could be eliminated from further consideration as irrelevant; that is, only those images having a ratio value greater than a specified threshold would be considered when pre-selecting keeper images.
  • The selected threshold may be task or goal dependent and could vary from implementation to implementation. This is done to eliminate images that are of low quality and are not likely to be selected as keepers; eliminating unwanted images can increase efficiency and speed up processing time.
  • In addition, images may be compared to each other to determine if there are images that are too similar to each other. If two such images are found, one may be eliminated from the set. This can also result in increased efficiency.
  • Referring to FIG. 3, operation 300, which classifies the image set captured in the burst, begins by determining if the images contain primarily faces (block 305). This can be done, in one embodiment, by analyzing the data collected during processing operation 120. If faces were detected during operation 120, the operation also calculated the size of each face in the images, as discussed above with respect to FIG. 2. In one embodiment, the sizes of the faces in an image may be added together to calculate a total face size for that image. The total face size may then be compared to the total size of the image: if the total face size is above a certain threshold relative to the total size of the image, then the operation may determine that the particular image contains primarily faces.
  • Otherwise, the operation may decide that the image does not primarily contain faces. In one embodiment, the threshold value is 75%, such that if the total face size is below 75% of the total image size, the image is considered as not containing primarily faces. It should be noted that other threshold values are also possible, and other approaches for determining if the images in the set contain primarily faces can also be used.
  • If the images are determined to contain primarily faces, operation 300 may categorize the image set as a portrait set (block 310). In other embodiments, the set is categorized as a portrait if 50% or more of the images in the set contain primarily faces. Other configurations are also possible.
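
A minimal sketch of this classification test, assuming face areas and the image area are available from processing operation 120; the 0.75 and 0.5 defaults mirror the 75% and 50% figures above, and all names are illustrative.

```python
def contains_primarily_faces(face_areas, image_area, threshold=0.75):
    """True when detected faces cover at least `threshold` of the frame."""
    return sum(face_areas) / float(image_area) >= threshold

def is_portrait_set(per_image_flags, min_fraction=0.5):
    """Classify the burst as a portrait set when at least `min_fraction`
    of its images contain primarily faces (the 50% variant above)."""
    flags = list(per_image_flags)
    return sum(flags) / float(len(flags)) >= min_fraction
```
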
  • For a portrait set, the operation moves to block 405 in FIG. 4 (operation 400) to pre-select a keeper image.
  • If the set is not categorized as a portrait, a region of interest may be identified in the images. This may be done, in one embodiment, by first registering each pair of images with respect to each other (block 315). There are a variety of well-known methods for registering images with respect to each other; U.S. patent application Ser. No. 13/911,793, entitled “Image Registration Methods for Still Image Stabilization,” incorporated herein by reference, describes a few such methods.
  • In one embodiment, the registration may be performed by aligning the two signatures computed during processing of the images (see FIG. 2, block 230). After two images have been registered, they may be compared with each other to determine an area of the images where there is a large difference between them (block 320); the difference between the registered images may be referred to as registration error. In the embodiment where registration is done by aligning the vertical and horizontal signatures, the comparison may occur by examining the differences between the registered vertical signatures and the registered horizontal signatures. If there is a large difference between these numbers, it is likely that a moving subject (i.e., local motion) was present in that region of the images. That is because the background of an image generally dominates the number of pixels in the image.
  • Registration is therefore likely to align the background of one image with respect to the other, such that there generally is no significant difference between the backgrounds in the registered images; where a moving subject is present, however, the difference between the images may be larger. Thus, registering the images with respect to one another and comparing the registered images with each other may identify local motion between the images, and the area containing local motion may be identified as the region of interest (block 325).
  • The region of interest may be identified by its corner coordinates, e.g., (x1, y1) and (x2, y2). In one embodiment, the region of interest may be selected as a region in the images for which the registration error (i.e., the difference between the two registered images) is larger than a specified threshold. It will be understood that other procedures for identifying the region of interest are also possible. If no local motion can be identified (i.e., the difference between the registered images is small), then the entire image may be identified as the region of interest.
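
The following is a minimal sketch of signature-based registration and region-of-interest detection along one axis; applying it to both the horizontal and vertical signatures yields (x1, x2) and (y1, y2). The brute-force shift search, threshold handling, and names are assumptions for illustration.

```python
import numpy as np

def best_shift(sig_a: np.ndarray, sig_b: np.ndarray, max_shift: int = 32) -> int:
    """Translation (in pixels) that best aligns two 1-D signatures."""
    def overlap(s):
        a = sig_a[max(0, s):len(sig_a) + min(0, s)]
        b = sig_b[max(0, -s):len(sig_b) + min(0, -s)]
        return a, b
    errors = {s: np.mean(np.abs(np.subtract(*overlap(s))))
              for s in range(-max_shift, max_shift + 1)}
    return min(errors, key=errors.get)

def region_of_interest_1d(sig_a, sig_b, threshold):
    """Span (start, stop) where the registered signatures still disagree,
    i.e., likely local motion. Falls back to the whole extent otherwise."""
    s = best_shift(sig_a, sig_b)
    a = sig_a[max(0, s):len(sig_a) + min(0, s)]
    b = sig_b[max(0, -s):len(sig_b) + min(0, -s)]
    moving = np.nonzero(np.abs(a - b) > threshold)[0]
    if moving.size == 0:               # no local motion detected
        return 0, len(a)
    return int(moving.min()), int(moving.max()) + 1
```
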
  • Next, a feature vector may be constructed from multiple data values computed so far during the processing of the images (block 330). Each value may be considered a feature; combined together, the features form a vector of values referred to as the feature vector.
  • In one embodiment, among the values used to form the feature vector are the computed color histograms. The color histograms show how similar or different the images are to each other; thus, if the color histograms show that the images are very different, it is likely that the scene contained some action. Another value that may be used in forming the feature vector is how large the registration errors are, either in absolute value or with respect to each other.
  • The information from the feature vector may be input into a classifier (block 340), such as a Support Vector Machine (SVM), an artificial neural network (ANN), or a Bayesian classifier, to determine if the scene captured in the image set contains action.
  • Prior to automated use, the classifier is trained with a set of training feature vectors already classified by hand.
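
As an illustration only, the sketch below builds a burst-level feature vector from inter-frame histogram distances and registration errors and feeds it to a linear SVM, here scikit-learn's SVC as a stand-in classifier. The feature choices, placeholder training data, and labels are assumptions.

```python
import numpy as np
from sklearn import svm

def make_feature_vector(histogram_distances, registration_errors) -> np.ndarray:
    """Summary statistics over a burst, concatenated into one feature vector."""
    return np.array([
        np.mean(histogram_distances), np.max(histogram_distances),
        np.mean(registration_errors), np.max(registration_errors),
    ])

# Trained once, offline, on hand-labelled bursts (1 = action, 0 = not action).
train_X = np.array([[0.10, 0.20, 1.0, 2.0],    # placeholder "no action" burst
                    [0.70, 0.90, 8.0, 15.0]])  # placeholder "action" burst
train_y = np.array([0, 1])
classifier = svm.SVC(kernel="linear").fit(train_X, train_y)

# At run time, the classifier returns a single binary decision (block 345).
features = make_feature_vector([0.6, 0.8], [7.0, 12.0])
is_action = bool(classifier.predict(features.reshape(1, -1))[0])
```
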
  • The classifier may return a binary decision indicating whether or not the images contain action (block 345). If the decision indicates that the images contain action, the burst may be classified as an action burst (block 350) and the operation may continue to block 505 of operation 500 (FIG. 5) to pre-select keeper images in an action image set.
  • Otherwise, the set may be classified as other (block 355) and the operation may continue to block 605 of operation 600 in FIG. 6 to determine the best image(s) in a set categorized as other.
  • Referring to FIG. 4, operation 400 for pre-selecting keeper images in an image set classified as a portrait begins by calculating a sharpness score for each face in each image in the set (block 405). As noted above, sharpness values for each face are generally calculated during processing operation 120; from these values, sharpness scores may be calculated for each face.
  • Sharpness values are normalized over all the images in the set by tracking each face as one subject across the image set. This may be done by first calculating an average sharpness value for each face across all the images in the set. The average sharpness value, in one embodiment, may be the sum of image gradients calculated over the eyes for the particular face across all the images in the set; alternatively, the sharpness values for the face in each of the images may be averaged to obtain the average sharpness value. The sharpness value for the face in each image may then be divided by the average sharpness value for that face to obtain a sharpness score for the respective face.
  • Next, a total score may be calculated for each face (block 410). The total score may be calculated by analyzing various categories of data collected during the processing of the images, with each category of data assigned a particular range of scores. For example, scores may be assigned for smiling faces and for non-blinking faces. In one embodiment, each category of data has a range of numbers available as options for scores in that category, and a higher score may signify a better quality image. For example, data indicating that a face is smiling may result in a score of 10, while a non-smiling face may result in a score of zero; similarly, a non-blinking face may receive a score of 10, while a blinking face may receive a score of zero.
  • The sharpness score calculated above is another category that may be taken into account for the total score. Other categories of data that may contribute to the total score include the location of the faces, e.g., whether or not the face is close to the edges of the image, and the location of the area of the image occupied by the face. For example, being close to the edges of the image may receive a lower score, while being closer to the middle may receive a higher score. In one embodiment, rules of photographic composition, such as the rule of thirds, may be used to establish a preference for where faces should be located; the rule of thirds is well known in the art. Scores for each of these categories may be assigned and then normalized before being added together to calculate the total score for a face. Once total scores for all of the faces in an image have been calculated, the total face scores may be added together to obtain a score for the image (block 415).
  • A multiplicative factor may then be applied to each image score (block 420). The multiplicative factor may be selected such that it makes the image score higher for images with faces, resulting in a built-in preference for images with faces; if there are images in a set that do not contain any faces, they are then less likely to be selected as keeper images. This is advantageous for an image set categorized as a portrait, as images without faces should not be selected as keepers for such a set. Once the multiplicative factor has been applied to all the image scores, the image with the highest score may be selected as the keeper image for the burst (block 425) and may be presented to the user as such (block 430).
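
A minimal sketch of the portrait scoring pipeline described above (blocks 405-425); the 10/0 category scores come from the example in the text, while the centering bonus, the face-bonus factor of 1.5, and all names are illustrative assumptions.

```python
def face_sharpness_scores(per_image_sharpness):
    """Normalize one tracked face's sharpness by its average across the burst."""
    avg = sum(per_image_sharpness) / len(per_image_sharpness)
    return [s / avg for s in per_image_sharpness]

def face_total_score(smiling, not_blinking, sharpness_score, centered):
    """Add per-category scores (block 410); the centering bonus stands in
    for the location/composition rules."""
    return ((10 if smiling else 0) + (10 if not_blinking else 0)
            + sharpness_score + (5 if centered else 0))

def pick_portrait_keeper(image_face_scores, face_bonus=1.5):
    """`image_face_scores` maps image index -> list of per-face total scores.
    A multiplicative factor (block 420) rewards images containing faces."""
    def image_score(faces):
        return sum(faces) * (face_bonus if faces else 1.0)
    return max(image_face_scores, key=lambda i: image_score(image_face_scores[i]))

# Example: image 1 has two high-scoring faces and wins.
scores = {0: [12.0], 1: [21.0, 18.5], 2: []}
assert pick_portrait_keeper(scores) == 1
```
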
  • When faces are not detected in the image set, the set may be classified as an action or other type of set. For a set categorized as an action set, multiple images may be selected as keeper images. This is generally desirable in an action set, as the user may like to have the images tell the story of the action. To do this, the image set captured in the burst may be divided into various sections, and a keeper image may be selected from each section. Each section of the burst may contain images related to a specific sequence of actions in the scene. For example, if the burst captured a child diving into a pool from a diving board, the first section may include pictures of the child standing on the board, the second section pictures of the child in the air, and the third section pictures of the child in the water.
  • There may be a maximum number of sections, and thus of keeper images, for an action set; in one embodiment, the maximum number may be three. The maximum number may be a preset in the image capture device, or it may be an optional setting that the user can select.
  • Referring to FIG. 5, operation 500 to pre-select keeper images in an action set begins by calculating the distance between each pair of images in the image set (block 505).
  • In one embodiment, the distance measured may be the Bhattacharyya distance between the two-dimensional color histograms calculated during processing operation 120.
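
A minimal sketch of the Bhattacharyya distance between two normalized histograms, d = -ln(Σ√(p·q)); the epsilon guard is an implementation detail added here:

```python
import numpy as np

def bhattacharyya_distance(hist_p: np.ndarray, hist_q: np.ndarray) -> float:
    """Distance between two (e.g., 2-D UV) histograms of the same shape."""
    p = hist_p.ravel() / hist_p.sum()
    q = hist_q.ravel() / hist_q.sum()
    coefficient = np.sum(np.sqrt(p * q))             # in (0, 1]; 1 means identical
    return float(-np.log(max(coefficient, 1e-12)))   # guard against log(0)
```
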
  • The calculated distances can then be used in a clustering model to divide the image set into different sections. A variety of clustering models are available for use in this approach, including connectivity models such as hierarchical clustering (e.g., single-link, complete-link), centroid models (e.g., K-means algorithms), exhaustive search, and scene change detection algorithms. These clustering models and algorithms are well known in the art and thus not described in detail here.
  • In one embodiment, a scene change detection operation may first be used to cluster the image set into different sections (block 510). If the results from this operation are acceptable (YES prong of block 515), they are used; if not, an exhaustive search operation may be used (block 520). An exhaustive search operation generally examines all the ways in which the set can be divided into a predetermined number of sections and attempts to optimize the ratio of the average distance between images within a section to the average distance between images from different sections. Based on the results of optimizing this ratio, the image set may be divided into different sections.
  • Once the set has been divided, an image from each section may be pre-selected as a keeper (block 525). This is done, in one embodiment, by comparing image quality metrics for all of the images in one section and selecting the image with the highest and/or best quality metrics. For example, sharpness and blurriness measures calculated during processing operation 120 may be examined to select the sharpest and/or least blurry image.
  • In some cases, multiple images may have the same, or nearly the same, quality metric value. In such cases, the first image in each section having the highest quality metric value may be selected. In another embodiment, the last such image in the section may be selected. In still another embodiment, of those images having the highest quality metric value, the image closest to the middle of the section may be selected. In yet another embodiment, if there are ‘N’ images having the highest quality metric value (e.g., within a specified range of values from one another), a random one of the N images may be selected.
  • Alternatively, a keeper image from each section may be selected in accordance with the approach of operation 600 in FIG. 6. Once keeper images for each of the divided sections have been selected, they may be presented to the user for review and selection (block 530). In this manner, multiple images are pre-selected as keeper images to show the various stages of an action scene in an action image set.
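
The sketch below illustrates the exhaustive-search variant (block 520) for a small burst: it tries every way of cutting the ordered set into a fixed number of sections, minimizes the within/between distance ratio described above, and then keeps the sharpest image per section. The exact ratio definition and tie handling here are assumptions.

```python
from itertools import combinations

def split_into_sections(dist, n_images, n_sections=3):
    """`dist[i][j]` is a symmetric pairwise image distance (e.g., Bhattacharyya).
    Returns the sectioning minimizing within-/between-section distance ratio."""
    def ratio(bounds):
        edges = [0, *bounds, n_images]
        sections = [range(a, b) for a, b in zip(edges, edges[1:])]
        within = [dist[i][j] for s in sections for i in s for j in s if i < j]
        between = [dist[i][j] for a, b in combinations(sections, 2)
                   for i in a for j in b]
        if not within or not between:
            return float("inf")
        return (sum(within) / len(within)) / (sum(between) / len(between))

    best = min(combinations(range(1, n_images), n_sections - 1), key=ratio)
    edges = [0, *best, n_images]
    return [list(range(a, b)) for a, b in zip(edges, edges[1:])]

def keepers(sections, sharpness):
    """Sharpest image per section (block 525); first one wins on ties."""
    return [max(s, key=lambda i: sharpness[i]) for s in sections]
```
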
  • If the burst is not categorized as a portrait or an action set, it may be classified as other. “Other” is a broad category that covers instances in which it cannot be determined why the user used the burst capture mode. It may not be possible to examine images captured in such a burst for the best faces or for action, but it is still possible to select one or more high quality images from the set as keeper images.
  • One such approach involves identifying a best image by comparing the regions of interest of the images with each other. As discussed above, the region of interest is identified during classification operation 300 (block 325).
  • In one embodiment, the region may first be expanded to cover all the blocks of the image that overlap with the region of interest (block 620). The blocks may correspond to the processing blocks of operation 120 for which quality metric values were previously calculated, so that those metrics may be examined for the region of interest in each image in the image set (block 625). The quality metrics may include, in one embodiment, sharpness measures and blurriness metrics.
  • The operation may then assign a score to each image based on the various quality metrics examined (block 630). The scores may be assigned based on a range of numbers for each quality metric and added together to get a total score for each image. A keeper image may then be selected based on the total image score (block 635); this results, in one embodiment, in selecting the image having the best quality metrics for the region of interest as the keeper image. The keeper image may then be presented to the user for review and selection (block 640).
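
A minimal sketch of this selection for the “other” category; the particular combination of sharpness minus blurriness as the per-image score is an assumption, standing in for whatever weighted sum an implementation uses.

```python
def pick_other_keeper(roi_metrics):
    """`roi_metrics` maps image index -> quality metrics measured over the
    blocks overlapping the region of interest (blocks 620-635)."""
    def score(m):
        return m["roi_sharpness"] - m["roi_blurriness"]  # illustrative weighting
    return max(roi_metrics, key=lambda i: score(roi_metrics[i]))

# Example: image 2 has the sharpest, least blurry region of interest.
metrics = {
    0: {"roi_sharpness": 4.0, "roi_blurriness": 1.5},
    1: {"roi_sharpness": 5.5, "roi_blurriness": 1.2},
    2: {"roi_sharpness": 6.1, "roi_blurriness": 0.8},
}
assert pick_other_keeper(metrics) == 2
```
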
  • Referring to FIG. 7, electronic device 700 could be, for example, a mobile telephone, personal media device, portable camera, or a tablet, notebook or desktop computer system.
  • Electronic device 700 may include processor 705, display 710, user interface 715, graphics hardware 720, device sensors 725 (e.g., proximity sensor/ambient light sensor, accelerometer and/or gyroscope), microphone 730, audio codec(s) 735, speaker(s) 740, communications circuitry 745, image capture circuit or unit 750, video codec(s) 755, memory 760, storage 765, and communications bus 770.
  • Processor 705 may execute instructions necessary to carry out or control the operation of many functions performed by device 700 (e.g., the capture and/or processing of images in accordance with FIGS. 1-6). Processor 705 may, for instance, drive display 710 and receive user input from user interface 715.
  • User interface 715 can take a variety of forms, such as a button, keypad, dial, a click wheel, keyboard, display screen and/or a touch screen. User interface 715 could, for example, be the conduit through which a user may select when to capture an image.
  • Processor 705 may be a system-on-chip, such as those found in mobile devices, and may include one or more dedicated graphics processing units (GPUs).
  • Processor 705 may be based on reduced instruction-set computer (RISC) or complex instruction-set computer (CISC) architectures or any other suitable architecture and may include one or more processing cores.
  • Graphics hardware 720 may be special purpose computational hardware for processing graphics and/or assisting processor 705 in performing computational tasks. In one embodiment, graphics hardware 720 may include one or more programmable graphics processing units (GPUs).
  • Image capture circuitry 750 may capture still and video images that may be processed to generate images and may, in accordance with this disclosure, include specialized hardware to perform some or many of the actions described herein. Output from image capture circuitry 750 may be processed (or further processed), at least in part, by video codec(s) 755 and/or processor 705 and/or graphics hardware 720, and/or a dedicated image processing unit (not shown). Images so captured may be stored in memory 760 and/or storage 765.
  • Memory 760 may include one or more different types of media used by processor 705 , graphics hardware 720 , and image capture circuitry 750 to perform device functions. For example, memory 760 may include memory cache, read-only memory (ROM), and/or random access memory (RAM).
  • Storage 765 may store media (e.g., audio, image and video files), computer program instructions or software, preference information, device profile information, and any other suitable data.
  • Storage 765 may include one or more non-transitory storage media including, for example, magnetic disks (fixed, floppy, and removable) and tape, optical media such as CD-ROMs and digital video disks (DVDs), and semiconductor memory devices such as Electrically Programmable Read-Only Memory (EPROM) and Electrically Erasable Programmable Read-Only Memory (EEPROM).
  • Memory 760 and storage 765 may be used to retain computer program instructions or code organized into one or more modules and written in any desired computer programming language. When executed by, for example, processor 705, such computer program code may implement one or more of the methods described herein.
  • Although FIGS. 1-6 have been described in the context of processing raw or unprocessed images, this is not necessary. Operations in accordance with this disclosure may be applied to processed versions of the captured images (e.g., edge maps) or sub-sampled versions of the captured images (e.g., thumbnail images).
  • In addition, some of the described operations may have their individual steps performed in an order different from that presented herein, or in conjunction with other steps. An example of the first difference would be performing actions in accordance with block 120 after one or more of the images are retained (e.g., block 255). An example of the latter would be determining quality metrics (e.g., in accordance with operation 120) as each image is captured (as implied in FIG. 2), after all images are captured, or after more than one, but fewer than all, images have been captured. More generally, if there is hardware support, some operations described in conjunction with FIGS. 1-6 may be performed in parallel.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Studio Devices (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Systems and methods for improving automatic selection of keeper images from a commonly captured set of images are described. A combination of image type identification and image quality metrics may be used to identify one or more images in the set as keeper images. Image type identification may be used to categorize the captured images into, for example, three or more categories. The categories may include portrait, action, or “other.” Depending on the category identified, the images may be analyzed differently to identify keeper images. For portrait images, an operation may be used to identify the best set of faces. For action images, the set may be divided into sections such that keeper images selected from each section tell the story of the action. For the “other” category, the images may be analyzed such that those having higher quality metrics for an identified region of interest are selected.

Description

    BACKGROUND
  • This disclosure relates generally to the field of digital photography. More particularly, but not by way of limitation, this disclosure relates to techniques for selecting an image from a series of images taken during a burst photo capture mode. As used herein, the burst photo capture mode refers generally to a camera setting which allows the user to capture multiple photographs in a short period of time. The multiple photographs are generally taken automatically after the user makes the selection and presses one button, and they are generally captured at a particular rate of speed. The rate of speed may be, for example, 10 photographs per second.
  • Typically, a user utilizes this mode for a specific reason. One such reason may be to capture an action shot, such as a child diving into a pool or blowing out birthday candles. In such instances, the user may desire to capture the event with multiple photographs that document the chronology of the event, i.e., before, during, and after a specific event. However, because of the fast rate at which the events are occurring, manually capturing the exact desired moments with individual button presses may be very difficult. Using the burst capture mode allows the user to capture a number of photographs in a short period of time and thus increase the chances that photographs of the exact desired moments are among the ones taken. In such action photo burst captures, often after the pictures are taken, the user may decide to keep several photos, e.g., to show before, during, and after an event.
  • Another reason for which the user may decide to utilize the burst capture mode is to take portrait pictures of multiple people. This may happen when the user is taking a group photograph, and desires to have all of the people in the picture smiling, not blinking, and looking at the camera with an unobstructed line-of-sight view.
  • It is also possible that there is no particular action or people in the scene, but the user would like to be able to pick from several photographs in order to find the best photo in some aesthetic sense. Capturing photos of fountains and waterfalls is one example of such circumstances.
  • Although the burst capture mode can be very useful for action scenes, scenes with multiple people, or for providing the option of choosing the best from multiple photo captures, it is generally not used frequently because it tends to fill up memory storage space quickly. Moreover, the many pictures taken during a burst have to be reviewed by the user to select one or more keeper pictures, and that can be a time-consuming and tedious task.
  • SUMMARY
  • In one embodiment a method to receive and retain a sequence of images in an image set is provided. The method includes detecting if each of the images in the sequence of images contains faces or if the scene contains action. Using the detection, the images in the set may then be classified. In one embodiment, if one or more images are detected as containing primarily faces, the images may be classified as portraits. In some embodiments, if the scene is detected as containing action, the images may be classified as action images. At least one quality metric value is then determined for each of the obtained images. The quality metric value may include, for example, sharpness values for the images. In other embodiments, the quality metric value may include blurriness metrics. After quality metric values are determined and the images are classified, one or more images are selected as keeper images from the image set. The keeper images are selected, in one embodiment, based on the classification and at least one quality metric value. After the selection has been made, the one or more keeper images may be presented to a user.
  • In another embodiment, a method to pre-select keeper images from a burst capture set of images includes determining if detected faces in the image sets are smiling or blinking. In one embodiment, a sharpness value for each face may also be calculated. In another embodiment, in order to determine if the scene contains action, a feature vector may be constructed from the images and used in a classifier. In some embodiments, if the images are classified as action, the sequence of images may be divided into two or more sections and one keeper image may be selected from each section.
  • In still another embodiment, if the image set is not detected as containing primarily faces or if action is not detected in the scene, the images may be classified as “other.” In some implementations, selecting one or more keeper images from the sequence of received images comprises identifying a region of interest in the images and selecting one or more keeper images from the sequence of images based on the at least one quality metric value for the region of interest. Identifying a region of interest may include registering each two images in the sequence of images with respect to each other, comparing the registered images with each other, and identifying a region in the registered images where the differences between the registered images are larger than a specified threshold.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 shows, in flowchart form, an image burst capture operation in accordance with one embodiment.
  • FIG. 2 shows, in flowchart form, an image processing operation in accordance with one embodiment.
  • FIG. 3 shows, in flowchart form, a burst set classification operation in accordance with another embodiment.
  • FIG. 4 shows, in flowchart form, a keeper image selection operation for a portrait burst in accordance with one embodiment.
  • FIG. 5 shows, in flowchart form, a keeper image selection operation for an action burst in accordance with one embodiment.
  • FIG. 6 shows, in flowchart form, a keeper image selection operation in accordance with one embodiment.
  • FIG. 7 shows, in block diagram form, a multi-function electronic device in accordance with one embodiment.
  • DETAILED DESCRIPTION
  • This disclosure pertains to systems, methods, and computer readable media to automatically pre-select one or more images as keeper images from multiple images taken with a burst photo capture mode. In one embodiment, a novel approach may be used to determine the reason the photographer used the burst capture mode. This may be done, for example, by analyzing the images to determine if the images contain primarily faces or if they track some action in the scene. Based on the determined reason, the burst may then be categorized as action, portrait or other.
  • After categorizing the burst, the approach may analyze the captured image set. Depending on the category selected, the approach may use different criteria to pre-select one or more images from the image set as keeper image(s). For a portrait burst, the approach may select one image with the most smiling, non-blinking faces. For an action burst, the operation may divide the image set into sections that each cover different stages of the action and select one keeper image from each of the sections. For a burst that is classified as other, the approach may identify a region of interest in the image set and select a keeper image that has higher quality metrics for the identified region of interest.
  • In one embodiment, the techniques used to pre-select the best one or more images may take advantage of some of the calculations made during normal processing of the images such that no significant post-processing time is required. This means that computations made to pre-select keeper image(s) may not be noticeable to the users, thus allowing the user to access the pre-selected images virtually instantaneously after the images are captured. In one embodiment, the calculations made during the processing of the images and the computations made to pre-select keeper images do not interfere with the burst capture frame rate. Thus not only does the user not experience any significant noticeable delay between image capture and the presentation of pre-selected keeper images, there is also no interference with the normal burst capture frame rate.
  • In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the inventive concept. As part of this description, some of this disclosure's drawings represent structures and devices in block diagram form in order to avoid obscuring the invention. In the interest of clarity, not all features of an actual implementation are described. Moreover, the language used in this disclosure has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter. Reference in this disclosure to “one embodiment” or to “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention, and multiple references to “one embodiment” or “an embodiment” should not be understood as necessarily all referring to the same embodiment.
  • It will be appreciated that in the development of any actual implementation (as in any development project), numerous decisions must be made to achieve the developers' specific goals (e.g., compliance with system and business related constraints), and that these goals may vary from one implementation to another. It will also be appreciated that such development efforts might be complex and time-consuming, but would nevertheless be a routine undertaking for those of ordinary skill in the design and implementation of image processing systems having the benefit of this disclosure.
  • One novel approach to pre-selecting keeper images from an image set taken during a burst capture mode is to first capture and process the images. Referring to FIG. 1, in one embodiment according to this approach, operation 100 begins when a burst capture operation is activated (block 105). This may be done, in one embodiment, by setting the camera mode to burst capture and pressing an image capture button. Upon activating the burst capture mode, the camera may start taking multiple photographs (block 110) and receiving image data for each photograph taken (block 115).
  • As used herein, the term “camera” refers to any electronic device that includes or incorporates digital image capture functionality. This includes, by way of example, stand-alone cameras (e.g., digital SLR cameras and ‘point-and-click’ cameras) as well as other electronic devices having embedded camera capabilities. Examples of this latter type include, but are not limited to, mobile phones, tablet and notebook computer systems, and digital media player devices.
  • The photographs are generally taken in a short period of time at a particular rate of speed. The number of pictures taken in a burst can vary in different embodiments. In one embodiment, the user may hold down the image capture button until finished taking pictures. The number of pictures taken, in such an embodiment, may vary depending on the image capture rate. The capture rate may be, for example, 6, 8, or 10 pictures per second. In one embodiment, the user may be able to select the rate of capture. There also may be a maximum number of pictures that can be taken during each burst capture; for example, the maximum number may be 999, though other numbers are also possible. In one embodiment, the user may have the ability to select the number of pictures taken from a range of options available. For example, the user may be able to choose between 100, 200 or 500 photographs. In one embodiment, special image buffer memory may be used to retain the captured images. In another embodiment, general purpose memory may be used.
  • As image data is received for each photograph, the data may be processed as it is received (block 120). This occurs, in one embodiment, in real time such that the user does not notice any significant delay between capturing the images and viewing them. In general, only a limited amount of time may be available for processing the images. For example, in an embodiment in which 10 images are captured during the burst capture at a rate of 10 images per second, there may be 100 milliseconds available to receive and process each image and to conduct an analysis to pre-select keeper images. Most of the processing time is generally needed for encoding, storing the image, and maintaining, in one embodiment, an interactive user interface that shows burst capture progress. Thus, the time available for performing an analysis to pre-select keeper images may be very limited. In one embodiment, the real-time data collection and processing does not take more than 35-55% of the total amount of time available. For a burst being captured at a rate of 10 images per second, that translates to 35-55 milliseconds for data collection, processing and analysis. The embodiments described in this specification are generally able to meet these time constraints.
  • Referring to FIG. 2, processing each image received in operation 120 (block 120 of FIG. 1) may begin by dividing the image into smaller regions (e.g., blocks, tiles or bands) (block 200) to make the multiple calculations performed on the image faster and more efficient. In one embodiment, the blocks are 32×32 pixels. In another embodiment, the blocks are 16×16 pixels. Other variations are also possible. Alternatively, the entirety of the image may be selected as one region. In one embodiment, to make processing more efficient, the image may also be scaled down, as is well known in the art.
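  • By way of illustration, a minimal Python sketch of this tiling step, assuming the image is held as a NumPy array; the default block size and the helper name are illustrative choices, not values taken from the disclosure.

```python
import numpy as np

def tile_image(image: np.ndarray, block: int = 32):
    """Yield (row, col, tile) for non-overlapping block x block tiles.

    Edge pixels that do not fill a complete tile are ignored here for
    simplicity; a real implementation might pad or scale the image first.
    """
    h, w = image.shape[:2]
    for r in range(0, h - h % block, block):
        for c in range(0, w - w % block, block):
            yield r, c, image[r:r + block, c:c + block]
```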
  • After the image has been divided into smaller blocks, the blocks may be processed to determine image quality metrics in accordance with image content and/or motion sensor data (e.g., gyro and accelerometer sensors). These techniques may be used separately, or combined together, depending on the particular use case and/or system resources. In one particular embodiment, output from a camera's AF and/or AE systems may be used to generate a quality metric during normal lighting conditions while the camera's motion sensor(s) may be used during low light conditions. Quality metrics may be associated with each image directly (e.g., stored with the image as metadata) or indirectly (e.g., through a separate index or database file).
  • In one embodiment, the first step in processing the image and determining quality metrics may involve creating a color histogram of the image in the UV color space (block 205). In one embodiment, the color histogram may be a two-dimensional histogram with the U-value as one dimension and the V-value as the other. The UV color space may be divided into multiple regions, having Ui and Vi as the dimensions for the ith region. For example, in one embodiment, U1 may contain any U-value between 0 and 7. If a color is found which falls within Ui and Vi, a “bin” corresponding to (Ui, Vi) may be incremented. The sizes of the bins may be uniform, or they may be adjusted so that regions where color combinations are more common are represented by more bins. This may make the distribution of counts in the bins more informative. This means, for example, that because colors near the center are more common, more bins may be placed near the center by making the regions small (e.g., having fewer colors in each dimension). Away from the center, the regions may be made larger by having more colors in each dimension. This process may be referred to as using center-weighted bins. After the color histogram has been created, a quality measure indicative of the image's sharpness may be calculated (block 210). Sharpness measures may be obtained or determined from, for example, a camera's auto-focus (AF) and/or auto-exposure (AE) systems. In one embodiment, sharpness measures may be determined by calculating the sum of adjacent pixel differences. Other methods of determining sharpness are also possible. For the purposes of this disclosure, the sharper an image is judged to be, the higher its corresponding rank (e.g., quality metric value).
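  • As a rough illustration of blocks 205 and 210, the following sketch builds a 2-D UV histogram over non-uniform bin edges (denser near the center of the UV plane) and computes a sum-of-adjacent-pixel-differences sharpness measure; the edge values and function names are assumptions made for illustration only.

```python
import numpy as np

# Non-uniform bin edges: denser near the center of the 0-255 UV range,
# coarser toward the extremes (illustrative values).
UV_EDGES = np.array([0, 64, 96, 112, 120, 128, 136, 144, 160, 192, 256])

def uv_histogram(u: np.ndarray, v: np.ndarray) -> np.ndarray:
    """2-D color histogram over the UV plane with center-weighted bins."""
    hist, _, _ = np.histogram2d(u.ravel(), v.ravel(), bins=[UV_EDGES, UV_EDGES])
    return hist

def sharpness(y: np.ndarray) -> float:
    """Sum of absolute adjacent-pixel differences over the luma channel."""
    return float(np.abs(np.diff(y, axis=0)).sum() +
                 np.abs(np.diff(y, axis=1)).sum())
```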
  • After determining one or more sharpness measures, a wavelet transform may be calculated for each block (block 215) to compress the image, thus making further calculations more efficient. In one embodiment, the wavelet transform may be a Haar transform. Calculating a Haar wavelet transform is well known in the art and thus not discussed here. After calculating wavelet transforms, the amount of blur present in the image may be derived (block 220). In one embodiment, the amount of blur is derived from the calculated wavelet transforms. Other approaches are also possible. One approach to determining the amount of blur present in an image is discussed in U.S. patent application Ser. No. 13/911,873, entitled “Reference Frame Selection for Still Image Stabilization,” incorporated herein by reference in its entirety.
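  • A single level of a 2-D Haar transform can be written compactly; this sketch, and the detail-energy blur proxy that follows it, are illustrative stand-ins and not the blur measure of the referenced application.

```python
import numpy as np

def haar2d(block: np.ndarray) -> np.ndarray:
    """One level of a 2-D Haar transform on an even-sized block."""
    rows = np.vstack([(block[::2] + block[1::2]) / 2.0,   # row averages
                      (block[::2] - block[1::2]) / 2.0])  # row details
    return np.hstack([(rows[:, ::2] + rows[:, 1::2]) / 2.0,   # column averages
                      (rows[:, ::2] - rows[:, 1::2]) / 2.0])  # column details

def blur_proxy(block: np.ndarray) -> float:
    """Higher when detail (high-frequency) energy is low, i.e., blurrier."""
    coeffs = haar2d(block.astype(float))
    h, w = coeffs.shape
    detail = np.abs(coeffs).sum() - np.abs(coeffs[:h // 2, :w // 2]).sum()
    return 1.0 / (1.0 + detail)
```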
  • In one embodiment, after the amount of blur present in the image has been calculated, a determination is made as to whether the image is too blurry to use (block 225). This is done, in one embodiment, by comparing the amount of blur present in the image with a predetermined threshold. If the amount of blur present in the image is above a predetermined threshold or, in some embodiments, if another calculated quality measure is below a different threshold value, the image may be determined to be too blurry. Threshold values may be static or predetermined (obtained, for example, from program memory during camera start-up) or dynamic (determined, for example, based on image statistics). In one embodiment, if one or more of the quality measures of the image is significantly smaller than the maximum quality metric value of the image set, the image may be regarded as too blurry to use. In another implementation, if the quality metric of an image is smaller than the maximum quality metric value of the image set multiplied by a ratio (e.g., a ratio of between 0.6 and 0.9), the image may be regarded as too blurry to use.
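  • The ratio test from the preceding paragraph might look like this; the 0.75 default is simply a point inside the 0.6-0.9 range mentioned above.

```python
def too_blurry(quality_metric: float, set_max_metric: float,
               ratio: float = 0.75) -> bool:
    """Reject an image whose quality metric falls below a fraction
    of the best metric in the burst set."""
    return quality_metric < ratio * set_max_metric
```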
  • Notwithstanding the approach by which blurriness is determined, if the image is determined to be too blurry (YES prong of block 225), the image may be discarded or otherwise removed from further consideration and a check may be made to determine if at least one more received image remains to be processed (block 260). If the image is not too blurry to use (NO prong of block 225), two one-dimensional signatures may be calculated (block 230) for the image. The signatures may be functions of the vertical and horizontal projections of the image. In one embodiment, the signatures are vertical and horizontal sums of pixel values.
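  • In NumPy terms, the two projections reduce to one-line sums; a minimal sketch, assuming y is the luma plane:

```python
import numpy as np

def signatures(y: np.ndarray):
    """Horizontal and vertical projections: per-column and per-row sums."""
    return y.sum(axis=0), y.sum(axis=1)
```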
  • The next step in processing the image, in some implementations, may be determining whether or not the image contains faces. Face recognition techniques are well known in the art and thus not discussed in this specification. Using a face recognition algorithm, the operation may detect if there are faces in the image (block 235). If no faces are detected in the image (NO prong of block 235), the image may be retained (block 255), whereafter a check can be made to determine if all of the images from the set have been received (block 260) and, if so, the operation may continue to block 305 of FIG. 3 to classify the image set. If at least one more image remains to be received (YES prong of block 260), however, the operation may obtain the next image and continue to block 200 to process the next image.
  • If one or more faces are detected (YES prong of block 235) by the face recognition algorithm, the operation may move to block 240 to determine the size and location of each face. The location of each face may refer to the blocks of pixels that make up the face on the image, and the size may refer to the size of that block. For each of the detected faces, the operation may also determine if the face is smiling and if the eyes are open or blinking (block 245). Once face detection and analysis has been performed, a sharpness value may be calculated for each of the faces detected in the image (block 250). As discussed above, there are a variety of known procedures for calculating image sharpness values. Using one of these known procedures, the operation may calculate a separate sharpness value over each block of pixels detected as representing a face. After the face sharpness values are calculated, the operation moves to block 255 to retain the image along with its processing data and continues to block 260 to determine if there are more images in the image set to process. If there are more images, the operation moves back to block 200 to repeat the process for the next image. If, however, there are no other images in the image set, the operation moves to block 305 of FIG. 3 to classify the image set.
  • In some embodiments, after all the images have been received and processed, before continuing to classify the image set, a ratio between the sharpness metric value of the sharpest image (i.e., identified in accordance with block 210) and that of each of the other captured images may be determined. Those images for which this ratio is less than some specified value could be eliminated from further consideration as irrelevant. That is, only those images having a ratio value greater than a specified threshold would be considered for pre-selecting keeper images. One of ordinary skill in the art will recognize the selected threshold may be task or goal dependent and could vary from implementation to implementation. This is done to eliminate images that are of low quality and are not likely to be selected as keepers. Eliminating unwanted images can increase efficiency and speed up processing time. In other embodiments, images may be compared to each other to determine if there are images that are too similar to each other. If two such images are found, one may be eliminated from the set. This can also result in increased efficiency.
  • Operation 300, to classify the image set captured in the burst, begins by determining if the images contain primarily faces (block 305). This can be done, in one embodiment, by analyzing the data collected during the processing operation 120. If faces were detected during operation 120, the operation also calculated the size of each face in the images, as discussed above with respect to FIG. 2. In one embodiment, the sizes of the faces in an image may be added together for each image to calculate a total face size for that image. The total face size may then be compared to the total size of the image. If the total face size is above a certain threshold relative to the total size of the image, then the operation may determine that particular image contains primarily faces. If the total face size is below the threshold, the operation may decide that the image does not primarily contain faces. In one embodiment, the threshold value is 75% such that if the total face size is below 75% of the total image size, the image is considered as not containing primarily faces. It should be noted that other threshold values are also possible. Other approaches for determining if the images in the set contain primarily faces can also be used.
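  • Expressed as a sketch (the threshold default and the data layout are assumed for illustration):

```python
def contains_primarily_faces(face_boxes, image_width, image_height,
                             threshold=0.75):
    """face_boxes: iterable of (x, y, w, h) rectangles for detected faces."""
    total_face = sum(w * h for (_x, _y, w, h) in face_boxes)
    return total_face / float(image_width * image_height) >= threshold
```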
  • In one embodiment, if the majority of the images in the image set contain primarily faces, then operation 300 may categorize the image set as a portrait set (block 310). In other embodiments, if 50% or more of the images in the set contain primarily faces, the set is categorized as a portrait set. Other configurations are also possible. When the burst is classified as a portrait, the operation moves to block 405 in FIG. 4 (operation 400) to pre-select a keeper image in a portrait image set.
  • If the image set is determined to not contain primarily faces (NO prong of block 305), then a region of interest may be identified in the images. This may be done, in one embodiment, by first registering each pair of images with respect to each other (block 315). There are a variety of well-known methods for registering images with respect to each other. U.S. patent application Ser. No. 13/911,793, entitled “Image Registration Methods for Still Image Stabilization,” incorporated herein by reference, describes a few such methods.
  • In one embodiment, the registration may be performed by aligning the two signatures computed during processing of the images (see FIG. 2, block 230). After the two images have been registered, the registered images may be compared with each other to determine an area of the images where there is a large difference between them (block 320). The difference between the registered images may be referred to as registration error. In the embodiment where registration is done by aligning the vertical and horizontal signatures, the comparison may occur by examining the differences between the registered vertical signatures and the registered horizontal signatures. If there is a large difference between these numbers, it is likely that a moving subject (i.e., local motion) was present in that region of the images. That is because the background of an image generally dominates the number of pixels in the image. As a result, registration is likely to align the background of one image with respect to the other, such that there generally is no significant difference between the backgrounds in the registered images. When there is local motion due to, for example, motion of a foreground object, however, the difference between the images may be larger. Thus, registering the images with respect to one another and comparing the registered images with each other may identify local motion between the images. The area containing local motion may be identified as the region of interest (block 325). For example, in the embodiment using vertical and horizontal signatures, if the vertical signatures show a large difference between columns x1 and x2 and the horizontal signatures show a large difference between rows y1 and y2, the region of interest may be identified as the rectangle spanning (x1, y1) and (x2, y2).
  • In one embodiment, the region of interest may be selected as a region in the images for which the registration error (i.e., the difference between the two registered images) is larger than a specified threshold. It will be understood that other procedures for identifying the region of interest are also possible. If no local motion can be identified (i.e., the difference between the registered images is small), then the entire image may be identified as the region of interest.
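  • One way this could look in code, assuming the projections from block 230; the shift search range, threshold handling, and return convention are assumptions made for the sketch:

```python
import numpy as np

def align(sig_a: np.ndarray, sig_b: np.ndarray, max_shift: int = 32):
    """Return the shift of sig_b that minimizes mean L1 error vs. sig_a."""
    best_shift, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        a = sig_a[max(0, s):len(sig_a) + min(0, s)]
        b = sig_b[max(0, -s):len(sig_b) + min(0, -s)]
        err = float(np.abs(a - b).mean())
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift, best_err

def region_of_interest(col_resid: np.ndarray, row_resid: np.ndarray,
                       thresh: float):
    """Bound the columns/rows whose post-alignment residual exceeds thresh."""
    xs = np.where(col_resid > thresh)[0]
    ys = np.where(row_resid > thresh)[0]
    if xs.size == 0 or ys.size == 0:
        return None  # no local motion: caller uses the whole image
    return (xs[0], ys[0]), (xs[-1], ys[-1])  # (x1, y1), (x2, y2)
```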
  • Once the registration error is determined and a region of interest identified, a feature vector may be constructed from multiple data values computed so far during the processing of the images (block 330). Each value may be considered a feature which, when combined with the others, forms a vector of values referred to as the feature vector. In one embodiment, one of the values used to form the feature vector may be derived from the computed color histograms. The color histograms show how similar or different the images are to each other. Thus, if the color histograms show that the images are markedly different, it is likely that the scene contained some action. Another value that may be used in forming the feature vector is how large the registration errors are, either in absolute value or with respect to each other. Other values that may be used are the L1 error of the Y channel between the images at the start and end of the burst and the average of the Euclidean norm of the registration translation between pairs of images (which may be a reasonable proxy for camera motion). Other types of values may also be used to construct the feature vector.
  • Once a feature vector is constructed, the information from the feature vector may be input into a classifier (block 340), such as a Support Vector Machine (SVM), an artificial neural network (ANN) or a Bayesian classifier, to determine if the scene captured in the image set contains action. In one embodiment, prior to automated use, the classifier is trained with a set of training feature vectors already classified by hand. The classifier may return a binary decision indicating whether or not the images contain action (block 345). If the decision indicates that the images contained action, the burst may be classified as an action burst (block 350) and the operation may continue to block 505 of operation 500 (FIG. 5) to pre-select keeper images in an action image set. If the classifier decision indicates that the images did not contain (enough) action, then the set may be classified as other (block 355) and the operation may continue to block 605 of operation 600 in FIG. 6 to determine the best image(s) in a set categorized as other.
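  • For concreteness, a minimal scikit-learn version of the train-then-classify flow; the feature dimensionality and the training data here are placeholders, not the disclosure's actual model or features.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical hand-labeled training set: one 8-dimensional feature
# vector per burst, label 1 = action, 0 = no action.
rng = np.random.default_rng(0)
X_train = rng.random((100, 8))
y_train = rng.integers(0, 2, size=100)

clf = SVC(kernel="rbf").fit(X_train, y_train)

feature_vector = rng.random((1, 8))          # stand-in for block 330 output
is_action = bool(clf.predict(feature_vector)[0])
```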
  • Referring to FIG. 4, in one embodiment, operation 400 for pre-selecting keeper images in an image set classified as a portrait set begins by calculating a sharpness score for each face in each image in the set (block 405). As discussed above, sharpness values for each face are generally calculated during processing operation 120 for each image. By normalizing those sharpness values, sharpness scores may be calculated for each face. Sharpness values are normalized over all the images in the set by tracking each face as one subject across the image set. This may be done by first calculating an average sharpness value for each face across all the images in the set. The average sharpness value, in one embodiment, may be the sum of image gradients calculated over the eyes for the particular face across all the images in the set. Other ways of obtaining the average sharpness value are also possible. For example, the sharpness values for the face in each of the images in the set may be averaged to obtain the average sharpness value. Once the average sharpness value for each face is calculated, the sharpness value for the face in each image may be divided by the average sharpness value for that face to obtain a sharpness score for the respective face.
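  • As a sketch of the per-face normalization (the data layout is assumed):

```python
def face_sharpness_scores(face_sharpness):
    """face_sharpness: {image_id: raw sharpness of one tracked face}.
    Returns {image_id: sharpness score normalized by the face's mean}."""
    avg = sum(face_sharpness.values()) / len(face_sharpness)
    return {img: s / avg for img, s in face_sharpness.items()}
```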
  • Once a sharpness score has been calculated for each face, a total score may be calculated for the face (block 410). The total score may be calculated by analyzing various categories of data collected during the processing of the images. Each category of data may be assigned a particular range of scores. For example, scores may be assigned for smiling faces and for non-blinking faces. In one embodiment, each category of data has a range of numbers available as options for scores for that category. A higher score may signify a better quality image. For example, data indicating that a face is smiling may result in a score of 10, while a non-smiling face may result in a score of zero. A non-blinking face may also receive a score of 10, while a blinking face may receive a score of zero. The calculated sharpness score is another category that may be taken into account for the total score. Other categories of data that may contribute to the total score include the location of each face, e.g., whether or not the face is close to the edges of the image, and the area of the image occupied by the face. For example, a face close to the edges of the image may receive a lower score, while one closer to the middle may receive a higher score. In one embodiment, rules of photographic composition, such as the rule of thirds, may be used to establish a preference for where faces should be located. The rule of thirds is well known in the art. Scores for each of these categories may be assigned and then normalized before being added together to calculate the total score for a face. Once total scores for all of the faces in an image have been calculated, the total face scores may be added together to obtain a score for the image (block 415).
  • A multiplicative factor may then be applied to each image score (block 420). The multiplicative factor may be selected such that it makes the image score higher for images with faces. This results in a built-in preference for images with faces. Thus, if there are images in a set that do not contain any faces, they are less likely to be selected as keeper images. This is advantageous for an image set categorized as a portrait, as images without faces should not be selected as keepers for such a set. Once the multiplicative factor has been applied to all the image scores, the image with the highest score may be selected as the keeper image for the burst (block 425) and may be presented to the user as such (block 430).
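  • Putting blocks 410-425 together as a sketch; the individual weights (the 10-point categories and the 1.5x face bonus) are illustrative assumptions, not values given in the disclosure.

```python
def face_total_score(face):
    """face: dict of per-face data computed upstream (assumed layout)."""
    score = 10.0 if face["smiling"] else 0.0
    score += 10.0 if not face["blinking"] else 0.0
    score += face["sharpness_score"]       # normalized, from block 405
    score += face["position_score"]        # e.g., rule-of-thirds placement
    return score

def portrait_keeper(images):
    """images: list of (image_id, [face dicts]); returns the best image_id."""
    def image_score(faces):
        base = sum(face_total_score(f) for f in faces)
        return base * (1.5 if faces else 1.0)   # built-in face preference
    return max(images, key=lambda item: image_score(item[1]))[0]
```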
  • When faces are not detected in the image set, the set may be classified as an action or other type of set. For a set categorized as an action set, multiple images may be selected as keeper images. This is generally desirable in an action set, as the user may like to have the images tell the story of the action. To do this, the image set captured in the burst may be divided into various sections, and a keeper image may be selected from each section. Each section of the burst may contain images related to a specific sequence of actions in the scene. For example, if the burst captured was of a child diving into a pool from a diving board, the first section may include pictures of the child standing on the board, the second section may include pictures of the child in the air, and the third section may include pictures of the child in the water. In one embodiment, there is a maximum number of sections an image set may be divided into. For example, in an image set containing ten images, the maximum number may be three. The maximum number may be a preset in the image capture device or it may be an optional setting that the user can select.
  • Referring to FIG. 5, in one embodiment, operation 500 to pre-select keeper images in an action set begins by calculating the distance between each pair of images in the image set (block 505). In one embodiment, the distance measured may be the Bhattacharyya distance between the two-dimensional color histograms calculated during the processing operation 120. The calculated distances can then be used in a clustering model to divide the image set into different sections. Various clustering models are available for use in this approach. These include connectivity models such as hierarchical clustering (e.g., single-link, complete-link), centroid models (e.g., K-means algorithms), exhaustive search, and scene change detection algorithms. These clustering models and algorithms are well known in the art and thus not described in detail here.
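  • The Bhattacharyya distance between two normalized histograms has a direct NumPy expression; a minimal sketch:

```python
import numpy as np

def bhattacharyya(h1: np.ndarray, h2: np.ndarray) -> float:
    """Bhattacharyya distance between two 2-D color histograms."""
    p = h1 / h1.sum()
    q = h2 / h2.sum()
    bc = np.sqrt(p * q).sum()               # Bhattacharyya coefficient in [0, 1]
    return float(-np.log(max(bc, 1e-12)))   # clamp to avoid log(0)
```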
  • In one embodiment, a scene change detection operation may first be used to cluster the image set into different sections (block 510). If the results from this operation are acceptable (YES prong of block 515), they are used. However, if the results are not acceptable (NO prong of block 515), an exhaustive search operation may be used (block 520). An exhaustive search operation generally examines all the ways in which the set can be divided into a predetermined number of sections. The operation then attempts to optimize the ratio of the average distance between images within a section to the average distance between images from different sections. Based on the results of optimizing this ratio, the image set may be divided into different sections.
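  • A brute-force version of that search, minimizing the within-section to between-section distance ratio (the section count and tie handling are assumptions):

```python
from itertools import combinations
import numpy as np

def best_sections(dist: np.ndarray, n_sections: int = 3):
    """dist: symmetric matrix of pairwise image distances.
    Returns section boundaries (0, c1, ..., n) minimizing the ratio of
    mean within-section distance to mean between-section distance."""
    n = len(dist)
    best_bounds, best_ratio = None, np.inf
    for cuts in combinations(range(1, n), n_sections - 1):
        bounds = (0, *cuts, n)
        within, between = [], []
        for i in range(n):
            for j in range(i + 1, n):
                same = any(lo <= i < hi and lo <= j < hi
                           for lo, hi in zip(bounds, bounds[1:]))
                (within if same else between).append(dist[i, j])
        ratio = (np.mean(within) if within else 0.0) / max(np.mean(between), 1e-12)
        if ratio < best_ratio:
            best_bounds, best_ratio = bounds, ratio
    return best_bounds
```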
  • Once the set has been divided into different sections, an image from each section may be pre-selected as a keeper (block 525). This is done, in one embodiment, by comparing image quality metrics for all of the images in one section and selecting the image with the highest and/or best quality metrics. For example, sharpness and blurriness measures calculated during the processing operation 120 may be examined to select the sharpest and/or least blurry image. In practice, multiple images may have the same, or nearly the same, quality metric value. In such cases, the first image in each section having the highest quality metric value may be selected. In another embodiment, the last such image in the section may be selected. In still another embodiment, of those images having the highest quality metric value, the image closest to the middle of the image section may be selected. In yet another embodiment, if there are ‘N’ images having the highest quality metric value (e.g., images within a specified range of values from one another), a random one of the N images may be selected.
  • In one embodiment, a keeper image from each section may be selected in accordance with the approach of operation 600 in FIG. 6. Once keeper images for each of the divided sections have been selected, they may be presented to the user for review and selection (block 530). In this manner, multiple images are pre-selected as keeper images to show various stages of an action scene in an action image set.
  • Referring back to FIG. 3, if the burst is not categorized as a portrait or an action set, it may be classified as other. Other is a broad category that covers instances in which it cannot be determined why the user used the burst capture mode. It may not be possible to examine images captured in such a burst for the best faces or for action, but it is still possible to select one or more high quality images in the set as keeper images. One such approach involves identifying a best image by comparing the regions of interest of the images with each other. As discussed above, the region of interest is identified during the classification operation 300 (block 325).
  • To properly compare the regions of interest in the images with each other, the region may first be expanded to cover all the blocks of the image that overlap with the region of interest (block 620). The blocks may correspond with the processing blocks of operation 120 for which quality metric values were previously calculated, so that those metrics may be examined for the region of interest in each image in the image set (block 625). The quality metrics may include, in one embodiment, sharpness measures and blurriness metrics. After examining the quality metrics of the regions of interest for all of the images in the set, the operation may assign a score to each image based on the various quality metrics examined (block 630). The scores may be assigned based on a range of numbers for each quality metric and added together to get a total score for each image. A keeper image may then be selected based on the total image score (block 635). This results, in one embodiment, in selecting the image having the best quality metrics for the region of interest as the keeper image. The keeper image may then be presented to the user for review and selection (block 640).
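  • A sketch of the block-snapping in block 620, using the same tile grid assumed earlier:

```python
def expand_roi(x1: int, y1: int, x2: int, y2: int, block: int = 32):
    """Snap an ROI outward to the processing-tile grid (block 620)."""
    return (x1 // block * block,          # left edge rounded down
            y1 // block * block,          # top edge rounded down
            -(-x2 // block) * block,      # right edge rounded up (ceiling)
            -(-y2 // block) * block)      # bottom edge rounded up (ceiling)
```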
  • Referring to FIG. 7, a simplified functional block diagram of illustrative electronic device 700 is shown according to one embodiment. Electronic device 700 could be, for example, a mobile telephone, personal media device, portable camera, or a tablet, notebook or desktop computer system. As shown, electronic device 700 may include processor 705, display 710, user interface 715, graphics hardware 720, device sensors 725 (e.g., proximity sensor/ambient light sensor, accelerometer and/or gyroscope), microphone 730, audio codec(s) 735, speaker(s) 740, communications circuitry 745, image capture circuit or unit 750, video codec(s) 755, memory 760, storage 765, and communications bus 770.
  • Processor 705 may execute instructions necessary to carry out or control the operation of many functions performed by device 700 (e.g., the capture and/or processing of images in accordance with FIGS. 1-6). Processor 705 may, for instance, drive display 710 and receive user input from user interface 715. User interface 715 can take a variety of forms, such as a button, keypad, dial, click wheel, keyboard, display screen and/or touch screen. User interface 715 could, for example, be the conduit through which a user may select when to capture an image. Processor 705 may be a system-on-chip, such as those found in mobile devices, and may include one or more dedicated graphics processing units (GPUs). Processor 705 may be based on reduced instruction-set computer (RISC) or complex instruction-set computer (CISC) architectures or any other suitable architecture and may include one or more processing cores. Graphics hardware 720 may be special purpose computational hardware for processing graphics and/or assisting processor 705 in performing computational tasks. In one embodiment, graphics hardware 720 may include one or more programmable graphics processing units (GPUs).
  • Image capture circuitry 750 may capture still and video images that may be processed to generate images and may, in accordance with this disclosure, include specialized hardware to perform some or many of the actions described herein. Output from image capture circuitry 750 may be processed (or further processed), at least in part, by video codec(s) 755 and/or processor 705 and/or graphics hardware 720, and/or a dedicated image processing unit (not shown). Images so captured may be stored in memory 760 and/or storage 765. Memory 760 may include one or more different types of media used by processor 705, graphics hardware 720, and image capture circuitry 750 to perform device functions. For example, memory 760 may include memory cache, read-only memory (ROM), and/or random access memory (RAM). Storage 765 may store media (e.g., audio, image and video files), computer program instructions or software, preference information, device profile information, and any other suitable data. Storage 765 may include one or more non-transitory storage media including, for example, magnetic disks (fixed, floppy, and removable) and tape, optical media such as CD-ROMs and digital video disks (DVDs), and semiconductor memory devices such as Electrically Programmable Read-Only Memory (EPROM) and Electrically Erasable Programmable Read-Only Memory (EEPROM). Memory 760 and storage 765 may be used to retain computer program instructions or code organized into one or more modules and written in any desired computer programming language. When executed by, for example, processor 705, such computer program code may implement one or more of the methods described herein.
  • It is to be understood that the above description is intended to be illustrative, and not restrictive. The material has been presented to enable any person skilled in the art to make and use the inventive concepts as claimed and is provided in the context of particular embodiments, variations of which will be readily apparent to those skilled in the art (e.g., some of the disclosed embodiments may be used in combination with each other). For example, while FIGS. 1-6 have been described in the context of processing raw or unprocessed images, this is not necessary. Operations in accordance with this disclosure may be applied to processed versions of the captured images (e.g., edge maps) or sub-sampled versions of the captured images (e.g., thumbnail images). In addition, some of the described operations may have their individual steps performed in an order different from that presented herein, or in conjunction with other steps. An example of the first difference would be performing actions in accordance with block 120 after one or more of the images are retained (e.g., block 255). An example of the latter difference would be the determination of quality metrics, e.g., in accordance with operation 120, as each image is captured (as implied in FIG. 2), after all images are captured, or after more than one, but fewer than all, images have been captured. More generally, if there is hardware support, some operations described in conjunction with FIGS. 1-6 may be performed in parallel.
  • In light of the above examples, the scope of the invention therefore should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.”

Claims (20)

1. A non-transitory program storage device, readable by a programmable control device and comprising instructions stored thereon to cause one or more programmable control devices to:
obtain a temporal sequence of images of a scene;
detect, by an image processor, if each of the images contains primarily faces by calculating a total face size for each image and comparing the total face size of the image to a total size of the image;
process, by the image processor, each of the obtained images to obtain for each image at least one quality metric value;
select one or more images from the sequence of images as keeper images, wherein the selection is made at least in part based on whether each image contains primarily faces and on the at least one quality metric value for the image; and
retain the one or more keeper images in a memory.
2. The non-transitory program storage device of claim 1, wherein the instructions to cause the one or more programmable control devices to process comprise instructions to cause the one or more programmable control devices to determine, for each of the obtained images, a value based on at least a portion of the image, wherein the value is indicative of the image's sharpness.
3. The non-transitory program storage device of claim 2, wherein the value indicative of an image's sharpness is based on output from at least one of an auto-focus system and an auto-exposure system.
4. The non-transitory program storage device of claim 1, wherein the instructions to cause the one or more programmable control devices to process comprise instructions to cause the one or more programmable control devices to determine, for each of the obtained images, a blur value.
5. The non-transitory program storage device of claim 1, wherein the instructions further cause the one or more programmable control devices to:
detect, by the processor, if the scene in each image contains action when the image is detected as not containing primarily faces; and
classify each image based on the detection of primarily faces or action.
6. The non-transitory program storage device of claim 1, wherein the instructions to cause the one or more programmable control devices to detect further comprise instructions to cause the one or more programmable control devices to determine if each of the detected faces is smiling, when at least one face is detected.
7. The non-transitory program storage device of claim 5, wherein the instructions to cause the one or more programmable control devices to classify comprise instructions to cause the one or more programmable control devices to classify the images as portraits when one or more images in the image sequence contain primarily faces.
8. The non-transitory program storage device of claim 7, wherein when the images are classified as action, the instructions to cause the one or more programmable control devices to select one or more images comprise instructions to cause the one or more programmable control devices to:
divide the sequence of images into two or more sections; and
select a keeper image from each of the two or more sections based on the at least one quality metric value.
9. The non-transitory program storage device of claim 5, wherein the instructions to cause the one or more programmable control devices to select one or more images comprise instructions to cause the one or more programmable control devices to:
identify a region of interest in the one or more images;
obtain at least one quality metric for the region of interest; and
select one or more images from the sequence of images based at least on one or more quality metric values for the region of interest.
10. A digital image capture device, comprising:
a memory;
a display communicatively coupled to the memory; and
one or more processors communicatively coupled to the memory and display and configured to execute instructions stored in the memory comprising:
obtaining a temporal sequence of images of a scene;
detecting, by an image processor, if each of the images in the sequence of images contains primarily faces by calculating a total face size for each image and comparing the total face size of the image to a total size of the image;
detecting, by the image processor, if each image contains action when the image is detected as not containing primarily faces;
classifying each of the images based on the detection;
processing, by the image processor, each of the obtained images to obtain for each image at least one quality metric value;
selecting one or more images from the sequence of images as keeper images, wherein the selection is made at least in part based on the classification and on the at least one quality metric value; and
retaining the one or more keeper images in the memory.
11. The digital image capture device of claim 10, wherein detecting if each of the images in the sequence of images contains action comprises constructing a feature vector from each of the images and applying the feature vectors to a classifier.
12. The digital image capture device of claim 10, wherein classifying comprises classifying the images as other if the images are not detected to contain primarily faces and if the scene is not detected to contain action.
13. The digital image capture device of claim 12, wherein when the images are classified as other, selecting one or more images from the sequence of images as keeper images comprises:
identifying a region of interest in the images;
expanding the region of interest to include one or more blocks that overlap the region of interest;
determining a quality metric value for the expanded region of interest; and
selecting one or more images from the sequence of images based on the at least one quality metric value for the expanded region of interest.
14. The digital image capture device of claim 13, wherein identifying a region of interest comprises:
registering each two images in the sequence of images with respect to each other;
comparing the registered images with each other; and
identifying a region in the registered images where the difference between the registered images is larger than a specified threshold.
15. A method comprising:
obtaining a temporal sequence of images of a scene;
detecting, by an image processor, if each of the images in the sequence of images contains primarily faces by calculating a total face size for each image and comparing the total face size of the image to a total size of the image;
detecting, by the image processor, if the scene in each image contains action when the image is detected as not containing primarily faces;
classifying the images based on the detection;
processing, by the image processor, each of the obtained images to obtain for each image at least one quality metric value;
selecting one or more images from the sequence of images as keeper images, wherein the selection is made at least in part based on the classification and on the at least one quality metric value; and
retaining the one or more keeper images in a memory.
16. The method of claim 15, wherein classifying the images based on the detection comprises classifying the images as portraits when one or more images in the sequence of images contains primarily faces.
17. The method of claim 15, wherein detecting if each image in the sequence of images contains primarily faces comprises determining if each of the detected faces is blinking.
18. The method of claim 16, further comprising determining a face sharpness value for each detected face.
19. The method of claim 15, wherein selecting one or more images from the sequence of images as keeper images comprises:
identifying a region of interest in the images;
expanding the region of interest to include one or more blocks that overlap the region of interest;
determining, by the image processor, at least one quality metric value for the expanded region of interest; and
selecting one or more images from the sequence of images based at least in part on the at least one quality metric value for the expanded region of interest.
20. The method of claim 15, wherein processing each of the obtained images comprises determining, for each of the obtained images, a value based on at least a portion of the image, wherein the value is indicative of the image's sharpness.
US14/021,857 2013-09-09 2013-09-09 Automated Selection Of Keeper Images From A Burst Photo Captured Set Abandoned US20150071547A1 (en)

Priority Applications (8)

Application Number Priority Date Filing Date Title
US14/021,857 US20150071547A1 (en) 2013-09-09 2013-09-09 Automated Selection Of Keeper Images From A Burst Photo Captured Set
KR1020167006182A KR101731771B1 (en) 2013-09-09 2014-08-27 Automated selection of keeper images from a burst photo captured set
PCT/US2014/052965 WO2015034725A1 (en) 2013-09-09 2014-08-27 Automated selection of keeper images from a burst photo captured set
CN201480049340.1A CN105531988A (en) 2013-09-09 2014-08-27 Automated selection of keeper images from a burst photo captured set
EP14767189.5A EP3044947B1 (en) 2013-09-09 2014-08-27 Automated selection of keeper images from a burst photo captured set
AU2014315547A AU2014315547A1 (en) 2013-09-09 2014-08-27 Automated selection of keeper images from a burst photo captured set
US15/266,460 US10523894B2 (en) 2013-09-09 2016-09-15 Automated selection of keeper images from a burst photo captured set
AU2017261537A AU2017261537B2 (en) 2013-09-09 2017-11-15 Automated selection of keeper images from a burst photo captured set

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US14/021,857 US20150071547A1 (en) 2013-09-09 2013-09-09 Automated Selection Of Keeper Images From A Burst Photo Captured Set

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/266,460 Continuation US10523894B2 (en) 2013-09-09 2016-09-15 Automated selection of keeper images from a burst photo captured set

Publications (1)

Publication Number Publication Date
US20150071547A1 true US20150071547A1 (en) 2015-03-12

Family

ID=51570846

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/021,857 Abandoned US20150071547A1 (en) 2013-09-09 2013-09-09 Automated Selection Of Keeper Images From A Burst Photo Captured Set
US15/266,460 Expired - Fee Related US10523894B2 (en) 2013-09-09 2016-09-15 Automated selection of keeper images from a burst photo captured set

Family Applications After (1)

Application Number Title Priority Date Filing Date
US15/266,460 Expired - Fee Related US10523894B2 (en) 2013-09-09 2016-09-15 Automated selection of keeper images from a burst photo captured set

Country Status (6)

Country Link
US (2) US20150071547A1 (en)
EP (1) EP3044947B1 (en)
KR (1) KR101731771B1 (en)
CN (1) CN105531988A (en)
AU (2) AU2014315547A1 (en)
WO (1) WO2015034725A1 (en)

Cited By (97)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105007412A (en) * 2015-07-02 2015-10-28 成都亿邻通科技有限公司 Photo storage method for mobile terminal
CN105654463A (en) * 2015-11-06 2016-06-08 乐视移动智能信息技术(北京)有限公司 Image processing method applied to continuous shooting process and apparatus thereof
CN105654470A (en) * 2015-12-24 2016-06-08 小米科技有限责任公司 Image selection method, device and system
US20160295130A1 (en) * 2013-05-31 2016-10-06 Apple Inc. Identifying Dominant and Non-Dominant Images in a Burst Mode Capture
CN106254807A (en) * 2015-06-10 2016-12-21 三星电子株式会社 Extract electronic equipment and the method for rest image
EP3079349A3 (en) * 2015-03-17 2017-01-04 MediaTek, Inc Automatic image capture during preview and image recommendation
US9602729B2 (en) 2015-06-07 2017-03-21 Apple Inc. Devices and methods for capturing and interacting with enhanced digital images
US20170086790A1 (en) * 2015-09-29 2017-03-30 General Electric Company Method and system for enhanced visualization and selection of a representative ultrasound image by automatically detecting b lines and scoring images of an ultrasound scan
US9612741B2 (en) 2012-05-09 2017-04-04 Apple Inc. Device, method, and graphical user interface for displaying additional information in response to a user contact
US9619076B2 (en) 2012-05-09 2017-04-11 Apple Inc. Device, method, and graphical user interface for transitioning between display states in response to a gesture
US9632664B2 (en) 2015-03-08 2017-04-25 Apple Inc. Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback
US9639184B2 (en) 2015-03-19 2017-05-02 Apple Inc. Touch input cursor manipulation
FR3043233A1 (en) * 2015-10-30 2017-05-05 Merry Pixel METHOD OF AUTOMATICALLY SELECTING IMAGES FROM A MOBILE DEVICE
US9645732B2 (en) 2015-03-08 2017-05-09 Apple Inc. Devices, methods, and graphical user interfaces for displaying and using menus
US9674426B2 (en) 2015-06-07 2017-06-06 Apple Inc. Devices and methods for capturing and interacting with enhanced digital images
US9753639B2 (en) 2012-05-09 2017-09-05 Apple Inc. Device, method, and graphical user interface for displaying content associated with a corresponding affordance
US9778771B2 (en) 2012-12-29 2017-10-03 Apple Inc. Device, method, and graphical user interface for transitioning between touch input to display output relationships
US9787862B1 (en) * 2016-01-19 2017-10-10 Gopro, Inc. Apparatus and methods for generating content proxy
US9785305B2 (en) 2015-03-19 2017-10-10 Apple Inc. Touch input cursor manipulation
US9792502B2 (en) 2014-07-23 2017-10-17 Gopro, Inc. Generating video summaries for a video using video summary templates
US9830048B2 (en) 2015-06-07 2017-11-28 Apple Inc. Devices and methods for processing touch inputs with instructions in a web page
US9838730B1 (en) 2016-04-07 2017-12-05 Gopro, Inc. Systems and methods for audio track selection in video editing
US9871994B1 (en) 2016-01-19 2018-01-16 Gopro, Inc. Apparatus and methods for providing content context using session metadata
US9880735B2 (en) 2015-08-10 2018-01-30 Apple Inc. Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback
US9886184B2 (en) 2012-05-09 2018-02-06 Apple Inc. Device, method, and graphical user interface for providing feedback for changing activation states of a user interface object
US9891811B2 (en) 2015-06-07 2018-02-13 Apple Inc. Devices and methods for navigating between user interfaces
US9916863B1 (en) 2017-02-24 2018-03-13 Gopro, Inc. Systems and methods for editing videos based on shakiness measures
US9922682B1 (en) 2016-06-15 2018-03-20 Gopro, Inc. Systems and methods for organizing video files
US9953224B1 (en) 2016-08-23 2018-04-24 Gopro, Inc. Systems and methods for generating a video summary
US9953679B1 (en) 2016-05-24 2018-04-24 Gopro, Inc. Systems and methods for generating a time lapse video
US9959025B2 (en) 2012-12-29 2018-05-01 Apple Inc. Device, method, and graphical user interface for navigating user interface hierarchies
US9967515B1 (en) 2016-06-15 2018-05-08 Gopro, Inc. Systems and methods for bidirectional speed ramping
US9972066B1 (en) 2016-03-16 2018-05-15 Gopro, Inc. Systems and methods for providing variable image projection for spherical visual content
US9973647B2 (en) 2016-06-17 2018-05-15 Microsoft Technology Licensing, Llc. Suggesting image files for deletion based on image file parameters
US20180139377A1 (en) * 2015-05-14 2018-05-17 Sri International Selecting optimal image from mobile device captures
US20180150954A1 (en) * 2015-03-18 2018-05-31 Canon Kabushiki Kaisha Image processing apparatus and image processing method, that determine a conformable image
US9990107B2 (en) 2015-03-08 2018-06-05 Apple Inc. Devices, methods, and graphical user interfaces for displaying and using menus
US9990121B2 (en) 2012-05-09 2018-06-05 Apple Inc. Device, method, and graphical user interface for moving a user interface object based on an intensity of a press input
US9996231B2 (en) 2012-05-09 2018-06-12 Apple Inc. Device, method, and graphical user interface for manipulating framed graphical objects
US10002641B1 (en) 2016-10-17 2018-06-19 Gopro, Inc. Systems and methods for determining highlight segment sets
CN108198177A (en) * 2017-12-29 2018-06-22 广东欧珀移动通信有限公司 Image acquiring method, device, terminal and storage medium
US10015469B2 (en) 2012-07-03 2018-07-03 Gopro, Inc. Image blur based on 3D depth information
US10037138B2 (en) 2012-12-29 2018-07-31 Apple Inc. Device, method, and graphical user interface for switching between user interfaces
US10044972B1 (en) 2016-09-30 2018-08-07 Gopro, Inc. Systems and methods for automatically transferring audiovisual content
US10042542B2 (en) 2012-05-09 2018-08-07 Apple Inc. Device, method, and graphical user interface for moving and dropping a user interface object
US10045120B2 (en) 2016-06-20 2018-08-07 Gopro, Inc. Associating audio with three-dimensional objects in videos
US10048757B2 (en) 2015-03-08 2018-08-14 Apple Inc. Devices and methods for controlling media presentation
US10067653B2 (en) 2015-04-01 2018-09-04 Apple Inc. Devices and methods for processing touch inputs based on their intensities
US10073615B2 (en) 2012-05-09 2018-09-11 Apple Inc. Device, method, and graphical user interface for displaying user interface objects corresponding to an application
US10078442B2 (en) 2012-12-29 2018-09-18 Apple Inc. Device, method, and graphical user interface for determining whether to scroll or select content based on an intensity theshold
US10078644B1 (en) 2016-01-19 2018-09-18 Gopro, Inc. Apparatus and methods for manipulating multicamera content using content proxy
US10095391B2 (en) 2012-05-09 2018-10-09 Apple Inc. Device, method, and graphical user interface for selecting user interface objects
US10096341B2 (en) 2015-01-05 2018-10-09 Gopro, Inc. Media identifier generation for camera-captured media
US10095396B2 (en) 2015-03-08 2018-10-09 Apple Inc. Devices, methods, and graphical user interfaces for interacting with a control object while dragging another object
CN108629338A (en) * 2018-06-14 2018-10-09 五邑大学 A kind of face beauty prediction technique based on LBP and convolutional neural networks
US10126930B2 (en) 2012-05-09 2018-11-13 Apple Inc. Device, method, and graphical user interface for scrolling nested regions
US10127246B2 (en) 2016-08-16 2018-11-13 Microsoft Technology Licensing, Llc Automatic grouping based handling of similar photos
US10129464B1 (en) 2016-02-18 2018-11-13 Gopro, Inc. User interface for creating composite images
US10162452B2 (en) 2015-08-10 2018-12-25 Apple Inc. Devices and methods for processing touch inputs based on their intensities
US10175864B2 (en) 2012-05-09 2019-01-08 Apple Inc. Device, method, and graphical user interface for selecting object within a group of objects in accordance with contact intensity
US10175757B2 (en) 2012-05-09 2019-01-08 Apple Inc. Device, method, and graphical user interface for providing tactile feedback for touch-based operations performed and reversed in a user interface
US10192585B1 (en) 2014-08-20 2019-01-29 Gopro, Inc. Scene and activity identification in video summary generation based on motion detected in a video
US10200598B2 (en) 2015-06-07 2019-02-05 Apple Inc. Devices and methods for capturing and interacting with enhanced digital images
US10225511B1 (en) 2015-12-30 2019-03-05 Google Llc Low power framework for controlling image sensor mode in a mobile image capture device
US10229719B1 (en) 2016-05-09 2019-03-12 Gopro, Inc. Systems and methods for generating highlights for a video
US10235035B2 (en) 2015-08-10 2019-03-19 Apple Inc. Devices, methods, and graphical user interfaces for content navigation and manipulation
US10248308B2 (en) 2015-08-10 2019-04-02 Apple Inc. Devices, methods, and graphical user interfaces for manipulating user interfaces with physical gestures
US10268898B1 (en) 2016-09-21 2019-04-23 Gopro, Inc. Systems and methods for determining a sample frame order for analyzing a video via segments
US10275087B1 (en) 2011-08-05 2019-04-30 P4tents1, LLC Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback
US10282632B1 (en) 2016-09-21 2019-05-07 Gopro, Inc. Systems and methods for determining a sample frame order for analyzing a video
WO2019117460A1 (en) * 2017-12-11 2019-06-20 삼성전자주식회사 Wearable display device and control method therefor
US10338955B1 (en) 2015-10-22 2019-07-02 Gopro, Inc. Systems and methods that effectuate transmission of workflow between computing platforms
US10339443B1 (en) 2017-02-24 2019-07-02 Gopro, Inc. Systems and methods for processing convolutional neural network operations using textures
US10346030B2 (en) 2015-06-07 2019-07-09 Apple Inc. Devices and methods for navigating between user interfaces
US10360663B1 (en) 2017-04-07 2019-07-23 Gopro, Inc. Systems and methods to create a dynamic blur effect in visual content
US10395119B1 (en) 2016-08-10 2019-08-27 Gopro, Inc. Systems and methods for determining activities performed during video capture
US10395122B1 (en) 2017-05-12 2019-08-27 Gopro, Inc. Systems and methods for identifying moments in videos
US10397415B1 (en) 2016-09-30 2019-08-27 Gopro, Inc. Systems and methods for automatically transferring audiovisual content
US10402938B1 (en) 2016-03-31 2019-09-03 Gopro, Inc. Systems and methods for modifying image distortion (curvature) for viewing distance in post capture
US10402698B1 (en) 2017-07-10 2019-09-03 Gopro, Inc. Systems and methods for identifying interesting moments within videos
US10402677B2 (en) 2016-06-10 2019-09-03 Apple Inc. Hierarchical sharpness evaluation
US10416800B2 (en) 2015-08-10 2019-09-17 Apple Inc. Devices, methods, and graphical user interfaces for adjusting user interface objects
US10437333B2 (en) 2012-12-29 2019-10-08 Apple Inc. Device, method, and graphical user interface for forgoing generation of tactile output for a multi-contact gesture
US10474903B2 (en) * 2018-01-25 2019-11-12 Adobe Inc. Video segmentation using predictive models trained to provide aesthetic scores
US10496260B2 (en) 2012-05-09 2019-12-03 Apple Inc. Device, method, and graphical user interface for pressure-based alteration of controls in a user interface
US10521705B2 (en) * 2017-11-14 2019-12-31 Adobe Inc. Automatically selecting images using multicontext aware ratings
WO2020000382A1 (en) * 2018-06-29 2020-01-02 Hangzhou Eyecloud Technologies Co., Ltd. Motion-based object detection method, object detection apparatus and electronic device
US10582125B1 (en) * 2015-06-01 2020-03-03 Amazon Technologies, Inc. Panoramic image generation from video
US10614114B1 (en) 2017-07-10 2020-04-07 Gopro, Inc. Systems and methods for creating compilations based on hierarchical clustering
US10620781B2 (en) 2012-12-29 2020-04-14 Apple Inc. Device, method, and graphical user interface for moving a cursor according to a change in an appearance of a control icon with simulated three-dimensional characteristics
US10671895B2 (en) 2016-06-30 2020-06-02 Microsoft Technology Licensing, Llc Automated selection of subjectively best image frames from burst captured image sequences
US10732809B2 (en) 2015-12-30 2020-08-04 Google Llc Systems and methods for selective retention and editing of images captured by mobile image capture device
US10791316B2 (en) * 2017-03-28 2020-09-29 Samsung Electronics Co., Ltd. Method for transmitting data about three-dimensional image
JP2020531131A (en) * 2017-08-21 2020-11-05 コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. B-line detection, presentation and reporting in lung ultrasonography
US11106988B2 (en) 2016-10-06 2021-08-31 Gopro, Inc. Systems and methods for determining predicted risk for a flight path of an unmanned aerial vehicle
EP3893495A4 (en) * 2019-01-31 2022-01-26 Huawei Technologies Co., Ltd. Method for selecting images based on continuous shooting and electronic device
US11836597B2 (en) * 2018-08-09 2023-12-05 Nvidia Corporation Detecting visual artifacts in image sequences using a neural network model

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106303234A (en) * 2016-08-11 2017-01-04 广东小天才科技有限公司 photographing processing method and device
CN108629791B (en) * 2017-03-17 2020-08-18 北京旷视科技有限公司 Pedestrian tracking method and device and cross-camera pedestrian tracking method and device
CN107509024B (en) * 2017-07-25 2019-01-04 维沃移动通信有限公司 One kind is taken pictures processing method and mobile terminal
CN107562860B (en) * 2017-08-29 2019-06-07 维沃移动通信有限公司 A kind of photo choosing method and mobile terminal
CN108540726B (en) * 2018-05-15 2020-05-05 Oppo广东移动通信有限公司 Method and device for processing continuous shooting image, storage medium and terminal
US11163981B2 (en) * 2018-09-11 2021-11-02 Apple Inc. Periocular facial recognition switching
US10713517B2 (en) 2018-09-30 2020-07-14 Himax Technologies Limited Region of interest recognition
CN109902189B (en) * 2018-11-30 2021-02-12 Huawei Technologies Co., Ltd. Picture selection method and related equipment
CN112449099B (en) * 2019-08-30 2022-08-19 Huawei Technologies Co., Ltd. Image processing method, electronic device and cloud server
CN112714246A (en) * 2019-10-25 2021-04-27 TCL Corporation Continuous shooting photo obtaining method, intelligent terminal and storage medium
EP4047923A4 (en) 2020-12-22 2022-11-23 Samsung Electronics Co., Ltd. Electronic device comprising camera and method of same
WO2023277321A1 (en) 2021-06-30 2023-01-05 Samsung Electronics Co., Ltd. Method and electronic device for a slow motion video

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6301440B1 (en) * 2000-04-13 2001-10-09 International Business Machines Corp. System and method for automatically setting image acquisition controls
US20050228849A1 (en) * 2004-03-24 2005-10-13 Tong Zhang Intelligent key-frame extraction from a video
US20060210166A1 (en) * 2005-03-03 2006-09-21 Fuji Photo Film Co. Ltd. Image extracting apparatus, image extracting method, and image extracting program
US20070182861A1 (en) * 2006-02-03 2007-08-09 Jiebo Luo Analyzing camera captured video for key frames
US20080037869A1 (en) * 2006-07-13 2008-02-14 Hui Zhou Method and Apparatus for Determining Motion in Images
US20080192129A1 (en) * 2003-12-24 2008-08-14 Walker Jay S Method and Apparatus for Automatically Capturing and Managing Images
US20090263028A1 (en) * 2008-04-21 2009-10-22 Core Logic, Inc. Selecting Best Image
US20100020224A1 (en) * 2008-07-24 2010-01-28 Canon Kabushiki Kaisha Method for selecting desirable images from among a plurality of images and apparatus thereof
US7688379B2 (en) * 2005-12-09 2010-03-30 Hewlett-Packard Development Company, L.P. Selecting quality images from multiple captured images
US7929853B2 (en) * 2008-09-04 2011-04-19 Samsung Electronics Co., Ltd. Method and apparatus for taking pictures on a mobile communication terminal having a camera module
US20120076427A1 (en) * 2010-09-24 2012-03-29 Stacie L Hibino Method of selecting important digital images
US20130265451A1 (en) * 2012-04-10 2013-10-10 Samsung Electronics Co., Ltd. Apparatus and method for continuously taking a picture

Family Cites Families (66)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5657402A (en) 1991-11-01 1997-08-12 Massachusetts Institute Of Technology Method of creating a high resolution still image using a plurality of images and apparatus for practice of the method
US5646521A (en) 1995-08-01 1997-07-08 Schlumberger Technologies, Inc. Analog channel for mixed-signal-VLSI tester
US6552744B2 (en) 1997-09-26 2003-04-22 Roxio, Inc. Virtual reality camera
WO2000013407A1 (en) 1998-08-28 2000-03-09 Sarnoff Corporation Method and apparatus for electronically enhancing images
US6271847B1 (en) 1998-09-25 2001-08-07 Microsoft Corporation Inverse texture mapping using weighted pyramid blending and view-dependent weight maps
US6549643B1 (en) 1999-11-30 2003-04-15 Siemens Corporate Research, Inc. System and method for selecting key-frames of video data
US7130864B2 (en) 2001-10-31 2006-10-31 Hewlett-Packard Development Company, L.P. Method and system for accessing a collection of images in a database
KR100556732B1 (en) 2001-12-29 2006-03-10 LG Electronics Inc. Method for tracing an enlarged region of a moving picture
JP3889650B2 (en) 2002-03-28 2007-03-07 Sanyo Electric Co., Ltd. Image processing method, image processing apparatus, computer program, and recording medium
US7639741B1 (en) 2002-12-06 2009-12-29 Altera Corporation Temporal filtering using object motion estimation
CN1251145C (en) 2003-11-27 2006-04-12 Shanghai Jiao Tong University Pyramid image merging method integrating edge and texture information
WO2006122009A2 (en) 2005-05-09 2006-11-16 Lockheed Martin Corporation Continuous extended range image processing
US7760956B2 (en) 2005-05-12 2010-07-20 Hewlett-Packard Development Company, L.P. System and method for producing a page using frames of a video stream
US7839429B2 (en) 2005-05-26 2010-11-23 Hewlett-Packard Development Company, L.P. In-camera panorama stitching method and apparatus
US7424218B2 (en) 2005-07-28 2008-09-09 Microsoft Corporation Real-time preview for panoramic images
US7739599B2 (en) * 2005-09-23 2010-06-15 Microsoft Corporation Automatic capturing and editing of a video
US8018999B2 (en) 2005-12-05 2011-09-13 Arcsoft, Inc. Algorithm description on non-motion blur image generation project
US8842730B2 (en) 2006-01-27 2014-09-23 Imax Corporation Methods and systems for digitally re-mastering of 2D and 3D motion pictures for exhibition with enhanced visual quality
JP4620607B2 (en) 2006-02-24 2011-01-26 Morpho, Inc. Image processing device
JP5084167B2 (en) 2006-03-31 2012-11-28 Canon Inc. Position and orientation measurement method and apparatus
US7742083B2 (en) 2006-04-13 2010-06-22 Eastman Kodak Company In-camera dud image management
US8379154B2 (en) * 2006-05-12 2013-02-19 Tong Zhang Key-frame extraction from video
US20080170126A1 (en) 2006-05-12 2008-07-17 Nokia Corporation Method and system for image stabilization
US7602418B2 (en) 2006-10-11 2009-10-13 Eastman Kodak Company Digital image with reduced object motion blur
US7796872B2 (en) 2007-01-05 2010-09-14 Invensense, Inc. Method and apparatus for producing a sharp image from a handheld device containing a gyroscope
EP1972893A1 (en) 2007-03-21 2008-09-24 Universiteit Gent System and method for position determination
US7856120B2 (en) 2007-03-30 2010-12-21 Mitsubishi Electric Research Laboratories, Inc. Jointly registering images while tracking moving objects with moving cameras
JP4678603B2 (en) 2007-04-20 2011-04-27 Fujifilm Corporation Imaging apparatus and imaging method
TWI355615B (en) 2007-05-11 2012-01-01 Ind Tech Res Inst Moving object detection apparatus and method by us
KR101023946B1 (en) 2007-11-02 2011-03-28 Core Logic, Inc. Apparatus for digital image stabilization using object tracking and method thereof
US8411938B2 (en) 2007-11-29 2013-04-02 Sri International Multi-scale multi-camera adaptive fusion with contrast normalization
US20090161982A1 (en) 2007-12-19 2009-06-25 Nokia Corporation Restoring images
WO2009090992A1 (en) * 2008-01-17 2009-07-23 Nikon Corporation Electronic camera
JP4661922B2 (en) 2008-09-03 2011-03-30 ソニー株式会社 Image processing apparatus, imaging apparatus, solid-state imaging device, image processing method, and program
US8570386B2 (en) 2008-12-31 2013-10-29 Stmicroelectronics S.R.L. Method of merging images and relative method of generating an output image of enhanced quality
US8515171B2 (en) 2009-01-09 2013-08-20 Rochester Institute Of Technology Methods for adaptive and progressive gradient-based multi-resolution color image segmentation and systems thereof
US8515182B2 (en) 2009-02-11 2013-08-20 Ecole De Technologie Superieure Method and system for determining a quality measure for an image using multi-level decomposition of images
WO2010093745A1 (en) 2009-02-12 2010-08-19 Dolby Laboratories Licensing Corporation Quality evaluation of sequences of images
US8237813B2 (en) 2009-04-23 2012-08-07 Csr Technology Inc. Multiple exposure high dynamic range image capture
KR101616874B1 (en) 2009-09-23 2016-05-02 Samsung Electronics Co., Ltd. Method and apparatus for blending multiple images
JP5397481B2 (en) * 2009-12-18 2014-01-22 Fujitsu Limited Image sorting apparatus and image sorting method
CN102209196B (en) * 2010-03-30 2016-08-03 Nikon Corporation Image processing apparatus and image evaluation method
JP4998630B2 (en) * 2010-03-30 2012-08-15 Nikon Corporation Image processing apparatus and image evaluation program
US8488010B2 (en) 2010-09-21 2013-07-16 Hewlett-Packard Development Company, L.P. Generating a stabilized video sequence based on motion sensor data
JP5652649B2 (en) 2010-10-07 2015-01-14 Ricoh Company, Ltd. Image processing apparatus, image processing method, and image processing program
US8687941B2 (en) * 2010-10-29 2014-04-01 International Business Machines Corporation Automatic static video summarization
CN101984463A (en) 2010-11-02 2011-03-09 ZTE Corporation Method and device for synthesizing panoramic image
US8648959B2 (en) 2010-11-11 2014-02-11 DigitalOptics Corporation Europe Limited Rapid auto-focus using classifier chains, MEMS and/or multiple object focusing
US8532421B2 (en) 2010-11-12 2013-09-10 Adobe Systems Incorporated Methods and apparatus for de-blurring images using lucky frames
US9147260B2 (en) 2010-12-20 2015-09-29 International Business Machines Corporation Detection and tracking of moving objects
US8861836B2 (en) 2011-01-14 2014-10-14 Sony Corporation Methods and systems for 2D to 3D conversion from a portrait image
US8379934B2 (en) 2011-02-04 2013-02-19 Eastman Kodak Company Estimating subject motion between image frames
US8384787B2 (en) 2011-02-24 2013-02-26 Eastman Kodak Company Method for providing a stabilized video sequence
JP2012191442A (en) * 2011-03-10 2012-10-04 Sanyo Electric Co Ltd Image reproduction controller
EP2521091B1 (en) 2011-05-03 2016-04-20 ST-Ericsson SA Estimation of motion blur in a picture
US10134440B2 (en) * 2011-05-03 2018-11-20 Kodak Alaris Inc. Video summarization using audio and visual cues
US8983206B2 (en) 2011-05-04 2015-03-17 Ecole de Technologie Superieure Method and system for increasing robustness of visual quality metrics using spatial shifting
US20120293607A1 (en) 2011-05-17 2012-11-22 Apple Inc. Panorama Processing
KR101784176B1 (en) 2011-05-25 2017-10-12 Samsung Electronics Co., Ltd. Image photographing device and control method thereof
US9007428B2 (en) 2011-06-01 2015-04-14 Apple Inc. Motion-based image stitching
KR101699919B1 (en) 2011-07-28 2017-01-26 Samsung Electronics Co., Ltd. High dynamic range image creation apparatus for removing ghost blur using multi-exposure fusion, and method of the same
US8913140B2 (en) 2011-08-15 2014-12-16 Apple Inc. Rolling shutter reduction based on motion sensors
US9305240B2 (en) 2011-12-07 2016-04-05 Google Technology Holdings LLC Motion aligned distance calculations for image comparisons
US8860825B2 (en) 2012-09-12 2014-10-14 Google Inc. Methods and systems for removal of rolling shutter effects
US9262684B2 (en) 2013-06-06 2016-02-16 Apple Inc. Methods of image fusion for image stabilization
US9373054B2 (en) 2014-09-02 2016-06-21 Kodak Alaris Inc. Method for selecting frames from video sequences based on incremental improvement

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6301440B1 (en) * 2000-04-13 2001-10-09 International Business Machines Corp. System and method for automatically setting image acquisition controls
US20080192129A1 (en) * 2003-12-24 2008-08-14 Walker Jay S Method and Apparatus for Automatically Capturing and Managing Images
US20050228849A1 (en) * 2004-03-24 2005-10-13 Tong Zhang Intelligent key-frame extraction from a video
US20060210166A1 (en) * 2005-03-03 2006-09-21 Fuji Photo Film Co. Ltd. Image extracting apparatus, image extracting method, and image extracting program
US7688379B2 (en) * 2005-12-09 2010-03-30 Hewlett-Packard Development Company, L.P. Selecting quality images from multiple captured images
US20070182861A1 (en) * 2006-02-03 2007-08-09 Jiebo Luo Analyzing camera captured video for key frames
US20080037869A1 (en) * 2006-07-13 2008-02-14 Hui Zhou Method and Apparatus for Determining Motion in Images
US20090263028A1 (en) * 2008-04-21 2009-10-22 Core Logic, Inc. Selecting Best Image
US20100020224A1 (en) * 2008-07-24 2010-01-28 Canon Kabushiki Kaisha Method for selecting desirable images from among a plurality of images and apparatus thereof
US7929853B2 (en) * 2008-09-04 2011-04-19 Samsung Electronics Co., Ltd. Method and apparatus for taking pictures on a mobile communication terminal having a camera module
US20120076427A1 (en) * 2010-09-24 2012-03-29 Stacie L Hibino Method of selecting important digital images
US20130265451A1 (en) * 2012-04-10 2013-10-10 Samsung Electronics Co., Ltd. Apparatus and method for continuously taking a picture

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Luo, Yiwen, and Xiaoou Tang. "Photo and video quality evaluation: Focusing on the subject." Computer Vision-ECCV 2008. Springer Berlin Heidelberg, 2008. 386-399. *
Yousefi, Siamak, M. Rahman, and Nasser Kehtarnavaz. "A new auto-focus sharpness function for digital and smart-phone cameras." Consumer Electronics, IEEE Transactions on 57.3 (2011): 1003-1009. *

Cited By (213)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10275087B1 (en) 2011-08-05 2019-04-30 P4tents1, LLC Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback
US10664097B1 (en) 2011-08-05 2020-05-26 P4tents1, LLC Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback
US10656752B1 (en) 2011-08-05 2020-05-19 P4tents1, LLC Gesture-equipped touch screen system, method, and computer program product
US10649571B1 (en) 2011-08-05 2020-05-12 P4tents1, LLC Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback
US10540039B1 (en) 2011-08-05 2020-01-21 P4tents1, LLC Devices and methods for navigating between user interfaces
US10386960B1 (en) 2011-08-05 2019-08-20 P4tents1, LLC Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback
US10365758B1 (en) 2011-08-05 2019-07-30 P4tents1, LLC Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback
US10345961B1 (en) 2011-08-05 2019-07-09 P4tents1, LLC Devices and methods for navigating between user interfaces
US10338736B1 (en) 2011-08-05 2019-07-02 P4tents1, LLC Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback
US10775994B2 (en) 2012-05-09 2020-09-15 Apple Inc. Device, method, and graphical user interface for moving and dropping a user interface object
US10496260B2 (en) 2012-05-09 2019-12-03 Apple Inc. Device, method, and graphical user interface for pressure-based alteration of controls in a user interface
US12067229B2 (en) 2012-05-09 2024-08-20 Apple Inc. Device, method, and graphical user interface for providing feedback for changing activation states of a user interface object
US10095391B2 (en) 2012-05-09 2018-10-09 Apple Inc. Device, method, and graphical user interface for selecting user interface objects
US12045451B2 (en) 2012-05-09 2024-07-23 Apple Inc. Device, method, and graphical user interface for moving a user interface object based on an intensity of a press input
US11947724B2 (en) 2012-05-09 2024-04-02 Apple Inc. Device, method, and graphical user interface for providing tactile feedback for operations performed in a user interface
US11354033B2 (en) 2012-05-09 2022-06-07 Apple Inc. Device, method, and graphical user interface for managing icons in a user interface region
US11314407B2 (en) 2012-05-09 2022-04-26 Apple Inc. Device, method, and graphical user interface for providing feedback for changing activation states of a user interface object
US11221675B2 (en) 2012-05-09 2022-01-11 Apple Inc. Device, method, and graphical user interface for providing tactile feedback for operations performed in a user interface
US9753639B2 (en) 2012-05-09 2017-09-05 Apple Inc. Device, method, and graphical user interface for displaying content associated with a corresponding affordance
US10114546B2 (en) 2012-05-09 2018-10-30 Apple Inc. Device, method, and graphical user interface for displaying user interface objects corresponding to an application
US11068153B2 (en) 2012-05-09 2021-07-20 Apple Inc. Device, method, and graphical user interface for displaying user interface objects corresponding to an application
US11023116B2 (en) 2012-05-09 2021-06-01 Apple Inc. Device, method, and graphical user interface for moving a user interface object based on an intensity of a press input
US11010027B2 (en) 2012-05-09 2021-05-18 Apple Inc. Device, method, and graphical user interface for manipulating framed graphical objects
US9823839B2 (en) 2012-05-09 2017-11-21 Apple Inc. Device, method, and graphical user interface for displaying additional information in response to a user contact
US10996788B2 (en) 2012-05-09 2021-05-04 Apple Inc. Device, method, and graphical user interface for transitioning between display states in response to a gesture
US10969945B2 (en) 2012-05-09 2021-04-06 Apple Inc. Device, method, and graphical user interface for selecting user interface objects
US10126930B2 (en) 2012-05-09 2018-11-13 Apple Inc. Device, method, and graphical user interface for scrolling nested regions
US10942570B2 (en) 2012-05-09 2021-03-09 Apple Inc. Device, method, and graphical user interface for providing tactile feedback for operations performed in a user interface
US10042542B2 (en) 2012-05-09 2018-08-07 Apple Inc. Device, method, and graphical user interface for moving and dropping a user interface object
US10908808B2 (en) 2012-05-09 2021-02-02 Apple Inc. Device, method, and graphical user interface for displaying additional information in response to a user contact
US10884591B2 (en) 2012-05-09 2021-01-05 Apple Inc. Device, method, and graphical user interface for selecting object within a group of objects
US10782871B2 (en) 2012-05-09 2020-09-22 Apple Inc. Device, method, and graphical user interface for providing feedback for changing activation states of a user interface object
US10775999B2 (en) 2012-05-09 2020-09-15 Apple Inc. Device, method, and graphical user interface for displaying user interface objects corresponding to an application
US10168826B2 (en) 2012-05-09 2019-01-01 Apple Inc. Device, method, and graphical user interface for transitioning between display states in response to a gesture
US10175864B2 (en) 2012-05-09 2019-01-08 Apple Inc. Device, method, and graphical user interface for selecting object within a group of objects in accordance with contact intensity
US10592041B2 (en) 2012-05-09 2020-03-17 Apple Inc. Device, method, and graphical user interface for transitioning between display states in response to a gesture
US10175757B2 (en) 2012-05-09 2019-01-08 Apple Inc. Device, method, and graphical user interface for providing tactile feedback for touch-based operations performed and reversed in a user interface
US10073615B2 (en) 2012-05-09 2018-09-11 Apple Inc. Device, method, and graphical user interface for displaying user interface objects corresponding to an application
US10481690B2 (en) 2012-05-09 2019-11-19 Apple Inc. Device, method, and graphical user interface for providing tactile feedback for media adjustment operations performed in a user interface
US9996231B2 (en) 2012-05-09 2018-06-12 Apple Inc. Device, method, and graphical user interface for manipulating framed graphical objects
US9990121B2 (en) 2012-05-09 2018-06-05 Apple Inc. Device, method, and graphical user interface for moving a user interface object based on an intensity of a press input
US10191627B2 (en) 2012-05-09 2019-01-29 Apple Inc. Device, method, and graphical user interface for manipulating framed graphical objects
US9886184B2 (en) 2012-05-09 2018-02-06 Apple Inc. Device, method, and graphical user interface for providing feedback for changing activation states of a user interface object
US9619076B2 (en) 2012-05-09 2017-04-11 Apple Inc. Device, method, and graphical user interface for transitioning between display states in response to a gesture
US9612741B2 (en) 2012-05-09 2017-04-04 Apple Inc. Device, method, and graphical user interface for displaying additional information in response to a user contact
US10015469B2 (en) 2012-07-03 2018-07-03 Gopro, Inc. Image blur based on 3D depth information
US10620781B2 (en) 2012-12-29 2020-04-14 Apple Inc. Device, method, and graphical user interface for moving a cursor according to a change in an appearance of a control icon with simulated three-dimensional characteristics
US10437333B2 (en) 2012-12-29 2019-10-08 Apple Inc. Device, method, and graphical user interface for forgoing generation of tactile output for a multi-contact gesture
US9959025B2 (en) 2012-12-29 2018-05-01 Apple Inc. Device, method, and graphical user interface for navigating user interface hierarchies
US10185491B2 (en) 2012-12-29 2019-01-22 Apple Inc. Device, method, and graphical user interface for determining whether to scroll or enlarge content
US10037138B2 (en) 2012-12-29 2018-07-31 Apple Inc. Device, method, and graphical user interface for switching between user interfaces
US10175879B2 (en) 2012-12-29 2019-01-08 Apple Inc. Device, method, and graphical user interface for zooming a user interface while performing a drag operation
US9965074B2 (en) 2012-12-29 2018-05-08 Apple Inc. Device, method, and graphical user interface for transitioning between touch input to display output relationships
US9996233B2 (en) 2012-12-29 2018-06-12 Apple Inc. Device, method, and graphical user interface for navigating user interface hierarchies
US10078442B2 (en) 2012-12-29 2018-09-18 Apple Inc. Device, method, and graphical user interface for determining whether to scroll or select content based on an intensity threshold
US12050761B2 (en) 2012-12-29 2024-07-30 Apple Inc. Device, method, and graphical user interface for transitioning from low power mode
US10915243B2 (en) 2012-12-29 2021-02-09 Apple Inc. Device, method, and graphical user interface for adjusting content selection
US9857897B2 (en) 2012-12-29 2018-01-02 Apple Inc. Device and method for assigning respective portions of an aggregate intensity to a plurality of contacts
US9778771B2 (en) 2012-12-29 2017-10-03 Apple Inc. Device, method, and graphical user interface for transitioning between touch input to display output relationships
US10101887B2 (en) 2012-12-29 2018-10-16 Apple Inc. Device, method, and graphical user interface for navigating user interface hierarchies
US20160295130A1 (en) * 2013-05-31 2016-10-06 Apple Inc. Identifying Dominant and Non-Dominant Images in a Burst Mode Capture
US9942486B2 (en) * 2013-05-31 2018-04-10 Apple Inc. Identifying dominant and non-dominant images in a burst mode capture
US10776629B2 (en) 2014-07-23 2020-09-15 Gopro, Inc. Scene and activity identification in video summary generation
US10339975B2 (en) 2014-07-23 2019-07-02 Gopro, Inc. Voice-based video tagging
US11069380B2 (en) 2014-07-23 2021-07-20 Gopro, Inc. Scene and activity identification in video summary generation
US9792502B2 (en) 2014-07-23 2017-10-17 Gopro, Inc. Generating video summaries for a video using video summary templates
US10074013B2 (en) 2014-07-23 2018-09-11 Gopro, Inc. Scene and activity identification in video summary generation
US11776579B2 (en) 2014-07-23 2023-10-03 Gopro, Inc. Scene and activity identification in video summary generation
US10643663B2 (en) 2014-08-20 2020-05-05 Gopro, Inc. Scene and activity identification in video summary generation based on motion detected in a video
US10262695B2 (en) 2014-08-20 2019-04-16 Gopro, Inc. Scene and activity identification in video summary generation
US10192585B1 (en) 2014-08-20 2019-01-29 Gopro, Inc. Scene and activity identification in video summary generation based on motion detected in a video
US10096341B2 (en) 2015-01-05 2018-10-09 Gopro, Inc. Media identifier generation for camera-captured media
US10559324B2 (en) 2015-01-05 2020-02-11 Gopro, Inc. Media identifier generation for camera-captured media
US10268341B2 (en) 2015-03-08 2019-04-23 Apple Inc. Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback
US10402073B2 (en) 2015-03-08 2019-09-03 Apple Inc. Devices, methods, and graphical user interfaces for interacting with a control object while dragging another object
US10338772B2 (en) 2015-03-08 2019-07-02 Apple Inc. Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback
US9632664B2 (en) 2015-03-08 2017-04-25 Apple Inc. Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback
US11977726B2 (en) 2015-03-08 2024-05-07 Apple Inc. Devices, methods, and graphical user interfaces for interacting with a control object while dragging another object
US10067645B2 (en) 2015-03-08 2018-09-04 Apple Inc. Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback
US10613634B2 (en) 2015-03-08 2020-04-07 Apple Inc. Devices and methods for controlling media presentation
US11112957B2 (en) 2015-03-08 2021-09-07 Apple Inc. Devices, methods, and graphical user interfaces for interacting with a control object while dragging another object
US10180772B2 (en) 2015-03-08 2019-01-15 Apple Inc. Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback
US9645732B2 (en) 2015-03-08 2017-05-09 Apple Inc. Devices, methods, and graphical user interfaces for displaying and using menus
US10048757B2 (en) 2015-03-08 2018-08-14 Apple Inc. Devices and methods for controlling media presentation
US9990107B2 (en) 2015-03-08 2018-06-05 Apple Inc. Devices, methods, and graphical user interfaces for displaying and using menus
US10387029B2 (en) 2015-03-08 2019-08-20 Apple Inc. Devices, methods, and graphical user interfaces for displaying and using menus
US10268342B2 (en) 2015-03-08 2019-04-23 Apple Inc. Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback
US10095396B2 (en) 2015-03-08 2018-10-09 Apple Inc. Devices, methods, and graphical user interfaces for interacting with a control object while dragging another object
US9645709B2 (en) 2015-03-08 2017-05-09 Apple Inc. Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback
US10860177B2 (en) 2015-03-08 2020-12-08 Apple Inc. Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback
EP3300350A1 (en) * 2015-03-17 2018-03-28 MediaTek Inc. Automatic image capture during preview and image recommendation
US10038836B2 (en) 2015-03-17 2018-07-31 Mediatek Inc. Automatic image capture during preview and image recommendation
EP3079349A3 (en) * 2015-03-17 2017-01-04 MediaTek, Inc Automatic image capture during preview and image recommendation
US10600182B2 (en) * 2015-03-18 2020-03-24 Canon Kabushiki Kaisha Image processing apparatus and image processing method, that determine a conformable image
US20180150954A1 (en) * 2015-03-18 2018-05-31 Canon Kabushiki Kaisha Image processing apparatus and image processing method, that determine a conformable image
US9639184B2 (en) 2015-03-19 2017-05-02 Apple Inc. Touch input cursor manipulation
US10599331B2 (en) 2015-03-19 2020-03-24 Apple Inc. Touch input cursor manipulation
US11550471B2 (en) 2015-03-19 2023-01-10 Apple Inc. Touch input cursor manipulation
US9785305B2 (en) 2015-03-19 2017-10-10 Apple Inc. Touch input cursor manipulation
US11054990B2 (en) 2015-03-19 2021-07-06 Apple Inc. Touch input cursor manipulation
US10222980B2 (en) 2015-03-19 2019-03-05 Apple Inc. Touch input cursor manipulation
US10067653B2 (en) 2015-04-01 2018-09-04 Apple Inc. Devices and methods for processing touch inputs based on their intensities
US10152208B2 (en) 2015-04-01 2018-12-11 Apple Inc. Devices and methods for processing touch inputs based on their intensities
US10477095B2 (en) * 2015-05-14 2019-11-12 Sri International Selecting optimal image from mobile device captures
US20190356844A1 (en) * 2015-05-14 2019-11-21 Sri International Selecting optimal image from mobile device captures
US20180139377A1 (en) * 2015-05-14 2018-05-17 Sri International Selecting optimal image from mobile device captures
US10582125B1 (en) * 2015-06-01 2020-03-03 Amazon Technologies, Inc. Panoramic image generation from video
US10346030B2 (en) 2015-06-07 2019-07-09 Apple Inc. Devices and methods for navigating between user interfaces
US10200598B2 (en) 2015-06-07 2019-02-05 Apple Inc. Devices and methods for capturing and interacting with enhanced digital images
US9916080B2 (en) 2015-06-07 2018-03-13 Apple Inc. Devices and methods for navigating between user interfaces
US11835985B2 (en) 2015-06-07 2023-12-05 Apple Inc. Devices and methods for capturing and interacting with enhanced digital images
US11681429B2 (en) 2015-06-07 2023-06-20 Apple Inc. Devices and methods for capturing and interacting with enhanced digital images
US10705718B2 (en) 2015-06-07 2020-07-07 Apple Inc. Devices and methods for navigating between user interfaces
US9706127B2 (en) 2015-06-07 2017-07-11 Apple Inc. Devices and methods for capturing and interacting with enhanced digital images
US9891811B2 (en) 2015-06-07 2018-02-13 Apple Inc. Devices and methods for navigating between user interfaces
US11231831B2 (en) 2015-06-07 2022-01-25 Apple Inc. Devices and methods for content preview based on touch input intensity
US11240424B2 (en) 2015-06-07 2022-02-01 Apple Inc. Devices and methods for capturing and interacting with enhanced digital images
US10303354B2 (en) 2015-06-07 2019-05-28 Apple Inc. Devices and methods for navigating between user interfaces
US9830048B2 (en) 2015-06-07 2017-11-28 Apple Inc. Devices and methods for processing touch inputs with instructions in a web page
US9674426B2 (en) 2015-06-07 2017-06-06 Apple Inc. Devices and methods for capturing and interacting with enhanced digital images
US10841484B2 (en) 2015-06-07 2020-11-17 Apple Inc. Devices and methods for capturing and interacting with enhanced digital images
US9602729B2 (en) 2015-06-07 2017-03-21 Apple Inc. Devices and methods for capturing and interacting with enhanced digital images
US9860451B2 (en) 2015-06-07 2018-01-02 Apple Inc. Devices and methods for capturing and interacting with enhanced digital images
US10455146B2 (en) 2015-06-07 2019-10-22 Apple Inc. Devices and methods for capturing and interacting with enhanced digital images
US10511765B2 (en) * 2015-06-10 2019-12-17 Samsung Electronics Co., Ltd. Electronic apparatus and method of extracting still images
CN106254807A (en) * 2015-06-10 2016-12-21 Samsung Electronics Co., Ltd. Electronic apparatus and method of extracting still images
CN105007412A (en) * 2015-07-02 2015-10-28 Chengdu Yilintong Technology Co., Ltd. Photo storage method for mobile terminal
US10416800B2 (en) 2015-08-10 2019-09-17 Apple Inc. Devices, methods, and graphical user interfaces for adjusting user interface objects
US10754542B2 (en) 2015-08-10 2020-08-25 Apple Inc. Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback
US10235035B2 (en) 2015-08-10 2019-03-19 Apple Inc. Devices, methods, and graphical user interfaces for content navigation and manipulation
US11327648B2 (en) 2015-08-10 2022-05-10 Apple Inc. Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback
US10698598B2 (en) 2015-08-10 2020-06-30 Apple Inc. Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback
US9880735B2 (en) 2015-08-10 2018-01-30 Apple Inc. Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback
US10963158B2 (en) 2015-08-10 2021-03-30 Apple Inc. Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback
US11182017B2 (en) 2015-08-10 2021-11-23 Apple Inc. Devices and methods for processing touch inputs based on their intensities
US10884608B2 (en) 2015-08-10 2021-01-05 Apple Inc. Devices, methods, and graphical user interfaces for content navigation and manipulation
US10209884B2 (en) 2015-08-10 2019-02-19 Apple Inc. Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback
US10203868B2 (en) 2015-08-10 2019-02-12 Apple Inc. Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback
US10248308B2 (en) 2015-08-10 2019-04-02 Apple Inc. Devices, methods, and graphical user interfaces for manipulating user interfaces with physical gestures
US10162452B2 (en) 2015-08-10 2018-12-25 Apple Inc. Devices and methods for processing touch inputs based on their intensities
US11740785B2 (en) 2015-08-10 2023-08-29 Apple Inc. Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback
US20170086790A1 (en) * 2015-09-29 2017-03-30 General Electric Company Method and system for enhanced visualization and selection of a representative ultrasound image by automatically detecting b lines and scoring images of an ultrasound scan
US10338955B1 (en) 2015-10-22 2019-07-02 Gopro, Inc. Systems and methods that effectuate transmission of workflow between computing platforms
FR3043233A1 (en) * 2015-10-30 2017-05-05 Merry Pixel Method of automatically selecting images from a mobile device
CN105654463A (en) * 2015-11-06 2016-06-08 Leshi Mobile Intelligent Information Technology (Beijing) Co., Ltd. Image processing method applied to continuous shooting process and apparatus thereof
WO2017076040A1 (en) * 2015-11-06 2017-05-11 Le Holdings (Beijing) Co., Ltd. Image processing method and device for use during continuous shooting operation
CN105654470A (en) * 2015-12-24 2016-06-08 Xiaomi Inc. Image selection method, device and system
US10728489B2 (en) 2015-12-30 2020-07-28 Google Llc Low power framework for controlling image sensor mode in a mobile image capture device
US10732809B2 (en) 2015-12-30 2020-08-04 Google Llc Systems and methods for selective retention and editing of images captured by mobile image capture device
US11159763B2 (en) 2015-12-30 2021-10-26 Google Llc Low power framework for controlling image sensor mode in a mobile image capture device
US10225511B1 (en) 2015-12-30 2019-03-05 Google Llc Low power framework for controlling image sensor mode in a mobile image capture device
US9871994B1 (en) 2016-01-19 2018-01-16 Gopro, Inc. Apparatus and methods for providing content context using session metadata
US10078644B1 (en) 2016-01-19 2018-09-18 Gopro, Inc. Apparatus and methods for manipulating multicamera content using content proxy
US9787862B1 (en) * 2016-01-19 2017-10-10 Gopro, Inc. Apparatus and methods for generating content proxy
US10402445B2 (en) 2016-01-19 2019-09-03 Gopro, Inc. Apparatus and methods for manipulating multicamera content using content proxy
US10129464B1 (en) 2016-02-18 2018-11-13 Gopro, Inc. User interface for creating composite images
US10740869B2 (en) 2016-03-16 2020-08-11 Gopro, Inc. Systems and methods for providing variable image projection for spherical visual content
US9972066B1 (en) 2016-03-16 2018-05-15 Gopro, Inc. Systems and methods for providing variable image projection for spherical visual content
US10402938B1 (en) 2016-03-31 2019-09-03 Gopro, Inc. Systems and methods for modifying image distortion (curvature) for viewing distance in post capture
US10817976B2 (en) 2016-03-31 2020-10-27 Gopro, Inc. Systems and methods for modifying image distortion (curvature) for viewing distance in post capture
US11398008B2 (en) 2016-03-31 2022-07-26 Gopro, Inc. Systems and methods for modifying image distortion (curvature) for viewing distance in post capture
US9838730B1 (en) 2016-04-07 2017-12-05 Gopro, Inc. Systems and methods for audio track selection in video editing
US10341712B2 (en) 2016-04-07 2019-07-02 Gopro, Inc. Systems and methods for audio track selection in video editing
US10229719B1 (en) 2016-05-09 2019-03-12 Gopro, Inc. Systems and methods for generating highlights for a video
US9953679B1 (en) 2016-05-24 2018-04-24 Gopro, Inc. Systems and methods for generating a time lapse video
US10402677B2 (en) 2016-06-10 2019-09-03 Apple Inc. Hierarchical sharpness evaluation
US9922682B1 (en) 2016-06-15 2018-03-20 Gopro, Inc. Systems and methods for organizing video files
US9967515B1 (en) 2016-06-15 2018-05-08 Gopro, Inc. Systems and methods for bidirectional speed ramping
US10742924B2 (en) 2016-06-15 2020-08-11 Gopro, Inc. Systems and methods for bidirectional speed ramping
US11223795B2 (en) 2016-06-15 2022-01-11 Gopro, Inc. Systems and methods for bidirectional speed ramping
US9973647B2 (en) 2016-06-17 2018-05-15 Microsoft Technology Licensing, Llc. Suggesting image files for deletion based on image file parameters
US10045120B2 (en) 2016-06-20 2018-08-07 Gopro, Inc. Associating audio with three-dimensional objects in videos
US10671895B2 (en) 2016-06-30 2020-06-02 Microsoft Technology Licensing, Llc Automated selection of subjectively best image frames from burst captured image sequences
US10395119B1 (en) 2016-08-10 2019-08-27 Gopro, Inc. Systems and methods for determining activities performed during video capture
US10127246B2 (en) 2016-08-16 2018-11-13 Microsoft Technology Licensing, Llc Automatic grouping based handling of similar photos
US9953224B1 (en) 2016-08-23 2018-04-24 Gopro, Inc. Systems and methods for generating a video summary
US11508154B2 (en) 2016-08-23 2022-11-22 Gopro, Inc. Systems and methods for generating a video summary
US11062143B2 (en) 2016-08-23 2021-07-13 Gopro, Inc. Systems and methods for generating a video summary
US20180247130A1 (en) * 2016-08-23 2018-08-30 Gopro, Inc. Systems and methods for generating a video summary
US10726272B2 (en) * 2016-08-23 2020-07-28 Gopro, Inc. Systems and methods for generating a video summary
US10282632B1 (en) 2016-09-21 2019-05-07 Gopro, Inc. Systems and methods for determining a sample frame order for analyzing a video
US10268898B1 (en) 2016-09-21 2019-04-23 Gopro, Inc. Systems and methods for determining a sample frame order for analyzing a video via segments
US10397415B1 (en) 2016-09-30 2019-08-27 Gopro, Inc. Systems and methods for automatically transferring audiovisual content
US10560591B2 (en) 2016-09-30 2020-02-11 Gopro, Inc. Systems and methods for automatically transferring audiovisual content
US10044972B1 (en) 2016-09-30 2018-08-07 Gopro, Inc. Systems and methods for automatically transferring audiovisual content
US10560655B2 (en) 2016-09-30 2020-02-11 Gopro, Inc. Systems and methods for automatically transferring audiovisual content
US11106988B2 (en) 2016-10-06 2021-08-31 Gopro, Inc. Systems and methods for determining predicted risk for a flight path of an unmanned aerial vehicle
US10643661B2 (en) 2016-10-17 2020-05-05 Gopro, Inc. Systems and methods for determining highlight segment sets
US10923154B2 (en) 2016-10-17 2021-02-16 Gopro, Inc. Systems and methods for determining highlight segment sets
US10002641B1 (en) 2016-10-17 2018-06-19 Gopro, Inc. Systems and methods for determining highlight segment sets
US10339443B1 (en) 2017-02-24 2019-07-02 Gopro, Inc. Systems and methods for processing convolutional neural network operations using textures
US10776689B2 (en) 2017-02-24 2020-09-15 Gopro, Inc. Systems and methods for processing convolutional neural network operations using textures
US9916863B1 (en) 2017-02-24 2018-03-13 Gopro, Inc. Systems and methods for editing videos based on shakiness measures
US10791316B2 (en) * 2017-03-28 2020-09-29 Samsung Electronics Co., Ltd. Method for transmitting data about three-dimensional image
US10817992B2 (en) 2017-04-07 2020-10-27 Gopro, Inc. Systems and methods to create a dynamic blur effect in visual content
US10360663B1 (en) 2017-04-07 2019-07-23 Gopro, Inc. Systems and methods to create a dynamic blur effect in visual content
US10395122B1 (en) 2017-05-12 2019-08-27 Gopro, Inc. Systems and methods for identifying moments in videos
US10817726B2 (en) 2017-05-12 2020-10-27 Gopro, Inc. Systems and methods for identifying moments in videos
US10614315B2 (en) 2017-05-12 2020-04-07 Gopro, Inc. Systems and methods for identifying moments in videos
US10402698B1 (en) 2017-07-10 2019-09-03 Gopro, Inc. Systems and methods for identifying interesting moments within videos
US10614114B1 (en) 2017-07-10 2020-04-07 Gopro, Inc. Systems and methods for creating compilations based on hierarchical clustering
JP7285826B2 (en) 2017-08-21 2023-06-02 Koninklijke Philips N.V. B-line detection, presentation and reporting in lung ultrasound
JP2020531131A (en) * 2017-08-21 2020-11-05 Koninklijke Philips N.V. B-line detection, presentation and reporting in lung ultrasonography
US10521705B2 (en) * 2017-11-14 2019-12-31 Adobe Inc. Automatically selecting images using multicontext aware ratings
WO2019117460A1 (en) * 2017-12-11 2019-06-20 Samsung Electronics Co., Ltd. Wearable display device and control method therefor
CN108198177A (en) * 2017-12-29 2018-06-22 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Image acquiring method, device, terminal and storage medium
US10474903B2 (en) * 2018-01-25 2019-11-12 Adobe Inc. Video segmentation using predictive models trained to provide aesthetic scores
CN108629338A (en) * 2018-06-14 2018-10-09 Wuyi University Facial beauty prediction method based on LBP and convolutional neural networks
WO2020000382A1 (en) * 2018-06-29 2020-01-02 Hangzhou Eyecloud Technologies Co., Ltd. Motion-based object detection method, object detection apparatus and electronic device
US11836597B2 (en) * 2018-08-09 2023-12-05 Nvidia Corporation Detecting visual artifacts in image sequences using a neural network model
US20220094846A1 (en) * 2019-01-31 2022-03-24 Huawei Technologies Co., Ltd. Method for selecting image based on burst shooting and electronic device
US12003850B2 (en) * 2019-01-31 2024-06-04 Huawei Technologies Co., Ltd. Method for selecting image based on burst shooting and electronic device
EP3893495A4 (en) * 2019-01-31 2022-01-26 Huawei Technologies Co., Ltd. Method for selecting images based on continuous shooting and electronic device

Also Published As

Publication number Publication date
AU2017261537A1 (en) 2017-12-07
AU2014315547A1 (en) 2016-03-17
EP3044947A1 (en) 2016-07-20
CN105531988A (en) 2016-04-27
KR101731771B1 (en) 2017-04-28
EP3044947B1 (en) 2020-09-23
US20170006251A1 (en) 2017-01-05
WO2015034725A1 (en) 2015-03-12
AU2017261537B2 (en) 2019-10-03
KR20160040711A (en) 2016-04-14
US10523894B2 (en) 2019-12-31

Similar Documents

Publication Publication Date Title
AU2017261537B2 (en) Automated selection of keeper images from a burst photo captured set
CN108764370B (en) Image processing method, image processing device, computer-readable storage medium and computer equipment
US9619708B2 (en) Method of detecting a main subject in an image
JP3984175B2 (en) Photo image sorting apparatus and program
JP4772839B2 (en) Image identification method and imaging apparatus
US9251439B2 (en) Image sharpness classification system
US8644563B2 (en) Recognition of faces using prior behavior
US8311364B2 (en) Estimating aesthetic quality of digital images
US8290281B2 (en) Selective presentation of images
Sun et al. Photo assessment based on computational visual attention model
US8805112B2 (en) Image sharpness classification system
US8330826B2 (en) Method for measuring photographer's aesthetic quality progress
US20150172537A1 (en) Photographing apparatus, method and program
CN110807759B (en) Method and device for evaluating photo quality, electronic equipment and readable storage medium
CN111935479B (en) Target image determination method and device, computer equipment and storage medium
US9058655B2 (en) Region of interest based image registration
KR20140016401A (en) Method and apparatus for capturing images
CN113808069A (en) Hierarchical multi-class exposure defect classification in images
CN110730381A (en) Method, device, terminal and storage medium for synthesizing video based on video template
CN111279684A (en) Shooting control method and electronic device
CN112839167A (en) Image processing method, image processing device, electronic equipment and computer readable medium
WO2015196681A1 (en) Picture processing method and electronic device
JP2005332382A (en) Image processing method, device and program
Shen et al. Towards intelligent photo composition-automatic detection of unintentional dissection lines in environmental portrait photos
Souza et al. Generating an Album with the Best Media Using Computer Vision

Legal Events

Date Code Title Description
AS Assignment

Owner name: APPLE INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KEATING, BRETT;WONG, VINCENT;SACHS, TODD;AND OTHERS;SIGNING DATES FROM 20130829 TO 20130909;REEL/FRAME:031168/0074

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION