Image Quality Estimation Using a Reference Image Portion
A method includes receiving, by a device, a first image of a scene and a second image of at least a portion of the scene. The method includes identifying a first plurality of features from the first image and comparing the first plurality of features to a second plurality of features from the second image to identify a common feature. The method includes determining a particular subset of pixels that corresponds to the common feature, the particular subset of pixels corresponding to a first subset of pixels of the first image and a second subset of pixels of the second image. The method also includes generating a first image quality estimate of the first image based on a comparison of a first degree of variation within the first subset of pixels and a second degree of variation within the second subset of pixels.
This application claims priority from, and is a continuation of, U.S. patent application Ser. No. 14/277,329, filed May 14, 2014, which is incorporated herein by reference in its entirety.
FIELD OF THE DISCLOSURE

The present disclosure is generally related to estimating an image quality of an image using a reference image portion.
BACKGROUND

A set of similar images is often created when a person takes multiple photographs of a scene. The set of similar images may have different viewing angles, zoom settings, lighting, and exposure settings. Images similar to a particular image may also be available via the internet. For example, images taken at a famous vacation landmark or tourist location may be readily available via the internet.
When multiple images of an object or scene are available, a user may have interest in determining which of the images has the “best” image quality. Traditional methods for image quality evaluation include “full reference” methods, “reduced reference” methods, and “no reference” methods. “Full reference” methods compare a full first image to a full second image. “Reduced reference” methods compare a full first image to a portion of a full second image. “No reference” methods do not use a second image; rather, blurriness, blockiness (e.g., due to compression), or noisiness (e.g., due to low-light image capture) of a full image is estimated.
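For illustration, a minimal sketch of a “no reference” sharpness estimate is shown below using the variance of the Laplacian, a common blurriness proxy; OpenCV, the function name, and the choice of metric are illustrative assumptions rather than part of any particular prior method.

```python
# Hypothetical "no reference" sharpness estimate: variance of the Laplacian.
# OpenCV usage and the metric choice are illustrative assumptions.
import cv2

def no_reference_sharpness(image_path: str) -> float:
    """Return a sharpness score; lower values suggest a blurrier image."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    # The Laplacian responds strongly to edges; blur weakens edges, so a
    # blurry image yields a low variance of the Laplacian response.
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())
```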
The present disclosure describes systems and methods of estimating an image quality of each image in a set of images using other images in the set of images. For example, the described techniques include identifying a correspondence or match between pixel regions in the images of the set and estimating image quality based on comparing the pixel regions. The images in the set may not be identical snapshots. For example, information regarding darker regions may be more readily available in a slightly overexposed image and information regarding lighter regions may be more readily available in a slightly underexposed image. As another example, an object may be blurry in one image but clear in another image. Images in the set may also be received from different devices. For example, the set of images may include images captured by a camera of a computing device and/or images downloaded via the internet.
In a particular embodiment, a method includes receiving, by a device including a processor, a set of images, where each image of the set of images is related to a common scene. The method also includes determining, by the device, a first subset of pixels of a first image of the set that corresponds to a second subset of pixels of a second image of the set. The method further includes generating, by the device, a first image quality estimate of the first image based on a comparison of the first subset of pixels and the second subset of pixels.
In another particular embodiment, an apparatus includes a processor and a memory storing instructions executable by the processor to perform operations including receiving a set of images, where each image of the set of images is related to a common scene. The operations also include identifying a feature of the common scene that is represented in each image in the set of images, determining a first subset of pixels of a first image of the set, and determining a second subset of pixels of a second image of the set. The first subset of pixels and the second subset of pixels correspond to the feature. The operations further include generating a first image quality estimate of the first image based on a comparison of the first subset of pixels and the second subset of pixels.
In another particular embodiment, a computer-readable storage device stores instructions that, when executed by a computer, cause the computer to perform operations including receiving a set of images, where each image of the set of images is related to a common scene and is captured by a particular device. The operations also include determining a first subset of pixels of a first image of the set that corresponds to a second subset of pixels of a second image of the set. The operations further include generating a first image quality estimate of the first image based on a comparison of the first subset of pixels and the second subset of pixels.
The computing device 120 is operable to capture one or more images of the scene 110, to store one or more images of the scene 110, and/or to download one or more images of the scene 110 via a network 130. For example, a first set of images including a first image 112 of the scene 110 may be captured by the camera 123. The captured first image 112 may be stored within a memory, such as a memory 122 of the computing device 120. The computing device 120 also includes a processor 121 that is coupled to the memory 122 and coupled to the camera 123. The computing device 120 further includes a network interface 124 for communication with the network 130. The network 130 may be a wide area wireless network, and the network interface 124 may be a wireless transceiver, such as a cellular or Wi-Fi transceiver, that is configured to communicate via the network 130.
The computing device 120 may communicate via the network 130 with an image repository 140. The image repository 140 may store a second set of images of the scene 110, including a second image 141, that the computing device 120 may download via the network 130.
When a user captures multiple images of the scene 110 using the camera 123, the images may have different resolutions, may have different image processing filters applied during image capture, may correspond to different angles or perspectives of the scene 110, etc. A user of the computing device 120 may have interest in determining which captured image is a “highest quality” image that is suitable for use in a subsequent application (e.g., e-mailing to friends or family, posting on a social network, etc.).
In a particular embodiment, the computing device 120 includes an image quality calculation module 125 that is operable to estimate image quality of images captured, stored, and/or downloaded by the computing device 120. For example, the image quality calculation module 125 may correspond to hardware and/or software at the computing device 120. To illustrate, the image quality calculation module 125 may be implemented by hardware components within the processor 121 and/or instructions stored at the memory 122 that are executable by the processor 121. In the illustrated embodiment, the image quality calculation module 125 includes a feature extraction module 126 and a comparison module 127. The feature extraction module 126 may identify (e.g., extract) features, such as edges, corners, or other keypoints, from images to be compared.
The comparison module 127 may compare the extracted features from the images processed by the feature extraction module 126. For example, the feature extraction module 126 may extract features from the first image 112 of the scene 110 and from the second image 141 of the scene 110. Because the images 112, 141 are of the same scene 110, certain features extracted from the first image 112 may match or correspond to certain features extracted from the second image 141. The comparison module 127 may compare extracted features to determine subsets of pixels in the images 112, 141 that correspond to the common scene 110 or an object depicted therein. For example, the comparison module 127 may determine a first subset of pixels of the first image 112 that corresponds to a second subset of pixels of the second image 141. Thus, images that correspond to the common scene 110 may not be exactly identical, but rather may at least partially overlap in terms of object(s) depicted in the images, although such object(s) may be depicted from different angles, at different scales, in different lighting/color, etc. The comparison module 127 may generate a first image quality estimate of the first image based on a comparison of the first subset of pixels to the second subset of pixels. As an illustrative non-limiting example, the comparison module 127 may generate the first image quality estimate based on a comparison of a variation (e.g., in intensity, coloration, contrast, spatial frequency, etc.) within the first subset of pixels to a variation in the second subset of pixels. In a particular embodiment, because the subsets of pixels correspond to a common image feature, greater variation is interpreted as reflecting greater image detail, and therefore greater image quality.
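A minimal sketch of this mutual-reference comparison follows, assuming OpenCV ORB keypoints, brute-force descriptor matching, and intensity variance over square patches centered on matched keypoints; the function names, patch size, and averaging are hypothetical choices, not the only ones contemplated by the disclosure.

```python
# Hypothetical mutual-reference quality estimate: match features between two
# images, then compare pixel variation around each matched keypoint.
import cv2
import numpy as np

def patch_variation(gray: np.ndarray, pt: tuple, half: int = 8) -> float:
    """Intensity variance in a square patch centered on a keypoint."""
    x, y = int(pt[0]), int(pt[1])
    patch = gray[max(0, y - half):y + half + 1, max(0, x - half):x + half + 1]
    return float(patch.var())

def mutual_quality_estimate(gray1: np.ndarray, gray2: np.ndarray) -> float:
    """Quality of gray1 relative to gray2 (the reference); values above 1.0
    suggest gray1 carries more detail at the matched features."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(gray1, None)
    kp2, des2 = orb.detectAndCompute(gray2, None)
    if des1 is None or des2 is None:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    ratios = []
    for m in matcher.match(des1, des2):
        v1 = patch_variation(gray1, kp1[m.queryIdx].pt)
        v2 = patch_variation(gray2, kp2[m.trainIdx].pt)
        if v2 > 0:
            # Greater variation within a matched patch is read as more detail.
            ratios.append(v1 / v2)
    return float(np.mean(ratios)) if ratios else 0.0
```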
In an illustrative embodiment, the first subset of pixels and the second subset of pixels are determined by comparing pixels of the first image 112 to pixels of the second image 141 to identify a feature of the common scene 110 that is represented in the first image 112 and in the second image 141. The first subset of pixels may correspond to a particular feature in the first image 112 (e.g., a particular edge, corner, shape, etc. depicted in the first image 112), the second subset of pixels may correspond to the same feature in the second image 141, and the comparison module 127 may determine that the first subset of pixels and the second subset of pixels include common feature keypoints identified by the feature extraction module 126. The comparison module 127 may determine an image quality of the first image 112 based on a comparison of a portion of the first image 112 to a portion of a reference image, such as the second subset of pixels of the second image 141.
In a particular embodiment, the computing device 120 may receive multiple images of the scene 110 and determine an image quality of each of the multiple images. For example, the camera 123 may capture the first image 112 and a third image 113 of the scene 110. The comparison module 127 may generate a third image quality estimate of the third image 113 based on a comparison of a third subset of pixels of the third image 113 and the second subset of pixels previously identified with respect to the second (reference) image 141. As an illustrative non-limiting example, the comparison module 127 may generate the third image quality estimate based on a comparison of a variation (e.g., in intensity, coloration, contrast, spatial frequency, etc.) within the third subset of pixels to a variation in the second subset of pixels.
In a particular embodiment, the comparison module 127 may determine whether the first image quality estimate of the first image 112 and the third image quality estimate of the third image 113 satisfy an image quality threshold, such as an illustrated threshold 128. Based on a comparison of the first image quality estimate to the third image quality estimate, the comparison module 127 may designate the first image 112 or the third image 113 as a “preferred image.” Alternately, or in addition, the computing device 120 may generate and display an indication regarding which image(s) have an “acceptable” image quality that satisfies the threshold 128. As an illustrative non-limiting example, there may be more variation (e.g., in intensity, coloration, contrast, spatial frequency, etc.) in the first subset of pixels than in the third subset of pixels. In this example, the comparison module 127 may determine that the first image 112 has more detail, and therefore higher quality, than the third image 113. It should be noted that the present disclosure is not limited to estimating and comparing image quality based on two or three images. In alternate embodiments, more than three images may be used.
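A short sketch of the threshold test and preferred-image designation might look as follows; the estimate values and threshold are hypothetical inputs (e.g., outputs of a mutual-reference estimator such as the one sketched above).

```python
# Hypothetical threshold test and preferred-image designation.
def select_preferred(estimates: dict, threshold: float):
    """Return the highest-scoring image plus all images passing the threshold."""
    acceptable = [name for name, q in estimates.items() if q > threshold]
    preferred = max(estimates, key=estimates.get) if estimates else None
    return preferred, acceptable

# Example: the first image shows more variation than the third, so it is
# designated the preferred image; only it satisfies the threshold.
preferred, acceptable = select_preferred(
    {"first_image": 1.42, "third_image": 0.87}, threshold=1.0)
print(preferred, acceptable)  # first_image ['first_image']
```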
A user may use a preferred image (or multiple images that satisfy the threshold 128) in an application, such as e-mail, multimedia messaging, social network sharing, etc. In a particular embodiment, the user may be provided an option to delete captured images of the scene 110 other than the preferred image (or multiple images that satisfy the threshold 128). Further, the user may upload the preferred image (or multiple images that satisfy the threshold 128) to the image repository 140 for subsequent use, such as during subsequent image quality estimations by the computing device 120 or other computing devices.
In another exemplary embodiment, the image quality calculation module 125 facilitates a combination of the “best parts” of multiple images to generate a high quality composite image. For example, portions of the first image 112, the second (reference) image 141, and/or the third image 113 may be combined to form a high quality composite image of the scene 110 for use in subsequent applications and/or uploading to the image repository 140. In another particular embodiment, the image quality calculation module 125 maintains information regarding which image in a set of images has the highest quality for a particular image feature (e.g., one image may have higher dynamic range for bright image features whereas another image may have higher dynamic range for dark image features). The computing device 120 may dynamically switch between images of the set when a user zooms in or zooms out so that the user continues to see a highest available quality image (or composite image). When the set of images corresponds to images of a common scene (e.g., the scene 110) taken from different viewpoints, the estimated image qualities may be used to infer “global” illumination and structural information regarding the scene. Such information may be useful for three-dimensional (3-D) rendering or scene reconstruction. As an illustrative non-limiting example, a bidirectional reflectance distribution function (BRDF) may be computed.
By determining image quality based on image portions (e.g., subsets of pixels) instead of full images, the image quality calculation module 125 may provide improved speed as compared to “full reference” image quality estimators (which compare two full images to each other) and “reduced reference” image quality estimators (which compare a full image to an image portion). The image quality calculation module 125 is also more flexible than “full reference” and “reduced reference” estimators. For example, the image quality calculation module 125 may perform mutual reference image quality estimation even in situations where the images are not aligned or copies of each other. In contrast, a “full reference” estimator may require use of aligned images, where one image is a degraded version of the other image. A “reduced reference” estimator may require a reference image that is an enlarged version of a test image, so that viewpoint, exposure setting, etc. are constant. Further, by comparing pixels that correspond to a common feature, the image quality calculation module 125 may provide improved accuracy as compared to “no reference” image quality estimators and may determine which of a set of captured images is a “highest quality” image (or a “preferred image”).
After the first image quality estimate of the first image 112 and the third image quality estimate of the third image 113 are calculated, the two estimates may be compared to determine which is greater. Thus, the user of the computing device 120 may be presented (e.g., a display device of the computing device 120 may display) the one of the images 112, 113 that is determined by the computing device 120 as having the greater image quality (e.g., a preferred image). In a particular illustrative embodiment, the image quality estimate of each of the images may be compared to an image quality threshold, and any image having an image quality estimate that does not satisfy the image quality threshold may be excluded from further processing. Thus, the image quality threshold may be used to qualify or prequalify captured images prior to performing comparisons to determine a preferred image.
In a particular illustrative embodiment, the first image 112, the second image 141, and the third image 113 are not aligned. For example, subsets of pixels of the first image 112, the second image 141, and the third image 113 may be located in different positions within the images. Such variation may occur, for example, due to the difference of viewpoints/angles between the images 112, 141, and 113. Even though the images 112, 141, and 113 are not aligned, subsets (e.g., portions) of the images may be compared to each other during image processing. In a particular embodiment, a common feature is detected in each of the images via a feature detection algorithm and is used to align the images (or to identify corresponding portions of the images) for comparison purposes.
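One way to realize such feature-based alignment is sketched below, assuming ORB keypoints and a RANSAC-fitted homography (standard techniques used here for illustration, not necessarily the feature detection algorithm of the disclosure); the warped image can then be compared to the reference subset by subset.

```python
# Hypothetical feature-based alignment: warp `image` into the coordinate
# frame of `reference` using a RANSAC-fitted homography.
import cv2
import numpy as np

def align_to_reference(image: np.ndarray, reference: np.ndarray) -> np.ndarray:
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(image, None)
    kp2, des2 = orb.detectAndCompute(reference, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC discards mismatched keypoints before fitting the homography.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = reference.shape[:2]
    return cv2.warpPerspective(image, H, (w, h))
```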
While the first image 112 and the third image 113 have been described as being compared to a second image 141 that is received from an external repository, in other embodiments, the second image 141 is another image captured by the computing device 120. In such an embodiment, images captured by the computing device 120 are compared to other images also captured by the computing device 120 in order to estimate image quality as described herein. In this case, the computing device 120 may perform image processing and image quality assessments regardless of connectivity to a network, such as the network 130 described above.
The first image 112 includes a first set of pixels 301 and the second image 141 includes a second set of pixels 302. The first set of pixels 301 corresponds to a feature or sub-portion of a captured scene (e.g., the scene 110 described above), and the second set of pixels 302 corresponds to the same feature or sub-portion as represented in the second image 141.
A processor executing the image quality algorithm receives image data corresponding to two or more images of a similar scene and computes a quality of one of the images based on use of the other images as partial references. One or more of the images may be obtained via an external source, such as via a wide area network. The processor executing the image quality algorithm identifies correspondences or matches between features of a set of images and computes a quality estimate for each image using content of the other images in the set for each of the features. The images in the set may not be aligned and may have different characteristics. The image quality algorithm may enable estimation of impairments in images, including estimates of the impacts of different exposure settings. The image quality algorithm takes advantage of information about a scene that is available from similar images. Feature points (e.g., SIFT, SURF, FAST, etc.) are used to identify spatial correspondences from one image to another within a set of images. The feature points may be used as references for estimating the relative quality of two images. Color balance and color saturation can also be estimated by comparing images taken at different camera settings.
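As an illustration of the color comparison mentioned above, mean saturation in HSV space can be compared across two captures of the same scene; the measure and function names below are illustrative assumptions rather than the disclosed method.

```python
# Hypothetical color-saturation comparison between two captures of a scene.
import cv2
import numpy as np

def mean_saturation(bgr: np.ndarray) -> float:
    """Average saturation of a BGR image, on OpenCV's 0-255 scale."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    return float(hsv[:, :, 1].mean())

def compare_saturation(bgr1: np.ndarray, bgr2: np.ndarray) -> float:
    """Positive values suggest the first capture is the more saturated one."""
    return mean_saturation(bgr1) - mean_saturation(bgr2)
```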
A particular embodiment of a method 400 of estimating an image quality of an image using a reference image portion is described below.
The method 400 includes receiving a first image and a set of reference images, where the first image and each image of the set of reference images are related to a common scene, at 402. For example, the first image may be the first image 112 described above, and the set of reference images may include the second (reference) image 141. The method 400 includes determining a first subset of pixels of the first image that corresponds to a first reference subset of pixels of a first reference image of the set of reference images, at 404, and generating a first image quality estimate of the first image based on a comparison of the first subset of pixels and the first reference subset of pixels, at 406. The method 400 also includes receiving a second image related to the common scene, at 408.
The method 400 further includes determining a second subset of pixels of the second image that corresponds to the reference subset of pixels of the first reference image of the set of reference images, at 410. The method 400 further includes generating a second image quality estimate of the second image based on a comparison of the second subset of pixels and the first reference subset of pixels, at 412.
The method 400 further includes performing a comparison of the first image quality estimate to the second image quality estimate, at 414, and designating the first image or the second image as a preferred image based on a result of the comparison, at 416. The method 400 may thus enable a device to identify a preferred image from among multiple images of a common scene.
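Tying the steps together, a compact sketch of the method 400 might read as follows; it reuses the hypothetical mutual_quality_estimate helper from the earlier sketch, and the step numbers in the comments map to the description above.

```python
# Compact sketch of method 400; assumes mutual_quality_estimate from the
# earlier sketch is in scope, and that the file paths are hypothetical.
import cv2

def method_400(first_path: str, second_path: str, reference_path: str) -> str:
    # 402, 408: receive the first image, the second image, and a reference image.
    first = cv2.imread(first_path, cv2.IMREAD_GRAYSCALE)
    second = cv2.imread(second_path, cv2.IMREAD_GRAYSCALE)
    reference = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    # 404-406: determine corresponding pixel subsets and generate the first estimate.
    first_estimate = mutual_quality_estimate(first, reference)
    # 410-412: determine the second subset and generate the second estimate.
    second_estimate = mutual_quality_estimate(second, reference)
    # 414-416: compare the estimates and designate a preferred image.
    return first_path if first_estimate >= second_estimate else second_path
```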
In a networked deployment, the general computer system 500 may operate in the capacity of a server or as a client user computer in a server-client user network environment, or as a peer computer system in a peer-to-peer (or distributed) network environment. The general computer system 500 may also be implemented as or incorporated into various devices, such as a mobile device, a palmtop computer, a laptop computer, a desktop computer, a communications device, a wireless telephone, a personal computer (PC), a tablet PC, a personal digital assistant (PDA), a set-top box, a customer premises equipment device, an endpoint device, a web appliance, or any other machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. In a particular embodiment, the general computer system 500 may be implemented using electronic devices that provide video, audio, or data communication. Further, while one general computer system 500 is illustrated, the term “system” shall also be taken to include any collection of systems or sub-systems that individually or jointly execute a set, or multiple sets, of instructions to perform one or more computer functions.
In an alternative embodiment, dedicated hardware implementations, such as application specific integrated circuits, programmable logic arrays and other hardware devices, may be constructed to implement one or more of the methods described herein. Various embodiments may broadly include a variety of electronic and computer systems. One or more embodiments described herein may implement functions using two or more specific interconnected hardware modules or devices with related control and data signals that can be communicated between and through the modules, or as portions of an application-specific integrated circuit (ASIC). Accordingly, the present system encompasses software, firmware, and hardware implementations.
In accordance with various embodiments of the present disclosure, the methods described herein may be implemented by software programs executable by a computer system, a processor, or a device, which may include forms of instructions embodied as a state machine implemented with logic components in an ASIC or a field-programmable gate array (FPGA) device. Further, in an exemplary, non-limiting embodiment, implementations may include distributed processing, component/object distributed processing, and parallel processing. Alternatively, virtual computer system processing may be constructed to implement one or more of the methods or functionality as described herein. It is further noted that a computing device, such as a processor, a controller, a state machine, or other suitable device for executing instructions to perform operations or methods may perform such operations directly or indirectly by way of one or more intermediate devices directed by the computing device.
A computer-readable storage device 522 may store the data and instructions 524, or may receive, store, and execute the data and instructions 524, so that a device may perform image quality calculation/estimation as described herein. For example, the computer-readable storage device 522 may include or be included within one or more of the components of the computing device 120. While the computer-readable storage device 522 is shown to be a single device, the computer-readable storage device 522 may include a single device or multiple devices, such as a centralized or distributed database, and/or associated caches and servers that store one or more sets of instructions. The computer-readable storage device 522 is capable of storing a set of instructions for execution by a processor to cause a computer system to perform any one or more of the methods or operations disclosed herein.
In a particular non-limiting, exemplary embodiment, the computer-readable storage device 522 may include a solid-state memory such as embedded memory (or a memory card or other package that houses one or more non-volatile read-only memories). Further, the computer-readable storage device 522 may be a random access memory or other volatile re-writable memory. Additionally, the computer-readable storage device 522 may include a magneto-optical or optical device, such as a disk, a tape, or another storage device. Accordingly, the disclosure is considered to include any one or more of a computer-readable storage device and other equivalents and successor devices in which data or instructions may be stored.
Although one or more components and functions may be described herein as being implemented with reference to particular standards or protocols, the disclosure is not limited to such standards and protocols. For example, standards for internet and other network transmission (e.g., TCP/IP, UDP/IP, HTML, HTTP) represent examples of the state of the art. Such standards are from time to time superseded by faster or more efficient equivalents having essentially the same functions. Wireless standards for device detection (e.g., RFID), short-range communications (e.g., Bluetooth, Wi-Fi, Zigbee), and long-range communications (e.g., WiMAX, GSM, CDMA, LTE) can be used by the computer system 500 in selected embodiments.
The illustrations of the embodiments described herein are intended to provide a general understanding of the structure of the various embodiments. The illustrations are not intended to serve as a complete description of all of the elements and features of apparatus and systems that utilize the structures or methods described herein. Many other embodiments may be apparent to those of skill in the art upon reviewing the disclosure. Other embodiments may be utilized and derived from the disclosure, such that structural and logical substitutions and changes may be made without departing from the scope of the disclosure. Figures are also merely representational and may not be drawn to scale. Certain proportions thereof may be exaggerated, while others may be minimized. Accordingly, the disclosure and the figures are to be regarded as illustrative rather than restrictive.
Although specific embodiments have been illustrated and described herein, it should be appreciated that any subsequent arrangement designed to achieve the same or similar purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all subsequent adaptations or variations of various embodiments.
Less than all of the steps or functions described with respect to the exemplary processes or methods can also be performed in one or more of the exemplary embodiments. Further, the use of numerical terms to describe a device, component, step or function, such as first, second, third, and so forth, is not intended to describe an order or function unless expressly stated. The use of the terms first, second, third and so forth, is generally to distinguish between devices, components, steps or functions unless expressly stated otherwise. Additionally, one or more devices or components described with respect to the exemplary embodiments can facilitate one or more functions, where the facilitating (e.g., facilitating access or facilitating establishing a connection) can include less than every step needed to perform the function or can include all of the steps needed to perform the function.
In one or more embodiments, a processor (which can include a controller or circuit) has been described that performs various functions. It should be understood that the processor can be implemented as multiple processors, which can include distributed processors or parallel processors in a single machine or multiple machines. The processor can be used in supporting a virtual processing environment. The virtual processing environment may support one or more virtual machines representing computers, servers, or other computing devices. In such virtual machines, components such as microprocessors and storage devices may be virtualized or logically represented. The processor can include a state machine, an application specific integrated circuit, and/or a programmable gate array (PGA) including a Field PGA. In one or more embodiments, when a processor executes instructions to perform “operations”, this can include the processor performing the operations directly and/or facilitating, directing, or cooperating with another device or component to perform the operations.
The Abstract is provided with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, various features may be grouped together or described in a single embodiment for the purpose of streamlining the disclosure. This disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, subject matter may be directed to less than all of the features of any of the disclosed embodiments. Thus, the following claims are incorporated into the Detailed Description, with each claim standing on its own as defining separately claimed subject matter.
The above-disclosed subject matter is to be considered illustrative, and not restrictive, and the appended claims are intended to cover all such modifications, enhancements, and other embodiments, which fall within the scope of the present disclosure. Thus, to the maximum extent allowed by law, the scope of the present disclosure is to be determined by the broadest permissible interpretation of the following claims and their equivalents, and shall not be restricted or limited by the foregoing detailed description.
Claims
1. A method comprising:
- receiving, by a device, a first image of a scene;
- receiving, at the device, a second image of at least a portion of the scene;
- identifying, at the device, a first plurality of features from the first image;
- comparing, at the device, the first plurality of features to a second plurality of features from the second image to identify a common feature;
- determining, at the device, a particular subset of pixels that corresponds to the common feature, the particular subset of pixels corresponding to a first subset of pixels of the first image and a second subset of pixels of the second image; and
- generating, at the device, a first image quality estimate of the first image based on a comparison of a first degree of variation within the first subset of pixels and a second degree of variation within the second subset of pixels.
2. The method of claim 1, further comprising:
- receiving a third image of at least a portion of the scene;
- generating a third image quality estimate of the third image based on a comparison of a third degree of variation within a third subset of pixels of the third image and the second degree of variation within the second subset of pixels, wherein the third subset of pixels corresponds to the common feature; and
- performing a comparison of the first image quality estimate to the third image quality estimate.
3. The method of claim 2, further comprising sending, from the device to a display device, an indicator that identifies the first image as a preferred image in response to the first image quality estimate exceeding the third image quality estimate.
4. The method of claim 1, further comprising:
- comparing the first image quality estimate to a threshold; and
- sending to a display device a first indicator of the first image and conditionally sending a second indicator, the second indicator sent to the display device responsive to the first image quality estimate exceeding the threshold, wherein the second indicator is not sent to the display device in response to the first image quality estimate not exceeding the threshold.
5. The method of claim 4, further comprising providing a selectable option to the display device, wherein selection of the selectable option enables group deletion of images with image quality estimates that do not exceed the threshold.
6. The method of claim 1, wherein the first degree of variation corresponds to a first variation of color intensity of the first subset of pixels, and wherein the second degree of variation corresponds to a second variation of color intensity of the second subset of pixels.
7. The method of claim 1, wherein the first degree of variation corresponds to coloration of the first subset of pixels, and wherein the second degree of variation corresponds to coloration of the second subset of pixels.
8. The method of claim 1, wherein the first degree of variation corresponds to spatial frequency of the first subset of pixels, and wherein the second degree of variation corresponds to spatial frequency of the second subset of pixels.
9. The method of claim 1, wherein the device comprises a camera, and wherein the first image is captured by the camera.
10. The method of claim 9, wherein the second image is a reference image not taken by the camera.
11. An apparatus comprising:
- a processor; and
- a memory storing instructions executable by the processor to perform operations including:
- receiving a first image of a scene;
- receiving a second image of at least a portion of the scene;
- identifying a first plurality of features from the first image;
- comparing the first plurality of features to a second plurality of features from the second image to identify a common feature;
- determining a particular subset of pixels that corresponds to the common feature, the particular subset of pixels corresponding to a first subset of pixels of the first image and a second subset of pixels of the second image; and
- generating a first image quality estimate of the first image based on a comparison of a first degree of variation within the first subset of pixels and a second degree of variation within the second subset of pixels.
12. The apparatus of claim 11, further comprising a camera coupled to the processor, wherein the first image is received from the camera.
13. The apparatus of claim 12, further comprising a display device coupled to the processor, wherein the operations further include comparing the first image quality estimate to a threshold and sending to the display device a first indicator of the first image and conditionally sending a second indicator, the second indicator sent to the display device responsive to the first image quality estimate exceeding the threshold, and wherein the second indicator is not sent to the display device in response to the first image quality estimate not exceeding the threshold.
14. The apparatus of claim 12, wherein the first image depicts the scene from a first angle, and wherein the second image depicts the scene from a second angle distinct from the first angle.
15. The apparatus of claim 11, wherein the second image is received from an image repository.
16. A computer-readable storage device storing instructions that, when executed by a computer, cause the computer to perform operations, the operations comprising:
- receiving a first image of a scene;
- receiving a second image of at least a portion of the scene;
- identifying a first plurality of features from the first image;
- identifying a second plurality of features from the second image;
- comparing the first plurality of features to the second plurality of features to identify a subset of common features;
- determining a particular subset of pixels that corresponds to a particular feature of the subset of common features, the particular subset of pixels corresponding to a first subset of pixels of the first image and a second subset of pixels of the second image; and
- generating a first image quality estimate of the first image based on a comparison of a first degree of variation within the first subset of pixels and a second degree of variation within the second subset of pixels.
17. The computer-readable storage device of claim 16, wherein the operations further comprise:
- receiving a third image of at least a portion of the scene;
- generating a third image quality estimate of the third image based on a comparison of a third degree of variation within a third subset of pixels of the third image and the second degree of variation within the second subset of pixels, wherein the third subset of pixels corresponds to the particular feature; and
- performing a comparison of the first image quality estimate to the third image quality estimate.
18. The computer-readable storage device of claim 17, wherein the operations further comprise sending, to a display device, an indicator that identifies the third image as a preferred image in response to the third image quality estimate exceeding the first image quality estimate.
19. The computer-readable storage device of claim 16, wherein the operations further comprise:
- comparing the first image quality estimate to a threshold; and
- sending to a display device a first indicator of the first image and conditionally sending a second indicator, the second indicator sent to the display device responsive to the first image quality estimate exceeding the threshold, wherein the second indicator is not sent to the display device in response to the first image quality estimate not exceeding the threshold.
20. The computer-readable storage device of claim 19, wherein the operations further comprise providing a selectable option to the display device, wherein selection of the selectable option enables group deletion of images with image quality estimates that do not exceed the threshold.
Type: Application
Filed: Jun 19, 2018
Publication Date: Oct 18, 2018
Patent Grant number: 10922580
Inventors: Amy Ruth Reibman (Chatham, NJ), Zhu Liu (Marlboro, NJ), Lee Begeja (Gillete, NJ), Bernard S. Renger (New Providence, NJ), David Crawford Gibbon (Lincroft, NJ), Behzad Shahraray (Holmdel, NJ), Raghuraman Gopalan (Freehold, NJ), Eric Zavesky (Austin, TX)
Application Number: 16/011,692