Patents by Inventor Alexander C. Loui

Alexander C. Loui has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10540568
    Abstract: Embodiments of the present disclosure include a computer-implemented method that receives a digital image input, the digital image input containing one or more dynamic salient objects arranged over a background. The method also includes performing a tracking operation, the tracking operation identifying the dynamic salient object over one or more frames of the digital image input as the dynamic salient object moves over the background. The method further includes performing a clustering operation, in parallel with the tracking operation, on the digital image input, the clustering operation identifying boundary conditions of the dynamic salient object. Additionally, the method includes combining a first output from the tracking operation and a second output from the clustering operation to generate a third output. The method further includes performing a segmentation operation on the third output, the segmentation operation extracting the dynamic salient object from the digital image input.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: January 21, 2020
    Assignee: KODAK ALARIS INC.
    Inventors: Alexander C. Loui, Chi Zhang
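The combine-then-segment step described in the abstract above can be sketched in a few lines. This is a hypothetical illustration, not the patented implementation: `track_box` stands in for the tracking operation's output and `cluster_mask` for the clustering operation's boundary output.

```python
import numpy as np

def combine_outputs(track_box, cluster_mask):
    """Fuse tracker and clustering outputs: keep cluster pixels
    that fall inside the tracked bounding box (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = track_box
    box_mask = np.zeros_like(cluster_mask, dtype=bool)
    box_mask[y0:y1, x0:x1] = True
    return cluster_mask & box_mask

def extract_object(frame, mask):
    """Segmentation step: copy only the masked (salient) pixels."""
    out = np.zeros_like(frame)
    out[mask] = frame[mask]
    return out
```

The fused mask is the "third output" of the abstract; segmentation then extracts the salient object from the frame.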
  • Patent number: 10528795
    Abstract: A method for determining an impact score for a digital image includes providing the digital image wherein the digital image includes faces; using a processor to determine an image feature for the faces; using the processor to compute an object impact score for the faces, wherein the object impact score is based at least upon one of the determined image features; weighting the object impact score for the faces based on one of the determined image features for a face; using the processor to compute an impact score for the digital image by combining the weighted object impact scores for the faces in the image; and storing the computed impact score in a processor accessible memory.
    Type: Grant
    Filed: October 26, 2016
    Date of Patent: January 7, 2020
    Assignee: KODAK ALARIS INC.
    Inventors: Raymond William Ptucha, Alexander C. Loui, Mark D. Wood, David K. Rhoda, David Kloosterman, Joseph Anthony Manico
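The weighted-combination scoring in the abstract above can be illustrated with a toy sketch; the feature names (`sharpness`, `centrality`) and the size-based weighting are assumptions, not the patented scoring.

```python
def object_impact(face):
    # per-face impact score from normalized image features (hypothetical features)
    return (face["sharpness"] + face["centrality"]) / 2.0

def image_impact(faces):
    # weight each face's score by its relative size, then combine into one score
    weights = [f["size"] for f in faces]
    scores = [object_impact(f) for f in faces]
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)
```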
  • Publication number: 20190164006
    Abstract: Embodiments of the present disclosure include a computer-implemented method that receives a digital image input, the digital image input containing one or more dynamic salient objects arranged over a background. The method also includes performing a tracking operation, the tracking operation identifying the dynamic salient object over one or more frames of the digital image input as the dynamic salient object moves over the background. The method further includes performing a clustering operation, in parallel with the tracking operation, on the digital image input, the clustering operation identifying boundary conditions of the dynamic salient object. Additionally, the method includes combining a first output from the tracking operation and a second output from the clustering operation to generate a third output. The method further includes performing a segmentation operation on the third output, the segmentation operation extracting the dynamic salient object from the digital image input.
    Type: Application
    Filed: January 31, 2019
    Publication date: May 30, 2019
    Applicant: Kodak Alaris Inc.
    Inventors: Alexander C. Loui, Chi Zhang
  • Publication number: 20190156472
    Abstract: A system and method for performing real-time quality inspection of objects is disclosed. The system and method include a transport to move the objects being inspected, allowing the inspection to be performed in-line. At least one optical acquisition unit captures optical images of the objects being inspected. The captured optical images are matched to CAD models of the objects, and the matched CAD model is extracted. A laser with an illumination light beam having a wavelength in the violet or ultraviolet range then scans the objects, and the scans are formed into three-dimensional point clouds. The point clouds are compared to the extracted CAD models for each object, where CTF features are compared to user input or CAD model information, and the object is determined to be acceptable or defective based on the extent of deviation between the point cloud and the CAD model.
    Type: Application
    Filed: July 12, 2018
    Publication date: May 23, 2019
    Applicant: KODAK ALARIS INC.
    Inventors: Bruce A. LINK, Robert W. JOHNSON, Alexander C. LOUI, Jose Zvietcovich ZEGARRA, Erik GARCELL
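The accept/reject decision in the abstract above hinges on point-cloud-to-model deviation. A minimal sketch, assuming a nearest-point distance as the deviation measure (the patented comparison may differ):

```python
import math

def max_deviation(scan_points, model_points):
    # worst-case distance from any scanned point to its nearest model point
    return max(min(math.dist(p, q) for q in model_points) for p in scan_points)

def inspect(scan_points, model_points, tolerance):
    # accept the part only if the point cloud stays within tolerance of the CAD model
    return "acceptable" if max_deviation(scan_points, model_points) <= tolerance else "defective"
```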
  • Patent number: 10229340
    Abstract: Embodiments of the present disclosure include a computer-implemented method that receives a digital image input, the digital image input containing one or more dynamic salient objects arranged over a background. The method also includes performing a tracking operation, the tracking operation identifying the dynamic salient object over one or more frames of the digital image input as the dynamic salient object moves over the background. The method further includes performing a clustering operation, in parallel with the tracking operation, on the digital image input, the clustering operation identifying boundary conditions of the dynamic salient object. Additionally, the method includes combining a first output from the tracking operation and a second output from the clustering operation to generate a third output. The method further includes performing a segmentation operation on the third output, the segmentation operation extracting the dynamic salient object from the digital image input.
    Type: Grant
    Filed: February 24, 2017
    Date of Patent: March 12, 2019
    Assignee: KODAK ALARIS INC.
    Inventors: Alexander C. Loui, Chi Zhang
  • Publication number: 20190065886
    Abstract: A method for creating navigable views includes receiving digital images, computing a set of feature points for each of the digital images, selecting one of the digital images as a reference image, identifying a salient region of interest in the reference image, identifying other digital images containing a region of interest similar to the salient region of interest in the reference image using the set of feature points computed for each of other digital images, designating a reference location for the salient region of interest in the reference image, aligning the other digital images to the image that contains the designated reference location, ordering the image that contains the designated reference location and the other digital images, and generating a navigable view.
    Type: Application
    Filed: October 29, 2018
    Publication date: February 28, 2019
    Applicant: Kodak Alaris Inc.
    Inventors: Alexander C. Loui, Joseph A. Manico
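The select-and-order step in the navigable-views abstract above can be sketched as follows. Representing feature points as sets and scoring by overlap is an assumption for illustration, not the patented matching method.

```python
def roi_similarity(ref_points, img_points):
    # fraction of the reference ROI's feature points matched in another image
    return len(ref_points & img_points) / len(ref_points)

def select_and_order(ref_points, images, threshold=0.5):
    # keep images with a similar region of interest, ordered best-match first
    scored = [(roi_similarity(ref_points, pts), name) for name, pts in images]
    return [name for score, name in sorted(scored, reverse=True) if score >= threshold]
```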
  • Patent number: 10192117
    Abstract: A method for graph-based spatiotemporal video segmentation and automatic target object extraction in high-dimensional feature space includes using a processor to automatically analyze an entire volumetric video sequence; using the processor to construct a high-dimensional feature space that includes color, motion, time, and location information so that pixels in the entire volumetric video sequence are reorganized according to their unique and distinguishable feature vectors; using the processor to create a graph model that fuses the appearance, spatial, and temporal information of all pixels of the video sequence in the high-dimensional feature space; and using the processor to group pixels in the graph model that are inherently similar and assign the same labels to them to form semantic spatiotemporal key segments.
    Type: Grant
    Filed: May 27, 2016
    Date of Patent: January 29, 2019
    Assignee: KODAK ALARIS INC.
    Inventors: Alexander C. Loui, Lei Fan
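The grouping of inherently similar pixels in feature space, described in the abstract above, can be illustrated with a simple greedy (leader) clustering over feature vectors; the actual patent builds a graph model, so this is only a stand-in sketch.

```python
import math

def leader_cluster(features, eps):
    """Greedy grouping in feature space: assign each vector to the first
    existing cluster within `eps`, otherwise start a new cluster."""
    centroids, labels = [], []
    for v in features:
        for i, c in enumerate(centroids):
            if math.dist(v, c) <= eps:
                labels.append(i)
                break
        else:
            centroids.append(v)
            labels.append(len(centroids) - 1)
    return labels
```

Vectors near one another in the (color, motion, time, location) feature space receive the same label, forming key segments.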
  • Patent number: 10134440
    Abstract: A method for producing an audio-visual slideshow for a video sequence having an audio soundtrack and a corresponding video track including a time sequence of image frames, comprising: segmenting the audio soundtrack into a plurality of audio segments; subdividing the audio segments into a sequence of audio frames; determining a corresponding audio classification for each audio frame; automatically selecting a subset of the audio segments responsive to the audio classification for the corresponding audio frames; for each of the selected audio segments automatically analyzing the corresponding image frames to select one or more key image frames; merging the selected audio segments to form an audio summary; forming an audio-visual slideshow by combining the selected key frames with the audio summary, wherein the selected key frames are displayed synchronously with their corresponding audio segment; and storing the audio-visual slideshow in a processor-accessible storage memory.
    Type: Grant
    Filed: May 3, 2011
    Date of Patent: November 20, 2018
    Assignee: KODAK ALARIS INC.
    Inventors: Wei Jiang, Alexander C. Loui, Courtenay Cotton
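The segment-selection and key-frame steps of the slideshow method above can be sketched as below. The majority-class selection rule and the quality-based key-frame pick are assumptions for illustration.

```python
def select_segments(segments, wanted):
    # keep audio segments whose frames are mostly of the wanted class
    return [s for s in segments
            if sum(c == wanted for c in s["classes"]) * 2 > len(s["classes"])]

def key_frame(segment):
    # pick the highest-quality image frame as the segment's key frame
    return max(segment["frames"], key=lambda f: f["quality"])["id"]
```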
  • Patent number: 10115033
    Abstract: A method for creating navigable views includes receiving digital images, computing a set of feature points for each of the digital images, selecting one of the digital images as a reference image, identifying a salient region of interest in the reference image, identifying other digital images containing a region of interest similar to the salient region of interest in the reference image using the set of feature points computed for each of other digital images, designating a reference location for the salient region of interest in the reference image, aligning the other digital images to the image that contains the designated reference location, ordering the image that contains the designated reference location and the other digital images, and generating a navigable view.
    Type: Grant
    Filed: July 28, 2014
    Date of Patent: October 30, 2018
    Assignee: Kodak Alaris Inc.
    Inventors: Alexander C. Loui, Joseph A. Manico
  • Patent number: 10089532
    Abstract: The present application is directed to new methods for automatically determining several characteristics of frames in a video sequence and automatically recommending or preparing image output products based on those frame characteristics. In some embodiments, motion characteristics of particular image frames are calculated, and those motion characteristics are automatically used to prepare or recommend image output products suitable for the motion characteristics of the frames. In other embodiments, facial, audio, and overall image quality are assessed and used to automatically recommend or prepare image output products. In still other embodiments, image frames in a video sequence are analyzed for various user-specified characteristics, which characteristics are then used to automatically recommend or prepare image output products.
    Type: Grant
    Filed: February 23, 2015
    Date of Patent: October 2, 2018
    Assignee: KODAK ALARIS INC.
    Inventors: Alexander C. Loui, Brian Mittelstaedt
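The motion-and-quality-driven recommendation logic above might look like the following toy rules; the thresholds and product names are assumptions, not the patented logic.

```python
def recommend_products(frames):
    """Recommend output products from average motion and quality of frames."""
    motion = sum(f["motion"] for f in frames) / len(frames)
    quality = sum(f["quality"] for f in frames) / len(frames)
    recs = []
    if motion < 0.3 and quality > 0.7:
        recs.append("photo print")      # still, sharp frames suit a print
    if motion >= 0.3:
        recs.append("animated clip")    # motion suits a short clip
    return recs
```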
  • Publication number: 20170351417
    Abstract: Data points, calendar entries, trends, and behavioral patterns may be used to predict and pre-emptively build digital and printable products with selected collections of images without the user's active participation. The collections are selected from files on the user's device, cloud-based photo library, or other libraries shared among other individuals and grouped into thematic products. Based on analysis of the user's collections and on-line behaviors, the system may estimate types and volumes of potential media-centric products, and the resources needed for producing and distributing such media-centric products for a projected period of time. A user interface may take the form of a “virtual curator”, which is a graphical or animated persona for augmenting and managing interactions between the user and the system managing the user's stored media assets. The virtual curator can assume one of many personas, as appropriate, with each user.
    Type: Application
    Filed: June 1, 2017
    Publication date: December 7, 2017
    Applicant: Kodak Alaris Inc.
    Inventors: Joseph A. Manico, Young No, Madirakshi Das, Alexander C. Loui
  • Publication number: 20170352083
    Abstract: Data points, calendar entries, trends, and behavioral patterns may be used to predict and pre-emptively build digital and printable products with selected collections of images without the user's active participation. The collections are selected from files on the user's device, cloud-based photo library, or other libraries shared among other individuals and grouped into thematic products. Based on analysis of the user's collections and on-line behaviors, the system may estimate types and volumes of potential media-centric products, and the resources needed for producing and distributing such media-centric products for a projected period of time. A user interface may take the form of a “virtual curator”, which is a graphical or animated persona for augmenting and managing interactions between the user and the system managing the user's stored media assets. The virtual curator can assume one of many personas, as appropriate, with each user.
    Type: Application
    Filed: June 1, 2017
    Publication date: December 7, 2017
    Applicant: Kodak Alaris Inc.
    Inventors: Kenneth Ruck, Joseph A. Manico, David Kloosterman, Alexander C. Loui, Madirakshi Das
  • Patent number: 9824271
    Abstract: An adaptable eye artifact identification and correction method is disclosed. Eye artifacts are identified and classified based on color, severity, shape, eye location, and cause. Based on this classification, an eye artifact correction algorithm is selected from a series of eye artifact correction techniques. For minor artifacts, simple color correction techniques are deployed to restore the iris color and to drive the pupil to once again appear black. For severe eye artifacts, face detection and metadata analysis are utilized to search the user's image collection for recent images of the subject without the eye artifact condition. Once located, these images provide eye color and shape information for use in replacing the pixels expressing the eye artifact condition. The non-artifact eye images are used to provide the appropriate eye color and shape to correct the eye artifact condition for more severe eye artifacts.
    Type: Grant
    Filed: June 25, 2014
    Date of Patent: November 21, 2017
    Assignee: KODAK ALARIS INC.
    Inventors: James Andrew Whritenor, Joseph Anthony Manico, Alexander C. Loui
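The severity-based routing in the abstract above can be sketched as follows; the severity threshold and record fields are assumptions for illustration.

```python
def plan_correction(artifact, collection, severe=0.5):
    """Route correction by artifact severity (a sketch, not the patented method)."""
    if artifact["severity"] < severe:
        return {"method": "color_correct"}          # minor: restore iris/pupil color
    donors = [img for img in collection             # severe: artifact-free images of same subject
              if img["subject"] == artifact["subject"] and not img["artifact"]]
    if donors:
        newest = max(donors, key=lambda img: img["date"])
        return {"method": "replace_pixels", "source": newest["id"]}
    return {"method": "color_correct"}              # fallback when no donor image exists
```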
  • Publication number: 20170243078
    Abstract: Embodiments of the present disclosure include a computer-implemented method that receives a digital image input, the digital image input containing one or more dynamic salient objects arranged over a background. The method also includes performing a tracking operation, the tracking operation identifying the dynamic salient object over one or more frames of the digital image input as the dynamic salient object moves over the background. The method further includes performing a clustering operation, in parallel with the tracking operation, on the digital image input, the clustering operation identifying boundary conditions of the dynamic salient object. Additionally, the method includes combining a first output from the tracking operation and a second output from the clustering operation to generate a third output. The method further includes performing a segmentation operation on the third output, the segmentation operation extracting the dynamic salient object from the digital image input.
    Type: Application
    Filed: February 24, 2017
    Publication date: August 24, 2017
    Applicant: KODAK ALARIS, INC.
    Inventors: Alexander C. LOUI, Chi ZHANG
  • Patent number: 9665775
    Abstract: A method for identifying a set of key video frames from a video sequence comprising extracting feature vectors for each video frame and applying a group sparsity algorithm to represent the feature vector for a particular video frame as a group sparse combination of the feature vectors for the other video frames. Weighting coefficients associated with the group sparse combination are analyzed to determine video frame clusters of temporally-contiguous, similar video frames. The video sequence is segmented into scenes by identifying scene boundaries based on the determined video frame clusters.
    Type: Grant
    Filed: July 22, 2016
    Date of Patent: May 30, 2017
    Assignee: KODAK ALARIS INC.
    Inventors: Mrityunjay Kumar, Alexander C. Loui, Bruce Harold Pillman
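Given a matrix of the weighting coefficients the abstract above analyzes, the temporally contiguous clustering can be sketched as below; the thresholding rule is an assumption, and computing the group sparse coefficients themselves is outside this sketch.

```python
def scene_boundaries(W, threshold=0.5):
    """Given weighting coefficients W[i][j] (how much frame j contributes to
    reconstructing frame i), start a new temporally contiguous cluster when a
    frame draws less than `threshold` of its weight from the current cluster."""
    boundaries = [0]
    for i in range(1, len(W)):
        start = boundaries[-1]
        in_cluster = sum(W[i][start:i])
        total = sum(W[i]) or 1.0
        if in_cluster / total < threshold:
            boundaries.append(i)
    return boundaries
```

Each returned index marks the first frame of a new cluster, i.e. a candidate scene boundary.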
  • Publication number: 20170046561
    Abstract: A method for determining an impact score for a digital image includes providing the digital image wherein the digital image includes faces; using a processor to determine an image feature for the faces; using the processor to compute an object impact score for the faces, wherein the object impact score is based at least upon one of the determined image features; weighting the object impact score for the faces based on one of the determined image features for a face; using the processor to compute an impact score for the digital image by combining the weighted object impact scores for the faces in the image; and storing the computed impact score in a processor accessible memory.
    Type: Application
    Filed: October 26, 2016
    Publication date: February 16, 2017
    Applicant: Kodak Alaris Inc.
    Inventors: Raymond William Ptucha, Alexander C. Loui, Mark D. Wood, David K. Rhoda, David Kloosterman, Joseph Anthony Manico
  • Patent number: 9552374
    Abstract: A method for determining an impact score for a digital image includes providing the digital image wherein the digital image includes faces; using a processor to determine an image feature for the faces; using the processor to compute an object impact score for the faces, wherein the object impact score is based at least upon one of the determined image features; weighting the object impact score for the faces based on one of the determined image features for a face; using the processor to compute an impact score for the digital image by combining the weighted object impact scores for the faces in the image; and storing the computed impact score in a processor accessible memory.
    Type: Grant
    Filed: August 12, 2014
    Date of Patent: January 24, 2017
    Assignee: KODAK ALARIS, INC.
    Inventors: Raymond William Ptucha, Alexander C. Loui, Mark D. Wood, David K. Rhoda, David Kloosterman, Joseph Anthony Manico
  • Publication number: 20160379055
    Abstract: A method for graph-based spatiotemporal video segmentation and automatic target object extraction in high-dimensional feature space includes using a processor to automatically analyze an entire volumetric video sequence; using the processor to construct a high-dimensional feature space that includes color, motion, time, and location information so that pixels in the entire volumetric video sequence are reorganized according to their unique and distinguishable feature vectors; using the processor to create a graph model that fuses the appearance, spatial, and temporal information of all pixels of the video sequence in the high-dimensional feature space; and using the processor to group pixels in the graph model that are inherently similar and assign the same labels to them to form semantic spatiotemporal key segments.
    Type: Application
    Filed: May 27, 2016
    Publication date: December 29, 2016
    Applicant: Kodak Alaris Inc.
    Inventors: Alexander C. Loui, Lei Fan
  • Patent number: 9524349
    Abstract: A method of identifying one or more particular images from an image collection, includes indexing the image collection to provide image descriptors for each image in the image collection such that each image is described by one or more of the image descriptors; receiving a query from a user specifying at least one keyword for an image search; and using the keyword(s) to search a second collection of tagged images to identify co-occurrence keywords. The method further includes using the identified co-occurrence keywords to provide an expanded list of keywords; using the expanded list of keywords to search the image descriptors to identify a set of candidate images satisfying the keywords; grouping the set of candidate images according to at least one of the image descriptors, and selecting one or more representative images from each grouping; and displaying the representative images to the user.
    Type: Grant
    Filed: April 20, 2015
    Date of Patent: December 20, 2016
    Assignee: KODAK ALARIS INC.
    Inventors: Mark D. Wood, Alexander C. Loui
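The keyword expansion and candidate search described in the abstract above can be sketched with sets of tags; the overlap-based expansion rule is an assumption for illustration.

```python
def expand_keywords(query, tagged_images):
    # add tags that co-occur with the query keywords in a second, tagged collection
    expanded = set(query)
    for tags in tagged_images:
        if set(query) & tags:
            expanded |= tags
    return expanded

def search(index, keywords):
    # candidate images: any whose descriptors overlap the expanded keyword list
    return [name for name, desc in index.items() if desc & keywords]
```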
  • Publication number: 20160328615
    Abstract: A method for identifying a set of key video frames from a video sequence comprising extracting feature vectors for each video frame and applying a group sparsity algorithm to represent the feature vector for a particular video frame as a group sparse combination of the feature vectors for the other video frames. Weighting coefficients associated with the group sparse combination are analyzed to determine video frame clusters of temporally-contiguous, similar video frames. The video sequence is segmented into scenes by identifying scene boundaries based on the determined video frame clusters.
    Type: Application
    Filed: July 22, 2016
    Publication date: November 10, 2016
    Applicant: Kodak Alaris Inc.
    Inventors: Mrityunjay Kumar, Alexander C. Loui, Bruce Harold Pillman