Patents by Inventor Alexander C. Loui
Alexander C. Loui has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10540568
Abstract: Embodiments of the present disclosure include a computer-implemented method that receives a digital image input, the digital image input containing one or more dynamic salient objects arranged over a background. The method also includes performing a tracking operation, the tracking operation identifying the dynamic salient object over one or more frames of the digital image input as the dynamic salient object moves over the background. The method further includes performing a clustering operation, in parallel with the tracking operation, on the digital image input, the clustering operation identifying boundary conditions of the dynamic salient object. Additionally, the method includes combining a first output from the tracking operation and a second output from the clustering operation to generate a third output. The method further includes performing a segmentation operation on the third output, the segmentation operation extracting the dynamic salient object from the digital image input.
Type: Grant
Filed: January 31, 2019
Date of Patent: January 21, 2020
Assignee: KODAK ALARIS INC.
Inventors: Alexander C. Loui, Chi Zhang
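The abstract above leaves the fusion of the tracking and clustering outputs unspecified. The sketch below is not from the patent; it illustrates one plausible reading in which the two outputs are binary masks, the "third output" is their intersection, and segmentation keeps only the fused pixels. All function and variable names are illustrative.

```python
import numpy as np

def combine_and_segment(frame, track_mask, cluster_mask):
    """Fuse a tracker's coarse object mask with a clustering-derived
    boundary mask (intersection stands in for the patent's unspecified
    combination), then extract the salient object's pixels."""
    fused = np.logical_and(track_mask, cluster_mask)   # the "third output"
    segmented = np.where(fused[..., None], frame, 0)   # zero out background
    return fused, segmented

# Toy 4x4 frame: tracker and clusterer agree on a 2x2 object region.
frame = np.ones((4, 4, 3))
track_mask = np.zeros((4, 4), dtype=bool)
track_mask[1:3, 1:3] = True
cluster_mask = np.zeros((4, 4), dtype=bool)
cluster_mask[1:3, 1:4] = True
fused, segmented = combine_and_segment(frame, track_mask, cluster_mask)
```

Running this per frame, with the tracker updated as the object moves, matches the abstract's parallel-pipeline structure.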
-
Patent number: 10528795
Abstract: A method for determining an impact score for a digital image includes providing the digital image wherein the digital image includes faces; using a processor to determine an image feature for the faces; using the processor to compute an object impact score for the faces, wherein the object impact score is based at least upon one of the determined image features; weighting the object impact score for the faces based on one of the determined image features for a face; using the processor to compute an impact score for the digital image by combining the weighted object impact scores for the faces in the image; and storing the computed impact score in a processor accessible memory.
Type: Grant
Filed: October 26, 2016
Date of Patent: January 7, 2020
Assignee: KODAK ALARIS INC.
Inventors: Raymond William Ptucha, Alexander C. Loui, Mark D. Wood, David K. Rhoda, David Kloosterman, Joseph Anthony Manico
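The weighting and combination steps described above can be made concrete with a small sketch. This is an assumption-laden illustration, not the patent's method: it takes face size as the weighting feature and a weighted average as the combination rule, neither of which the abstract commits to.

```python
def image_impact_score(faces):
    """faces: list of dicts, each with an 'impact' score and a 'size'
    feature. Weight each face's object impact score by its relative
    size (an assumed choice of image feature), then combine the
    weighted scores into a single per-image impact score."""
    total_size = sum(f["size"] for f in faces)
    return sum(f["impact"] * f["size"] / total_size for f in faces)

# A large high-impact face dominates a small low-impact one.
faces = [{"impact": 0.8, "size": 3}, {"impact": 0.4, "size": 1}]
score = image_impact_score(faces)
```

The resulting score would then be stored alongside the image for later ranking or selection.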
-
Publication number: 20190164006
Abstract: Embodiments of the present disclosure include a computer-implemented method that receives a digital image input, the digital image input containing one or more dynamic salient objects arranged over a background. The method also includes performing a tracking operation, the tracking operation identifying the dynamic salient object over one or more frames of the digital image input as the dynamic salient object moves over the background. The method further includes performing a clustering operation, in parallel with the tracking operation, on the digital image input, the clustering operation identifying boundary conditions of the dynamic salient object. Additionally, the method includes combining a first output from the tracking operation and a second output from the clustering operation to generate a third output. The method further includes performing a segmentation operation on the third output, the segmentation operation extracting the dynamic salient object from the digital image input.
Type: Application
Filed: January 31, 2019
Publication date: May 30, 2019
Applicant: Kodak Alaris Inc.
Inventors: Alexander C. Loui, Chi Zhang
-
Publication number: 20190156472
Abstract: A system and method for performing real-time quality inspection of objects is disclosed. The system and method include a transport to move objects being inspected, allowing the inspection to be performed in-line. At least one optical acquisition unit is provided that captures optical images of the objects being inspected. The captured optical images are matched to CAD models of the objects, and the matched CAD model is extracted. A laser with an illumination light beam having a wavelength in the violet or ultraviolet range then scans the objects, and the scans are formed into three-dimensional point clouds. The point clouds are compared to the extracted CAD models for each object, where CTF are compared to user input or CAD model information and the object is determined to be acceptable or defective based on the extent of deviation between the point cloud and the CAD model.
Type: Application
Filed: July 12, 2018
Publication date: May 23, 2019
Applicant: KODAK ALARIS INC.
Inventors: Bruce A. LINK, Robert W. JOHNSON, Alexander C. LOUI, Jose Zvietcovich ZEGARRA, Erik GARCELL
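The deviation check at the end of the abstract reduces to measuring how far scanned points lie from the CAD surface. The sketch below is illustrative only (brute-force nearest neighbour against sampled model points, with a single tolerance standing in for the abstract's user input or CAD information); the patent's actual comparison is not specified here.

```python
import numpy as np

def max_deviation(point_cloud, model_points):
    """Distance from each scanned point to its nearest sampled CAD
    surface point; brute force for clarity, not speed."""
    d = np.linalg.norm(point_cloud[:, None, :] - model_points[None, :, :], axis=2)
    return d.min(axis=1).max()

def inspect(point_cloud, model_points, tolerance):
    """Accept the object when the worst-case deviation from the CAD
    model stays within the given tolerance."""
    ok = max_deviation(point_cloud, model_points) <= tolerance
    return "acceptable" if ok else "defective"

# Toy example: a scanned square offset 0.05 units from its CAD model.
model = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]])
cloud = model + np.array([0.05, 0.0, 0.0])
```

A production system would use a k-d tree (e.g. `scipy.spatial.cKDTree`) instead of the pairwise distance matrix.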
-
Patent number: 10229340
Abstract: Embodiments of the present disclosure include a computer-implemented method that receives a digital image input, the digital image input containing one or more dynamic salient objects arranged over a background. The method also includes performing a tracking operation, the tracking operation identifying the dynamic salient object over one or more frames of the digital image input as the dynamic salient object moves over the background. The method further includes performing a clustering operation, in parallel with the tracking operation, on the digital image input, the clustering operation identifying boundary conditions of the dynamic salient object. Additionally, the method includes combining a first output from the tracking operation and a second output from the clustering operation to generate a third output. The method further includes performing a segmentation operation on the third output, the segmentation operation extracting the dynamic salient object from the digital image input.
Type: Grant
Filed: February 24, 2017
Date of Patent: March 12, 2019
Assignee: KODAK ALARIS INC.
Inventors: Alexander C Loui, Chi Zhang
-
Publication number: 20190065886
Abstract: A method for creating navigable views includes receiving digital images, computing a set of feature points for each of the digital images, selecting one of the digital images as a reference image, identifying a salient region of interest in the reference image, identifying other digital images containing a region of interest similar to the salient region of interest in the reference image using the set of feature points computed for each of other digital images, designating a reference location for the salient region of interest in the reference image, aligning the other digital images to the image that contains the designated reference location, ordering the image that contains the designated reference location and the other digital images, and generating a navigable view.
Type: Application
Filed: October 29, 2018
Publication date: February 28, 2019
Applicant: Kodak Alaris Inc.
Inventors: Alexander C. Loui, Joseph A. Manico
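The "identifying other digital images containing a similar region of interest" step can be sketched as feature-point matching. This is a toy stand-in, not the patent's method: descriptors are reduced to hashable tokens, similarity to set overlap, and the ordering to decreasing overlap count.

```python
def find_similar_images(ref_region_points, images, min_overlap=2):
    """images: dict of name -> set of feature descriptors (toy stand-ins
    for real local features such as SIFT keypoints). Keep images that
    share at least min_overlap points with the reference region's
    feature set, ordered by decreasing overlap."""
    scored = []
    for name, pts in images.items():
        overlap = len(ref_region_points & pts)
        if overlap >= min_overlap:
            scored.append((overlap, name))
    return [name for overlap, name in sorted(scored, reverse=True)]

ref_region = {1, 2, 3, 4}
images = {"a": {1, 2, 3}, "b": {1, 9}, "c": {2, 3, 4, 5}}
ordered = find_similar_images(ref_region, images)
```

The ordered set of matched images, once aligned to the reference location, is what the navigable view would step through.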
-
Patent number: 10192117
Abstract: A method for graph-based spatiotemporal video segmentation and automatic target object extraction in high-dimensional feature space includes using a processor to automatically analyze an entire volumetric video sequence; using the processor to construct a high-dimensional feature space that includes color, motion, time, and location information so that pixels in the entire volumetric video sequence are reorganized according to their unique and distinguishable feature vectors; using the processor to create a graph model that fuses the appearance, spatial, and temporal information of all pixels of the video sequence in the high-dimensional feature space; and using the processor to group pixels in the graph model that are inherently similar and assign the same labels to them to form semantic spatiotemporal key segments.
Type: Grant
Filed: May 27, 2016
Date of Patent: January 29, 2019
Assignee: KODAK ALARIS INC.
Inventors: Alexander C. Loui, Lei Fan
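The grouping step above — connect pixels whose feature vectors are similar, then give each connected group one label — can be sketched with a plain connected-components pass. This is a simplified illustration, not the patent's graph model: the feature vectors here are tiny, and the O(n²) neighbour scan only works for toy sizes.

```python
import numpy as np

def group_pixels(features, threshold):
    """features: (n, d) array of per-pixel feature vectors (color,
    motion, time, and location concatenated). Treat pixel pairs closer
    than threshold as graph edges and label connected components, so
    inherently similar pixels share a label."""
    n = len(features)
    labels = [-1] * n
    current = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        stack = [i]           # flood-fill one component
        labels[i] = current
        while stack:
            j = stack.pop()
            for k in range(n):
                if labels[k] == -1 and np.linalg.norm(features[j] - features[k]) < threshold:
                    labels[k] = current
                    stack.append(k)
        current += 1
    return labels

# Two well-separated feature clusters -> two spatiotemporal segments.
feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = group_pixels(feats, threshold=1.0)
```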
-
Patent number: 10134440
Abstract: A method for producing an audio-visual slideshow for a video sequence having an audio soundtrack and a corresponding video track including a time sequence of image frames, comprising: segmenting the audio soundtrack into a plurality of audio segments; subdividing the audio segments into a sequence of audio frames; determining a corresponding audio classification for each audio frame; automatically selecting a subset of the audio segments responsive to the audio classification for the corresponding audio frames; for each of the selected audio segments automatically analyzing the corresponding image frames to select one or more key image frames; merging the selected audio segments to form an audio summary; forming an audio-visual slideshow by combining the selected key frames with the audio summary, wherein the selected key frames are displayed synchronously with their corresponding audio segment; and storing the audio-visual slideshow in a processor-accessible storage memory.
Type: Grant
Filed: May 3, 2011
Date of Patent: November 20, 2018
Assignee: KODAK ALARIS INC.
Inventors: Wei Jiang, Alexander C. Loui, Courtenay Cotton
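The pipeline in the abstract (classify audio frames, keep segments with wanted classes, pick key frames, merge audio) can be outlined as a small driver. Everything here is illustrative scaffolding: the classifier, the "dominant label" selection rule, and the key-frame picker are all caller-supplied stand-ins for the patent's unspecified components.

```python
def build_slideshow(audio_segments, classify, select_classes, pick_key_frames):
    """audio_segments: list of (audio_frames, image_frames) pairs.
    classify labels each audio frame; a segment is kept when its
    dominant label is in select_classes. Kept segments contribute
    their audio to the summary and their key frames to the slides."""
    summary_audio, slides = [], []
    for audio_frames, image_frames in audio_segments:
        labels = [classify(a) for a in audio_frames]
        dominant = max(set(labels), key=labels.count)
        if dominant in select_classes:
            summary_audio.extend(audio_frames)          # merged audio summary
            slides.append((pick_key_frames(image_frames), audio_frames))
    return summary_audio, slides

# Toy run: positive samples classify as "music", negatives as "noise".
classify = lambda a: "music" if a > 0 else "noise"
segments = [([1, 1, -1], ["f1", "f2"]), ([-1, -1], ["g1"])]
summary, slides = build_slideshow(segments, classify, {"music"},
                                  pick_key_frames=lambda frames: frames[0])
```

Pairing each key frame with its own segment's audio is what keeps the display synchronous, as the abstract requires.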
-
Patent number: 10115033
Abstract: A method for creating navigable views includes receiving digital images, computing a set of feature points for each of the digital images, selecting one of the digital images as a reference image, identifying a salient region of interest in the reference image, identifying other digital images containing a region of interest similar to the salient region of interest in the reference image using the set of feature points computed for each of other digital images, designating a reference location for the salient region of interest in the reference image, aligning the other digital images to the image that contains the designated reference location, ordering the image that contains the designated reference location and the other digital images, and generating a navigable view.
Type: Grant
Filed: July 28, 2014
Date of Patent: October 30, 2018
Assignee: Kodak Alaris Inc.
Inventors: Alexander C. Loui, Joseph A. Manico
-
Patent number: 10089532
Abstract: The present application is directed to new methods for automatically determining several characteristics of frames in a video sequence and automatically recommending or preparing image output products based on those frame characteristics. In some embodiments, motion characteristics of particular image frames are calculated, and those motion characteristics are automatically used to prepare or recommend image output products suitable for the motion characteristics of the frames. In other embodiments, facial, audio, and overall image quality are assessed and used to automatically recommend or prepare image output products. In still other embodiments, image frames in a video sequence are analyzed for various user-specified characteristics, which characteristics are then used to automatically recommend or prepare image output products.
Type: Grant
Filed: February 23, 2015
Date of Patent: October 2, 2018
Assignee: KODAK ALARIS INC.
Inventors: Alexander C. Loui, Brian Mittelstaedt
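The motion-based recommendation idea can be sketched as a rule table from a motion score to product suggestions. The thresholds and product names below are invented for illustration; the patent does not enumerate them.

```python
def recommend_products(motion_score):
    """Map a frame's motion characteristic (0 = still, 1 = fast motion)
    to plausible output products. Thresholds and product names are
    illustrative, not taken from the patent."""
    if motion_score < 0.2:
        return ["print", "photo book"]        # near-still frames print well
    if motion_score < 0.6:
        return ["collage", "slideshow"]       # moderate motion
    return ["animated GIF", "video clip"]     # high motion suits animation
```

Quality and face characteristics would feed a similar rule table (or a learned ranker) in the other embodiments the abstract mentions.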
-
SYSTEM AND METHOD FOR PREDICTIVE CURATION, PRODUCTION INFRASTRUCTURE, AND PERSONAL CONTENT ASSISTANT
Publication number: 20170351417
Abstract: Data points, calendar entries, trends, and behavioral patterns may be used to predict and pre-emptively build digital and printable products with selected collections of images without the user's active participation. The collections are selected from files on the user's device, cloud-based photo library, or other libraries shared among other individuals and grouped into thematic products. Based on analysis of the user's collections and on-line behaviors, the system may estimate types and volumes of potential media-centric products, and the resources needed for producing and distributing such media-centric products for a projected period of time. A user interface may take the form of a "virtual curator", which is a graphical or animated persona for augmenting and managing interactions between the user and the system managing the user's stored media assets. The virtual curator can assume one of many personas, as appropriate, with each user.
Type: Application
Filed: June 1, 2017
Publication date: December 7, 2017
Applicant: Kodak Alaris Inc.
Inventors: Joseph A. Manico, Young No, Madirakshi Das, Alexander C. Loui
-
SYSTEM AND METHOD FOR PREDICTIVE CURATION, PRODUCTION INFRASTRUCTURE, AND PERSONAL CONTENT ASSISTANT
Publication number: 20170352083
Abstract: Data points, calendar entries, trends, and behavioral patterns may be used to predict and pre-emptively build digital and printable products with selected collections of images without the user's active participation. The collections are selected from files on the user's device, cloud-based photo library, or other libraries shared among other individuals and grouped into thematic products. Based on analysis of the user's collections and on-line behaviors, the system may estimate types and volumes of potential media-centric products, and the resources needed for producing and distributing such media-centric products for a projected period of time. A user interface may take the form of a "virtual curator", which is a graphical or animated persona for augmenting and managing interactions between the user and the system managing the user's stored media assets. The virtual curator can assume one of many personas, as appropriate, with each user.
Type: Application
Filed: June 1, 2017
Publication date: December 7, 2017
Applicant: Kodak Alaris Inc.
Inventors: Kenneth Ruck, Joseph A. Manico, David Kloosterman, Alexander C. Loui, Madirakshi Das
-
Patent number: 9824271
Abstract: An adaptable eye artifact identification and correction method is disclosed. Eye artifacts are identified and classified based on color, severity, shape, eye location, and cause. Based on this classification, an eye artifact correction algorithm is selected from a series of eye artifact correction techniques. For minor artifacts, simple color correction techniques are deployed to restore the iris color and to drive the pupil to once again appear black. For severe eye artifacts, face detection and metadata analysis are utilized to search the user's image collection for recent images of the subject without the eye artifact condition. Once located, these images provide eye color and shape information for use in replacing the pixels expressing the eye artifact condition. The non-artifact eye images are used to provide the appropriate eye color and shape to correct the eye artifact condition for more severe eye artifacts.
Type: Grant
Filed: June 25, 2014
Date of Patent: November 21, 2017
Assignee: KODAK ALARIS INC.
Inventors: James Andrew Whritenor, Joseph Anthony Manico, Alexander C. Loui
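The severity-based branching described above can be outlined as a small dispatcher. This is a structural sketch only: the record fields, the fallback behaviour, and the subject lookup are illustrative assumptions, not details from the patent.

```python
def correct_eye_artifact(artifact, collection):
    """Choose a correction strategy by artifact severity, following the
    abstract's outline. Minor artifacts get simple color correction;
    severe ones borrow eye color/shape from a non-artifact photo of the
    same subject found in the collection. Field names are illustrative."""
    if artifact["severity"] == "minor":
        return {"method": "color_correction", "pupil": "black"}
    for image in collection:                   # metadata/face-based search
        if image["subject"] == artifact["subject"] and not image["has_artifact"]:
            return {"method": "pixel_replacement", "source": image["id"]}
    # Assumed fallback when no clean reference image exists.
    return {"method": "color_correction", "pupil": "black"}

collection = [
    {"subject": "bob", "has_artifact": False, "id": 1},
    {"subject": "ann", "has_artifact": False, "id": 2},
]
minor = correct_eye_artifact({"severity": "minor", "subject": "ann"}, collection)
severe = correct_eye_artifact({"severity": "severe", "subject": "ann"}, collection)
```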
-
Publication number: 20170243078
Abstract: Embodiments of the present disclosure include a computer-implemented method that receives a digital image input, the digital image input containing one or more dynamic salient objects arranged over a background. The method also includes performing a tracking operation, the tracking operation identifying the dynamic salient object over one or more frames of the digital image input as the dynamic salient object moves over the background. The method further includes performing a clustering operation, in parallel with the tracking operation, on the digital image input, the clustering operation identifying boundary conditions of the dynamic salient object. Additionally, the method includes combining a first output from the tracking operation and a second output from the clustering operation to generate a third output. The method further includes performing a segmentation operation on the third output, the segmentation operation extracting the dynamic salient object from the digital image input.
Type: Application
Filed: February 24, 2017
Publication date: August 24, 2017
Applicant: KODAK ALARIS, INC.
Inventors: Alexander C LOUI, Chi ZHANG
-
Patent number: 9665775
Abstract: A method for identifying a set of key video frames from a video sequence comprising extracting feature vectors for each video frame and applying a group sparsity algorithm to represent the feature vector for a particular video frame as a group sparse combination of the feature vectors for the other video frames. Weighting coefficients associated with the group sparse combination are analyzed to determine video frame clusters of temporally-contiguous, similar video frames. The video sequence is segmented into scenes by identifying scene boundaries based on the determined video frame clusters.
Type: Grant
Filed: July 22, 2016
Date of Patent: May 30, 2017
Assignee: KODAK ALARIS INC.
Inventors: Mrityunjay Kumar, Alexander C. Loui, Bruce Harold Pillman
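The end product of the method above is a set of scene boundaries between clusters of temporally contiguous, similar frames. The patent derives those clusters from group-sparse weighting coefficients; the sketch below deliberately substitutes a much cruder proxy (cut wherever consecutive frame features jump) purely to show the boundary-detection output format.

```python
import numpy as np

def scene_boundaries(features, threshold):
    """features: (n, d) per-frame feature vectors. Return the indices
    where a new scene starts, i.e. where consecutive frames differ by
    more than threshold. This stands in for the patent's analysis of
    group-sparse weighting coefficients."""
    return [i for i in range(1, len(features))
            if np.linalg.norm(features[i] - features[i - 1]) > threshold]

# Three visually distinct runs of frames -> boundaries at indices 2 and 4.
feats = np.array([[0.0], [0.1], [5.0], [5.1], [10.0]])
bounds = scene_boundaries(feats, threshold=1.0)
```

Key frames would then be picked per cluster, e.g. the frame nearest each cluster's feature centroid.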
-
Publication number: 20170046561
Abstract: A method for determining an impact score for a digital image includes providing the digital image wherein the digital image includes faces; using a processor to determine an image feature for the faces; using the processor to compute an object impact score for the faces, wherein the object impact score is based at least upon one of the determined image features; weighting the object impact score for the faces based on one of the determined image features for a face; using the processor to compute an impact score for the digital image by combining the weighted object impact scores for the faces in the image; and storing the computed impact score in a processor accessible memory.
Type: Application
Filed: October 26, 2016
Publication date: February 16, 2017
Applicant: Kodak Alaris Inc.
Inventors: Raymond William Ptucha, Alexander C. Loui, Mark D. Wood, David K. Rhoda, David Kloosterman, Joseph Anthony Manico
-
Patent number: 9552374
Abstract: A method for determining an impact score for a digital image includes providing the digital image wherein the digital image includes faces; using a processor to determine an image feature for the faces; using the processor to compute an object impact score for the faces, wherein the object impact score is based at least upon one of the determined image features; weighting the object impact score for the faces based on one of the determined image features for a face; using the processor to compute an impact score for the digital image by combining the weighted object impact scores for the faces in the image; and storing the computed impact score in a processor accessible memory.
Type: Grant
Filed: August 12, 2014
Date of Patent: January 24, 2017
Assignee: KODAK ALARIS, INC.
Inventors: Raymond William Ptucha, Alexander C. Loui, Mark D. Wood, David K. Rhoda, David Kloosterman, Joseph Anthony Manico
-
Publication number: 20160379055
Abstract: A method for graph-based spatiotemporal video segmentation and automatic target object extraction in high-dimensional feature space includes using a processor to automatically analyze an entire volumetric video sequence; using the processor to construct a high-dimensional feature space that includes color, motion, time, and location information so that pixels in the entire volumetric video sequence are reorganized according to their unique and distinguishable feature vectors; using the processor to create a graph model that fuses the appearance, spatial, and temporal information of all pixels of the video sequence in the high-dimensional feature space; and using the processor to group pixels in the graph model that are inherently similar and assign the same labels to them to form semantic spatiotemporal key segments.
Type: Application
Filed: May 27, 2016
Publication date: December 29, 2016
Applicant: Kodak Alaris Inc.
Inventors: Alexander C. Loui, Lei Fan
-
Patent number: 9524349
Abstract: A method of identifying one or more particular images from an image collection, includes indexing the image collection to provide image descriptors for each image in the image collection such that each image is described by one or more of the image descriptors; receiving a query from a user specifying at least one keyword for an image search; and using the keyword(s) to search a second collection of tagged images to identify co-occurrence keywords. The method further includes using the identified co-occurrence keywords to provide an expanded list of keywords; using the expanded list of keywords to search the image descriptors to identify a set of candidate images satisfying the keywords; grouping the set of candidate images according to at least one of the image descriptors, and selecting one or more representative images from each grouping; and displaying the representative images to the user.
Type: Grant
Filed: April 20, 2015
Date of Patent: December 20, 2016
Assignee: KODAK ALARIS INC.
Inventors: Mark D. Wood, Alexander C. Loui
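The keyword-expansion step is the core of the search method above and is easy to sketch. The data shapes below are illustrative assumptions (a co-occurrence map mined from a second tagged collection, and an image-to-descriptor index); matching "any expanded keyword" is also an assumed interpretation of "satisfying the keywords".

```python
def expanded_search(keywords, co_occurrence, index):
    """co_occurrence: keyword -> keywords that co-occur with it in a
    second, tagged image collection. index: image name -> descriptor
    keywords. Expand the user's query with co-occurring keywords, then
    return (sorted) images whose descriptors match any expanded keyword."""
    expanded = set(keywords)
    for kw in keywords:
        expanded.update(co_occurrence.get(kw, ()))
    return sorted(img for img, desc in index.items() if expanded & set(desc))

# "beach" expands to "sand" and "ocean", pulling in untagged-as-beach images.
co_occurrence = {"beach": ["sand", "ocean"]}
index = {"img1": ["ocean"], "img2": ["city"], "img3": ["sand", "sky"]}
hits = expanded_search(["beach"], co_occurrence, index)
```

The candidate set would then be grouped by a shared descriptor and one representative shown per group, as the abstract describes.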
-
Publication number: 20160328615
Abstract: A method for identifying a set of key video frames from a video sequence comprising extracting feature vectors for each video frame and applying a group sparsity algorithm to represent the feature vector for a particular video frame as a group sparse combination of the feature vectors for the other video frames. Weighting coefficients associated with the group sparse combination are analyzed to determine video frame clusters of temporally-contiguous, similar video frames. The video sequence is segmented into scenes by identifying scene boundaries based on the determined video frame clusters.
Type: Application
Filed: July 22, 2016
Publication date: November 10, 2016
Applicant: Kodak Alaris Inc.
Inventors: Mrityunjay Kumar, Alexander C. Loui, Bruce Harold Pillman