Patents by Inventor Bruce Harold Pillman
Bruce Harold Pillman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9665775
Abstract: A method for identifying a set of key video frames from a video sequence comprising extracting feature vectors for each video frame and applying a group sparsity algorithm to represent the feature vector for a particular video frame as a group sparse combination of the feature vectors for the other video frames. Weighting coefficients associated with the group sparse combination are analyzed to determine video frame clusters of temporally-contiguous, similar video frames. The video sequence is segmented into scenes by identifying scene boundaries based on the determined video frame clusters.
Type: Grant
Filed: July 22, 2016
Date of Patent: May 30, 2017
Assignee: Kodak Alaris Inc.
Inventors: Mrityunjay Kumar, Alexander C. Loui, Bruce Harold Pillman
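The pipeline in this abstract (sparse self-representation of frame features, then clustering by the resulting weights) can be sketched in plain numpy. This is a hedged sketch only: a nonnegative lasso solved by proximal gradient stands in for the patent's group-sparsity solver, and every function name, the threshold, and the one-step boundary rule are illustrative choices, not taken from the patent.

```python
import numpy as np

def lasso_nonneg(D, y, alpha=0.01, iters=800):
    """Nonnegative lasso via proximal gradient (ISTA) -- a plain sparse
    solver standing in for the patent's group-sparsity algorithm."""
    w = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2 + 1e-9   # Lipschitz constant of the gradient
    for _ in range(iters):
        w = np.maximum(w - D.T @ (D @ w - y) / L - alpha / L, 0.0)
    return w

def sparse_affinity(features, alpha=0.01):
    """W[i, j]: weight of frame j's feature vector in the sparse
    reconstruction of frame i's vector, symmetrized into an affinity."""
    n = len(features)
    W = np.zeros((n, n))
    for i in range(n):
        D = np.delete(features, i, axis=0).T   # other frames as dictionary columns
        W[i, np.arange(n) != i] = lasso_nonneg(D, features[i], alpha)
    return np.maximum(W, W.T)

def scene_boundaries(W, thresh=1e-3):
    """Cut wherever a frame shares no significant affinity with its
    immediate predecessor (a simple clustering-by-weights rule)."""
    return [i for i in range(1, W.shape[0]) if W[i, i - 1] < thresh]
```

Because each frame is reconstructed mainly from visually similar frames, cross-scene affinities stay near zero, which is what makes the boundary rule work.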
-
Publication number: 20160328615
Abstract: A method for identifying a set of key video frames from a video sequence comprising extracting feature vectors for each video frame and applying a group sparsity algorithm to represent the feature vector for a particular video frame as a group sparse combination of the feature vectors for the other video frames. Weighting coefficients associated with the group sparse combination are analyzed to determine video frame clusters of temporally-contiguous, similar video frames. The video sequence is segmented into scenes by identifying scene boundaries based on the determined video frame clusters.
Type: Application
Filed: July 22, 2016
Publication date: November 10, 2016
Applicant: Kodak Alaris Inc.
Inventors: Mrityunjay Kumar, Alexander C. Loui, Bruce Harold Pillman
-
Patent number: 9438899
Abstract: A statistically lossless transform (SLT) can be used with data (e.g., image data) to be compressed. Prior to compression, an SLT stabilizes the variance for a read noise + Poisson process. The SLT redistributes (image) data that has been quantized with a linear uniform quantizer and essentially assures there are no gaps in the output values (i.e., all output digital values are used). The SLT re-quantizes the data with a quantization interval width that is proportional to the standard deviation of the process.
Type: Grant
Filed: December 11, 2013
Date of Patent: September 6, 2016
Assignee: Harris Corporation
Inventors: Bruce Harold Pillman, Wayne Prentice, Michael E Napoli
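A variance-stabilizing re-quantization of the kind this abstract describes can be sketched as a generalized-Anscombe-style mapping: uniform steps in the transformed domain correspond to intervals proportional to the read-noise-plus-Poisson standard deviation in the linear domain. The parameter names and default values below are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def slt(dn, gain=1.0, read_noise=3.0, step=1.0):
    """Re-quantize linear sensor counts so the quantization interval
    tracks the noise standard deviation of a read-noise + Poisson
    process: variance = read_noise^2 + gain * signal.
    Integrating 1/sigma(x) gives the variance-stabilized coordinate
    t(x) = (2 / gain) * sqrt(gain * x + read_noise^2), so uniform
    steps in t are intervals proportional to sigma in x."""
    x = np.asarray(dn, dtype=np.float64)
    t = (2.0 / gain) * np.sqrt(gain * x + read_noise ** 2)
    return np.round(t / step).astype(np.int64)
```

With these defaults the mapping is monotonic and its per-count increment never exceeds one code, so every output value between the first and last code is used, matching the "no gaps in the output values" property.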
-
Patent number: 9424473
Abstract: A method for identifying a set of key video frames from a video sequence comprising extracting feature vectors for each video frame and applying a group sparsity algorithm to represent the feature vector for a particular video frame as a group sparse combination of the feature vectors for the other video frames. Weighting coefficients associated with the group sparse combination are analyzed to determine video frame clusters of temporally-contiguous, similar video frames. The video sequence is segmented into scenes by identifying scene boundaries based on the determined video frame clusters.
Type: Grant
Filed: February 13, 2015
Date of Patent: August 23, 2016
Assignee: Kodak Alaris Inc.
Inventors: Mrityunjay Kumar, Alexander C. Loui, Bruce Harold Pillman
-
Patent number: 9076043
Abstract: A method for identifying a set of key video frames from a video sequence comprising extracting feature vectors for each video frame and applying a group sparsity algorithm to represent the feature vector for a particular video frame as a group sparse combination of the feature vectors for the other video frames. Weighting coefficients associated with the group sparse combination are analyzed to determine video frame clusters of temporally-contiguous, similar video frames. A summary is formed based on the determined video frame clusters.
Type: Grant
Filed: August 3, 2012
Date of Patent: July 7, 2015
Assignee: Kodak Alaris Inc.
Inventors: Mrityunjay Kumar, Alexander C. Loui, Bruce Harold Pillman
-
Publication number: 20150161450
Abstract: A method for identifying a set of key video frames from a video sequence comprising extracting feature vectors for each video frame and applying a group sparsity algorithm to represent the feature vector for a particular video frame as a group sparse combination of the feature vectors for the other video frames. Weighting coefficients associated with the group sparse combination are analyzed to determine video frame clusters of temporally-contiguous, similar video frames. The video sequence is segmented into scenes by identifying scene boundaries based on the determined video frame clusters.
Type: Application
Filed: February 13, 2015
Publication date: June 11, 2015
Applicant: Kodak Alaris Inc.
Inventors: Mrityunjay Kumar, Alexander C. Loui, Bruce Harold Pillman
-
Patent number: 9013604
Abstract: A digital video camera system that provides a video summary using a method that includes: designating a reference image containing a particular person; capturing a video sequence of the scene using the image sensor, the video sequence including a time sequence of image frames; processing the captured video sequence using a video processing path to form a digital video file; during the capturing of the video sequence, analyzing the captured image frames using a person recognition algorithm to identify a subset of the image frames that contain the particular person; forming the video summary including fewer than all of the image frames in the captured video sequence, wherein the video summary includes at least part of the identified subset of image frames containing the particular person; storing the digital video file in the storage memory; and storing a representation of the video summary in the storage memory.
Type: Grant
Filed: December 27, 2013
Date of Patent: April 21, 2015
Assignee: Intellectual Ventures Fund 83 LLC
Inventors: Keith Stoll Karn, Bruce Harold Pillman, Aaron Thomas Deever, John Robert McCoy, Frank Razavi, Robert Gretzinger
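The capture-time flow in this abstract (flag frames as they arrive, then emit a summary that contains at least the flagged subset but fewer than all frames) can be sketched with a small helper class. The class, its names, and the context-padding rule are assumptions for illustration; `recognizer` is a placeholder for any frame-level predicate, not the patent's person-recognition algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class SummaryBuilder:
    """Collect, during capture, the indices of frames a recognizer
    flags, then emit a summary built around that subset."""
    recognizer: callable            # frame -> bool (hypothetical predicate)
    hits: list = field(default_factory=list)

    def observe(self, index, frame):
        # Called once per captured frame, while capture is in progress.
        if self.recognizer(frame):
            self.hits.append(index)

    def summary(self, total_frames, pad=1):
        """Sorted subset of frame indices: every hit plus `pad` frames
        of context on each side -- never the whole sequence."""
        keep = set()
        for i in self.hits:
            keep.update(range(max(0, i - pad), min(total_frames, i + pad + 1)))
        return sorted(keep)
```

A usage sketch: feed frames through `observe` as they are captured, then call `summary(len(frames))` once capture ends to get the retained indices.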
-
Patent number: 8989503
Abstract: A method for identifying a set of key video frames from a video sequence comprising extracting feature vectors for each video frame and applying a group sparsity algorithm to represent the feature vector for a particular video frame as a group sparse combination of the feature vectors for the other video frames. Weighting coefficients associated with the group sparse combination are analyzed to determine video frame clusters of temporally-contiguous, similar video frames. The video sequence is segmented into scenes by identifying scene boundaries based on the determined video frame clusters.
Type: Grant
Filed: August 3, 2012
Date of Patent: March 24, 2015
Assignee: Kodak Alaris Inc.
Inventors: Mrityunjay Kumar, Alexander C. Loui, Bruce Harold Pillman
-
Patent number: 8928772
Abstract: A method for selecting a digital image having controlled sharpness characteristics from a set of candidate digital images of a common scene, each digital image having different sharpness characteristics. An image segmentation process is used to segment each of the candidate digital images into a subject region and a background region. For each candidate digital image the subject and background regions are analyzed to determine associated subject and background sharpness levels. An output digital image is selected by comparing the determined subject and background sharpness levels to respective aim subject and background sharpness levels. In some embodiments, the aim subject and background sharpness levels are defined in accordance with a scene type classification.
Type: Grant
Filed: September 21, 2012
Date of Patent: January 6, 2015
Assignee: Eastman Kodak Company
Inventors: Bruce Harold Pillman, Wei Hao
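The selection step can be sketched by scoring each candidate's subject and background regions with a simple sharpness metric and picking the candidate closest to the aim pair. Mean gradient magnitude stands in for the patent's unspecified sharpness measure, and all function names and inputs are illustrative assumptions.

```python
import numpy as np

def region_sharpness(img, mask):
    """Mean gradient magnitude inside a boolean region mask -- a
    simple stand-in for the patent's sharpness metric."""
    gy, gx = np.gradient(img.astype(np.float64))
    return float(np.hypot(gx, gy)[mask].mean())

def select_image(candidates, subject_mask, aim_subject, aim_background):
    """Return the index of the candidate whose (subject, background)
    sharpness pair is closest to the aim pair."""
    def cost(img):
        s = region_sharpness(img, subject_mask)
        b = region_sharpness(img, ~subject_mask)
        return (s - aim_subject) ** 2 + (b - aim_background) ** 2
    return min(range(len(candidates)), key=lambda i: cost(candidates[i]))
```

With a portrait-style aim (sharp subject, soft background), this picks the candidate focused on the subject; a segmentation model would supply `subject_mask` in practice.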
-
Patent number: 8913835
Abstract: A method for identifying a set of key video frames from a video sequence comprising extracting feature vectors for each video frame and applying a group sparsity algorithm to represent the feature vector for a particular video frame as a group sparse combination of the feature vectors for the other video frames. Weighting coefficients associated with the group sparse combination are analyzed to determine video frame clusters of temporally-contiguous, similar video frames. A set of key video frames is selected based on the determined video frame clusters.
Type: Grant
Filed: August 3, 2012
Date of Patent: December 16, 2014
Assignee: Kodak Alaris Inc.
Inventors: Mrityunjay Kumar, Alexander C. Loui, Bruce Harold Pillman
-
Patent number: 8724919
Abstract: The sharpness of a digital image is adjusted according to defined aim subject and background sharpness levels. An image segmentation process is used to segment an input digital image into a subject region and a background region. The subject and background regions are analyzed to determine corresponding subject and background sharpness levels. An enhanced digital image is formed wherein the sharpness of the subject region is adjusted responsive to the subject sharpness level and the aim subject sharpness level, and the sharpness of the background region is adjusted responsive to the background sharpness level and the aim background sharpness level. In some embodiments, the input digital image is analyzed to determine a scene type classification and the aim subject and background sharpness levels are defined in accordance with the determined scene type classification.
Type: Grant
Filed: September 21, 2012
Date of Patent: May 13, 2014
Assignee: Eastman Kodak Company
Inventors: Bruce Harold Pillman, Wei Hao
-
Publication number: 20140105500
Abstract: A digital video camera system that provides a video summary using a method that includes: designating a reference image containing a particular person; capturing a video sequence of the scene using the image sensor, the video sequence including a time sequence of image frames; processing the captured video sequence using a video processing path to form a digital video file; during the capturing of the video sequence, analyzing the captured image frames using a person recognition algorithm to identify a subset of the image frames that contain the particular person; forming the video summary including fewer than all of the image frames in the captured video sequence, wherein the video summary includes at least part of the identified subset of image frames containing the particular person; storing the digital video file in the storage memory; and storing a representation of the video summary in the storage memory.
Type: Application
Filed: December 27, 2013
Publication date: April 17, 2014
Applicant: Intellectual Ventures Fund 83 LLC
Inventors: Keith Stoll Karn, Bruce Harold Pillman, Aaron Thomas Deever, John Robert McCoy, Frank Razavi, Robert Gretzinger
-
Publication number: 20140086486
Abstract: The sharpness of a digital image is adjusted according to defined aim subject and background sharpness levels. An image segmentation process is used to segment an input digital image into a subject region and a background region. The subject and background regions are analyzed to determine corresponding subject and background sharpness levels. An enhanced digital image is formed wherein the sharpness of the subject region is adjusted responsive to the subject sharpness level and the aim subject sharpness level, and the sharpness of the background region is adjusted responsive to the background sharpness level and the aim background sharpness level. In some embodiments, the input digital image is analyzed to determine a scene type classification and the aim subject and background sharpness levels are defined in accordance with the determined scene type classification.
Type: Application
Filed: September 21, 2012
Publication date: March 27, 2014
Inventors: Bruce Harold Pillman, Wei Hao
-
Publication number: 20140085507
Abstract: A method for selecting a digital image having controlled sharpness characteristics from a set of candidate digital images of a common scene, each digital image having different sharpness characteristics. An image segmentation process is used to segment each of the candidate digital images into a subject region and a background region. For each candidate digital image the subject and background regions are analyzed to determine associated subject and background sharpness levels. An output digital image is selected by comparing the determined subject and background sharpness levels to respective aim subject and background sharpness levels. In some embodiments, the aim subject and background sharpness levels are defined in accordance with a scene type classification.
Type: Application
Filed: September 21, 2012
Publication date: March 27, 2014
Inventors: Bruce Harold Pillman, Wei Hao
-
Patent number: 8665345
Abstract: A digital video camera system that provides a video summary using a method that includes: specifying reference data, wherein the reference data indicates a feature of interest; capturing a video sequence of the scene using the image sensor, the video sequence including a time sequence of image frames; processing the captured video sequence using a video processing path to form a digital video file; during the capturing of the video sequence, analyzing the captured image frames using a feature recognition algorithm to identify a subset of the image frames that contain the feature of interest; forming the video summary including fewer than all of the image frames in the captured video sequence, wherein the video summary includes at least part of the identified subset of image frames containing the feature of interest; storing the digital video file in the storage memory; and storing a representation of the video summary in the storage memory.
Type: Grant
Filed: May 18, 2011
Date of Patent: March 4, 2014
Assignee: Intellectual Ventures Fund 83 LLC
Inventors: Keith Stoll Karn, Bruce Harold Pillman, Aaron Thomas Deever, John Robert McCoy, Frank Razavi, Robert Gretzinger
-
Publication number: 20140037215
Abstract: A method for identifying a set of key video frames from a video sequence comprising extracting feature vectors for each video frame and applying a group sparsity algorithm to represent the feature vector for a particular video frame as a group sparse combination of the feature vectors for the other video frames. Weighting coefficients associated with the group sparse combination are analyzed to determine video frame clusters of temporally-contiguous, similar video frames. A set of key video frames is selected based on the determined video frame clusters.
Type: Application
Filed: August 3, 2012
Publication date: February 6, 2014
Inventors: Mrityunjay Kumar, Alexander C. Loui, Bruce Harold Pillman
-
Publication number: 20140037269
Abstract: A method for identifying a set of key video frames from a video sequence comprising extracting feature vectors for each video frame and applying a group sparsity algorithm to represent the feature vector for a particular video frame as a group sparse combination of the feature vectors for the other video frames. Weighting coefficients associated with the group sparse combination are analyzed to determine video frame clusters of temporally-contiguous, similar video frames. A summary is formed based on the determined video frame clusters.
Type: Application
Filed: August 3, 2012
Publication date: February 6, 2014
Inventors: Mrityunjay Kumar, Alexander C. Loui, Bruce Harold Pillman
-
Publication number: 20140037216
Abstract: A method for identifying a set of key video frames from a video sequence comprising extracting feature vectors for each video frame and applying a group sparsity algorithm to represent the feature vector for a particular video frame as a group sparse combination of the feature vectors for the other video frames. Weighting coefficients associated with the group sparse combination are analyzed to determine video frame clusters of temporally-contiguous, similar video frames. The video sequence is segmented into scenes by identifying scene boundaries based on the determined video frame clusters.
Type: Application
Filed: August 3, 2012
Publication date: February 6, 2014
Inventors: Mrityunjay Kumar, Alexander C. Loui, Bruce Harold Pillman
-
Patent number: 8643746
Abstract: A digital video camera system that provides a video summary using a method that includes: designating a reference image containing a particular person; capturing a video sequence of the scene using the image sensor, the video sequence including a time sequence of image frames; processing the captured video sequence using a video processing path to form a digital video file; during the capturing of the video sequence, analyzing the captured image frames using a person recognition algorithm to identify a subset of the image frames that contain the particular person; forming the video summary including fewer than all of the image frames in the captured video sequence, wherein the video summary includes at least part of the identified subset of image frames containing the particular person; storing the digital video file in the storage memory; and storing a representation of the video summary in the storage memory.
Type: Grant
Filed: May 18, 2011
Date of Patent: February 4, 2014
Assignee: Intellectual Ventures Fund 83 LLC
Inventors: Keith Stoll Karn, Bruce Harold Pillman, Aaron Thomas Deever, John Robert McCoy, Frank Razavi, Robert Gretzinger
-
Patent number: 8428308
Abstract: A method for determining image capture settings for an electronic image capture device, comprising: capturing at least two preview images of a scene; analyzing the preview images to determine a combined motion velocity; determining one or more image capture settings responsive to the combined motion velocity; and capturing an archival image according to the determined image capture settings. The determination of the combined motion velocity includes: defining a plurality of image regions; determining local motion velocities for each of the image regions; and combining the local motion velocities to determine the combined motion velocity.
Type: Grant
Filed: February 4, 2011
Date of Patent: April 23, 2013
Assignee: Apple Inc.
Inventors: David Wayne Jasinski, Bruce Harold Pillman
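The combined-motion-velocity step can be sketched with per-region phase correlation between two preview frames, followed by a simple combination rule and an exposure cap. The region grid, the mean-then-magnitude combination, and the blur-budget rule below are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def local_shift(a, b):
    """Integer (dy, dx) shift estimate between two same-size patches
    via FFT phase correlation (sign follows this correlation's
    convention; only the magnitude matters below)."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = a.shape
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

def combined_velocity(prev, cur, grid=2):
    """Split the frame into grid x grid regions, estimate a local
    motion velocity in each, and combine them (here: magnitude of the
    mean vector; the patent leaves the combination rule open)."""
    h, w = prev.shape
    vels = []
    for i in range(grid):
        for j in range(grid):
            ys = slice(i * h // grid, (i + 1) * h // grid)
            xs = slice(j * w // grid, (j + 1) * w // grid)
            vels.append(local_shift(prev[ys, xs], cur[ys, xs]))
    return float(np.hypot(*np.mean(vels, axis=0)))

def exposure_for(velocity, base_exposure=1 / 30, max_blur_px=1.0):
    """Cap exposure so expected motion blur stays under max_blur_px
    (one hypothetical capture-setting rule, not the patent's)."""
    if velocity <= 0:
        return base_exposure
    return min(base_exposure, max_blur_px / velocity)
```

Fast combined motion thus shortens the exposure (trading noise for reduced blur), while a static scene keeps the base exposure.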