Patents by Inventor Courtenay Cotton

Courtenay Cotton has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190116100
    Abstract: A factory automation and monitoring method includes monitoring streams of data packets of M2M data on a mirrored port of a network switch coupled to computing elements of respective machines amongst industrial machinery, determining for the M2M data a corresponding machine, and acquiring a contemporaneous state of the corresponding machine. Then, a record is stored in a data store recording an association between the M2M data for each of the data packet streams, the corresponding machine for each of the data packet streams, and the contemporaneous state for the corresponding machine. Subsequently, additional M2M data is received on the mirrored port and, in response, one of the machines is identified for the additional M2M data, a record is retrieved from the data store for the identified machine, and, on condition that the additional M2M data demonstrates a threshold deviation from data in the retrieved record, an alert is generated.
    Type: Application
    Filed: October 16, 2017
    Publication date: April 18, 2019
    Inventors: Elkana Porag, Haim Piratinskiy, Or Biran, Courtenay Cotton
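
As a rough illustration of the store/retrieve/compare/alert loop described in the abstract above, here is a minimal Python sketch. The class names, the mean-based deviation measure, and the 20% threshold are assumptions made for illustration; the publication does not prescribe them.

```python
# Hedged sketch of the monitoring loop in US 2019/0116100.
# All names and the deviation metric here are illustrative assumptions.
from __future__ import annotations
from dataclasses import dataclass
from statistics import mean


@dataclass
class Record:
    machine_id: str
    state: str               # contemporaneous machine state at capture time
    baseline: list[float]    # representative M2M measurements for that state


class DataStore:
    """Stores one baseline record per machine (simplified)."""

    def __init__(self) -> None:
        self._records: dict[str, Record] = {}

    def store(self, record: Record) -> None:
        self._records[record.machine_id] = record

    def retrieve(self, machine_id: str) -> Record | None:
        return self._records.get(machine_id)


def deviation(new_values: list[float], baseline: list[float]) -> float:
    """Relative deviation of new M2M readings from the stored baseline."""
    base = mean(baseline)
    return abs(mean(new_values) - base) / base if base else float("inf")


def handle_additional_data(store: DataStore, machine_id: str,
                           new_values: list[float],
                           threshold: float = 0.2) -> bool:
    """Return True (raise an alert) when new data deviates past the threshold."""
    record = store.retrieve(machine_id)
    if record is None:
        return False  # no baseline recorded yet; nothing to compare against
    if deviation(new_values, record.baseline) > threshold:
        print(f"ALERT: machine {machine_id} deviates from baseline "
              f"(state at capture: {record.state})")
        return True
    return False
```

In the claimed flow, the retrieved record is keyed to the machine identified for the newly mirrored packets, so the comparison is always against data captured while that machine's contemporaneous state was recorded.
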
  • Patent number: 10134440
    Abstract: A method for producing an audio-visual slideshow for a video sequence having an audio soundtrack and a corresponding video track including a time sequence of image frames, comprising: segmenting the audio soundtrack into a plurality of audio segments; subdividing the audio segments into a sequence of audio frames; determining a corresponding audio classification for each audio frame; automatically selecting a subset of the audio segments responsive to the audio classification for the corresponding audio frames; for each of the selected audio segments automatically analyzing the corresponding image frames to select one or more key image frames; merging the selected audio segments to form an audio summary; forming an audio-visual slideshow by combining the selected key frames with the audio summary, wherein the selected key frames are displayed synchronously with their corresponding audio segment; and storing the audio-visual slideshow in a processor-accessible storage memory.
    Type: Grant
    Filed: May 3, 2011
    Date of Patent: November 20, 2018
    Assignee: KODAK ALARIS INC.
    Inventors: Wei Jiang, Alexander C. Loui, Courtenay Cotton
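
Read as a pipeline, the claimed method above maps onto a handful of functions. The following is a minimal Python sketch of that pipeline under simplifying assumptions: fixed-length segmentation, a toy "music vs. noise" frame classifier, and a middle-frame key-frame rule, none of which are specified by the patent.

```python
# Hedged sketch of the slideshow pipeline in US Patent 10,134,440.
# Segmentation, classification, and key-frame selection are placeholder stubs.
from typing import Sequence


def segment_audio(soundtrack: Sequence[float], segment_len: int) -> list:
    """Split the soundtrack into fixed-length audio segments (simplified)."""
    return [list(soundtrack[i:i + segment_len])
            for i in range(0, len(soundtrack), segment_len)]


def classify_frame(frame: Sequence[float]) -> str:
    """Placeholder audio-frame classifier (e.g. 'music' vs. 'noise')."""
    return "music" if max(frame, default=0.0) > 0.5 else "noise"


def select_segments(segments: list, frame_len: int) -> list:
    """Keep segments whose audio frames are mostly classified as 'music'."""
    keep = []
    for idx, seg in enumerate(segments):
        frames = [seg[i:i + frame_len] for i in range(0, len(seg), frame_len)]
        labels = [classify_frame(f) for f in frames]
        if labels and labels.count("music") / len(labels) > 0.5:
            keep.append(idx)
    return keep


def key_frames_for(segment_index: int, image_frames: list,
                   frames_per_segment: int) -> list:
    """Pick one representative image frame (here: the middle one) per segment."""
    start = segment_index * frames_per_segment
    window = image_frames[start:start + frames_per_segment]
    mid = len(window) // 2
    return window[mid:mid + 1]


def build_slideshow(soundtrack, image_frames, segment_len=800,
                    frame_len=100, frames_per_segment=24):
    """Return (audio summary, [(key frames, audio segment), ...])."""
    segments = segment_audio(soundtrack, segment_len)
    selected = select_segments(segments, frame_len)
    audio_summary = [sample for i in selected for sample in segments[i]]
    slides = [(key_frames_for(i, image_frames, frames_per_segment), segments[i])
              for i in selected]
    return audio_summary, slides
```

Synchronizing each key frame's display with its corresponding audio segment, and storing the resulting slideshow, are the remaining steps named in the abstract and omitted from this sketch.
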
  • Publication number: 20160012853
    Abstract: A user performance that can include audio and video performance may be added to a multi-track clip. The combined user performance and clip can be stored at a local device as a composite performance, and effects processing may be applied to the user performance. The composite performance may be previewed and can be sent to a computer device over a computer network for sharing with other users.
    Type: Application
    Filed: July 8, 2015
    Publication date: January 14, 2016
    Inventors: J. Alexander Cabanilla, Courtenay Cotton, Brendan Elliott, Ariel Melendez, Jon Sheldrick, Robert D. Taub, Michael Westendorf
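
The flow described in this publication (capture a user performance, combine it with a multi-track clip, apply effects and store the composite locally, preview, then share over a network) can be summarized in a short sketch. The data classes, the gain-only stand-in "effect", and the upload step below are illustrative assumptions only.

```python
# Hedged sketch of the composite-performance flow in US 2016/0012853.
from dataclasses import dataclass, field


@dataclass
class UserPerformance:
    audio: list                 # user's recorded audio samples
    video_frames: list = field(default_factory=list)  # optional video frames


@dataclass
class CompositePerformance:
    backing_tracks: list        # audio tracks of the multi-track clip
    performance: UserPerformance

    def apply_gain(self, gain: float) -> None:
        """Stand-in for effects processing applied to the user performance."""
        self.performance.audio = [s * gain for s in self.performance.audio]

    def mix_preview(self) -> list:
        """Mix the processed performance with the backing tracks for preview."""
        length = max([len(self.performance.audio)]
                     + [len(t) for t in self.backing_tracks])
        mixed = [0.0] * length
        for track in self.backing_tracks + [self.performance.audio]:
            for i, sample in enumerate(track):
                mixed[i] += sample
        return mixed


def share(composite: CompositePerformance, url: str) -> None:
    """Placeholder for sending the locally stored composite to a server."""
    payload = composite.mix_preview()
    print(f"Would upload {len(payload)} mixed samples to {url}")
```
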
  • Publication number: 20120281969
    Abstract: A method for producing an audio-visual slideshow for a video sequence having an audio soundtrack and a corresponding video track including a time sequence of image frames, comprising: segmenting the audio soundtrack into a plurality of audio segments; subdividing the audio segments into a sequence of audio frames; determining a corresponding audio classification for each audio frame; automatically selecting a subset of the audio segments responsive to the audio classification for the corresponding audio frames; for each of the selected audio segments automatically analyzing the corresponding image frames to select one or more key image frames; merging the selected audio segments to form an audio summary; forming an audio-visual slideshow by combining the selected key frames with the audio summary, wherein the selected key frames are displayed synchronously with their corresponding audio segment; and storing the audio-visual slideshow in a processor-accessible storage memory.
    Type: Application
    Filed: May 3, 2011
    Publication date: November 8, 2012
    Inventors: Wei Jiang, Alexander C. Loui, Courtenay Cotton
  • Patent number: 8135221
    Abstract: A method for determining a classification for a video segment, comprising the steps of: breaking the video segment into a plurality of short-term video slices, each including a plurality of video frames and an audio signal; analyzing the video frames for each short-term video slice to form a plurality of region tracks; analyzing each region track to form a visual feature vector and a motion feature vector; analyzing the audio signal for each short-term video slice to determine an audio feature vector; forming a plurality of short-term audio-visual atoms for each short-term video slice by combining the visual feature vector and the motion feature vector for a particular region track with the corresponding audio feature vector; and using a classifier to determine a classification for the video segment responsive to the short-term audio-visual atoms.
    Type: Grant
    Filed: October 7, 2009
    Date of Patent: March 13, 2012
    Assignees: Eastman Kodak Company, Columbia University
    Inventors: Wei Jiang, Courtenay Cotton, Shih-Fu Chang, Daniel P. Ellis, Alexander C. Loui
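
The "short-term audio-visual atom" construction above amounts to concatenating, per short-term slice, the visual and motion features of each region track with that slice's audio features, then classifying over the resulting atoms. Below is a minimal sketch of that idea; the single-region-track stub, the simple brightness and motion statistics, the 8-band audio energies, and the scikit-learn-style classifier interface are all assumptions for illustration.

```python
# Hedged sketch of short-term audio-visual atoms (US Patent 8,135,221).
# Feature extractors and region tracking are deliberately simplified stubs.
import numpy as np


def slice_video(frames: list, audio, slice_len: int):
    """Break the segment into short-term slices of frames plus aligned audio."""
    samples_per_frame = len(audio) // max(len(frames), 1)
    for start in range(0, len(frames), slice_len):
        frame_slice = frames[start:start + slice_len]
        audio_slice = audio[start * samples_per_frame:
                            (start + slice_len) * samples_per_frame]
        yield frame_slice, audio_slice


def region_track_features(frame_slice: list):
    """Stub: one 'region track' per slice with fixed-size visual/motion vectors."""
    per_frame = np.array([float(np.mean(f)) for f in frame_slice])
    visual = np.array([per_frame.mean(), per_frame.std()])
    diffs = np.diff(per_frame) if len(per_frame) > 1 else np.zeros(1)
    motion = np.array([np.abs(diffs).mean(), np.abs(diffs).max()])
    return [(visual, motion)]


def audio_features(audio_slice, dim: int = 8) -> np.ndarray:
    """Stub audio feature vector: mean absolute energy in `dim` equal chunks."""
    chunks = np.array_split(np.asarray(audio_slice), dim)
    return np.array([float(np.mean(np.abs(c))) if len(c) else 0.0 for c in chunks])


def build_atoms(frames: list, audio, slice_len: int = 30) -> list:
    """Concatenate visual, motion, and audio features into one atom per track."""
    atoms = []
    for frame_slice, audio_slice in slice_video(frames, audio, slice_len):
        a = audio_features(audio_slice)
        for visual, motion in region_track_features(frame_slice):
            atoms.append(np.concatenate([visual, motion, a]))
    return atoms


def classify_segment(atoms: list, classifier) -> str:
    """Average per-atom scores from any fitted classifier exposing predict_proba."""
    scores = np.mean([classifier.predict_proba(atom.reshape(1, -1))[0]
                      for atom in atoms], axis=0)
    return str(classifier.classes_[int(np.argmax(scores))])
```

Per the abstract, the region tracks come from analyzing the video frames of each slice, and the segment's classification is produced by a classifier responsive to the atoms; any classifier exposing per-class scores would slot into the final step of this sketch.
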
  • Publication number: 20110081082
    Abstract: A method for determining a classification for a video segment, comprising the steps of: breaking the video segment into a plurality of short-term video slices, each including a plurality of video frames and an audio signal; analyzing the video frames for each short-term video slice to form a plurality of region tracks; analyzing each region track to form a visual feature vector and a motion feature vector; analyzing the audio signal for each short-term video slice to determine an audio feature vector; forming a plurality of short-term audio-visual atoms for each short-term video slice by combining the visual feature vector and the motion feature vector for a particular region track with the corresponding audio feature vector; and using a classifier to determine a classification for the video segment responsive to the short-term audio-visual atoms.
    Type: Application
    Filed: October 7, 2009
    Publication date: April 7, 2011
    Inventors: Wei Jiang, Courtenay Cotton, Shih-Fu Chang, Daniel P. Ellis, Alexander C. Loui