Patents by Inventor Ajay Divakaran

Ajay Divakaran has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 7302451
    Abstract: A method detects events in multimedia. Features are extracted from the multimedia. The features are sampled using a sliding window to obtain samples. A context model is constructed for each sample. The context models form a time series. An affinity matrix is determined from the time series models and a commutative distance metric between each pair of context models. A second generalized eigenvector is determined for the affinity matrix, and the samples are then clustered into events according to the second generalized eigenvector.
    Type: Grant
    Filed: August 20, 2004
    Date of Patent: November 27, 2007
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Regunathan Radhakrishnan, Isao Otsuka, Ajay Divakaran
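    Sketch: A minimal numpy illustration of the clustering step in the abstract above, assuming a precomputed matrix of pairwise commutative distances between context models is already available. The Gaussian kernel width sigma and the median split are illustrative choices, not taken from the patent.
      import numpy as np

      def cluster_by_second_eigenvector(dist, sigma=1.0):
          """Split samples into two event clusters from a pairwise distance matrix.

          dist  : (n, n) symmetric matrix of commutative distances between context models
          sigma : kernel width used to turn distances into affinities (illustrative)
          """
          affinity = np.exp(-dist ** 2 / (2.0 * sigma ** 2))      # affinity matrix
          degree = affinity.sum(axis=1)
          # Normalized affinity D^{-1/2} A D^{-1/2}; its eigenvectors correspond to the
          # generalized eigenvectors of (D - A) v = lambda D v used in spectral clustering.
          d_inv_sqrt = 1.0 / np.sqrt(degree)
          norm_aff = affinity * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
          eigvals, eigvecs = np.linalg.eigh(norm_aff)              # ascending eigenvalues
          second = d_inv_sqrt * eigvecs[:, -2]                     # second generalized eigenvector
          return second >= np.median(second)                       # two clusters of samples

      # toy usage: 6 samples forming two well-separated groups
      rng = np.random.default_rng(0)
      points = np.concatenate([rng.normal(0, 0.1, (3, 2)), rng.normal(5, 0.1, (3, 2))])
      dist = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
      print(cluster_by_second_eigenvector(dist))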
  • Publication number: 20070162924
    Abstract: A method classifies segments of a video using an audio signal of the video and a set of classes. Selected classes of the set are combined as a subset of important classes for a specific highlighting task; the remaining classes of the set are combined as a subset of other classes. The two subsets are trained with training audio data to form a task-specific classifier. The audio signal can then be classified with the task-specific classifier as either important or other to identify highlights in the video corresponding to the specific highlighting task. The classified audio signal can be used to segment and summarize the video.
    Type: Application
    Filed: January 6, 2006
    Publication date: July 12, 2007
    Inventors: Regunathan Radhakrishnan, Michael Siracusa, Ajay Divakaran
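    Sketch: A simplified illustration of collapsing a general class set into a binary important/other classifier, assuming per-frame audio feature vectors (e.g., MFCCs) and generic class labels are already available; scikit-learn and the random-forest model are stand-ins, not the classifier family named in the application.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(0)
      features = rng.normal(size=(500, 13))             # 500 frames x 13 audio features (placeholder)
      labels = rng.integers(0, 5, size=500)             # 5 generic audio classes (placeholder)

      # Collapse the label set for one highlighting task: classes 3 and 4 (say,
      # "cheering" and "excited speech") become "important", everything else "other".
      important_classes = {3, 4}
      task_labels = np.isin(labels, list(important_classes)).astype(int)

      clf = RandomForestClassifier(n_estimators=50, random_state=0)
      clf.fit(features, task_labels)                    # task-specific classifier

      new_frames = rng.normal(size=(10, 13))
      print(clf.predict(new_frames))                    # 1 = important, 0 = other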
  • Patent number: 7240834
    Abstract: A marketing system and method for a retail environment periodically reads RFID tags attached to products to produce a list of product identifications. Consumer recommendation rules are updated according to each list, and recommendations are generated according to the updated consumer recommendation rules. Then, content can be displayed in the retail environment based on the recommendations.
    Type: Grant
    Filed: March 21, 2005
    Date of Patent: July 10, 2007
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Mamoru Kato, Daniel N. Nikovski, Ajay Divakaran
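    Sketch: A self-contained toy version of the read/update/recommend loop described above, using pairwise co-occurrence counts as a stand-in for the consumer recommendation rules; the rule form and product names are illustrative assumptions.
      from collections import Counter
      from itertools import combinations

      pair_counts = Counter()   # stands in for the consumer recommendation rules

      def update_rules(product_ids):
          """Update pairwise co-occurrence counts from one RFID read cycle."""
          for a, b in combinations(sorted(set(product_ids)), 2):
              pair_counts[(a, b)] += 1

      def recommend(product_id, top_n=3):
          """Recommend products most often read together with product_id."""
          scores = Counter()
          for (a, b), count in pair_counts.items():
              if a == product_id:
                  scores[b] += count
              elif b == product_id:
                  scores[a] += count
          return [p for p, _ in scores.most_common(top_n)]

      # two simulated read cycles of the shelf
      update_rules(["milk", "cereal", "bananas"])
      update_rules(["milk", "cereal", "coffee"])
      print(recommend("milk"))   # displayed content could feature these products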
  • Publication number: 20070146159
    Abstract: A system determines real-time locations of railcars in a railroad environment. Railcars are equipped with at least four RFID tags. An RFID reader at a fixed location at every track branch in the environment reads the RFID tags. Railcar locations are updated by determining the branches on which the railcars are located.
    Type: Application
    Filed: December 22, 2005
    Publication date: June 28, 2007
    Inventors: Mamoru Kato, Ajay Divakaran
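    Sketch: A minimal illustration of the location-update idea, assuming each fixed reader reports the set of tags it sees per read cycle; the two-tag confirmation rule is an illustrative guard against stray reads, not a detail from the application.
      # Track branches are identified by the fixed reader that covers them.
      railcar_tags = {
          "car_17": {"tagA", "tagB", "tagC", "tagD"},   # at least four tags per railcar
          "car_42": {"tagE", "tagF", "tagG", "tagH"},
      }
      locations = {}   # railcar id -> branch id

      def process_read(branch_id, tags_read):
          """Update railcar locations from one read at a branch's fixed reader."""
          for car, tags in railcar_tags.items():
              # Requiring more than one of the car's tags guards against stray reads.
              if len(tags & tags_read) >= 2:
                  locations[car] = branch_id

      process_read("branch_3", {"tagA", "tagC", "noise"})
      print(locations)   # {'car_17': 'branch_3'}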
  • Publication number: 20070091203
    Abstract: A method generates a summary of a video. Faces are detected in a plurality of frames of the video. The frames are classified according to the number of faces detected in each frame, and the video is partitioned into segments according to the classifications to produce a summary of the video. For each frame classified as having a single detected face, one or more characteristics of the face are determined. The frames are labeled according to the characteristics to produce labeled clusters, and the segments are partitioned into sub-segments according to the labeled clusters.
    Type: Application
    Filed: October 25, 2005
    Publication date: April 26, 2007
    Inventors: Kadir Peker, Ajay Divakaran
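    Sketch: A small illustration of the face-count-based partitioning step, assuming per-frame face counts have already been produced by some face detector; the sub-segmentation by face characteristics is omitted.
      from itertools import groupby

      def segment_by_face_count(face_counts):
          """Group consecutive frames with the same number of detected faces.

          face_counts : per-frame face counts from any face detector
          Returns a list of (face_count, start_frame, end_frame) segments.
          """
          segments, frame = [], 0
          for count, run in groupby(face_counts):
              length = len(list(run))
              segments.append((count, frame, frame + length - 1))
              frame += length
          return segments

      # e.g., anchor shot (1 face), two-person interview (2 faces), crowd cutaway (0 faces)
      counts = [1, 1, 1, 2, 2, 2, 2, 0, 0, 1, 1]
      print(segment_by_face_count(counts))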
  • Publication number: 20070010998
    Abstract: A method dynamically tracks and analyzes a generative process that generates multivariate time series data. In one application, the method is used to detect boundaries in broadcast programs, for example, a sports broadcast and a news broadcast. In another application, significant events are detected in a signal obtained by a surveillance device, such as a video camera or microphone.
    Type: Application
    Filed: July 8, 2005
    Publication date: January 11, 2007
    Inventors: Regunathan Radhakrishnan, Ajay Divakaran
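    Sketch: A generic sliding-window boundary detector over multivariate features, shown only to make the boundary-detection application concrete; the window comparison below is a crude mean/variance contrast, not the generative-model tracking described in the application.
      import numpy as np

      def boundary_scores(series, window=50):
          """Score each time index by how different the windows before and after it look.

          series : (T, d) multivariate time series of extracted features
          Returns an array of scores; local maxima suggest program boundaries / events.
          """
          T = len(series)
          scores = np.zeros(T)
          for t in range(window, T - window):
              left, right = series[t - window:t], series[t:t + window]
              pooled_std = 0.5 * (left.std(axis=0) + right.std(axis=0)) + 1e-8
              scores[t] = np.abs(left.mean(axis=0) - right.mean(axis=0)).sum() / pooled_std.sum()
          return scores

      # toy series whose statistics shift halfway through
      rng = np.random.default_rng(0)
      series = np.concatenate([rng.normal(0, 1, (200, 4)), rng.normal(3, 1, (200, 4))])
      print(int(np.argmax(boundary_scores(series))))   # near index 200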
  • Patent number: 7143352
    Abstract: A method summarizes unknown content of a video. First, low-level features of the video are selected. The video is then partitioned into segments according to the low-level features. The segments are grouped into disjoint clusters, where each cluster contains similar segments. The clusters are labeled according to the low-level features, and parameters characterizing the clusters are assigned. High-level patterns among the labels are found, and these patterns are used to extract frames from the video to form a content-adaptive summary of the unknown content of the video.
    Type: Grant
    Filed: November 1, 2002
    Date of Patent: November 28, 2006
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Ajay Divakaran, Kadir A. Peker
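    Sketch: A simplified version of the cluster-then-summarize idea, assuming per-segment low-level feature vectors and the frame indices of each segment are already available; k-means and the closest-to-centre representative rule are illustrative stand-ins for the patent's clustering and pattern analysis.
      import numpy as np
      from sklearn.cluster import KMeans

      def summarize(segment_features, frames_per_segment, n_clusters=3):
          """Cluster segments on low-level features and keep one segment per cluster.

          segment_features   : (n_segments, d) feature vectors (e.g., motion, color)
          frames_per_segment : list of frame-index lists, one per segment
          Returns frame indices forming the summary.
          """
          labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(segment_features)
          summary = []
          for c in range(n_clusters):
              members = np.flatnonzero(labels == c)
              # pick the segment closest to the cluster centre as its representative
              centre = segment_features[members].mean(axis=0)
              rep = members[np.argmin(np.linalg.norm(segment_features[members] - centre, axis=1))]
              summary.extend(frames_per_segment[rep])
          return sorted(summary)

      rng = np.random.default_rng(0)
      feats = rng.normal(size=(12, 5))
      frames = [list(range(i * 30, i * 30 + 30)) for i in range(12)]
      print(len(summarize(feats, frames)))   # 3 representative segments -> 90 frames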
  • Patent number: 7142602
    Abstract: A method segments a video into objects, without user assistance. An MPEG compressed video is converted to a structure called pseudo spatial/temporal data using DCT coefficients and motion vectors. The compressed video is first parsed and the pseudo spatial/temporal data are formed. Seed macro-blocks are identified using, e.g., the DCT coefficients and changes in the motion vector of macro-blocks. A video volume is “grown” around each seed macro-block using the DCT coefficients and motion distance criteria. Self-descriptors are assigned to the volume, and mutual descriptors are assigned to pairs of similar volumes. These descriptors capture motion and spatial information of the volumes. Similarity scores are determined for each possible pair-wise combination of volumes. The pair of volumes that gives the largest score is combined iteratively. In the combining stage, volumes are classified and represented in a multi-resolution coarse-to-fine hierarchy of video objects.
    Type: Grant
    Filed: May 21, 2003
    Date of Patent: November 28, 2006
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Fatih M. Porikli, Huifang Sun, Ajay Divakaran
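    Sketch: A greatly simplified, single-frame illustration of growing a region around a seed macro-block from motion-vector similarity; the patent grows spatio-temporal volumes across frames and also uses DCT coefficients, which are omitted here.
      import numpy as np
      from collections import deque

      def grow_volume(motion, seed, threshold=1.0):
          """Grow a region of macro-blocks around a seed using motion-vector similarity.

          motion : (rows, cols, 2) motion vectors on the macro-block grid of one frame
          seed   : (row, col) of the seed macro-block
          """
          rows, cols, _ = motion.shape
          region, queue = {seed}, deque([seed])
          while queue:
              r, c = queue.popleft()
              for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                  if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region:
                      if np.linalg.norm(motion[nr, nc] - motion[seed]) < threshold:
                          region.add((nr, nc))
                          queue.append((nr, nc))
          return region

      motion = np.zeros((9, 11, 2))
      motion[2:5, 3:7] = [4.0, 0.0]              # a block of macro-blocks moving right
      print(len(grow_volume(motion, (3, 4))))    # 12 macro-blocks in the grown region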
  • Publication number: 20060208070
    Abstract: A marketing system and method for a retail environment periodically reads RFID tags attached to products to produce a list of product identifications. Consumer recommendation rules are updated according to each list, and recommendations are generated according to the updated consumer recommendation rules. Then, content can be displayed in the retail environment based on the recommendations.
    Type: Application
    Filed: March 21, 2005
    Publication date: September 21, 2006
    Inventors: Mamoru Kato, Daniel Nikovski, Ajay Divakaran
  • Patent number: 7110458
    Abstract: A method measures an intensity of motion activity in a compressed video. The intensity of the motion activity is used to partition the video into segments of equal cumulative motion activity. Key-frames are then selected from each segment. The selected key-frames are concatenated in temporal order to form a summary of the video.
    Type: Grant
    Filed: April 27, 2001
    Date of Patent: September 19, 2006
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Ajay Divakaran, Regunathan Radhakrishnan, Kadir A. Peker
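    Sketch: A short numpy illustration of partitioning into segments of equal cumulative motion activity, assuming a per-frame motion-activity intensity sequence is already available; choosing the middle frame of each segment as its key-frame is an illustrative selection rule.
      import numpy as np

      def keyframes_by_motion_activity(activity, n_segments=4):
          """Partition a video into segments of equal cumulative motion activity
          and pick the middle frame of each segment as its key-frame.

          activity : per-frame motion-activity intensities
          """
          cum = np.cumsum(activity, dtype=float)
          targets = cum[-1] * (np.arange(1, n_segments + 1) / n_segments)
          bounds = np.searchsorted(cum, targets)            # segment end frames
          keyframes, start = [], 0
          for end in bounds:
              keyframes.append((start + int(end)) // 2)     # middle frame of the segment
              start = int(end) + 1
          return keyframes

      # low activity at first, then a burst: later segments become shorter
      activity = [1] * 80 + [10] * 20
      print(keyframes_by_motion_activity(activity))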
  • Publication number: 20060149693
    Abstract: A method refines labeled training data for audio classification of multimedia content. A first set of audio classifiers is trained using labeled audio frames of a training data set having labels corresponding to a set of audio features. Each audio frame of the labeled training data set is classified using the first set of audio classifiers to produce a refined training data set. A second set of audio classifiers is trained using audio frames of the refined training data set, and highlights are extracted from unlabeled audio frames using the second set of audio classifiers.
    Type: Application
    Filed: January 4, 2005
    Publication date: July 6, 2006
    Inventors: Isao Otsuka, Regunathan Radhakrishnan, Ajay Divakaran
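    Sketch: A toy two-pass refinement loop in the spirit of the abstract above, with random placeholder features and logistic regression standing in for whatever classifier family the application actually uses.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      features = rng.normal(size=(400, 13))                 # per-frame audio features (placeholder)
      labels = (features[:, 0] > 0).astype(int)             # initial labels
      labels[:20] = 1 - labels[:20]                         # simulate some mislabeled frames

      # First-pass classifier trained on the original labels.
      first = LogisticRegression(max_iter=1000).fit(features, labels)

      # Relabel the training set with the first-pass predictions to form the
      # refined training data, then train the second-pass classifier on it.
      refined_labels = first.predict(features)
      second = LogisticRegression(max_iter=1000).fit(features, refined_labels)

      unlabeled = rng.normal(size=(10, 13))
      print(second.predict(unlabeled))   # frames predicted as the highlight class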
  • Publication number: 20060109283
    Abstract: A method and system for presenting a set of graphic images on a television system are described. A sequence of frames of a video is received. The frames are decoded and scaled to reduced-size frames, which are sampled temporally and periodically to provide selected frames. The selected frames are stored in a circular buffer and converted to graphic images. The graphic images are periodically composited and rendered as an output graphic image using a graphic interface.
    Type: Application
    Filed: January 4, 2006
    Publication date: May 25, 2006
    Inventors: Samuel Shipman, Ajay Divakaran
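    Sketch: A minimal stand-in for the scale/sample/buffer/composite pipeline, using a deque as the circular buffer and array slicing and tiling in place of real decoding, scaling, and graphics calls.
      import numpy as np
      from collections import deque

      SAMPLE_EVERY = 30                       # keep one frame per second at 30 fps
      buffer = deque(maxlen=8)                # circular buffer of reduced-size frames

      def on_decoded_frame(index, frame):
          """Scale, temporally sample, and store a decoded frame."""
          if index % SAMPLE_EVERY == 0:
              small = frame[::8, ::8]         # crude 1/8 downscale stand-in
              buffer.append(small)

      def composite():
          """Tile the buffered frames side by side into one output graphic image."""
          return np.hstack(list(buffer)) if buffer else None

      for i in range(600):                                   # 20 seconds of dummy video
          on_decoded_frame(i, np.full((240, 320, 3), i % 256, dtype=np.uint8))
      print(composite().shape)                               # (30, 320, 3): 8 tiles of 40 px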
  • Publication number: 20060075346
    Abstract: A method presents a video according to compositional structures associated with the video. Each compositional structure has a label, and multiple segments that can be organized temporally or hierarchically. A particular compositional structure is selected with a remote controller, and the video is presented by a playback controller on a display device according to the compositional structure.
    Type: Application
    Filed: September 27, 2004
    Publication date: April 6, 2006
    Inventors: Tom Lanning, Ajay Divakaran, Kadir Peker, Regunathan Radhakrishnan, Ziyou Xiong, Clifton Forlines
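    Sketch: A tiny data model for labeled compositional structures and a stand-in playback routine; the structure names, the flat segment list, and the print-based "playback" are illustrative assumptions only.
      from dataclasses import dataclass
      from typing import List

      @dataclass
      class Segment:
          start: float          # seconds
          end: float

      @dataclass
      class CompositionalStructure:
          label: str            # e.g. "innings", "news stories", "scenes"
          segments: List[Segment]

      structures = [
          CompositionalStructure("goal highlights", [Segment(312.0, 340.0), Segment(1810.0, 1835.0)]),
          CompositionalStructure("second half", [Segment(2700.0, 5400.0)]),
      ]

      def play(structure: CompositionalStructure):
          """Stand-in for the playback controller: walk the chosen structure's segments."""
          for seg in structure.segments:
              print(f"play {structure.label}: {seg.start:.0f}s-{seg.end:.0f}s")

      play(structures[0])   # the structure a viewer might select with the remote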
  • Publication number: 20060059120
    Abstract: A method identifies highlight segments in a video including a sequence of frames. Audio objects are detected to identify frames associated with audio events in the video, and visual objects are detected to identify frames associated with visual events. A selected visual object is paired with an associated audio object to form an audio-visual object only if the two match; the audio-visual object identifies a candidate highlight segment. The candidate highlight segments are further refined, using low-level features, to eliminate false highlight segments.
    Type: Application
    Filed: August 27, 2004
    Publication date: March 16, 2006
    Inventors: Ziyou Xiong, Regunathan Radhakrishnan, Ajay Divakaran
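    Sketch: A self-contained illustration of forming candidate highlights by matching visual events to nearby audio events, assuming both have already been detected as (start, end) intervals in seconds; the overlap-or-nearby tolerance rule is an illustrative matching criterion.
      def match_audio_visual(audio_events, visual_events, tolerance=2.0):
          """Pair visual events with audio events that occur close together in time.

          audio_events, visual_events : lists of (start, end) times in seconds
          Returns candidate highlight segments as merged (start, end) intervals.
          """
          candidates = []
          for v_start, v_end in visual_events:
              for a_start, a_end in audio_events:
                  # overlap, or a gap smaller than the tolerance, counts as a match
                  if min(v_end, a_end) + tolerance >= max(v_start, a_start):
                      candidates.append((min(v_start, a_start), max(v_end, a_end)))
                      break
          return candidates

      audio = [(100.0, 104.0), (350.0, 353.0)]      # e.g. excited-speech bursts
      visual = [(99.0, 101.5), (200.0, 203.0)]      # e.g. goal-mouth shots
      print(match_audio_visual(audio, visual))      # only the first visual event matches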
  • Patent number: 7003038
    Abstract: A method describes activity in a video sequence. The method measures intensity, direction, spatial, and temporal attributes in the video sequence, and the measured attributes are combined in a digital descriptor of the activity of the video sequence.
    Type: Grant
    Filed: August 13, 2002
    Date of Patent: February 21, 2006
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Ajay Divakaran, Huifang Sun, Hae-Kwang Kim, Chul-Soo Park, Xinding Sun, Bangalore S. Manjunath, Vinod V. Vasudevan, Manoranjan D. Jesudoss, Ganesh Rattinassababady, Hyundoo Shin
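    Sketch: A numpy illustration of combining intensity, direction, spatial, and temporal attributes of motion into one descriptor, assuming per-frame macro-block motion vectors are available; the specific attribute definitions below are simplified stand-ins, not the definitions claimed in the patent.
      import numpy as np

      def activity_descriptor(motion_vectors):
          """Combine intensity, direction, spatial, and temporal attributes of motion.

          motion_vectors : (frames, blocks, 2) motion vectors per frame and macro-block
          """
          magnitudes = np.linalg.norm(motion_vectors, axis=-1)       # (frames, blocks)
          per_frame = magnitudes.mean(axis=1)
          intensity = int(np.clip(per_frame.mean(), 0, 4))           # coarse 0-4 level
          angles = np.arctan2(motion_vectors[..., 1], motion_vectors[..., 0])
          direction = float(np.median(angles[magnitudes > 0])) if (magnitudes > 0).any() else 0.0
          spatial = float((magnitudes > magnitudes.mean()).mean())    # share of active blocks
          temporal = float(per_frame.std())                           # activity variation over time
          return {"intensity": intensity, "direction": direction,
                  "spatial": spatial, "temporal": temporal}

      rng = np.random.default_rng(0)
      mv = rng.normal(0, 2, size=(90, 99, 2))     # 90 frames, 99 macro-blocks each
      print(activity_descriptor(mv))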
  • Patent number: 7003154
    Abstract: A system and method temporally process an input video including input frames. Each frame has an associated frame play time, and the input video has a total input video play time that is the sum of the input frame play times of all of the input frames. Each of the input frames is classified according to a content characteristic of each frame. An output frame play time is allocated to each of the input frames based on the classified content characteristic of each of the input frames to generate a plurality of output frames that form an output video.
    Type: Grant
    Filed: November 17, 2000
    Date of Patent: February 21, 2006
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Kadir A. Peker, Ajay Divakaran, Huifang Sun
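    Sketch: A small illustration of allocating output play times by content class, assuming per-frame class labels (e.g., low vs. high activity) are already available; the class names and scale factors are illustrative only.
      def allocate_play_times(input_times, classes, scale=None):
          """Give each output frame a play time scaled by its content class.

          input_times : per-frame play times in seconds (e.g. 1/30 each)
          classes     : per-frame content class labels
          """
          scale = scale or {"low_activity": 0.25, "high_activity": 1.0}
          return [t * scale[c] for t, c in zip(input_times, classes)]

      frame_time = 1.0 / 30.0
      classes = ["low_activity"] * 60 + ["high_activity"] * 30
      out = allocate_play_times([frame_time] * 90, classes)
      print(round(sum(out), 2), "seconds instead of", round(90 * frame_time, 2))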
  • Publication number: 20050251532
    Abstract: A method detects events in multimedia. Features are extracted from the multimedia. The features are sampled using a sliding window to obtain samples. A context model is constructed for each sample. The context models form a time series. An affinity matrix is determined from the time series models and a commutative distance metric between each pair of context models. A second generalized eigenvector is determined for the affinity matrix, and the samples are then clustered into events according to the second generalized eigenvector.
    Type: Application
    Filed: August 20, 2004
    Publication date: November 10, 2005
    Inventors: Regunathan Radhakrishnan, Isao Otsuka, Ajay Divakaran
  • Publication number: 20050249412
    Abstract: A method detects events in multimedia. Features are extracted from the multimedia. The features are sampled using a sliding window to obtain samples. A context model is constructed for each sample. An affinity matrix is determined from the models and a commutative distance metric between each pair of context models. A second generalized eigenvector is determined for the affinity matrix, and the samples are then clustered into events according to the second generalized eigenvector.
    Type: Application
    Filed: May 7, 2004
    Publication date: November 10, 2005
    Inventors: Regunathan Radhakrishnan, Ajay Divakaran
  • Patent number: 6956904
    Abstract: A method for summarizing a video first detects audio peaks in a sub-sampled audio signal of the video. Then, motion activity in the video is extracted and filtered. The filtered motion activity is quantized to a stream of digital pulses, one pulse for each frame. If the motion activity is greater than a predetermined threshold, the pulse is one; otherwise the pulse is zero. Each quantized pulse is tested with respect to the timing of its rising and falling edges. If the pulse meets the condition of the test, then the pulse is selected as a candidate pulse related to an interesting event in the video; otherwise the pulse is discarded. The candidate pulses are correlated, time-wise, to the audio peaks, and patterns between the pulses and peaks are examined. The correlation patterns segment the video into uninteresting and interesting portions, which can then be summarized.
    Type: Grant
    Filed: January 15, 2002
    Date of Patent: October 18, 2005
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Romain Cabasson, Kadir A. Peker, Ajay Divakaran
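    Sketch: A self-contained numpy illustration of quantizing filtered motion activity into pulses, testing rising/falling-edge spacing, and keeping pulses with a nearby audio peak; the thresholds, pulse-length limits, and time window are illustrative parameters, not values from the patent.
      import numpy as np

      def candidate_pulses(activity, threshold, min_len=5, max_len=150):
          """Binarize filtered motion activity and keep pulses whose rising/falling
          edges are a plausible distance apart."""
          pulses = (np.asarray(activity) > threshold).astype(int)
          edges = np.flatnonzero(np.diff(np.concatenate(([0], pulses, [0]))))
          candidates = []
          for rise, fall in zip(edges[::2], edges[1::2]):
              if min_len <= fall - rise <= max_len:
                  candidates.append((rise, fall))
          return candidates

      def correlate_with_audio(candidates, audio_peaks, fps=30.0, window=3.0):
          """Keep pulses that have an audio peak within `window` seconds of their onset."""
          kept = []
          for rise, fall in candidates:
              if any(abs(peak - rise / fps) <= window for peak in audio_peaks):
                  kept.append((rise, fall))
          return kept

      activity = [0] * 100 + [5] * 40 + [0] * 100        # one burst of motion
      audio_peaks = [3.5]                                  # seconds; near the burst onset
      pulses = candidate_pulses(activity, threshold=1)
      print(correlate_with_audio(pulses, audio_peaks))     # [(100, 140)]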
  • Publication number: 20050198570
    Abstract: A system and method summarizes multimedia stored in a compressed multimedia file partitioned into a sequence of segments, where the content of the multimedia is, for example, video signals, audio signals, text, and binary data. An associated metadata file includes index information and an importance level for each segment. The importance information is continuous over a closed interval. An importance level threshold is selected in the closed interval, and only segments of the multimedia having an importance level greater than the importance level threshold are reproduced. The importance level can also be determined for fixed-length windows of multiple segments, or a sliding window. Furthermore, the importance level can be weighted by a factor, such as the audio volume.
    Type: Application
    Filed: January 21, 2005
    Publication date: September 8, 2005
    Inventors: Isao Otsuka, Ajay Divakaran, Masaharu Ogawa, Kazuhiko Nakane
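    Sketch: A minimal illustration of selecting segments whose importance level exceeds a chosen threshold, assuming the per-segment metadata is available as a simple list of records; the field names and the volume-weighting rule are illustrative assumptions.
      def segments_to_play(metadata, threshold, use_volume_weight=False):
          """Select segments whose (optionally volume-weighted) importance level
          exceeds the chosen threshold.

          metadata  : list of dicts with 'start', 'end', 'importance', 'volume' per segment
          threshold : value in the closed interval the importance levels are drawn from
          """
          selected = []
          for seg in metadata:
              level = seg["importance"]
              if use_volume_weight:
                  level *= seg.get("volume", 1.0)     # weight importance by audio volume
              if level > threshold:
                  selected.append((seg["start"], seg["end"]))
          return selected

      metadata = [
          {"start": 0.0,   "end": 90.0,  "importance": 0.2, "volume": 0.3},
          {"start": 90.0,  "end": 120.0, "importance": 0.8, "volume": 0.9},
          {"start": 120.0, "end": 300.0, "importance": 0.4, "volume": 0.5},
      ]
      print(segments_to_play(metadata, threshold=0.5))   # only the 0.8-importance segment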