Patents by Inventor Nevenka Dimitrova

Nevenka Dimitrova has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 6912517
    Abstract: A system and method for delivering relevant information to recipients in a timely manner. Information from a plurality of sources is received through interface devices that permit a central processor to evaluate each information stream in light of the dynamic profiles that are stored in a subscriber database. A plurality of location sensors track and report the location of the various recipients. The processor determines the content that will be provided to the recipient and the delivery mode to be used for delivering the selected information to the subscriber.
    Type: Grant
    Filed: November 29, 2001
    Date of Patent: June 28, 2005
    Assignee: Koninklijke Philips Electronics N.V.
    Inventors: Lalitha Agnihotri, John Zimmerman, Nevenka Dimitrova
  • Patent number: 6859803
    Abstract: An apparatus and method for conducting exclusive and inclusive metadata searches to identify and select multimedia programs. The apparatus of the invention includes a metadata search controller that compares user specified search words with metadata words to find programs that meet user specified search criteria. The metadata search controller executes an inclusive metadata search to search for matches between a user specified search word and a metadata word that is related to the user specified search word in a word pair contained within a word pair database. The metadata search controller calculates a rank value for each program that is found by a metadata search and creates a ranked list of such programs.
    Type: Grant
    Filed: November 13, 2001
    Date of Patent: February 22, 2005
    Assignee: Koninklijke Philips Electronics N.V.
    Inventors: Serhan Dagtas, Radu S. Jasinschi, Nevenka Dimitrova
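The inclusive search and ranking described in the abstract of 6859803 can be pictured with a small sketch. Everything below (the word-pair table, the program metadata, and the 1.0/0.5 scoring weights) is invented for illustration and is not taken from the patent:

```python
# Hedged sketch of an inclusive metadata search with ranking.
# Word pairs, programs, and weights are made-up examples.

# Related-word pairs standing in for the patent's word-pair database.
WORD_PAIRS = {
    "soccer": {"football", "sports"},
    "cooking": {"recipe", "food"},
}

PROGRAMS = [
    {"title": "Champions League Highlights", "metadata": {"soccer", "goals", "sports"}},
    {"title": "30-Minute Meals", "metadata": {"cooking", "recipe"}},
]

def inclusive_search(query_words, programs, word_pairs):
    """Match query words exactly (exclusive) and via related pairs (inclusive),
    then rank programs by a simple weighted hit count."""
    ranked = []
    for program in programs:
        score = 0.0
        for word in query_words:
            related = word_pairs.get(word, set())
            if word in program["metadata"]:
                score += 1.0            # exact match
            elif related & program["metadata"]:
                score += 0.5            # match through a related-word pair
        if score > 0:
            ranked.append((score, program["title"]))
    return sorted(ranked, reverse=True)

print(inclusive_search({"soccer"}, PROGRAMS, WORD_PAIRS))
```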
  • Publication number: 20050028194
    Abstract: A video retrieval system is presented that allows a user to quickly and easily select and receive stories of interest from a video stream. The video retrieval system classifies stories and delivers samples of selected stories that match each user's current preference. The user's preferences may include particular broadcast networks, persons, story topics, keywords, and the like. Key frames of each selected story are sequentially displayed; when the user views a frame of interest, the user selects the story that is associated with the key frame for more detailed viewing. This invention is particularly well suited for targeted news retrieval. In a preferred embodiment, news stories are stored, and the selection of a news story for detailed viewing based on the associated key frames effects a playback of the selected news story. The principles of this invention also allow a user to effect a directed search of other types of broadcasts.
    Type: Application
    Filed: September 2, 2004
    Publication date: February 3, 2005
    Inventors: Jan Elenbaas, Nevenka Dimitrova, Thomas McGee, Mark Simpson, Jacquelyn Martino, Mohamed Abdel-Mottaleb, Marjorie Garrett, Carolyn Ramsey, Hsiang-Lung Wu, Ranjit Desai
  • Patent number: 6819863
    Abstract: For use in a video signal processor, there is disclosed a system and method for locating program boundaries and commercial boundaries using audio categories. The system comprises an audio classifier controller that obtains information concerning the audio categories of the segments of an audio signal. Audio categories include such categories as silence, music, noise and speech. The audio classifier controller determines the rates of change of the audio categories. The audio classifier controller then compares each rate of change of the audio categories with a threshold value to locate the boundaries of the programs and commercials. The audio classifier controller is also capable of classifying at least one feature of an audio category change rate using a multifeature classifier to locate the boundaries of the programs and commercials.
    Type: Grant
    Filed: December 22, 2000
    Date of Patent: November 16, 2004
    Assignee: Koninklijke Philips Electronics N.V.
    Inventors: Serhan Dagtas, Nevenka Dimitrova
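A minimal sketch of the rate-of-change test described in the abstract of 6819863 above; the category sequence, window size, and threshold are arbitrary assumptions, and the patent's multifeature classifier is not modeled:

```python
# Hedged sketch: flag candidate program/commercial boundaries where the
# audio-category change rate exceeds a threshold.
from collections import deque

def category_change_rate(labels, window=8):
    """Number of category transitions inside a sliding window, per position."""
    rates = []
    recent = deque(maxlen=window)
    for label in labels:
        recent.append(label)
        changes = sum(1 for a, b in zip(recent, list(recent)[1:]) if a != b)
        rates.append(changes / max(len(recent) - 1, 1))
    return rates

def find_boundaries(labels, window=8, threshold=0.5):
    """Flag positions where the change rate crosses the threshold."""
    rates = category_change_rate(labels, window)
    return [i for i, r in enumerate(rates) if r >= threshold]

labels = ["speech"] * 20 + ["music", "silence", "noise", "music"] * 3 + ["speech"] * 20
print(find_boundaries(labels))
```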
  • Publication number: 20040201784
    Abstract: For use in a video signal processor, there is disclosed a system and method for locating program boundaries and commercial boundaries using audio categories. The system comprises an audio classifier controller that obtains information concerning the audio categories of the segments of an audio signal. Audio categories include such categories as silence, music, noise and speech. The audio classifier controller determines the rates of change of the audio categories. The audio classifier controller then compares each rate of change of the audio categories with a threshold value to locate the boundaries of the programs and commercials. The audio classifier controller is also capable of classifying at least one feature of an audio category change rate using a multifeature classifier to locate the boundaries of the programs and commercials.
    Type: Application
    Filed: December 22, 2000
    Publication date: October 14, 2004
    Applicant: PHILIPS ELECTRONICS NORTH AMERICA CORPORATION
    Inventors: Serhan Dagtas, Nevenka Dimitrova
  • Patent number: 6771885
    Abstract: A video signal is processed to generate one or more signatures associated with a broadcast program to be recorded by a recording device. The signatures are then processed to determine an actual start time and end time of the desired broadcast program, such that the program can be properly recorded despite delays or other changes in a pre-scheduled broadcast time of the program. One or more of the extracted signatures may be based at least in part on, e.g., a keyframe similarity measure, a histogram, one or more detected commercials, a transcript, a program logo or other detected object, detected text, and a sign-on or sign-off of the desired program. Other types of signatures can also be used.
    Type: Grant
    Filed: February 7, 2000
    Date of Patent: August 3, 2004
    Assignee: Koninklijke Philips Electronics N.V.
    Inventors: Lalitha Agnihotri, Nevenka Dimitrova, Thomas McGee, Nicholas J. Mankovich
  • Patent number: 6766098
    Abstract: A video indexing method and device for selecting keyframes from each detected scene in the video. The method and device detect fast-motion scenes by counting the number of consecutive scene changes.
    Type: Grant
    Filed: December 30, 1999
    Date of Patent: July 20, 2004
    Assignee: Koninklijke Philips Electronics N.V.
    Inventors: Thomas McGee, Nevenka Dimitrova
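The fast-motion heuristic in the abstract of 6766098 (counting closely spaced scene changes) can be sketched in a few lines; the cut positions, gap threshold, and minimum run length below are made up:

```python
# Hedged sketch: treat a run of closely spaced shot cuts as one fast-motion
# scene and keep a single keyframe for the whole run.

def group_fast_motion(cut_frames, max_gap=10, min_run=3):
    """Group consecutive cuts whose spacing is below max_gap; a group of at
    least min_run cuts is taken to be a single fast-motion scene."""
    groups, current = [], [cut_frames[0]]
    for cut in cut_frames[1:]:
        if cut - current[-1] <= max_gap:
            current.append(cut)
        else:
            groups.append(current)
            current = [cut]
    groups.append(current)
    return [(g[0], g[-1], len(g) >= min_run) for g in groups]

cuts = [30, 34, 37, 41, 200, 460, 463, 467, 470, 900]
for start, end, fast in group_fast_motion(cuts):
    kind = "fast-motion scene, one keyframe" if fast else "normal cut"
    print(f"frames {start}-{end}: {kind}")
```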
  • Patent number: 6754389
    Abstract: A content-based classification system is provided that detects the presence of object images within a frame and determines the path, or trajectory, of each object image through multiple frames of a video segment. In a preferred embodiment, face objects and text objects are used for identifying distinguishing object trajectories. A combination of face, text, and other trajectory information is used in a preferred embodiment of this invention to classify each segment of a video sequence. In one embodiment, a hierarchical information structure is utilized to enhance the classification process. At the upper, video, information layer, the parameters used for the classification process include, for example, the number of object trajectories of each type within the segment, an average duration for each object type trajectory, and so on. At the lowest, model, information layer, the parameters include, for example, the type, color, and size of the object image corresponding to each object trajectory.
    Type: Grant
    Filed: December 1, 1999
    Date of Patent: June 22, 2004
    Assignee: Koninklijke Philips Electronics N.V.
    Inventors: Nevenka Dimitrova, Lalitha Agnihotri, Gang Wei
  • Publication number: 20040098376
    Abstract: A method and system which enable a user to query a multimedia archive in one media modality and automatically retrieve correlating data in another media modality without the need for manually associating the data items through a data structure. The correlation method finds the maximum correlation between the data items without being affected by the distribution of the data in the respective subspace of each modality. Once the direction of correlation is disclosed, extracted features can be transferred from one subspace to another.
    Type: Application
    Filed: November 15, 2002
    Publication date: May 20, 2004
    Applicant: KONINKLIJKE PHILIPS ELECTRONICS N.V.
    Inventors: Dongge Li, Nevenka Dimitrova
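The application above does not name a specific algorithm, but finding maximally correlated directions across two feature subspaces is commonly done with canonical correlation analysis (CCA); the sketch below applies scikit-learn's CCA to synthetic "audio" and "text" features purely as an analogy, not as the patented method:

```python
# Hedged sketch: recover shared structure between two modalities with CCA.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
shared = rng.normal(size=(200, 2))                      # hidden shared factor
audio_features = shared @ rng.normal(size=(2, 6)) + 0.1 * rng.normal(size=(200, 6))
text_features = shared @ rng.normal(size=(2, 4)) + 0.1 * rng.normal(size=(200, 4))

cca = CCA(n_components=2)
audio_c, text_c = cca.fit_transform(audio_features, text_features)

# Correlation of the paired canonical variates shows the recovered shared structure.
for k in range(2):
    r = np.corrcoef(audio_c[:, k], text_c[:, k])[0, 1]
    print(f"canonical correlation {k}: {r:.2f}")
```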
  • Publication number: 20040090453
    Abstract: The present invention provides a method of detecting segment boundaries for a series of successive frames in a video sequence. The method includes the steps of acquiring color information from each frame, determining a color histogram for each frame, and applying a boundary detection technique utilizing the color histograms. Finally, the method includes segmenting the frames of the video sequence into uniform color segments. In addition, a system is provided for detecting segment boundaries for a series of successive frames in a video sequence. The system includes means for acquiring color information from each frame, means for determining a color histogram for each frame, and means for applying a boundary detection technique utilizing the color histograms. Finally, the system includes means for segmenting the frames of the video sequence into uniform color segments.
    Type: Application
    Filed: November 13, 2002
    Publication date: May 13, 2004
    Inventors: Radu Serban Jasinschi, Nevenka Dimitrova, Lalitha Agnihotri
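A compact sketch of the histogram-based boundary detection described in 20040090453; the synthetic frames, bin count, and L1-distance threshold are assumptions:

```python
# Hedged sketch: mark a segment boundary wherever consecutive color
# histograms differ by more than a threshold.
import numpy as np

def color_histogram(frame, bins=8):
    """Per-channel histogram, normalized so frames of any size are comparable."""
    hist = [np.histogram(frame[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
    hist = np.concatenate(hist).astype(float)
    return hist / hist.sum()

def segment_boundaries(frames, threshold=0.3):
    """Boundary wherever consecutive histograms differ by more than threshold (L1 distance)."""
    hists = [color_histogram(f) for f in frames]
    return [i for i in range(1, len(hists))
            if np.abs(hists[i] - hists[i - 1]).sum() > threshold]

rng = np.random.default_rng(1)
dark = rng.integers(0, 80, size=(5, 48, 64, 3))
bright = rng.integers(150, 256, size=(5, 48, 64, 3))
print(segment_boundaries(list(dark) + list(bright)))   # expect a boundary at index 5
```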
  • Publication number: 20040085340
    Abstract: A method and apparatus for editing a previously captured source video to stabilize the images in the video. To eliminate jerky motion from a video, changes in shots are first detected. Then, any jerkiness within each shot is classified and the video is segmented further into smaller segments based on this classification. The jerkiness within the selected segments is removed. The corrected shot, comprising a plurality of frames, is then added to the preceding shot until all shots of the video have been appropriately corrected for jerkiness. To help the user identify the shots being edited, keyframes or snapshots of the shots are displayed, thereby allowing the user to decide whether processing of the shot is desired and which shots should be incorporated into the final video.
    Type: Application
    Filed: October 30, 2002
    Publication date: May 6, 2004
    Applicant: Koninklijke Philips Electronics N.V.
    Inventors: Nevenka Dimitrova, Radu S. Jasinschi
  • Patent number: 6731788
    Abstract: An image processing device and method for classifying symbols, such as text, in a video stream employs a back propagation neural network (BPNN) whose feature space is derived from size, translation, and rotation invariant shape-dependent features. Various example feature spaces are discussed such as regular and invariant moments and an angle histogram derived from a Delaunay triangulation of a thinned, thresholded, symbol. Such feature spaces provide a good match to BPNN as a classifier because of the poor resolution of characters in video streams.
    Type: Grant
    Filed: November 17, 1999
    Date of Patent: May 4, 2004
    Assignee: Koninklijke Philips Electronics N.V.
    Inventors: Lalitha Agnihotri, Nevenka Dimitrova
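Invariant moments of the kind mentioned in the abstract of 6731788 can be computed directly; the sketch below derives the first two Hu invariants of a made-up binary glyph (the Delaunay angle histogram and the BPNN classifier itself are not shown):

```python
# Hedged sketch: translation/scale/rotation-invariant moment features that a
# symbol classifier could consume.
import numpy as np

def hu_invariants(image):
    """First two Hu moment invariants of a binary (0/1) image."""
    ys, xs = np.nonzero(image)
    weights = image[ys, xs].astype(float)
    m00 = weights.sum()
    xbar, ybar = (xs * weights).sum() / m00, (ys * weights).sum() / m00

    def eta(p, q):
        mu = ((xs - xbar) ** p * (ys - ybar) ** q * weights).sum()
        return mu / m00 ** ((p + q) / 2 + 1)

    hu1 = eta(2, 0) + eta(0, 2)
    hu2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return np.array([hu1, hu2])

glyph = np.zeros((16, 16), dtype=float)
glyph[3:13, 7:9] = 1.0            # a crude vertical stroke, e.g. the letter "I"
print(hu_invariants(glyph))
print(hu_invariants(glyph.T))     # a 90-degree rotation leaves the invariants unchanged
```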
  • Patent number: 6721488
    Abstract: A system that utilizes a determined time between characteristics from content for subsequently identifying the content and/or the corresponding content review position in the time domain. The system utilizes a database of previously stored time between characteristics from known content for matching to the determined time between characteristics from the content source. The characteristics may correspond to video tape indexing data, keyframe data, audio characteristics, text occurrences, and/or other known characteristics from the content. When a match is found between the determined time between characteristics and the stored times between characteristics, the system identifies the content. If corresponding time domain data is stored in the database, the system identifies the current content review position in the time domain.
    Type: Grant
    Filed: November 30, 1999
    Date of Patent: April 13, 2004
    Assignee: Koninklijke Philips Electronics N.V.
    Inventors: Nevenka Dimitrova, Evgeniy Leyvi
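A toy version of the time-between-characteristics matching in 6721488; the stored gap sequences, the observed gaps, and the 0.5-second tolerance are all made-up values:

```python
# Hedged sketch: identify content by comparing the gaps between detected
# events against a table of known gap sequences.
REFERENCE_GAPS = {
    "Evening News 1999-11-29": [12.0, 47.5, 3.2, 88.1],
    "Sitcom Episode 104":      [5.5, 5.5, 30.2, 61.0],
}

def match_content(observed_gaps, reference, tolerance=0.5):
    """Return the stored title whose gap sequence appears inside the observed
    gaps, each gap matching within the tolerance; None if nothing matches."""
    for title, gaps in reference.items():
        for start in range(len(observed_gaps) - len(gaps) + 1):
            window = observed_gaps[start:start + len(gaps)]
            if all(abs(a - b) <= tolerance for a, b in zip(window, gaps)):
                return title, start
    return None

observed = [9.0, 12.1, 47.3, 3.0, 88.4, 15.0]
print(match_content(observed, REFERENCE_GAPS))
```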
  • Patent number: 6714594
    Abstract: The process of compressing video requires the calculation of a variety of data that are used in the process of compression. The invention exploits some or all of these data for purposes of content detection. For example, these data may be leveraged for purposes of commercial detection. The luminance, motion vector field, residual values, quantizer, bit rate, etc. may all be used, either directly or in combination, as signatures of content. A process for content detection may employ one or more features as indicators of the start and/or end of a sequence containing a particular type of content and other features as verifiers of the type of content bounded by these start/end indicators. The features may be combined and/or refined to produce higher-level feature data with good computational economy and content-classification utility.
    Type: Grant
    Filed: May 14, 2001
    Date of Patent: March 30, 2004
    Assignee: Koninklijke Philips Electronics N.V.
    Inventors: Nevenka Dimitrova, Thomas McGee, Gerhardus Engbertus Mekenkamp, Edwin Salomons
  • Patent number: 6697124
    Abstract: In a television receiver having Picture-In-Picture (PIP), a controller analyzes the content of a video signal forming a main picture, and automatically adjusts the size and position of a PIP image to correspond to regions of the main picture exhibiting the least amount of motion, texture, and/or a repeating texture. The controller also prevents the PIP image from being positioned over text or faces or other important objects in the main picture.
    Type: Grant
    Filed: March 30, 2001
    Date of Patent: February 24, 2004
    Assignee: Koninklijke Philips Electronics N.V.
    Inventors: Nevenka Dimitrova, Angel Janevski
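The idea in 6697124 of placing the PIP window over the calmest region of the main picture can be sketched as a block-scoring loop; the grid, the motion/texture weighting, and the synthetic frames are assumptions, and no text or face detection is modeled:

```python
# Hedged sketch: score candidate blocks by motion (frame difference) plus
# texture (local variance) and place the PIP in the lowest-scoring block.
import numpy as np

def best_pip_block(prev_frame, frame, grid=(3, 4), texture_weight=0.5):
    h, w = frame.shape
    bh, bw = h // grid[0], w // grid[1]
    best, best_score = None, np.inf
    for r in range(grid[0]):
        for c in range(grid[1]):
            block = frame[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            prev = prev_frame[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            motion = np.abs(block - prev).mean()
            texture = block.var()
            if motion + texture_weight * texture < best_score:
                best, best_score = (r, c), motion + texture_weight * texture
    return best

rng = np.random.default_rng(2)
prev = rng.random((90, 160))
curr = prev.copy()
curr[:30, :40] += rng.random((30, 40))        # busy top-left corner
print(best_pip_block(prev, curr))             # expect a block away from (0, 0)
```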
  • Patent number: 6697123
    Abstract: In a television receiver having Picture-In-Picture (PIP), a controller analyzes the content of an auxiliary video signal forming a PIP image, and automatically adjusts the shape and transparency of the PIP image in accordance with the content of the auxiliary video signal.
    Type: Grant
    Filed: March 30, 2001
    Date of Patent: February 24, 2004
    Assignee: Koninklijke Philips Electronics N.V.
    Inventors: Angel Janevski, Nevenka Dimitrova
  • Publication number: 20040024780
    Abstract: The present invention provides a method, system and program product for generating a content-based table of contents for a program. Specifically, under the present invention the genre of a program having sequences is determined. Once the genre has been determined, each sequence is assigned a classification. The classifications are assigned based on video content, audio content and textual content within the sequences. Based on the genre and the classifications, keyframe(s) are selected from the sequences for use in a content-based table of contents.
    Type: Application
    Filed: August 1, 2002
    Publication date: February 5, 2004
    Applicant: Koninklijke Philips Electronics N.V.
    Inventors: Lalitha Agnihotri, Nevenka Dimitrova, Srinivas Gutta, Dongge Li
  • Publication number: 20040010480
    Abstract: A method for operating a neural network, and a program and apparatus that operate in accordance with the method. The method comprises the steps of applying data indicative of predetermined content, derived from an electronic signal including a representation of the predetermined content, to an input of at least one neural network, to cause the at least one network to generate at least one output indicative of either a detection or a non-detection of the predetermined content. Each neural network has an architecture specified by at least one corresponding parameter. The method also comprises a step of evolving the at least one parameter to modify the architecture of the at least one neural network, based on the at least one output, to increase an accuracy at which that at least one neural network detects the predetermined content indicated by the data.
    Type: Application
    Filed: July 9, 2002
    Publication date: January 15, 2004
    Inventors: Lalitha Agnihotri, James David Schaffer, Nevenka Dimitrova, Thomas McGee, Sylvie Jeannin
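The application above describes evolving architecture parameters to improve detection accuracy; the sketch below evolves a single parameter (hidden-layer width) with a simple select-and-mutate loop on a toy dataset. The dataset, population size, and mutation step are arbitrary assumptions, not the patent's procedure:

```python
# Hedged sketch: evolutionary search over one neural-network architecture
# parameter, scored by held-out detection accuracy.
import random
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def fitness(hidden_units):
    net = MLPClassifier(hidden_layer_sizes=(hidden_units,), max_iter=300, random_state=0)
    net.fit(X_tr, y_tr)
    return net.score(X_te, y_te)

random.seed(0)
population = [random.randint(2, 64) for _ in range(6)]
for generation in range(5):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:3]                                             # keep the fittest architectures
    children = [max(2, p + random.randint(-8, 8)) for p in parents]  # mutate them
    population = parents + children

best = max(population, key=fitness)
print("best hidden-layer width:", best, "accuracy:", round(fitness(best), 3))
```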
  • Publication number: 20030236762
    Abstract: A content maintenance system uses a time-dependent precipitation function for iteratively augmenting or removing content over time, after an initial demonstration of user interest. A plurality of parallel precipitation processes can be launched simultaneously in response to different facets of a user expression of interest. Precipitation is dependent on highlighting or extracting segment descriptors from content of interest to the user. Then segments are filtered, rated, annotated and/or prioritized from that content. The remaining segments are matched against stored search structures. When the segments match, they are precipitated out for storage and can generate new search structures.
    Type: Application
    Filed: June 21, 2002
    Publication date: December 25, 2003
    Applicant: Koninklijke Philips Electronics N.V.
    Inventors: Angel Janevski, Nevenka Dimitrova, Lalitha Agnihotri
  • Publication number: 20030236663
    Abstract: A memory storing computer readable instructions for causing a processor associated with a mega speaker identification (ID) system to instantiate functions including an audio segmentation and classification function receiving general audio data (GAD) and generating segments, a feature extraction function receiving the segments and extracting features based on mel-frequency cepstral coefficients (MFCC) therefrom, a learning and clustering function receiving the extracted features and reclassifying segments, when required, based on the extracted features, a matching and labeling function assigning a speaker ID to speech signals within the GAD, and a database function for correlating the assigned speaker ID to the respective speech signals within the GAD. The audio segmentation and classification function can assign each segment to one of N audio signal classes including silence, single speaker speech, music, environmental noise, multiple speakers' speech, simultaneous speech and music, and speech and noise.
    Type: Application
    Filed: June 19, 2002
    Publication date: December 25, 2003
    Applicant: KONINKLIJKE PHILIPS ELECTRONICS N.V.
    Inventors: Nevenka Dimitrova, Dongge Li
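The segment-to-MFCC-to-clustering chain in 20030236663 can be illustrated with synthetic audio: two pure tones stand in for two "speakers", librosa supplies the MFCCs, and k-means stands in for the learning-and-clustering stage. All parameter choices below are assumptions:

```python
# Hedged sketch: extract MFCC features per frame and cluster them into
# per-frame "speaker" labels.
import numpy as np
import librosa
from sklearn.cluster import KMeans

sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
speaker_a = 0.5 * np.sin(2 * np.pi * 120 * t)        # low-pitched "speaker"
speaker_b = 0.5 * np.sin(2 * np.pi * 240 * t)        # higher-pitched "speaker"
audio = np.concatenate([speaker_a, speaker_b, speaker_a])

# One MFCC vector per analysis frame, transposed to (frames, coefficients).
mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13).T

# Cluster frames and read off a per-frame label.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(mfcc)
print("label of first, middle, last frame:", labels[0], labels[len(labels) // 2], labels[-1])
```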