Patents by Inventor John Adcock

John Adcock has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 7154526
    Abstract: A system in accordance with one embodiment of the present invention comprises a device for facilitating video communication between a remote participant and another location. The device can comprise a screen adapted to display the remote participant, the screen having a posture adapted to be controlled by the remote participant. A camera can be mounted adjacent to the screen, and can allow the subject to view a selected conference participant or a desired location such that when the camera is trained on the selected participant or desired location a gaze of the remote participant displayed by the screen appears substantially directed at the selected participant or desired location.
    Type: Grant
    Filed: July 11, 2003
    Date of Patent: December 26, 2006
    Assignee: Fuji Xerox Co., Ltd.
    Inventors: Jonathan T. Foote, John Adcock, Qiong Liu, Timothy E. Black
  • Publication number: 20060245616
    Abstract: A method of classifying images as slide images or non-slide images captures a video stream, analyzes an image of the video stream, and determines whether the image is a slide image. The analysis can be based on text height and/or edge regions of the image.
    Type: Application
    Filed: April 28, 2005
    Publication date: November 2, 2006
    Inventors: Laurent Denoue, Matthew Cooper, David Hilbert, John Adcock, Daniel Billsus
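    Illustrative sketch: one way to turn the cues in this entry's abstract (text height, edge regions) into a heuristic, using only numpy. The thresholds and the crude gradient-based edge detector are assumptions, not the patented method.
      import numpy as np

      def looks_like_slide(gray, edge_thresh=40.0, border_frac=0.08):
          # gray: 2-D uint8 frame. Slides tend to have quiet border regions and
          # text-height-sized bands of edges in the interior.
          gy, gx = np.gradient(gray.astype(float))
          edges = np.hypot(gx, gy) > edge_thresh
          h, w = edges.shape
          bh, bw = int(h * border_frac), int(w * border_frac)

          border_density = np.mean(np.concatenate([
              edges[:bh].ravel(), edges[-bh:].ravel(),
              edges[:, :bw].ravel(), edges[:, -bw:].ravel()]))

          # Runs of edge-rich rows approximate text-line heights.
          active = edges[bh:h - bh, bw:w - bw].mean(axis=1) > 0.05
          runs, run = [], 0
          for a in active:
              if a:
                  run += 1
              elif run:
                  runs.append(run)
                  run = 0
          if run:
              runs.append(run)
          text_height = np.median(runs) if runs else 0.0

          return border_density < 0.02 and 8 <= text_height <= 0.2 * h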
  • Publication number: 20060106767
    Abstract: A system and method for identifying query-related keywords in documents found in a search using latent semantic analysis. The documents are represented as a document term matrix M containing one or more document term-weight vectors d, which may be term-frequency (tf) vectors or term-frequency inverse-document-frequency (tf-idf) vectors. This matrix is subjected to a truncated singular value decomposition. The resulting transform matrix U can be used to project a query term-weight vector q into the reduced N-dimensional space, followed by its expansion back into the full vector space using the inverse of U. To perform a search, the similarity of the expanded query vector q_expanded is measured relative to each candidate document vector in this space. Exemplary similarity functions are the dot product and cosine similarity. Keywords are selected as the terms with the highest values in q_expanded that also appear in at least one document. Matching keywords from the query may be highlighted in the search results.
    Type: Application
    Filed: November 12, 2004
    Publication date: May 18, 2006
    Applicant: Fuji Xerox Co., Ltd.
    Inventors: John Adcock, Matthew Cooper, Andreas Girgensohn, Lynn Wilcox
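    Illustrative sketch: a minimal numpy version of the projection-and-expansion step described in this entry's abstract, assuming M is a dense term-by-document tf-idf matrix and q is a query term-weight vector over the same vocabulary (the names, dimensions, and dense SVD are illustrative, not the patented implementation).
      import numpy as np

      def expand_query(M, q, n_dims=100):
          # Truncated SVD of the term-by-document matrix; U is the transform matrix.
          U, s, Vt = np.linalg.svd(M, full_matrices=False)
          U_n = U[:, :n_dims]
          q_reduced = U_n.T @ q        # project the query into the reduced space
          return U_n @ q_reduced       # expand back into the full term space

      def rank_documents(M, q, n_dims=100):
          # Rank candidate documents by cosine similarity to the expanded query.
          q_exp = expand_query(M, q, n_dims)
          sims = (M.T @ q_exp) / (np.linalg.norm(M, axis=0) * np.linalg.norm(q_exp) + 1e-12)
          return np.argsort(sims)[::-1]

      def keyword_scores(M, q, n_dims=100):
          # Highest-valued terms in the expanded query that occur in some document.
          q_exp = expand_query(M, q, n_dims)
          in_some_doc = (M > 0).any(axis=1)
          return np.where(in_some_doc, q_exp, -np.inf)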
  • Publication number: 20060106764
    Abstract: The invention displays video search results in a form that makes it easy for users to determine which results are truly relevant. Each story returned as a search result is visualized as a collage of keyframes from the story's shots. The selected keyframes and their sizes depend on the corresponding shots' respective relevance. Shot relevance depends on the search retrieval score of the shot and, in some embodiments, also depends on the search retrieval score of the shot's parent story. Once areas have been determined, the keyframes are scaled and/or cropped to fit into their areas. In one embodiment, users can mark one or more shots as being relevant to the search. In one embodiment, a timeline is created and displayed with one or more neighbor stories that are each part of the video and which are closest in time of creation to the selected story.
    Type: Application
    Filed: November 12, 2004
    Publication date: May 18, 2006
    Applicant: Fuji Xerox Co., Ltd.
    Inventors: Andreas Girgensohn, John Adcock, Lynn Wilcox
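    Illustrative sketch: the area-allocation rule implied by this entry's abstract (keyframe area proportional to shot relevance, optionally weighted by the parent story's score). Layout and cropping are omitted, and all numbers are placeholders.
      def keyframe_areas(shot_scores, story_score=None, total_area=640 * 480, max_frames=6):
          scores = sorted(shot_scores, reverse=True)[:max_frames]
          if story_score is not None:
              scores = [s * story_score for s in scores]   # fold in the parent story's score
          total = sum(scores) or 1.0
          return [total_area * s / total for s in scores]  # pixel area per keyframe

      # e.g. keyframe_areas([0.9, 0.5, 0.1], story_score=0.8) -> three areas summing to 640*480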
  • Publication number: 20060090123
    Abstract: Embodiments of the present invention enable the extraction, classification, storage, and supplementation of presentation video. A media system receives a video signal carrying presentation video. The media system processes the video signal and generates images for slides of the presentation. The media system then extracts text from the images and uses the text and other characteristics to classify the images and store them in a database. Additionally, the system enables viewers of the presentation to provide feedback on the presentation, which can be used to supplement the presentation.
    Type: Application
    Filed: October 26, 2004
    Publication date: April 27, 2006
    Applicant: Fuji Xerox Co., Ltd.
    Inventors: Laurent Denoue, Jonathan Trevor, David Hilbert, John Adcock
  • Publication number: 20060090134
    Abstract: Embodiments of the present invention include a video server that can detect and track the image of a pointing indicator in an input video stream representation of a computer display. The video server checks ordered frames of the video signal and determines movements of a pointing indicator such as a mouse arrow. Certain motions by the pointing indicator, such as lingering over or circling a button or menu item, can provoke a control action on the server.
    Type: Application
    Filed: October 26, 2004
    Publication date: April 27, 2006
    Applicant: Fuji Xerox Co., Ltd.
    Inventors: John Adcock, Laurent Denoue, Jonathan Trevor, David Hilbert
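    Illustrative sketch: a dwell ("linger") test over tracked pointer positions. The frame rate, radius, and dwell time are placeholders, and the per-frame step of actually locating the mouse arrow in the video is assumed to happen elsewhere.
      import math

      def is_lingering(points, fps=15, radius_px=12, dwell_sec=1.5):
          # points: (x, y) pointer positions from consecutive frames, newest last.
          needed = int(fps * dwell_sec)
          if len(points) < needed:
              return False
          recent = points[-needed:]
          cx = sum(x for x, _ in recent) / needed
          cy = sum(y for _, y in recent) / needed
          # Lingering: every recent position stays within a small radius of the mean.
          return all(math.hypot(x - cx, y - cy) <= radius_px for x, y in recent)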
  • Publication number: 20050249360
    Abstract: Systems and methods determine the location of a microphone with an unknown location, given the location of a number of other microphones, by determining a difference in arrival time between a first audio signal generated by a microphone with a known location and a second audio signal generated by another microphone with an unknown location, wherein the first and second audio signals are a representation of a substantially same sound emitted from an acoustic source with a known location; determining, based on at least the determined difference in arrival time, a distance between the acoustic source with the known location and the microphone with the unknown location; and determining, based on the determined distance between the acoustic source with the known location and the microphone with the unknown location, the location of the unknown microphone.
    Type: Application
    Filed: May 7, 2004
    Publication date: November 10, 2005
    Applicant: Fuji Xerox Co., Ltd.
    Inventors: John Adcock, Jonathan Foote
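    Illustrative sketch: the geometry described in this entry's abstract, in numpy. Here delta_t is the unknown microphone's arrival time minus the known microphone's arrival time, and the multilateration step assumes at least dim+1 acoustic sources with known positions; the names and the linearized least-squares solve are illustrative.
      import numpy as np

      SPEED_OF_SOUND = 343.0  # m/s at roughly room temperature

      def source_to_unknown_mic_distance(source, known_mic, delta_t):
          # Distance from the source to the known mic, plus the extra path implied by delta_t.
          return np.linalg.norm(np.asarray(source) - np.asarray(known_mic)) + SPEED_OF_SOUND * delta_t

      def locate_unknown_mic(sources, distances):
          # Linearized least-squares fit of the unknown mic position from known
          # source positions and their estimated distances to the unknown mic.
          P = np.asarray(sources, dtype=float)
          d = np.asarray(distances, dtype=float)
          A = 2.0 * (P[1:] - P[0])
          b = d[0]**2 - d[1:]**2 + np.sum(P[1:]**2, axis=1) - np.sum(P[0]**2)
          pos, *_ = np.linalg.lstsq(A, b, rcond=None)
          return pos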
  • Publication number: 20050105806
    Abstract: In one aspect, the present invention is directed to a method and an apparatus for organizing digital media, particularly digital photos, using face recognition. According to a first aspect of the present invention, a computer-based method for organizing digital photos comprises: extracting objects of interest from a plurality of photographs; cropping said plurality of photographs to generate images of isolated objects of interest; applying a recognition algorithm to determine the similarity of isolated objects of interest with a reference; displaying a plurality of objects arranged as a function of the determined similarity; and receiving user input to associate said objects with a particular classification.
    Type: Application
    Filed: December 15, 2003
    Publication date: May 19, 2005
    Inventors: Yasuhiko Nagaoka, Sugiharto Widjaja, Yuwen Wu, Jeffery Sunzeri, John Adcock, Andreas Girgensohn, Lynn Wilcox
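    Illustrative sketch: the arrange-by-similarity step from this entry's abstract. The detect_faces and embed callables stand in for whatever face detector and recognizer the system plugs in; they are assumptions, not part of the disclosure.
      import numpy as np

      def rank_crops_by_similarity(photos, reference_embedding, detect_faces, embed):
          ranked = []
          for photo_id, image in photos:
              for crop in detect_faces(image):              # isolated objects of interest
                  e = embed(crop)
                  sim = float(e @ reference_embedding /
                              (np.linalg.norm(e) * np.linalg.norm(reference_embedding) + 1e-12))
                  ranked.append((sim, photo_id, crop))
          ranked.sort(key=lambda t: t[0], reverse=True)     # most similar first, for display and labeling
          return ranked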
  • Publication number: 20050007445
    Abstract: A system in accordance with one embodiment of the present invention comprises a device for facilitating video communication between a remote participant and another location. The device can comprise a screen adapted to display the remote participant, the screen having a posture adapted to be controlled by the remote participant. A camera can be mounted adjacent to the screen, and can allow the subject to view a selected conference participant or a desired location such that when the camera is trained on the selected participant or desired location a gaze of the remote participant displayed by the screen appears substantially directed at the selected participant or desired location.
    Type: Application
    Filed: July 11, 2003
    Publication date: January 13, 2005
    Inventors: Jonathan Foote, John Adcock, Qiong Liu, Timothy Black
  • Publication number: 20050002535
    Abstract: An audio device management system (ADMS) manages remote audio devices via user selections in video links. The system enhances audio acquisition quality by receiving and processing human suggestions, forming customized two-way audio links according to user requests, and learning audio pickup strategies and camera management strategies from user operations. The ADMS control interface for a remote user provides a multi-window GUI that provides an overview window and selection display window. The ADMS provides users with more flexibility to enhance audio signals according to their needs and makes it more convenient to form customized two-way audio links without requiring users to remember a list of phone numbers. The ADMS also automatically manages available microphones for audio pickup based on microphone sound quality and the system's past experience when users monitor a structured audio environment without explicitly expressing their attention in the video window.
    Type: Application
    Filed: July 2, 2003
    Publication date: January 6, 2005
    Inventors: Qiong Liu, Donald Kimber, Jonathan Foote, Chunyuan Liao, John Adcock
  • Publication number: 20040004659
    Abstract: A system is provided for detecting an intersection between more than one panoramic video sequence and detecting the orientation of the sequences forming the intersection. Video images and corresponding location data are received. If required, the images and location data are processed to ensure the images contain location data. An intersection between two paths is then derived from the video images by deriving a rough intersection between two images, determining a neighborhood for the two images, and dividing each image in the neighborhood into strips. An identifying value is derived from each strip to create a row of strip values, which are then converted to the frequency domain. A distance measure is taken between strips in the frequency domain, and the intersection is determined from the images having the smallest distance measure between them.
    Type: Application
    Filed: July 2, 2002
    Publication date: January 8, 2004
    Inventors: Jonathan T. Foote, Donald Kimber, Xinding Sun, John Adcock
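    Illustrative sketch: the strip-signature comparison described in this entry's abstract. The strip count and the per-strip "identifying value" (mean intensity here) are assumptions; the magnitude spectrum is used because it is unchanged by a cyclic shift of the strips, i.e. by a rotation of the panorama.
      import numpy as np

      def strip_signature(frame, n_strips=32):
          # frame: 2-D (grayscale) panoramic image; one value per vertical strip.
          h, w = frame.shape[:2]
          values = [frame[:, i * w // n_strips:(i + 1) * w // n_strips].mean()
                    for i in range(n_strips)]
          return np.abs(np.fft.rfft(values))

      def closest_frame_pair(frames_a, frames_b):
          # Candidate intersection: the pair of frames with the smallest distance
          # between their frequency-domain strip signatures.
          sigs_a = [strip_signature(f) for f in frames_a]
          sigs_b = [strip_signature(f) for f in frames_b]
          return min(((np.linalg.norm(sa - sb), i, j)
                      for i, sa in enumerate(sigs_a)
                      for j, sb in enumerate(sigs_b)),
                     key=lambda t: t[0])[1:]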