Patents by Inventor Serge Ayer

Serge Ayer has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9959644
    Abstract: A computerized method for annotating at least one feature of an image of a view, includes the steps of obtaining the image with an image sensor of a portable device, and retrieving at least one condition. Based on the at least one condition, the method automatically selects a feature identification method among a plurality of feature identification methods. It then applies the feature identification method for identifying the at least one feature, and annotates some of the identified features.
    Type: Grant
    Filed: November 17, 2011
    Date of Patent: May 1, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Mathieu Monney, Serge Ayer, Martin Vetterli
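A minimal Python sketch of the kind of condition-driven selection the abstract above describes: several candidate feature identification methods, with one chosen according to retrieved conditions. The condition names, thresholds, and placeholder methods are assumptions for illustration, not taken from the patent.

```python
# Illustrative sketch: select a feature identification method from several
# candidates based on retrieved conditions (names and thresholds are assumed).
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Conditions:
    gps_available: bool      # assumed condition: device can report its position
    network_quality: float   # assumed condition: 0.0 (offline) .. 1.0 (fast link)
    scene_distance_m: float  # assumed condition: rough distance to the scene

def identify_by_geolocation(image) -> List[str]:
    return ["mountain_peak"]          # placeholder result

def identify_by_local_matching(image) -> List[str]:
    return ["poster", "logo"]         # placeholder result

def identify_by_remote_matching(image) -> List[str]:
    return ["painting_42"]            # placeholder result

def select_identification_method(c: Conditions) -> Callable:
    """Pick one feature identification method based on the conditions."""
    if c.gps_available and c.scene_distance_m > 100:
        return identify_by_geolocation      # distant landmarks: use position data
    if c.network_quality > 0.5:
        return identify_by_remote_matching  # good link: defer to a server-side matcher
    return identify_by_local_matching       # fall back to on-device matching

def annotate(image, conditions: Conditions) -> List[str]:
    method = select_identification_method(conditions)
    return [f"annotation for {feature}" for feature in method(image)]

if __name__ == "__main__":
    conds = Conditions(gps_available=True, network_quality=0.2, scene_distance_m=500.0)
    print(annotate(object(), conds))
```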
  • Patent number: 9237263
    Abstract: The present invention relates to an annotating method including the steps of capturing (100) data representing a light field with a plenoptic image capture device (4); matching (101) the captured data with corresponding reference data; retrieving an annotation associated with an element of the reference data (102); and rendering (103) a view generated from the captured data and including at least one annotation.
    Type: Grant
    Filed: October 5, 2012
    Date of Patent: January 12, 2016
    Assignee: Vidinoti SA
    Inventors: Laurent Rime, Mathieu Monney, Serge Ayer, Martin Vetterli
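A minimal sketch, under assumptions, of the matching-and-rendering flow in the abstract above: captured light-field data is reduced to a descriptor, matched against reference entries, and the matched entry's annotation is attached to a rendered view. The descriptor, distance threshold, and reference entries are illustrative, not from the patent.

```python
# Illustrative sketch: match captured light-field data against reference data
# and render a view carrying the associated annotation.
from typing import Any, Dict, List, Optional

REFERENCE_DB: List[Dict[str, Any]] = [
    {"descriptor": [0.1, 0.9, 0.3], "annotation": "Eiffel Tower, 324 m"},
    {"descriptor": [0.7, 0.2, 0.5], "annotation": "Matterhorn, 4478 m"},
]

def describe(light_field: List[float]) -> List[float]:
    # Stand-in for a real light-field descriptor (e.g. features of refocused views).
    return light_field[:3]

def match(captured: List[float]) -> Optional[Dict[str, Any]]:
    desc = describe(captured)
    def dist(ref): return sum((a - b) ** 2 for a, b in zip(desc, ref["descriptor"]))
    best = min(REFERENCE_DB, key=dist)
    return best if dist(best) < 0.5 else None   # assumed acceptance threshold

def render_annotated_view(captured: List[float]) -> str:
    ref = match(captured)
    view = "rendered view from captured light field"   # placeholder rendering step
    return f"{view} [{ref['annotation']}]" if ref else view

print(render_annotated_view([0.12, 0.88, 0.31, 0.4]))
```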
  • Patent number: 9094616
    Abstract: A method for capturing and processing an image, comprising: capturing a captured image with an image sensor; retrieving steganographic marks hidden in said captured image; processing said captured image based on said hidden marks, so as to generate a processed image; matching the processed image with a reference image from a set of reference images and superimposing elements on the captured image, depending on said reference image. A user device, comprising an image sensor suitable for capturing at least one image and a processor configured to retrieve hidden marks in said captured image and to generate a processed image based on said hidden mark retrieval and on annotations processed remotely; the user device is part of a system with a remote server that matches the image.
    Type: Grant
    Filed: October 16, 2012
    Date of Patent: July 28, 2015
    Assignee: Vidinoti SA
    Inventors: Laurent Rime, Mathieu Monney, Serge Ayer
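The following sketch illustrates the capture/retrieve/process/match/superimpose chain described above, assuming, purely for illustration, that the steganographic mark is a least-significant-bit payload and that the mark indexes the reference to superimpose; the patent does not prescribe either choice.

```python
# Illustrative sketch: extract an assumed LSB steganographic mark, process the
# image according to it, and superimpose an element from a matched reference.
import numpy as np

def extract_lsb_mark(image: np.ndarray, n_bits: int = 16) -> int:
    """Read n_bits hidden in the LSBs of the first pixels (assumed mark format)."""
    bits = image.reshape(-1)[:n_bits] & 1
    return int("".join(map(str, bits)), 2)

def process_with_mark(image: np.ndarray, mark: int) -> np.ndarray:
    # Example processing driven by the mark, e.g. cropping to a region it designates.
    return image[: 32 + (mark % 32), : 32 + (mark % 32)]

def match_and_superimpose(processed: np.ndarray, references: dict, mark: int) -> np.ndarray:
    overlay = references.get(mark)          # reference element selected by the mark
    out = processed.copy()
    if overlay is not None:
        h, w = overlay.shape[:2]
        out[:h, :w] = np.maximum(out[:h, :w], overlay)   # naive superimposition
    return out

if __name__ == "__main__":
    img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    mark = extract_lsb_mark(img)
    refs = {mark: np.full((8, 8), 255, dtype=np.uint8)}
    result = match_and_superimpose(process_with_mark(img, mark), refs, mark)
    print(result.shape, mark)
```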
  • Patent number: 8848058
    Abstract: A method is disclosed for analyzing with a computer (1) the motion of an athlete (3), of a team or a patient during an activity, said method comprising the steps of defining a number of unevenly time-spaced key positions in said motion, said key positions being of particular interest for analyzing the correct execution of said motion by said athlete (3) or team. A video sequence (11) of said motion is acquired with a camera (2) and still pictures (12) are extracted from said video sequence (11). Templates can trigger the automatic extraction of still pictures (12). For extraction purposes, metadata recorded with a sensor (5) at the same time as the video sequence (11) can be used. Said still pictures (12) correspond to said previously defined key positions. Thereafter said extracted still pictures (12) are displayed simultaneously on a same display (10).
    Type: Grant
    Filed: December 21, 2007
    Date of Patent: September 30, 2014
    Assignee: Dartfish SA
    Inventors: Serge Ayer, Pascal Binggeli, Thomas Bebie, Michael Frossard, Philippe Schroeter, Emmanuel Reusens
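A minimal sketch of the extraction-and-display idea above: still pictures are pulled from a video at unevenly spaced key times (as might be flagged by time-stamped sensor metadata or templates) and laid out side by side. The data layout and frame-selection rule are assumptions, not the patented workflow.

```python
# Illustrative sketch: extract still frames at unevenly spaced key times and
# show them simultaneously.
from typing import List, Sequence

def extract_key_stills(frames: Sequence[str], frame_rate: float,
                       key_times_s: List[float]) -> List[str]:
    """Return the frame closest to each (unevenly spaced) key time."""
    stills = []
    for t in key_times_s:
        idx = min(int(round(t * frame_rate)), len(frames) - 1)
        stills.append(frames[idx])
    return stills

def display_side_by_side(stills: List[str]) -> str:
    # Placeholder for rendering the stills simultaneously on one display.
    return " | ".join(stills)

if __name__ == "__main__":
    video = [f"frame_{i:04d}" for i in range(300)]   # 10 s of video at 30 fps
    key_times = [0.4, 1.7, 2.1, 6.8]                 # e.g. take-off, apex, landing
    print(display_side_by_side(extract_key_stills(video, 30.0, key_times)))
```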
  • Publication number: 20140181630
    Abstract: A method comprising the steps of: retrieving data (100) representing a light field with a plenoptic capture device (4); executing program code for matching the retrieved data with corresponding reference data (101); executing program code for retrieving at least one annotation (61, 63, 64) in a plenoptic format associated with an element of said reference data (102); executing program code for generating annotated data in a plenoptic format from said retrieved data and said annotation (103).
    Type: Application
    Filed: December 21, 2012
    Publication date: June 26, 2014
    Applicant: Vidinoti SA
    Inventors: Mathieu MONNEY, Laurent Rime, Serge Ayer
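A short sketch, under assumptions, of what "annotated data in a plenoptic format" could look like in code: both the captured data and the annotation are held as 4D sub-aperture arrays and blended view by view. The array layout and blending rule are illustrative only.

```python
# Illustrative sketch: blend an annotation held in a plenoptic (4D) layout into
# captured light-field data, per sub-aperture view.
import numpy as np

def annotate_plenoptic(captured: np.ndarray, annotation: np.ndarray,
                       alpha: float = 0.6) -> np.ndarray:
    """Blend an annotation light field into the captured light field, view by view."""
    assert captured.shape == annotation.shape, "both light fields share one sampling grid"
    mask = annotation > 0                  # annotation pixels present in each view
    out = captured.astype(np.float32)
    out[mask] = (1 - alpha) * out[mask] + alpha * annotation[mask]
    return out.astype(captured.dtype)

if __name__ == "__main__":
    lf = np.random.randint(0, 256, (5, 5, 32, 32), dtype=np.uint8)  # captured light field
    ann = np.zeros_like(lf); ann[:, :, 10:14, 10:20] = 255          # annotation light field
    print(annotate_plenoptic(lf, ann).shape)
```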
  • Publication number: 20140104441
    Abstract: A method for capturing and processing an image, comprising: capturing a captured image with an image sensor; retrieving steganographic marks hidden in said captured image; processing said captured image based on said hidden marks, so as to generate a processed image; matching the processed image with a reference image from a set of reference images and superimposing elements on the captured image, depending on said reference image. A user device, comprising an image sensor suitable for capturing at least one image and a processor configured to retrieve hidden marks in said captured image and to generate a processed image based on said hidden mark retrieval and on annotations processed remotely; the user device is part of a system with a remote server that matches the image.
    Type: Application
    Filed: October 16, 2012
    Publication date: April 17, 2014
    Applicant: VIDINOTI SA
    Inventors: Laurent Rime, Mathieu Monney, Serge Ayer
  • Publication number: 20140098191
    Abstract: The present invention relates to an annotating method comprising the steps of: capturing (100) data representing a light field with a plenoptic image capture device (4); matching (101) the captured data with corresponding reference data; retrieving an annotation associated with an element of said reference data (102); and rendering (103) a view generated from said captured data and including at least one annotation.
    Type: Application
    Filed: October 5, 2012
    Publication date: April 10, 2014
    Applicant: VIDINOTI SA
    Inventors: Laurent RIME, Mathieu MONNEY, Serge AYER, Martin VETTERLI
  • Patent number: 8675021
    Abstract: Given two video sequences, a composite video sequence can be generated (15) which includes visual elements from each of the given sequences, suitably synchronized (11) and represented in a chosen focal plane. A composite video sequence can be made also by similarly combining a video sequence with an audio sequence. In the composite video sequence, contestants, action figures or objects can be shown against a common background (12) even if the given video sequences differ as to background, with the common background taken from one or the other of the given sequences, for example. Alternatively, a different suitable background can be used, e.g. as derived from the given video sequences, as obtained from another video sequence or image, or as otherwise synthesized.
    Type: Grant
    Filed: September 9, 2009
    Date of Patent: March 18, 2014
    Assignee: Dartfish SA
    Inventors: Emmanuel Reusens, Martin Vetterli, Serge Ayer, Victor Bergonzoli
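A minimal sketch of the compositing idea above, under assumptions: two clips are synchronized by a known frame offset and the second clip's moving foreground is pasted over the first, so both subjects appear against the first clip's background. The background-subtraction foreground mask and the fixed offset are stand-ins for the patent's synchronization and segmentation steps.

```python
# Illustrative sketch: composite two synchronized clips against a common background.
import numpy as np

def foreground_mask(frame: np.ndarray, background: np.ndarray, thresh: int = 30) -> np.ndarray:
    return np.abs(frame.astype(int) - background.astype(int)) > thresh

def composite(seq_a: np.ndarray, seq_b: np.ndarray, bg_b: np.ndarray,
              offset: int = 0) -> np.ndarray:
    """Overlay seq_b's foreground onto seq_a, with seq_b delayed by `offset` frames."""
    out = seq_a.copy()
    for t in range(len(out)):
        tb = t - offset
        if 0 <= tb < len(seq_b):
            mask = foreground_mask(seq_b[tb], bg_b)
            out[t][mask] = seq_b[tb][mask]
    return out

if __name__ == "__main__":
    a = np.random.randint(0, 256, (60, 48, 64), dtype=np.uint8)  # clip A, 60 frames
    b = np.random.randint(0, 256, (60, 48, 64), dtype=np.uint8)  # clip B, 60 frames
    bg = np.zeros((48, 64), dtype=np.uint8)                      # clip B's static background
    print(composite(a, b, bg, offset=5).shape)
```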
  • Patent number: 8675016
    Abstract: In a view, e.g. of scenery, of a shopping or museum display, or of a meeting or conference, automated processing can be used to annotate objects which are visible from a viewer position. Annotation can be of objects selected by the viewer, and can be displayed visually, for example, with or without an image of the view.
    Type: Grant
    Filed: March 28, 2012
    Date of Patent: March 18, 2014
    Assignee: Ecole Polytechnique Federale de Lausanne (EPFL)
    Inventors: Martin Vetterli, Serge Ayer
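A minimal sketch of annotating objects visible from a viewer position, as described above: catalogued objects are kept with map coordinates, and those inside the viewer's field of view and range get their labels returned. The object catalogue, field of view, and range are assumptions; occlusion handling is omitted.

```python
# Illustrative sketch: return annotations for objects visible from a viewer position.
import math
from typing import List, Tuple

OBJECTS = [  # (name, x, y) in a local map frame; illustrative entries
    ("fountain", 120.0, 40.0),
    ("museum entrance", -30.0, 200.0),
    ("clock tower", 300.0, 310.0),
]

def visible_annotations(viewer_xy: Tuple[float, float], heading_deg: float,
                        fov_deg: float = 60.0, max_range: float = 500.0) -> List[str]:
    vx, vy = viewer_xy
    labels = []
    for name, ox, oy in OBJECTS:
        dx, dy = ox - vx, oy - vy
        dist = math.hypot(dx, dy)
        bearing = math.degrees(math.atan2(dy, dx))
        off_axis = (bearing - heading_deg + 180) % 360 - 180   # signed angle to view axis
        if dist <= max_range and abs(off_axis) <= fov_deg / 2:
            labels.append(f"{name} ({dist:.0f} m)")
    return labels

print(visible_annotations((0.0, 0.0), heading_deg=45.0))
```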
  • Publication number: 20130311868
    Abstract: A computerized method for annotating at least one feature of an image of a view, includes the steps of obtaining the image with an image sensor of a portable device, and retrieving at least one condition. Based on the at least one condition, the method automatically selects a feature identification method among a plurality of features identification methods. It then applies the feature identification method for identifying the at least one feature, and annotates some of the identified features.
    Type: Application
    Filed: November 17, 2011
    Publication date: November 21, 2013
    Applicant: EPFL-TTO QUARTIER DE L'INNOVATION-J
    Inventors: Mathieu Monney, Serge Ayer, Martin Vetterli
  • Patent number: 8432414
    Abstract: In a view, e.g. of scenery, of a shopping or museum display, or of a meeting or conference, automated processing can be used to annotate objects which are visible from a viewer position. Annotation can be of objects selected by the viewer, and can be displayed visually, for example, with or without an image of the view.
    Type: Grant
    Filed: March 26, 2001
    Date of Patent: April 30, 2013
    Assignee: Ecole Polytechnique Federale de Lausanne
    Inventors: Martin Vetterli, Serge Ayer
  • Publication number: 20120212507
    Abstract: In a view, e.g. of scenery, of a shopping or museum display, or of a meeting or conference, automated processing can be used to annotate objects which are visible from a viewer position. Annotation can be of objects selected by the viewer, and can be displayed visually, for example, with or without an image of the view.
    Type: Application
    Filed: March 28, 2012
    Publication date: August 23, 2012
    Inventors: Martin Vetterli, Serge Ayer
  • Patent number: 7843510
    Abstract: Given two video sequences, a composite video sequence can be generated which includes visual elements from each of the given sequences, suitably synchronized and represented in a chosen focal plane. For example, given two video sequences with each showing a different contestant individually racing the same down-hill course, the composite sequence can include elements from each of the given sequences to show the contestants as if racing simultaneously. A composite video sequence can be made also by similarly combining a video sequence with an audio sequence.
    Type: Grant
    Filed: January 19, 1999
    Date of Patent: November 30, 2010
    Assignee: Ecole Polytechnique Federale de Lausanne
    Inventors: Serge Ayer, Martin Vetterli
  • Publication number: 20080094472
    Abstract: A method is disclosed for analyzing with a computer (1) the motion of an athlete (3), of a team or a patient during an activity, said method comprising the steps of defining a number of unevenly time-spaced key positions in said motion, said key positions being of particular interest for analyzing the correct execution of said motion by said athlete (3) or team. A video sequence (11) of said motion is acquired with a camera (2) and still pictures (12) are extracted from said video sequence (11). Templates can trigger the automatic extraction of still pictures (12). For extraction purposes, metadata recorded with a sensor (5) at the same time as the video sequence (11) can be used. Said still pictures (12) correspond to said previously defined key positions. Thereafter said extracted still pictures (12) are displayed simultaneously on a same display (10).
    Type: Application
    Filed: December 21, 2007
    Publication date: April 24, 2008
    Inventors: Serge Ayer, Pascal Binggeli, Thomas Bebie, Michael Frossard, Philippe Schroeter, Emmanuel Reusens
  • Patent number: 7042493
    Abstract: Standard video footage even from a single video camera can be used to obtain, in an automated fashion, a stroboscope sequence of a sports event, for example. The sequence may be represented as a static image of a photographic nature, or by a video sequence in which camera motion remains present, in which case the video sequence can be rendered as a panning camera movement on a stroboscope picture or as an animated stroboscope sequence in which the moving object leaves a trailing trace of copies along its path. Multiple cameras can be used for an expanded field of view or for comparison of multiple sequences, for example.
    Type: Grant
    Filed: April 6, 2001
    Date of Patent: May 9, 2006
    Inventors: Paolo Prandoni, Emmanuel Reusens, Martin Vetterli, Luciano Sbaiz, Serge Ayer
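A minimal sketch of the stroboscope idea above, under assumptions: the moving object's foreground is taken from every k-th frame and accumulated onto one background picture, leaving a trailing trace of copies along its path. Camera-motion compensation, which the patent also covers, is not modeled here.

```python
# Illustrative sketch: accumulate the moving object from sampled frames onto one
# background picture to form a stroboscope image.
import numpy as np

def stroboscope(frames: np.ndarray, background: np.ndarray,
                step: int = 10, thresh: int = 30) -> np.ndarray:
    """Accumulate the moving object from frames[::step] on top of the background."""
    out = background.copy()
    for frame in frames[::step]:
        mask = np.abs(frame.astype(int) - background.astype(int)) > thresh
        out[mask] = frame[mask]        # the object leaves a trailing trace of copies
    return out

if __name__ == "__main__":
    bg = np.zeros((48, 64), dtype=np.uint8)
    seq = np.zeros((100, 48, 64), dtype=np.uint8)
    for t in range(100):               # a bright square moving left to right
        seq[t, 20:28, t // 2: t // 2 + 8] = 255
    print(stroboscope(seq, bg).sum() > 0)
```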
  • Publication number: 20040017504
    Abstract: Standard video footage even from a single video camera can be used to obtain, in an automated fashion, a stroboscope sequence of a sports event, for example. The sequence may be represented as a static image of a photographic nature, or by a video sequence in which camera motion remains present, in which case the video sequence can be rendered as a panning camera movement on a stroboscope picture or as an animated stroboscope sequence in which the moving object leaves a trailing trace of copies along its path. Multiple cameras can be used for an expanded field of view or for comparison of multiple sequences, for example.
    Type: Application
    Filed: April 6, 2001
    Publication date: January 29, 2004
    Applicant: InMotion Technologies Ltd.
    Inventors: Paolo Prandoni, Emmanuel Reusens, Martin Vetterli, Luciano Sbaiz, Serge Ayer
  • Publication number: 20020075282
    Abstract: In a view, e.g. of scenery, of a shopping or museum display, or of a meeting or conference, automated processing can be used to annotate objects which are visible from a viewer position. Annotation can be of objects selected by the viewer, and can be displayed visually, for example, with or without an image of the view.
    Type: Application
    Filed: March 26, 2001
    Publication date: June 20, 2002
    Inventors: Martin Vetterli, Serge Ayer
  • Patent number: 6320624
    Abstract: Given two video sequences, a composite video sequence can be generated which includes visual elements from each of the given sequences, suitably synchronized and represented in a chosen focal plane. For example, given two video sequences with each showing a different contestant individually racing the same down-hill course, the composite sequence can include elements from each of the given sequences to show the contestants as if racing simultaneously. A composite video sequence can be made also by similarly combining a video sequence with an audio sequence.
    Type: Grant
    Filed: January 16, 1998
    Date of Patent: November 20, 2001
    Assignee: Ecole Polytechnique Fédérale
    Inventors: Serge Ayer, Martin Vetterli
  • Patent number: 6208353
    Abstract: For annotating a digital image with information from a digital map, features which are visible from a viewer position are extracted from the map. The extracted features are matched with corresponding features in the image, and feature annotations are transferred from the map to the image to obtain an integrated view. The technique facilitates the annotation of photographs, and it can be included in navigation and simulation systems.
    Type: Grant
    Filed: September 5, 1997
    Date of Patent: March 27, 2001
    Assignee: Ecole Polytechnique Fédérale de Lausanne
    Inventors: Serge Ayer, Martin Vetterli
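A minimal sketch of the map-to-image annotation idea above, under assumptions: map features around the viewer are projected into the image with a simple pinhole model and their labels are attached at the projected pixel. The feature list, camera parameters, and the reduction of the patent's matching step to a bare projection are all illustrative.

```python
# Illustrative sketch: project map features visible from the viewer into the image
# and attach their labels at the projected positions.
import math
from typing import List, Optional, Tuple

MAP_FEATURES = [  # (label, east_m, north_m, elevation_m) relative to the viewer
    ("Mont Blanc", 4000.0, 60000.0, 4300.0),
    ("church spire", -200.0, 900.0, 40.0),
]

def project(feature: Tuple[str, float, float, float], heading_deg: float,
            f_px: float = 1000.0, width: int = 1920,
            height: int = 1080) -> Optional[Tuple[str, int, int]]:
    label, e, n, up = feature
    # Rotate into the camera frame (camera looks along `heading_deg`, x right, z forward).
    h = math.radians(heading_deg)
    z = n * math.cos(h) + e * math.sin(h)
    x = e * math.cos(h) - n * math.sin(h)
    if z <= 0:
        return None                          # behind the viewer: not visible
    u = width / 2 + f_px * x / z
    v = height / 2 - f_px * up / z
    if 0 <= u < width and 0 <= v < height:
        return label, int(u), int(v)
    return None

def annotate_image(heading_deg: float) -> List[Tuple[str, int, int]]:
    hits = (project(f, heading_deg) for f in MAP_FEATURES)
    return [h for h in hits if h is not None]

print(annotate_image(heading_deg=0.0))
```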