Patents by Inventor Serge Ayer
Serge Ayer has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9959644
Abstract: A computerized method for annotating at least one feature of an image of a view includes the steps of obtaining the image with an image sensor of a portable device and retrieving at least one condition. Based on the at least one condition, the method automatically selects a feature identification method among a plurality of feature identification methods. It then applies the selected feature identification method to identify the at least one feature and annotates some of the identified features.
Type: Grant
Filed: November 17, 2011
Date of Patent: May 1, 2018
Assignee: QUALCOMM Incorporated
Inventors: Mathieu Monney, Serge Ayer, Martin Vetterli
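The selection logic in this abstract (retrieve conditions, pick one of several feature identification methods, identify, annotate) can be sketched as a small dispatch. All method names, conditions, and thresholds below are illustrative assumptions, not the patented implementation:

```python
# Hedged sketch of condition-based method selection. The methods,
# the conditions (lighting, connectivity), and the threshold are
# invented for illustration.

def identify_by_matching(image):
    # Placeholder: match image features against a reference database.
    return [("landmark", (10, 20))]

def identify_by_location(image):
    # Placeholder: infer features from device position/orientation only.
    return [("landmark", (12, 18))]

def select_method(conditions):
    """Pick an identification method based on retrieved conditions."""
    if conditions.get("lux", 0) > 50 and conditions.get("online", False):
        return identify_by_matching
    return identify_by_location

def annotate(image, conditions):
    # Apply the selected method, then annotate the identified features.
    features = select_method(conditions)(image)
    return [f"{name}@{pos}" for name, pos in features]
```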
-
Patent number: 9237263
Abstract: The present invention relates to an annotating method including the steps of capturing (100) data representing a light field with a plenoptic image capture device (4); matching (101) the captured data with corresponding reference data; retrieving an annotation associated with an element of the reference data (102); and rendering (103) a view generated from the captured data and including at least one annotation.
Type: Grant
Filed: October 5, 2012
Date of Patent: January 12, 2016
Assignee: Vidinoti SA
Inventors: Laurent Rime, Mathieu Monney, Serge Ayer, Martin Vetterli
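The capture/match/retrieve/render steps can be sketched as a toy pipeline. The reference database, the signature-based matching rule, and the data shapes are invented; a real system would match light-field data, not tuples:

```python
# Toy version of the annotating pipeline from the abstract, keyed to
# its step numbers. All data structures are illustrative assumptions.

REFERENCE_DB = {
    "ref_poster": {"signature": (1, 2, 3), "annotation": "Exhibit A"},
}

def match(captured):
    # Step (101): find the reference entry matching the captured data.
    for ref_id, ref in REFERENCE_DB.items():
        if ref["signature"] == captured["signature"]:
            return ref_id
    return None

def retrieve_annotation(ref_id):
    # Step (102): look up the annotation tied to the matched element.
    return REFERENCE_DB[ref_id]["annotation"]

def render(captured, annotation):
    # Step (103): a real renderer would generate a view from the light
    # field; here we just attach the annotation to the captured data.
    return {"view": captured["signature"], "annotation": annotation}

captured = {"signature": (1, 2, 3)}       # stand-in for step (100)
result = render(captured, retrieve_annotation(match(captured)))
```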
-
Patent number: 9094616
Abstract: A method for capturing and processing an image, comprising: capturing an image with an image sensor; retrieving steganographic marks hidden in said captured image; processing said captured image based on said hidden marks so as to generate a processed image; matching the processed image with a reference image from a set of reference images; and superimposing elements on the captured image depending on said reference image. A user device comprising an image sensor suitable for capturing at least one image and a processor configured to retrieve hidden marks in said captured image and to generate a processed image based on said retrieved hidden marks and on annotations processed remotely; the user device is part of a system with a remote server that matches the image.
Type: Grant
Filed: October 16, 2012
Date of Patent: July 28, 2015
Assignee: Vidinoti SA
Inventors: Laurent Rime, Mathieu Monney, Serge Ayer
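The patent does not commit to a particular steganographic scheme, so the sketch below assumes least-significant-bit (LSB) encoding, one common choice. The pixel values and the rule that the recovered marks encode a crop offset applied before matching are likewise invented:

```python
# Assumed LSB scheme: hidden bits live in the low-order bit of each
# pixel value. The "marks encode a crop offset" rule is hypothetical.

def extract_lsb_marks(pixels, n_bits):
    # Read the hidden bits from the least-significant bit of each pixel.
    return [p & 1 for p in pixels[:n_bits]]

def process_image(pixels, marks):
    # Hypothetical processing step: the marks encode an offset that
    # trims the image before it is matched against references.
    offset = int("".join(map(str, marks)), 2)
    return pixels[offset:]

pixels = [254, 255, 130, 131, 10, 20, 30, 40]
marks = extract_lsb_marks(pixels, 3)      # LSBs of the first 3 pixels
processed = process_image(pixels, marks)
```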
-
Patent number: 8848058
Abstract: A method is disclosed for analyzing with a computer (1) the motion of an athlete (3), a team, or a patient during an activity, said method comprising the steps of defining a number of unevenly time-spaced key positions in said motion, said key positions being of particular interest for analyzing the correct execution of said motion by said athlete (3) or team. A video sequence (11) of said motion is acquired with a camera (2) and still pictures (12) are extracted from said video sequence (11). Templates can trigger the automatic extraction of still pictures (12). For extraction purposes, metadata recorded with a sensor (5) at the same time as the video sequence (11) can be used. Said still pictures (12) correspond to said previously defined key positions. Thereafter said extracted still pictures (12) are displayed simultaneously on the same display (10).
Type: Grant
Filed: December 21, 2007
Date of Patent: September 30, 2014
Assignee: Dartfish SA
Inventors: Serge Ayer, Pascal Binggeli, Thomas Bebie, Michael Frossard, Philippe Schroeter, Emmanuel Reusens
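The sensor-driven extraction step can be sketched as follows: sensor metadata gives the times of the key positions, and the nearest video frame is kept for each. The frame rate and event times are invented:

```python
# Sketch of extracting stills at predefined key positions from frame
# timestamps plus sensor-reported key times. All numbers are made up.

def extract_stills(frame_times, key_times):
    """Return, for each key time, the index of the nearest frame."""
    return [min(range(len(frame_times)),
                key=lambda i: abs(frame_times[i] - t))
            for t in key_times]

frame_times = [i / 25.0 for i in range(100)]   # 25 fps, 4-second clip
key_times = [0.52, 1.32, 3.88]                 # e.g. from a motion sensor
indices = extract_stills(frame_times, key_times)
```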
-
Publication number: 20140181630
Abstract: A method comprising the steps of: retrieving data (100) representing a light field with a plenoptic capture device (4); executing program code for matching the retrieved data with corresponding reference data (101); executing program code for retrieving at least one annotation (61, 63, 64) in a plenoptic format associated with an element of said reference data (102); and executing program code for generating annotated data in a plenoptic format from said retrieved data and said annotation (103).
Type: Application
Filed: December 21, 2012
Publication date: June 26, 2014
Applicant: Vidinoti SA
Inventors: Mathieu Monney, Laurent Rime, Serge Ayer
-
Publication number: 20140104441
Abstract: A method for capturing and processing an image, comprising: capturing an image with an image sensor; retrieving steganographic marks hidden in said captured image; processing said captured image based on said hidden marks so as to generate a processed image; matching the processed image with a reference image from a set of reference images; and superimposing elements on the captured image depending on said reference image. A user device comprising an image sensor suitable for capturing at least one image and a processor configured to retrieve hidden marks in said captured image and to generate a processed image based on said retrieved hidden marks and on annotations processed remotely; the user device is part of a system with a remote server that matches the image.
Type: Application
Filed: October 16, 2012
Publication date: April 17, 2014
Applicant: Vidinoti SA
Inventors: Laurent Rime, Mathieu Monney, Serge Ayer
-
Publication number: 20140098191
Abstract: The present invention relates to an annotating method comprising the steps of: capturing (100) data representing a light field with a plenoptic image capture device (4); matching (101) the captured data with corresponding reference data; retrieving an annotation associated with an element of said reference data (102); and rendering (103) a view generated from said captured data and including at least one annotation.
Type: Application
Filed: October 5, 2012
Publication date: April 10, 2014
Applicant: Vidinoti SA
Inventors: Laurent Rime, Mathieu Monney, Serge Ayer, Martin Vetterli
-
Patent number: 8675021
Abstract: Given two video sequences, a composite video sequence can be generated (15) which includes visual elements from each of the given sequences, suitably synchronized (11) and represented in a chosen focal plane. A composite video sequence can also be made by similarly combining a video sequence with an audio sequence. In the composite video sequence, contestants, action figures, or objects can be shown against a common background (12) even if the given video sequences differ as to background, with the common background taken from one or the other of the given sequences, for example. Alternatively, a different suitable background can be used, e.g. as derived from the given video sequences, as obtained from another video sequence or image, or as otherwise synthesized.
Type: Grant
Filed: September 9, 2009
Date of Patent: March 18, 2014
Assignee: Dartfish SA
Inventors: Emmanuel Reusens, Martin Vetterli, Serge Ayer, Victor Bergonzoli
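The synchronize-and-composite idea can be sketched as follows: align each sequence at its start event, then pair frames so both contestants appear over a common background. Frame contents here are symbolic stand-ins for images, and taking sequence A's background is just one of the choices the abstract mentions:

```python
# Sketch of compositing two synchronized sequences over one background.
# Start offsets, frame labels, and the background choice are invented.

def composite(seq_a, start_a, seq_b, start_b):
    a = seq_a[start_a:]          # synchronize: drop frames before start
    b = seq_b[start_b:]
    n = min(len(a), len(b))
    # Each composite frame layers both foregrounds on A's background.
    return [("bgA", a[i], b[i]) for i in range(n)]

run_a = ["a0", "a1", "a2", "a3", "a4"]
run_b = ["b0", "b1", "b2", "b3"]
frames = composite(run_a, 1, run_b, 0)   # A's start event is at frame 1
```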
-
Patent number: 8675016
Abstract: In a view, e.g. of scenery, of a shopping or museum display, or of a meeting or conference, automated processing can be used to annotate objects which are visible from a viewer position. Annotation can be of objects selected by the viewer, and can be displayed visually, for example, with or without an image of the view.
Type: Grant
Filed: March 28, 2012
Date of Patent: March 18, 2014
Assignee: Ecole Polytechnique Federale de Lausanne (EPFL)
Inventors: Martin Vetterli, Serge Ayer
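The "visible from a viewer position" test behind this family of patents can be sketched as a field-of-view check: keep only the known objects whose bearing from the viewer falls inside the view cone, then annotate those. The flat 2-D geometry, object names, and positions are illustrative assumptions:

```python
import math

# Sketch of a viewer-position visibility filter. Real systems would
# also handle occlusion and 3-D terrain; this is a 2-D simplification.

def visible_objects(viewer, heading_deg, fov_deg, objects):
    out = []
    for name, (x, y) in objects.items():
        bearing = math.degrees(math.atan2(y - viewer[1], x - viewer[0]))
        # Signed angular difference, wrapped into [-180, 180).
        diff = (bearing - heading_deg + 180) % 360 - 180
        if abs(diff) <= fov_deg / 2:
            out.append(name)
    return out

objects = {"peak": (10, 0), "tower": (0, 10), "behind": (-10, 0)}
names = visible_objects((0, 0), 0, 90, objects)   # looking along +x
```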
-
Publication number: 20130311868
Abstract: A computerized method for annotating at least one feature of an image of a view includes the steps of obtaining the image with an image sensor of a portable device and retrieving at least one condition. Based on the at least one condition, the method automatically selects a feature identification method among a plurality of feature identification methods. It then applies the selected feature identification method to identify the at least one feature and annotates some of the identified features.
Type: Application
Filed: November 17, 2011
Publication date: November 21, 2013
Applicant: EPFL-TTO Quartier de l'Innovation-J
Inventors: Mathieu Monney, Serge Ayer, Martin Vetterli
-
Patent number: 8432414
Abstract: In a view, e.g. of scenery, of a shopping or museum display, or of a meeting or conference, automated processing can be used to annotate objects which are visible from a viewer position. Annotation can be of objects selected by the viewer, and can be displayed visually, for example, with or without an image of the view.
Type: Grant
Filed: March 26, 2001
Date of Patent: April 30, 2013
Assignee: Ecole Polytechnique Federale de Lausanne
Inventors: Martin Vetterli, Serge Ayer
-
Publication number: 20120212507
Abstract: In a view, e.g. of scenery, of a shopping or museum display, or of a meeting or conference, automated processing can be used to annotate objects which are visible from a viewer position. Annotation can be of objects selected by the viewer, and can be displayed visually, for example, with or without an image of the view.
Type: Application
Filed: March 28, 2012
Publication date: August 23, 2012
Inventors: Martin Vetterli, Serge Ayer
-
Patent number: 7843510
Abstract: Given two video sequences, a composite video sequence can be generated which includes visual elements from each of the given sequences, suitably synchronized and represented in a chosen focal plane. For example, given two video sequences each showing a different contestant individually racing the same down-hill course, the composite sequence can include elements from each of the given sequences to show the contestants as if racing simultaneously. A composite video sequence can also be made by similarly combining a video sequence with an audio sequence.
Type: Grant
Filed: January 19, 1999
Date of Patent: November 30, 2010
Assignee: Ecole Polytechnique Federale de Lausanne
Inventors: Serge Ayer, Martin Vetterli
-
Publication number: 20080094472
Abstract: A method is disclosed for analyzing with a computer (1) the motion of an athlete (3), a team, or a patient during an activity, said method comprising the steps of defining a number of unevenly time-spaced key positions in said motion, said key positions being of particular interest for analyzing the correct execution of said motion by said athlete (3) or team. A video sequence (11) of said motion is acquired with a camera (2) and still pictures (12) are extracted from said video sequence (11). Templates can trigger the automatic extraction of still pictures (12). For extraction purposes, metadata recorded with a sensor (5) at the same time as the video sequence (11) can be used. Said still pictures (12) correspond to said previously defined key positions. Thereafter said extracted still pictures (12) are displayed simultaneously on the same display (10).
Type: Application
Filed: December 21, 2007
Publication date: April 24, 2008
Inventors: Serge Ayer, Pascal Binggeli, Thomas Bebie, Michael Frossard, Philippe Schroeter, Emmanuel Reusens
-
Patent number: 7042493
Abstract: Standard video footage, even from a single video camera, can be used to obtain, in an automated fashion, a stroboscope sequence of a sports event, for example. The sequence may be represented as a static image of a photographic nature, or by a video sequence in which camera motion remains present, in which case the video sequence can be rendered as a panning camera movement on a stroboscope picture or as an animated stroboscope sequence in which the moving object leaves a trailing trace of copies along its path. Multiple cameras can be used for an expanded field of view or for comparison of multiple sequences, for example.
Type: Grant
Filed: April 6, 2001
Date of Patent: May 9, 2006
Inventors: Paolo Prandoni, Emmanuel Reusens, Martin Vetterli, Luciano Sbaiz, Serge Ayer
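The trailing-trace idea can be sketched as periodic sampling of the moving object's path, so the copies can be pasted onto one background picture. Positions stand in for segmented object masks, and the sampling interval is arbitrary:

```python
# Sketch of the stroboscope trace: one copy of the object per k frames.
# The synthetic trajectory and interval are invented for illustration.

def stroboscope(positions, every_k):
    """Sample the object's path every k frames for the trailing trace."""
    return [positions[i] for i in range(0, len(positions), every_k)]

path = [(t, t * t) for t in range(10)]    # synthetic trajectory
trace = stroboscope(path, 3)
```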
-
Publication number: 20040017504
Abstract: Standard video footage, even from a single video camera, can be used to obtain, in an automated fashion, a stroboscope sequence of a sports event, for example. The sequence may be represented as a static image of a photographic nature, or by a video sequence in which camera motion remains present, in which case the video sequence can be rendered as a panning camera movement on a stroboscope picture or as an animated stroboscope sequence in which the moving object leaves a trailing trace of copies along its path. Multiple cameras can be used for an expanded field of view or for comparison of multiple sequences, for example.
Type: Application
Filed: April 6, 2001
Publication date: January 29, 2004
Applicant: InMotion Technologies Ltd.
Inventors: Paolo Prandoni, Emmanuel Reusens, Martin Vetterli, Luciano Sbaiz, Serge Ayer
-
Publication number: 20020075282
Abstract: In a view, e.g. of scenery, of a shopping or museum display, or of a meeting or conference, automated processing can be used to annotate objects which are visible from a viewer position. Annotation can be of objects selected by the viewer, and can be displayed visually, for example, with or without an image of the view.
Type: Application
Filed: March 26, 2001
Publication date: June 20, 2002
Inventors: Martin Vetterli, Serge Ayer
-
Patent number: 6320624
Abstract: Given two video sequences, a composite video sequence can be generated which includes visual elements from each of the given sequences, suitably synchronized and represented in a chosen focal plane. For example, given two video sequences each showing a different contestant individually racing the same down-hill course, the composite sequence can include elements from each of the given sequences to show the contestants as if racing simultaneously. A composite video sequence can also be made by similarly combining a video sequence with an audio sequence.
Type: Grant
Filed: January 16, 1998
Date of Patent: November 20, 2001
Assignee: Ecole Polytechnique Fédérale
Inventors: Serge Ayer, Martin Vetterli
-
Patent number: 6208353
Abstract: For annotating a digital image with information from a digital map, features which are visible from a viewer position are extracted from the map. The extracted features are matched with corresponding features in the image, and feature annotations are transferred from the map to the image to obtain an integrated view. The technique facilitates the annotation of photographs, and it can be included in navigation and simulation systems.
Type: Grant
Filed: September 5, 1997
Date of Patent: March 27, 2001
Assignee: Ecole Polytechnique Fédérale de Lausanne
Inventors: Serge Ayer, Martin Vetterli
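The matching-and-transfer step can be sketched as nearest-neighbour assignment between map features already projected into image coordinates and detected image points. The projection itself, the tolerance, and all names and coordinates below are illustrative assumptions:

```python
import math

# Sketch of transferring labels from map features to image features.
# A real system would first project map features through a camera
# model; here the projected positions are simply given.

def transfer_annotations(map_features, image_points, tol):
    """map_features: {label: projected (x, y)}.
    Returns {label: matched image point} for matches within tol."""
    annotated = {}
    for label, (mx, my) in map_features.items():
        best = min(image_points, key=lambda p: math.dist(p, (mx, my)))
        if math.dist(best, (mx, my)) <= tol:
            annotated[label] = best
    return annotated

map_features = {"Matterhorn": (100, 40), "Lake": (300, 200)}
image_points = [(102, 41), (500, 500)]
labels = transfer_annotations(map_features, image_points, tol=5)
```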