Patents by Inventor Stéphane Valente

Stéphane Valente has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11885669
    Abstract: The present disclosure generally relates to systems and methods for imaging and sensing vibrations of an object. In one implementation, a system is provided for detecting vibrations that comprises a sensor with a plurality of pixels, each pixel including at least one photosensitive element. The sensor is configured to output signals related to illumination on each pixel. The system also includes at least one processor configured to receive the output signals from the sensor and detect, based on the output signals, one or more polarities. The at least one processor is also configured to determine timestamps associated with the output signals, analyze the timestamps and the polarities to detect vibrations of an object in the field of view of the sensor, and determine one or more frequencies of the detected vibrations.
    Type: Grant
    Filed: April 24, 2020
    Date of Patent: January 30, 2024
    Assignee: PROPHESEE
    Inventors: Geoffrey F. Burns, Manuele Brambilla, Christoph Posch, Gaspard Florentz, Stephane Valente
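    The frequency-estimation step described in this abstract lends itself to a compact illustration. The sketch below is a hypothetical reading of the abstract, not PROPHESEE's implementation: signed events from one pixel are binned into a regular time series and the dominant spectral peak is reported as the vibration frequency. The bin width and the use of an FFT are assumptions.

      import numpy as np

      def dominant_frequency(timestamps_us, polarities, bin_us=100):
          """Estimate the strongest vibration frequency (Hz) seen by one pixel.
          timestamps_us: event times in microseconds; polarities: +1/-1 per event."""
          t = np.asarray(timestamps_us, dtype=np.float64)
          p = np.asarray(polarities, dtype=np.float64)
          t -= t.min()
          n_bins = int(t.max() // bin_us) + 1
          signal = np.zeros(n_bins)
          np.add.at(signal, (t // bin_us).astype(int), p)    # signed event histogram
          spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
          freqs = np.fft.rfftfreq(n_bins, d=bin_us * 1e-6)   # bin width in seconds
          return float(freqs[np.argmax(spectrum[1:]) + 1])   # ignore the DC bin

    Binning the signed events before the FFT is only one of several ways to recover a periodicity from asynchronous event data; the abstract itself does not prescribe a spectral method.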
  • Publication number: 20220329771
    Abstract: A method for registering pixels provided in a pixel event stream comprising: acquiring image frames from a frame-based camera, each image frame being generated using an exposure period; generating a first point matrix from one or more of the image frames, the first point matrix being associated with an acquisition period of the image frames; acquiring a pixel event stream generated during the acquisition period; generating a second point matrix from pixel events of the pixel event stream, occurring during the acquisition period of the first point matrix; computing a correlation scoring function applied to at least a part of the points of the first and second point matrices, and estimating respective positions of points of the second point matrix in the first point matrix, due to depths of the points of the first point matrix related to the second point matrix, by maximizing the correlation scoring function.
    Type: Application
    Filed: April 1, 2022
    Publication date: October 13, 2022
    Inventors: Daniele Perrone, Jacques Manderschied, Stephane Valente
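    As a rough illustration of the correlation-maximization idea in this abstract (a hypothetical sketch, not the published method): a frame-derived point matrix and an event-derived point matrix are compared over candidate horizontal shifts, and the shift with the highest normalized correlation score is kept. The use of normalized cross-correlation and a purely horizontal search are assumptions.

      import numpy as np

      def best_disparity(frame_points, event_points, max_disp=32):
          """frame_points, event_points: 2-D float arrays of equal shape, e.g. a
          gradient-magnitude map and a per-pixel event count over one exposure."""
          def score(a, b):                      # normalized correlation score
              a = a - a.mean()
              b = b - b.mean()
              denom = np.linalg.norm(a) * np.linalg.norm(b)
              return float((a * b).sum() / denom) if denom else 0.0
          width = frame_points.shape[1]
          scores = [score(frame_points[:, d:], event_points[:, :width - d])
                    for d in range(max_disp + 1)]
          return int(np.argmax(scores)), max(scores)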
  • Publication number: 20220205835
    Abstract: The present disclosure generally relates to systems and methods for imaging and sensing vibrations of an object. In one implementation, a system is provided for detecting vibrations that comprises a sensor with a plurality of pixels, each pixel including at least one photosensitive element. The sensor is configured to output signals related to illumination on each pixel. The system also includes at least one processor configured to receive the output signals from the sensor and detect, based on the output signals, one or more polarities. The at least one processor is also configured to determine timestamps associated with the output signals, analyze the timestamps and the polarities to detect vibrations of an object in the field of view of the sensor, and determine one or more frequencies of the detected vibrations.
    Type: Application
    Filed: April 24, 2020
    Publication date: June 30, 2022
    Applicant: PROPHESEE
    Inventors: Geoffrey F. Burns, Manuele Brambilla, Christoph Posch, Gaspard Florentz, Stephane Valente
  • Patent number: 10072923
    Abstract: The method for processing signals originating for example from several proximity sensors for the recognition of a movement of an object, comprises first respective samplings of the said signals delivered by the sensors so as to obtain a first set of first date-stamped samples, the generation, from the first set of first date-stamped samples, of new sampling times comprising a start of movement time, an end of movement time, and times regularly spaced between the start of movement time and the end of movement time, a re-sampling of the signal delivered by each sensor between the start of movement time and the end of movement time at the said new sampling times using the first samples, in such a manner as to generate a second set of second date-stamped samples, and a processing of the said second set of date-stamped samples by a movement recognition algorithm.
    Type: Grant
    Filed: March 18, 2015
    Date of Patent: September 11, 2018
    Assignee: STMICROELECTRONICS SA
    Inventor: Stéphane Valente
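    A minimal sketch of the re-sampling step described in this abstract, assuming linear interpolation (the patent does not prescribe one): each sensor's irregular, date-stamped samples are interpolated onto times regularly spaced between the detected start and end of the movement, giving the recognition algorithm a fixed-size, time-aligned input.

      import numpy as np

      def resample_gesture(t_start, t_end, sensor_times, sensor_values, n_points=64):
          """sensor_times / sensor_values: per-sensor 1-D arrays of timestamps
          (seconds, assumed increasing) and raw readings;
          returns an (n_sensors, n_points) matrix on a common time base."""
          new_times = np.linspace(t_start, t_end, n_points)   # regularly spaced times
          return np.vstack([np.interp(new_times, t, v)
                            for t, v in zip(sensor_times, sensor_values)])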
  • Patent number: 10021301
    Abstract: An apparatus comprising a plurality of image modules and a plurality of processors. The image modules may each comprise (i) a sensor configured to generate images and (ii) a lens mounted to the sensor. The processors may each be configured to (A) receive the images from a subset of the plurality of image modules and (B) generate a plurality of video streams. Each one of the video streams may be generated by one of the processors in response to the images received from one of the image modules. The subset of the plurality of image modules may comprise at least two distinct image modules of the plurality of image modules. The lenses may be arranged to allow the images to provide coverage for a spherical field of view of a scene surrounding the apparatus.
    Type: Grant
    Filed: December 14, 2017
    Date of Patent: July 10, 2018
    Assignee: VIDEOSTITCH INC.
    Inventors: Alexander Fink, Nicolas Burtey, George Haber, Stephane Valente
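    The apparatus in this abstract can be pictured as a routing table from image modules to processors. The sketch below is purely illustrative (the names and data structures are not from the patent): each processor serves a subset of at least two modules and produces one video stream per module it receives images from.

      from dataclasses import dataclass, field

      @dataclass
      class ImageModule:
          sensor_id: int
          lens: str                                    # e.g. "fisheye-190"

      @dataclass
      class StreamProcessor:
          modules: list = field(default_factory=list)  # subset of at least two modules
          def encode(self, frames_by_sensor):
              # one (placeholder) encoded stream per module handled by this processor
              return {m.sensor_id: ("h264", frames_by_sensor[m.sensor_id])
                      for m in self.modules}

      # Example wiring: two processors, each serving two of four image modules.
      mods = [ImageModule(i, "fisheye-190") for i in range(4)]
      procs = [StreamProcessor(mods[:2]), StreamProcessor(mods[2:])]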
  • Patent number: 10003741
    Abstract: A system comprising a camera and a computing device. The camera may comprise (a) a plurality of capture devices configured to capture images of an environment surrounding the camera to provide a spherical field of view and (b) a first interface. The computing device may comprise (a) a processor and (b) a second interface. The camera may be configured to encode a plurality of video streams based on the captured images. The first interface may be configured to transfer the plurality of video streams to the second interface. The processor may perform a stitching operation on the plurality of video streams to generate a single video signal. The stitching operation may be performed on the plurality of video streams in real time as the plurality of video streams are transferred. The single video signal may be configured to represent an omnidirectional view based on the environment surrounding the camera.
    Type: Grant
    Filed: September 1, 2017
    Date of Patent: June 19, 2018
    Assignee: VIDEOSTITCH INC.
    Inventors: Alexander Fink, Nicolas Burtey, George Haber, Stephane Valente
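    One way to read the real-time requirement in this abstract (a hypothetical sketch, not the patented pipeline): the computing device consumes the transferred streams frame-by-frame and emits stitched output while the transfer is still in progress, rather than waiting for complete files. stitch_fn stands in for whatever stitching operation is used.

      def stitch_while_transferring(stream_iters, stitch_fn):
          """stream_iters: one frame iterator per capture device, arriving over
          the interface; stitch_fn: combines one frame per device into a single
          omnidirectional frame."""
          for frames in zip(*stream_iters):   # lock-step: one frame per stream
              yield stitch_fn(frames)         # stitched output emitted immediately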
  • Patent number: 9992400
    Abstract: A system includes a video source and a computing device. The video source may be configured to generate a plurality of video streams that capture a view of an environment. The computing device generally includes one or more processors configured to (i) perform a stitching operation on the plurality of video streams to generate a video signal representative of a spherical field of view of the environment, (ii) transmit a display signal that represents a projection of the video signal to be displayed to a user utilizing an immersive display, (iii) receive a plurality of commands from the user while the user observes the environment in the immersive display and (iv) adjust a plurality of parameters in one or more of the video source and the stitching operation in real time in response to the commands.
    Type: Grant
    Filed: February 23, 2016
    Date of Patent: June 5, 2018
    Assignee: VIDEOSTITCH INC.
    Inventors: Nicolas Burtey, Stéphane Valente, Alexander Fink
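    A hypothetical sketch of the feedback loop in this abstract (the parameter names are invented for illustration): commands issued while the user watches the immersive preview are mapped onto live settings of the video source and the stitching operation, so that subsequent frames reflect the change.

      def apply_commands(commands, live_params):
          """commands: iterable of (name, value) pairs from the immersive display;
          live_params: dict shared with the video source and the stitcher."""
          handlers = {
              "exposure_ms":    lambda v: live_params.update(exposure_ms=v),
              "blend_width_px": lambda v: live_params.update(blend_width_px=v),
              "seam_yaw_deg":   lambda v: live_params.update(seam_yaw_deg=v),
          }
          for name, value in commands:
              if name in handlers:
                  handlers[name](value)       # takes effect on the next frames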
  • Publication number: 20180139433
    Abstract: An apparatus comprising a plurality of lenses and a frame. The plurality of lenses may be arranged to provide coverage for an omnidirectional field of view of a scene surrounding the apparatus and each have an optical axis directed to provide coverage for a respective area of the omnidirectional field of view. The frame may be configured to hold the plurality of lenses. At least a first one of the lenses and at least a second one of the lenses are neighboring lenses. An orientation of at least two of the neighboring lenses is configured to control parallax effects when the omnidirectional field of view is recorded using the plurality of lenses. The parallax effects are controlled by configuring the optical axes of the neighboring lenses to not lie in the same plane.
    Type: Application
    Filed: January 16, 2018
    Publication date: May 17, 2018
    Inventors: Alexander Fink, Nicolas Burtey, George Haber, Stéphane Valente
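    The geometric condition at the end of this abstract can be tested directly. The sketch below illustrates that condition only (the points and directions are assumed inputs): two optical axes lie in a common plane exactly when the scalar triple product of their directions and the vector joining them vanishes.

      import numpy as np

      def axes_share_a_plane(p1, d1, p2, d2, tol=1e-9):
          """p1, p2: points on each optical axis; d1, d2: axis directions (3-D)."""
          p1, d1, p2, d2 = (np.asarray(v, dtype=float) for v in (p1, d1, p2, d2))
          triple = np.dot(np.cross(d1, d2), p2 - p1)   # scalar triple product
          return abs(triple) < tol   # True: coplanar; the described configuration needs False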
  • Publication number: 20180130497
    Abstract: A system comprising a video source, one or more audio sources and a computing device. The video source may be configured to generate a plurality of video streams that capture a view of an environment. The one or more audio sources may be configured to capture audio data of the environment. The computing device may comprise one or more processors configured to (i) perform a stitching operation on the plurality of video streams to generate a video signal representative of an immersive field of view of the environment, (ii) generate a sound field based on the audio data, (iii) identify an orientation for the sound field with respect to the video signal, and (iv) determine a rotation of the sound field based on the orientation. The rotation of the sound field aligns the sound field to the video signal.
    Type: Application
    Filed: December 18, 2017
    Publication date: May 10, 2018
    Inventors: Lucas McCauley, Alexander Fink, Nicolas Burtey, Stéphane Valente
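    A minimal sketch of the rotation step in this abstract, assuming a first-order B-format (W/X/Y/Z) sound field and a pure yaw alignment; neither assumption comes from the abstract, and the sign convention depends on whether the scene or the listener is rotated.

      import numpy as np

      def rotate_sound_field_yaw(wxyz, yaw_rad):
          """wxyz: (4, n_samples) array of B-format channels W, X, Y, Z."""
          w, x, y, z = wxyz
          c, s = np.cos(yaw_rad), np.sin(yaw_rad)
          x_rot = c * x - s * y                    # standard 2-D rotation of the
          y_rot = s * x + c * y                    # horizontal dipole components
          return np.vstack([w, x_rot, y_rot, z])   # W and Z are yaw-invariant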
  • Publication number: 20180124315
    Abstract: An apparatus comprising a plurality of image modules and a plurality of processors. The image modules may each comprise (i) a sensor configured to generate images and (ii) a lens mounted to the sensor. The processors may each be configured to (A) receive the images from a subset of the plurality of image modules and (B) generate a plurality of video streams. Each one of the video streams may be generated by one of the processors in response to the images received from one of the image modules. The subset of the plurality of image modules may comprise at least two distinct image modules of the plurality of image modules. The lenses may be arranged to allow the images to provide coverage for a spherical field of view of a scene surrounding the apparatus.
    Type: Application
    Filed: December 14, 2017
    Publication date: May 3, 2018
    Inventors: Alexander Fink, Nicolas Burtey, George Haber, Stephane Valente
  • Patent number: 9881647
    Abstract: A system comprising a video source, one or more audio sources and a computing device. The video source may be configured to generate a plurality of video streams that capture a view of an environment. The one or more audio sources may be configured to capture audio data of the environment. The computing device may comprise one or more processors configured to (i) perform a stitching operation on the plurality of video streams to generate a video signal representative of an immersive field of view of the environment, (ii) generate a sound field based on the audio data, (iii) identify an orientation for the sound field with respect to the video signal, and (iv) determine a rotation of the sound field based on the orientation. The rotation of the sound field aligns the sound field to the video signal.
    Type: Grant
    Filed: June 28, 2016
    Date of Patent: January 30, 2018
    Assignee: VIDEOSTITCH INC.
    Inventors: Lucas McCauley, Alexander Fink, Nicolas Burtey, Stéphane Valente
  • Patent number: 9883159
    Abstract: An apparatus comprising a plurality of lenses and a frame. The plurality of lenses may be arranged to provide coverage for a spherical field of view of a scene surrounding the apparatus and each have an optical axis directed to provide coverage for a respective area of the spherical field of view. The frame may be configured to hold a first subset of the plurality of lenses and a second subset of the plurality of lenses. At least one of the lenses in the first subset and at least one of the lenses in the second subset are neighboring lenses. An orientation of at least two of the neighboring lenses is configured to reduce parallax effects when the spherical field of view is recorded using the plurality of lenses. The parallax effects are reduced by configuring the optical axes of the neighboring lenses to not intersect.
    Type: Grant
    Filed: January 27, 2016
    Date of Patent: January 30, 2018
    Assignee: VIDEOSTITCH INC.
    Inventors: Alexander Fink, Nicolas Burtey, George Haber, Stéphane Valente
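    The non-intersection condition in this abstract can likewise be checked numerically; the sketch below is only an illustration of that check. Two axes intersect only if the minimum distance between them is zero, so a strictly positive skew distance confirms the arrangement the abstract describes.

      import numpy as np

      def min_axis_distance(p1, d1, p2, d2):
          """Minimum distance between lines p1 + t*d1 and p2 + s*d2 (3-D)."""
          p1, d1, p2, d2 = (np.asarray(v, dtype=float) for v in (p1, d1, p2, d2))
          n = np.cross(d1, d2)
          if np.linalg.norm(n) < 1e-12:            # parallel axes: point-to-line distance
              r = p2 - p1
              return float(np.linalg.norm(r - (np.dot(r, d1) / np.dot(d1, d1)) * d1))
          return float(abs(np.dot(p2 - p1, n)) / np.linalg.norm(n))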
  • Publication number: 20170372748
    Abstract: A system comprising a video source, one or more audio sources and a computing device. The video source may be configured to generate a plurality of video streams that capture a view of an environment. The one or more audio sources may be configured to capture audio data of the environment. The computing device may comprise one or more processors configured to (i) perform a stitching operation on the plurality of video streams to generate a video signal representative of an immersive field of view of the environment, (ii) generate a sound field based on the audio data, (iii) identify an orientation for the sound field with respect to the video signal, and (iv) determine a rotation of the sound field based on the orientation. The rotation of the sound field aligns the sound field to the video signal.
    Type: Application
    Filed: June 28, 2016
    Publication date: December 28, 2017
    Inventors: Lucas McCauley, Alexander Fink, Nicolas Burtey, Stéphane Valente
  • Publication number: 20170366752
    Abstract: A system comprising a camera and a computing device. The camera may comprise (a) a plurality of capture devices configured to capture images of an environment surrounding the camera to provide a spherical field of view and (b) a first interface. The computing device may comprise (a) a processor and (b) a second interface. The camera may be configured to encode a plurality of video streams based on the captured images. The first interface may be configured to transfer the plurality of video streams to the second interface. The processor may perform a stitching operation on the plurality of video streams to generate a single video signal. The stitching operation may be performed on the plurality of video streams in real time as the plurality of video streams are transferred. The single video signal may be configured to represent an omnidirectional view based on the environment surrounding the camera.
    Type: Application
    Filed: September 1, 2017
    Publication date: December 21, 2017
    Inventors: Alexander Fink, Nicolas Burtey, George Haber, Stephane Valente
  • Patent number: 9843725
    Abstract: An apparatus comprising a plurality of image modules and a plurality of processors. The image modules may each comprise (i) a sensor configured to generate images and (ii) a lens mounted to the sensor. The processors may each be configured to (A) receive the images from a subset of the plurality of image modules and (B) generate a plurality of video streams. Each one of the video streams may be generated by one of the processors in response to the images received from one of the image modules. The subset of the plurality of image modules may comprise at least two distinct image modules of the plurality of image modules. The lenses may be arranged to allow the images to provide coverage for a spherical field of view of a scene surrounding the apparatus.
    Type: Grant
    Filed: December 29, 2015
    Date of Patent: December 12, 2017
    Assignee: VIDEOSTITCH INC.
    Inventors: Alexander Fink, Nicolas Burtey, George Haber, Stephane Valente
  • Publication number: 20170347219
    Abstract: A system comprising a video display device, an audio output device and a computing device. The computing device may comprise one or more processors configured to (i) determine orientation angles of a spherical video based on an input, (ii) extract a viewport from the spherical video based on the orientation angles, (iii) output the viewport to the video display device, (iv) render a sound field based on the orientation angles and the audio output device and (v) output the sound field to the audio output device. Sound sources that comprise the sound field are adjusted to align with the viewport. The sound sources outside of the viewport are attenuated.
    Type: Application
    Filed: May 27, 2016
    Publication date: November 30, 2017
    Inventors: Lucas McCauley, Stéphane Valente, Alexander Fink, Nicolas Burtey
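    A hypothetical sketch of the attenuation rule in this abstract (the falloff shape and field-of-view handling are illustrative choices, not taken from the application): a sound source keeps full gain while its direction falls inside the viewport and is attenuated as its angular distance from the viewport grows.

      import numpy as np

      def source_gain(src_yaw, src_pitch, view_yaw, view_pitch, fov_deg=90.0):
          """Angles in degrees; returns a gain in [0, 1] for one sound source."""
          def unit(yaw, pitch):
              yaw, pitch = np.radians(yaw), np.radians(pitch)
              return np.array([np.cos(pitch) * np.cos(yaw),
                               np.cos(pitch) * np.sin(yaw),
                               np.sin(pitch)])
          cos_angle = np.clip(np.dot(unit(src_yaw, src_pitch),
                                     unit(view_yaw, view_pitch)), -1.0, 1.0)
          angle = np.degrees(np.arccos(cos_angle))
          if angle <= fov_deg / 2:
              return 1.0                                       # inside the viewport
          return float(max(0.0, np.cos(np.radians(angle - fov_deg / 2))))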
  • Publication number: 20170322396
    Abstract: An apparatus comprising a plurality of lenses and a frame. The lenses may be arranged to provide coverage for a spherical field of view of a scene surrounding the apparatus. The frame may be configured to hold (A) a first subset of the lenses, (B) a second subset of the lenses and (C) a third subset of the lenses. At least one of the lenses in the first subset and at least one of the lenses in the second subset may be neighboring lenses arranged around a periphery of the apparatus. At least one of the lenses in the third subset and at least one of the lenses in the first subset or the second subset may be neighboring lenses. At least two of the neighboring lenses may be oriented to adjust parallax effects for a pre-determined purpose when the spherical field of view is recorded.
    Type: Application
    Filed: May 6, 2016
    Publication date: November 9, 2017
    Inventors: Nicolas Burtey, Alexander Fink, Stéphane Valente
  • Publication number: 20170293461
    Abstract: A system comprising a video source, one or more audio sources and a computing device. The video source may be configured to generate a video signal. The audio sources may be configured to generate audio streams. The computing device may comprise one or more processors configured to (i) transmit a display signal that provides a representation of the video signal to be displayed to a user, (ii) receive a plurality of commands from the user while the user observes the representation of the video signal and (iii) adjust the audio streams in response to the commands. The commands may identify a location of the audio sources in the representation of the video signal. The representation of the video signal may be used as a frame of reference for the location of the audio sources.
    Type: Application
    Filed: April 7, 2016
    Publication date: October 12, 2017
    Inventors: Lucas McCauley, Stéphane Valente, Alexander Fink, Nicolas Burtey
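    A small sketch of how a user command could pin an audio source to the video's frame of reference, as this abstract describes (the equirectangular mapping is an assumption, not stated in the application): a clicked pixel in the displayed representation is converted to a yaw/pitch direction that the corresponding audio stream can then be associated with.

      def pixel_to_direction(x, y, width, height):
          """(x, y) pixel in an equirectangular frame -> (yaw, pitch) in degrees."""
          yaw = (x / width) * 360.0 - 180.0     # -180 at the left edge, +180 at the right
          pitch = 90.0 - (y / height) * 180.0   # +90 at the top row, -90 at the bottom
          return yaw, pitch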
  • Patent number: 9787896
    Abstract: A system comprising a camera and a computing device. The camera may comprise (a) a plurality of capture devices configured to capture images of an environment surrounding the camera to provide a spherical field of view and (b) a first interface. The computing device may comprise (a) a processor and (b) a second interface. The camera may be configured to encode a plurality of video streams based on the captured images. The first interface may be configured to transfer the plurality of video streams to the second interface. The processor may perform a stitching operation on the plurality of video streams to generate a single video signal. The stitching operation may be performed on the plurality of video streams in real time as the plurality of video streams are transferred. The single video signal may be configured to represent an omnidirectional view based on the environment surrounding the camera.
    Type: Grant
    Filed: December 29, 2015
    Date of Patent: October 10, 2017
    Assignee: VIDEOSTITCH INC.
    Inventors: Alexander Fink, Nicolas Burtey, George Haber, Stephane Valente
  • Publication number: 20170244884
    Abstract: A system includes a video source and a computing device. The video source may be configured to generate a plurality of video streams that capture a view of an environment. The computing device generally includes one or more processors configured to (i) perform a stitching operation on the plurality of video streams to generate a video signal representative of a spherical field of view of the environment, (ii) transmit a display signal that represents a projection of the video signal to be displayed to a user utilizing an immersive display, (iii) receive a plurality of commands from the user while the user observes the environment in the immersive display and (iv) adjust a plurality of parameters in one or more of the video source and the stitching operation in real time in response to the commands.
    Type: Application
    Filed: February 23, 2016
    Publication date: August 24, 2017
    Inventors: Nicolas Burtey, Stéphane Valente, Alexander Fink