Patents by Inventor Jean-Charles Bazin
Jean-Charles Bazin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240073376
Abstract: In holographic calling, it is difficult to capture the eyes of a caller due to lighting effects on an artificial reality (XR) headset. However, it can be important to capture the eyes when rendering the caller as they can show emotion, gaze, physical characteristics, etc., that aid in natural communication. Thus, implementations can capture the eyes of the caller using an external image capture device by briefly turning off the lighting effects on the XR headset. Some implementations can trigger the image capture device to capture an image of the eyes by temporal multiplexing in which timers on both the image capture device and the XR headset are synchronized. In other implementations, the image capture device can be an event-based camera that is automatically triggered to capture an image of the eyes based on a detected pixel change caused by deactivation of the lighting effects on the XR headset.
Type: Application
Filed: August 26, 2022
Publication date: February 29, 2024
Inventors: Jean-Charles Bazin, Alexandre Chapiro
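As an illustration, the temporal-multiplexing trigger described in the abstract could be sketched as follows. All timing constants and function names here are hypothetical, not taken from the patent: the idea is only that both devices share a synchronized clock and the camera exposes inside the windows where the headset's lighting effects are off.

```python
# Minimal sketch of temporal multiplexing (assumed timing values):
# the headset turns its lighting effects off periodically, and the
# external camera captures only when a full exposure fits in that window.
LIGHTS_OFF_PERIOD_MS = 100.0   # assumed: lighting toggles every 100 ms
LIGHTS_OFF_WINDOW_MS = 10.0    # assumed: lights stay off for 10 ms


def lights_are_off(t_ms: float) -> bool:
    """True when the shared clock time falls inside a lights-off window."""
    return (t_ms % LIGHTS_OFF_PERIOD_MS) < LIGHTS_OFF_WINDOW_MS


def should_trigger_capture(t_ms: float, exposure_ms: float) -> bool:
    """Trigger only if the whole exposure fits inside the off window."""
    return lights_are_off(t_ms) and lights_are_off(t_ms + exposure_ms)
```

The event-based-camera variant in the abstract would skip the shared clock entirely and react to the brightness drop itself.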
-
Patent number: 11100617
Abstract: Proposed are a deep learning method and apparatus for the automatic upright rectification of VR content. The deep learning method for the automatic upright rectification of VR content according to an embodiment may include inputting a VR image to a neural network and outputting orientation information of the VR image through the trained neural network.
Type: Grant
Filed: October 11, 2019
Date of Patent: August 24, 2021
Assignee: Korea Advanced Institute of Science and Technology
Inventors: Jean-Charles Bazin, Rae Hyuk Jung, Seung Joon Lee
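Once a network outputs orientation information (e.g., pitch and roll of the estimated up-vector), rectification reduces to applying the inverse rotation. The sketch below shows only that geometric step, under the assumption that the orientation is parameterized as pitch/roll angles; the patented network itself is not reproduced here.

```python
import numpy as np


def upright_rotation(pitch_rad: float, roll_rad: float) -> np.ndarray:
    """Build the 3x3 rotation that undoes the predicted pitch and roll,
    mapping the estimated up-vector back to the true vertical."""
    cp, sp = np.cos(-pitch_rad), np.sin(-pitch_rad)
    cr, sr = np.cos(-roll_rad), np.sin(-roll_rad)
    rot_pitch = np.array([[1.0, 0.0, 0.0],
                          [0.0, cp, -sp],
                          [0.0, sp, cp]])
    rot_roll = np.array([[cr, -sr, 0.0],
                         [sr, cr, 0.0],
                         [0.0, 0.0, 1.0]])
    return rot_pitch @ rot_roll
```

For an equirectangular VR image, this rotation would be applied to each pixel's viewing direction before resampling.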
-
Patent number: 10977831
Abstract: Disclosed herein is a camera calibration method based on deep learning including acquiring an image captured by a camera and predicting an intrinsic parameter of the camera by applying, to the acquired image, a neural network module trained to predict the intrinsic parameter.
Type: Grant
Filed: February 15, 2019
Date of Patent: April 13, 2021
Inventors: Jean-Charles Bazin, Oleksandr Bogdan, Viktor Eckstein, Francois Rameau
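A predicted intrinsic parameter such as a focal length is typically assembled into the standard pinhole intrinsic matrix K before use. The helper below is an illustrative sketch of that downstream step only (the function name and the centered-principal-point assumption are ours, not the patent's).

```python
import numpy as np


def intrinsic_matrix(focal_px: float, width: int, height: int) -> np.ndarray:
    """Pinhole intrinsic matrix from a (e.g., network-predicted) focal
    length in pixels, assuming the principal point at the image center."""
    return np.array([
        [focal_px, 0.0, width / 2.0],
        [0.0, focal_px, height / 2.0],
        [0.0, 0.0, 1.0],
    ])
```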
-
Patent number: 10728427
Abstract: Described herein are apparatus, systems and methods for synchronizing a reference video with an input video. A method comprises extracting first motion data from the input video having a first set of frames, extracting second motion data from the reference video having a second set of frames, computing motion descriptors for each frame in the first set of frames and the second set of frames based on the first and second motion data, respectively, and non-linearly mapping the first set of frames to the second set of frames based on the motion descriptors.
Type: Grant
Filed: December 15, 2016
Date of Patent: July 28, 2020
Assignee: Disney Enterprises, Inc.
Inventors: Jean-Charles Bazin, Alexander Sorkine-Hornung
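One plausible realization of the "non-linear mapping" between frame sets is dynamic time warping over the per-frame motion descriptors; the patent does not fix a specific algorithm, so this is a sketch, not the claimed method.

```python
import numpy as np


def dtw_mapping(desc_a: np.ndarray, desc_b: np.ndarray):
    """Monotonic alignment between two per-frame descriptor sequences
    (shape: N x D and M x D) via dynamic time warping."""
    n, m = len(desc_a), len(desc_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(desc_a[i - 1] - desc_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1],
                                 cost[i - 1, j - 1])
    # Backtrack the cheapest warping path from the end.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]
```

Two identical videos map to the diagonal path; a slow-motion section in one video produces a locally stretched (non-linear) mapping.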
-
Patent number: 10726581
Abstract: There is provided a video processing system for use with a video having frames including a first frame and neighboring frames of the first frame. The system includes a memory storing a video processing application, and a processor. The processor is configured to execute the video processing application to sample scene points corresponding to an output pixel of the first frame of the frames of the video, the scene points including alternate observations of a same scene point from the neighboring frames of the first frame of the video, and filter the scene points corresponding to the output pixel to determine a color of the output pixel by calculating a weighted combination of the scene points corresponding to the output pixel.
Type: Grant
Filed: June 18, 2015
Date of Patent: July 28, 2020
Assignee: Disney Enterprises, Inc.
Inventors: Oliver Wang, Marcus Magnor, Felix Klose, Jean-Charles Bazin, Alexander Sorkine Hornung
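The filtering step amounts to a weighted average over alternate observations of the same scene point. The weighting scheme below (a Gaussian on color distance to a reference observation) is one common choice, assumed here for illustration; the patent leaves the weights open.

```python
import numpy as np


def filter_scene_points(observations: np.ndarray, reference: np.ndarray,
                        sigma: float = 10.0) -> np.ndarray:
    """Weighted combination of N RGB observations (shape N x 3) of the
    same scene point, weighting each by color similarity to `reference`."""
    dist2 = np.sum((observations - reference) ** 2, axis=1)
    weights = np.exp(-dist2 / (2.0 * sigma ** 2))
    return weights @ observations / np.sum(weights)
```

Observations that disagree with the reference (e.g., occlusions or misregistrations in neighboring frames) receive near-zero weight and barely affect the output color.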
-
Publication number: 20200118255
Abstract: Proposed are a deep learning method and apparatus for the automatic upright rectification of VR content. The deep learning method for the automatic upright rectification of VR content according to an embodiment may include inputting a VR image to a neural network and outputting orientation information of the VR image through the trained neural network.
Type: Application
Filed: October 11, 2019
Publication date: April 16, 2020
Applicant: Korea Advanced Institute of Science and Technology
Inventors: Jean-Charles Bazin, Rae Hyuk Jung, Seung Joon Lee
-
Patent number: 10580165
Abstract: The present disclosure relates to an apparatus, system and method for processing transmedia content data. More specifically, the disclosure provides for identifying and inserting one item of media content within another item of media content, e.g. inserting a video within a video, such that the first item of media content appears as part of the second item. The invention involves analysing a first visual media item to identify one or more spatial locations to insert the second visual media item within the image data of the first visual media item, detecting characteristics of the one or more identified spatial locations, transforming the second visual media item according to the detected characteristics, and combining the first visual media item and second visual media item by inserting the transformed second visual media item into the first visual media item at the one or more identified spatial locations.
Type: Grant
Filed: September 26, 2017
Date of Patent: March 3, 2020
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Alex Sorkine-Hornung, Simone Meier, Jean-Charles Bazin, Sasha Schriber, Markus Gross, Oliver Wang
-
Publication number: 20200043197
Abstract: Disclosed herein is a camera calibration method based on deep learning including acquiring an image captured by a camera and predicting an intrinsic parameter of the camera by applying, to the acquired image, a neural network module trained to predict the intrinsic parameter.
Type: Application
Filed: February 15, 2019
Publication date: February 6, 2020
Inventors: Jean-Charles Bazin, Oleksandr Bogdan, Viktor Eckstein, Francois Rameau
-
Patent number: 10419669
Abstract: Systems and methods to generate omnistereoscopic panoramic videos are presented herein. Depth information, flow fields, and/or other information may be used to determine interpolated frame images between adjacent frame images. An omnistereoscopic panoramic video may be used in a real-world VR application.
Type: Grant
Filed: January 17, 2017
Date of Patent: September 17, 2019
Assignee: Disney Enterprises, Inc.
Inventors: Alexander Sorkine Hornung, Christopher Schroers, Jean-Charles Bazin
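A minimal sketch of flow-based in-between frame synthesis, as one way the "interpolated frame images" could be produced from adjacent frames and a flow field. This is an assumed, simplified approach (grayscale, nearest-pixel forward splat, linear blend); production systems also handle occlusions and splatting holes.

```python
import numpy as np


def interpolate_frame(frame_a: np.ndarray, frame_b: np.ndarray,
                      flow_ab: np.ndarray, t: float = 0.5) -> np.ndarray:
    """Warp grayscale frame_a forward by t * flow (shape H x W x 2,
    channels = dx, dy) and linearly blend with frame_b."""
    h, w = frame_a.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x2 = np.clip(np.round(xs + t * flow_ab[..., 0]).astype(int), 0, w - 1)
    y2 = np.clip(np.round(ys + t * flow_ab[..., 1]).astype(int), 0, h - 1)
    warped = np.zeros_like(frame_a)
    warped[y2, x2] = frame_a[ys, xs]   # forward splat (no hole filling)
    return (1.0 - t) * warped + t * frame_b
```

Varying t from 0 to 1 yields a dense set of in-between views, which is what allows the omnistereoscopic panorama to be sampled at arbitrary viewing directions.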
-
Publication number: 20190096094
Abstract: The present disclosure relates to an apparatus, system and method for processing transmedia content data. More specifically, the disclosure provides for identifying and inserting one item of media content within another item of media content, e.g. inserting a video within a video, such that the first item of media content appears as part of the second item. The invention involves analysing a first visual media item to identify one or more spatial locations to insert the second visual media item within the image data of the first visual media item, detecting characteristics of the one or more identified spatial locations, transforming the second visual media item according to the detected characteristics, and combining the first visual media item and second visual media item by inserting the transformed second visual media item into the first visual media item at the one or more identified spatial locations.
Type: Application
Filed: September 26, 2017
Publication date: March 28, 2019
Applicants: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Alex Sorkine-Hornung, Simone Meier, Jean-Charles Bazin, Sasha Schriber, Markus Gross, Oliver Wang
-
Publication number: 20180205884
Abstract: Systems and methods to generate omnistereoscopic panoramic videos are presented herein. Depth information, flow fields, and/or other information may be used to determine interpolated frame images between adjacent frame images. An omnistereoscopic panoramic video may be used in a real-world VR application.
Type: Application
Filed: January 17, 2017
Publication date: July 19, 2018
Inventors: Alexander Sorkine Hornung, Christopher Schroers, Jean-Charles Bazin
-
Publication number: 20180176423
Abstract: Described herein are apparatus, systems and methods for synchronizing a reference video with an input video. A method comprises extracting first motion data from the input video having a first set of frames, extracting second motion data from the reference video having a second set of frames, computing motion descriptors for each frame in the first set of frames and the second set of frames based on the first and second motion data, respectively, and non-linearly mapping the first set of frames to the second set of frames based on the motion descriptors.
Type: Application
Filed: December 15, 2016
Publication date: June 21, 2018
Inventors: Jean-Charles Bazin, Alexander Sorkine-Hornung
-
Patent number: 9684953
Abstract: A method for image processing in video conferencing, for correcting the gaze of an interlocutor in an image or a sequence of images captured by at least one real camera, comprises the steps of: the at least one real camera acquiring an original image of the interlocutor; synthesizing a corrected view of the interlocutor's face as seen by a virtual camera, the virtual camera being located on the interlocutor's line of sight and oriented towards the interlocutor; transferring the corrected view of the interlocutor's face from the synthesized view into the original image, thereby generating a final image; and at least one of displaying the final image and transmitting the final image.
Type: Grant
Filed: November 13, 2012
Date of Patent: June 20, 2017
Assignees: ETH Zurich, The Technion Research and Development Foundation Ltd.
Inventors: Claudia Kuster, Tiberiu Popa, Jean-Charles Bazin, Markus Gross, Craig Gotsman
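The transfer step in the abstract, compositing the synthesized gaze-corrected face back into the original frame, can be sketched as a per-pixel alpha blend with a face mask. This is a standard compositing operation assumed for illustration; the patent's actual seam handling may differ.

```python
import numpy as np


def transfer_face(original: np.ndarray, synthesized: np.ndarray,
                  mask: np.ndarray) -> np.ndarray:
    """Composite the synthesized view into the original image.
    original, synthesized: H x W x 3; mask: H x W in [0, 1],
    where 1 keeps the synthesized (gaze-corrected) pixel."""
    alpha = mask[..., None]  # broadcast the mask over color channels
    return alpha * synthesized + (1.0 - alpha) * original
```

A soft (feathered) mask around the face boundary hides the seam between the synthesized face and the untouched background.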
-
Publication number: 20160373717Abstract: There is provided a video processing system for use with a video having frames including a first frame and neighboring frames of the first frame. The system includes a memory storing a video processing application, and a processor. The processor is configured to execute the video processing application to sample scene points corresponding to an output pixel of the first frame of the frames of the video, the scene points including alternate observations of a same scene point from the neighboring frames of the first frame of the video, and filter the scene points corresponding to the output pixel to determine a color of the output pixel by calculating a weighted combination of the scene points corresponding to the output pixel.Type: ApplicationFiled: June 18, 2015Publication date: December 22, 2016Inventors: Oliver Wang, Marcus Magnor, Felix Klose, Jean-Charles Bazin, Alexander Sorkine Hornung
-
Publication number: 20150009277
Abstract: A method for image processing in video conferencing, for correcting the gaze of an interlocutor in an image or a sequence of images captured by at least one real camera, comprises the steps of: the at least one real camera acquiring an original image of the interlocutor; synthesizing a corrected view of the interlocutor's face as seen by a virtual camera, the virtual camera being located on the interlocutor's line of sight and oriented towards the interlocutor; transferring the corrected view of the interlocutor's face from the synthesized view into the original image, thereby generating a final image; and at least one of displaying the final image and transmitting the final image.
Type: Application
Filed: November 13, 2013
Publication date: January 8, 2015
Applicants: ETH Zürich, The Technion Research and Development Foundation Ltd.
Inventors: Claudia Kuster, Tiberiu Poppa, Jean-Charles Bazin, Markus Gross, Craig Gotsman