Patents by Inventor Johannes Peter Kopf
Johannes Peter Kopf has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11961201
Abstract: In one embodiment, a method includes accessing multiple 3D photos to be concurrently displayed through multiple frames positioned in a virtual space, each of the 3D photos having an optimal viewing point in the virtual space, and determining a reference point based on a head pose of a viewer within the virtual space. The method may further include adjusting each 3D photo by rotating the 3D photo so that the optimal viewing point of the 3D photo points at the reference point, translating the rotated 3D photo toward the reference point, and non-uniformly scaling the rotated and translated 3D photo based on a scaling factor determined using the reference point and a position of the frame through which the 3D photo is to be viewed. The method may further include rendering an image comprising the adjusted multiple 3D photos as seen through the multiple frames.
Type: Grant
Filed: March 7, 2022
Date of Patent: April 16, 2024
Assignee: Meta Platforms Technologies, LLC
Inventors: Johannes Peter Kopf, Xuejian Rong, Tuotuo Li, Ocean Quigley
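The per-photo adjustment described in this abstract can be sketched as a small geometric routine. This is only an illustration: the function names, the `alpha` translation fraction, and the distance-ratio scaling factor are assumptions, since the claim specifies the three steps (rotate toward the reference point, translate toward it, scale using the reference point and frame position) but not their exact parameterization.

```python
import numpy as np

def look_at_rotation(source, target):
    """Rotation whose +z column points from `source` toward `target`.
    (Hypothetical helper; the patent does not specify a parameterization.)"""
    z = target - source
    z = z / np.linalg.norm(z)
    up = np.array([0.0, 1.0, 0.0])
    x = np.cross(up, z)
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)
    return np.stack([x, y, z], axis=1)

def adjust_photo(optimal_view, reference_point, frame_pos, alpha=0.5):
    """Return a rotation, translation, and scale for one 3D photo:
    rotate so the optimal viewing point faces the reference point,
    translate part of the way toward it, and scale by the ratio of
    distances to the frame. `alpha` and the scale rule are illustrative."""
    R = look_at_rotation(optimal_view, reference_point)
    t = alpha * (reference_point - optimal_view)
    s = np.linalg.norm(reference_point - frame_pos) / np.linalg.norm(optimal_view - frame_pos)
    return R, t, s
```

The same three quantities would then be applied to every photo before the final render pass over all frames.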
-
Patent number: 11748940
Abstract: In one embodiment, a computing system may determine a view position, a view direction, and a time with respect to a scene. The system may access a spatiotemporal representation of the scene generated based on (1) a monocular video including images each capturing at least a portion of the scene at a corresponding time and (2) depth values of the portion of the scene captured by each image. The system may generate an image based on the view position, the view direction, the time, and the spatiotemporal representation. A pixel value of the image corresponding to the view position may be determined based on volume densities and color values at sampling locations along the view direction and at the time in the spatiotemporal representation. The system may output the image to a display, representing the scene at the time as viewed from the view position and in the view direction.
Type: Grant
Filed: October 11, 2021
Date of Patent: September 5, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: Wenqi Xian, Jia-Bin Huang, Johannes Peter Kopf, Changil Kim
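Determining a pixel value from volume densities and colors along a viewing ray is the standard volume-rendering compositing step. The sketch below shows that accumulation; the exact weighting used by the patented system is not stated in the abstract, so treat the exponential-opacity form as an assumption.

```python
import numpy as np

def render_pixel(densities, colors, deltas):
    """Composite a pixel color from samples along one viewing ray.
    densities: (N,) volume density at each sample
    colors:    (N, 3) RGB at each sample
    deltas:    (N,) spacing between consecutive samples"""
    alphas = 1.0 - np.exp(-densities * deltas)                       # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))   # transmittance to each sample
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)
```

A fully opaque front sample dominates the result, so nearer surfaces correctly occlude farther ones.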
-
Publication number: 20230252599
Abstract: Each image in a sequence of images includes three-dimensional locations of object features depicted in the image and a first camera position of the camera when the image was captured. A gap is detected between the first camera positions associated with a first continuous subset and the first camera positions associated with a second continuous subset, and the first camera positions associated with the second continuous subset are adjusted to close the gap. A view path for a virtual camera is determined based on the first camera positions and the adjusted first camera positions. Second camera positions are determined for the virtual camera; for each of the second camera positions, one of the images in the sequence is selected and warped using its first camera position, the second camera position, and the three-dimensional locations of object features depicted in the selected image. The sequence of warped images is outputted.
Type: Application
Filed: April 11, 2023
Publication date: August 10, 2023
Inventors: Andrei Viktorovich Chtcherbatchenko, Francis Yunfeng Ge, Bo Yin, Shi Chen, Fabian Langguth, Johannes Peter Kopf, Suhib Fakhri Mahmod Alsisan, Richard Szeliski
-
Patent number: 11651473
Abstract: In one embodiment, a method includes generating an outputted sequence of warped images from a captured sequence of images. Using this captured sequence of images, a computing system may determine one or more three-dimensional locations of object features and a corresponding camera position for each image in the captured sequence of images. Utilizing the camera positions for each image, the computing system may determine a view path representing the perspective of a virtual camera. The computing system may identify one or more virtual camera positions for the virtual camera located on the view path, and subsequently warp one or more images from the sequence of captured images to represent the perspective of the virtual camera at each of the respective virtual camera positions. This results in a sequence of warped images that may be outputted for viewing and interaction on a client device.
Type: Grant
Filed: May 22, 2020
Date of Patent: May 16, 2023
Assignee: Meta Platforms, Inc.
Inventors: Andrei Viktorovich Chtcherbatchenko, Francis Yunfeng Ge, Bo Yin, Shi Chen, Fabian Langguth, Johannes Peter Kopf, Suhib Fakhri Mahmod Alsisan, Richard Szeliski
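The view path derived from the per-image camera positions can be illustrated with a simple smoothing pass over the recovered positions. The moving-average filter below is an assumption for illustration only; the patent does not prescribe a particular way of fitting the virtual camera's path.

```python
import numpy as np

def smooth_view_path(cam_positions, window=5):
    """Derive a virtual-camera view path by moving-average smoothing of
    the recovered per-image camera positions (illustrative filter).
    cam_positions: (N, 3) array of camera positions, one per image."""
    cam_positions = np.asarray(cam_positions, dtype=float)
    n = len(cam_positions)
    path = np.empty_like(cam_positions)
    for i in range(n):
        lo, hi = max(0, i - window // 2), min(n, i + window // 2 + 1)
        path[i] = cam_positions[lo:hi].mean(axis=0)  # average over the window
    return path
```

Virtual camera positions would then be sampled along this path, and the nearest captured image warped to each one.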
-
Publication number: 20220383602
Abstract: In one embodiment, a method includes accessing multiple 3D photos to be concurrently displayed through multiple frames positioned in a virtual space, each of the 3D photos having an optimal viewing point in the virtual space, and determining a reference point based on a head pose of a viewer within the virtual space. The method may further include adjusting each 3D photo by rotating the 3D photo so that the optimal viewing point of the 3D photo points at the reference point, translating the rotated 3D photo toward the reference point, and non-uniformly scaling the rotated and translated 3D photo based on a scaling factor determined using the reference point and a position of the frame through which the 3D photo is to be viewed. The method may further include rendering an image comprising the adjusted multiple 3D photos as seen through the multiple frames.
Type: Application
Filed: March 7, 2022
Publication date: December 1, 2022
Inventors: Johannes Peter Kopf, Xuejian Rong, Tuotuo Li, Ocean Quigley
-
Publication number: 20210366075
Abstract: In one embodiment, a method includes generating an outputted sequence of warped images from a captured sequence of images. Using this captured sequence of images, a computing system may determine one or more three-dimensional locations of object features and a corresponding camera position for each image in the captured sequence of images. Utilizing the camera positions for each image, the computing system may determine a view path representing the perspective of a virtual camera. The computing system may identify one or more virtual camera positions for the virtual camera located on the view path, and subsequently warp one or more images from the sequence of captured images to represent the perspective of the virtual camera at each of the respective virtual camera positions. This results in a sequence of warped images that may be outputted for viewing and interaction on a client device.
Type: Application
Filed: May 22, 2020
Publication date: November 25, 2021
Inventors: Andrei Chtcherbatchenko, Francis Yunfeng Ge, Bo Yin, Shi Chen, Fabian Langguth, Johannes Peter Kopf, Suhib Fakhri Mahmod Alsisan, Richard Szeliski
-
Patent number: 10805253
Abstract: Systems, methods, and non-transitory computer readable media are configured to detect a concept reflected in a first media content item to which a user is provided access. It is determined that the concept has a threshold level of relevance to the user. The concept is associated with an element that upon selection causes a transition to a second media content item to which the user is provided access, the second media content item reflecting the concept. The element is presented in the first media content item for the user.
Type: Grant
Filed: December 30, 2016
Date of Patent: October 13, 2020
Assignee: Facebook, Inc.
Inventors: John Samuel Barnett, Johannes Peter Kopf
-
Patent number: 10805662
Abstract: A server device and method are provided for use in predictive server-side rendering of scenes based on client-side user input. The server device may include a processor and a storage device holding instructions for an application program executable by the processor to receive, at the application program, a current navigation input in a stream of navigation inputs from a client device over a network, calculate a predicted future navigation input based on the current navigation input and a current application state of the application program, render a future scene based on the predicted future navigation input to a rendering surface, and send the rendering surface to the client device over the network.
Type: Grant
Filed: November 25, 2019
Date of Patent: October 13, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: David Chiyuan Chu, Eduardo Alberto Cuervo Laffaye, Johannes Peter Kopf, Alastair Wolman, Yury Degtyarev, Kyungmin Lee, Sergey Grizan
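The "predicted future navigation input" step can be illustrated with the simplest possible predictor: constant-velocity extrapolation over the recent input stream. This is a sketch only; the patented predictor also conditions on the current application state, which is omitted here.

```python
import numpy as np

def predict_navigation(inputs, horizon=1):
    """Extrapolate the next navigation input from the recent input stream,
    assuming constant velocity between the last two inputs (illustrative;
    the patent's predictor also uses the current application state).
    inputs: sequence of navigation vectors, oldest first."""
    inputs = np.asarray(inputs, dtype=float)
    if len(inputs) < 2:
        return inputs[-1]                    # nothing to extrapolate from
    velocity = inputs[-1] - inputs[-2]
    return inputs[-1] + horizon * velocity
```

The server would render the scene for this predicted input, then ship the rendering surface so the client can correct for any small prediction error locally.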
-
Patent number: 10789723
Abstract: In one embodiment, a method includes generating a depth map for a reference image and generating a three-dimensional (3D) model for a plurality of objects in the reference image based on the depth map. The method additionally includes determining, out of the objects in the 3D model, a background object having a boundary adjacent to a foreground object. The method also includes determining that at least a portion of a surface of the background object is hidden by the foreground object and extending, in the 3D model, the surface of the background object to include the portion hidden by the foreground object. The method further includes in-painting pixels of the extended surface of the background object with pixels that approximate the portion of the surface of the background object hidden by the foreground object.
Type: Grant
Filed: April 18, 2018
Date of Patent: September 29, 2020
Assignee: Facebook, Inc.
Inventors: Johannes Peter Kopf, Brian Dolhansky, Suhib Fakhri Mahmod Alsisan
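The final in-painting step can be sketched as filling hidden background pixels from the visible part of the same background surface. The mean-color fill below is a deliberately crude stand-in, assumed for illustration; a real in-painter would synthesize texture rather than a flat color.

```python
import numpy as np

def extend_and_inpaint(image, background_mask, hidden_mask):
    """Fill pixels of the background surface hidden by the foreground with
    an approximation drawn from visible background pixels (here their mean
    color; the patented system's in-painting is more sophisticated).
    image:           (H, W, 3) float RGB
    background_mask: (H, W) bool, visible background pixels
    hidden_mask:     (H, W) bool, background pixels occluded by foreground"""
    out = image.copy()
    fill = image[background_mask].mean(axis=0)  # average visible background color
    out[hidden_mask] = fill
    return out
```

In the 3D model this filled surface sits behind the foreground object, so it only becomes visible when the viewpoint shifts.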
-
Patent number: 10750139
Abstract: A head mounted display device including a processor configured to compute a rendered rendering surface of a predicted scene having a predicted user viewpoint, the predicted user viewpoint being a prediction of a viewpoint that a user will have at a point in time that was predicted for the user of the head mounted display device prior to the point in time, receive, from the user input device, a subsequent user navigation input near the point in time in the stream of user input, determine an actual user viewpoint based on the subsequent user navigation input, determine a user viewpoint misprediction based on the predicted user viewpoint and the actual user viewpoint, and reconstruct a viewport for the actual user viewpoint from the rendered rendering surface.
Type: Grant
Filed: May 23, 2017
Date of Patent: August 18, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: David Chiyuan Chu, Eduardo Alberto Cuervo Laffaye, Johannes Peter Kopf, Alastair Wolman, Yury Degtyarev, Kyungmin Lee, Sergey Grizan
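Reconstructing a viewport from an oversized rendering surface can be illustrated as re-cropping the surface, shifted by the measured viewpoint error. This is a one-axis (yaw-only) simplification assumed for illustration; the patented reconstruction handles general viewpoint mispredictions, not just a horizontal crop shift.

```python
import numpy as np

def reconstruct_viewport(surface, viewport_w, predicted_yaw, actual_yaw, px_per_deg):
    """Re-crop a viewport from a rendering surface that is wider than the
    viewport, shifting the crop by the yaw misprediction (illustrative
    one-axis simplification of the patent's viewport reconstruction).
    surface: (H, W) or (H, W, C) image, W > viewport_w."""
    w = surface.shape[1]
    error_px = int(round((actual_yaw - predicted_yaw) * px_per_deg))
    start = (w - viewport_w) // 2 + error_px
    start = max(0, min(w - viewport_w, start))  # stay within the rendered margin
    return surface[:, start:start + viewport_w]
```

If the misprediction exceeds the rendered margin, the crop clamps at the edge, which is where visible artifacts would appear in practice.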
-
Patent number: 10726595
Abstract: Systems, methods, and non-transitory computer readable media are configured to detect a concept reflected in a first media content item to which a user is provided access. It is determined that the concept has a threshold level of relevance to the user. The concept is associated with an element that upon selection causes a transition to a second media content item to which the user is provided access, the second media content item reflecting the concept. The element is presented in the first media content item for the user.
Type: Grant
Filed: December 30, 2016
Date of Patent: July 28, 2020
Assignee: Facebook, Inc.
Inventors: John Samuel Barnett, Johannes Peter Kopf
-
Patent number: 10607318
Abstract: Systems, methods, and non-transitory computer-readable media can generate an initial alpha mask for an image based on machine learning techniques. A plurality of uncertain pixels is defined in the initial alpha mask. For each uncertain pixel in the plurality of uncertain pixels, a binary value is assigned based on a nearest certain neighbor determination.
Type: Grant
Filed: December 20, 2017
Date of Patent: March 31, 2020
Assignee: Facebook, Inc.
Inventors: Jason George McHugh, Michael F. Cohen, Johannes Peter Kopf, Piotr Dollar
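The nearest-certain-neighbor rule from this abstract is simple enough to sketch directly. The brute-force search below is for illustration; a production system would presumably use a distance transform rather than an O(uncertain x certain) scan.

```python
import numpy as np

def resolve_uncertain(alpha, uncertain_mask):
    """Assign each uncertain pixel the value of its nearest certain
    neighbor (brute-force sketch of the abstract's rule).
    alpha:          (H, W) mask with binary values at certain pixels
    uncertain_mask: (H, W) bool, True where the pixel is uncertain"""
    certain = np.argwhere(~uncertain_mask)
    out = alpha.copy()
    for y, x in np.argwhere(uncertain_mask):
        d2 = ((certain - [y, x]) ** 2).sum(axis=1)  # squared distances
        cy, cx = certain[np.argmin(d2)]             # nearest certain pixel
        out[y, x] = alpha[cy, cx]
    return out
```

After this pass every pixel in the alpha mask is binary, which is what downstream compositing typically expects.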
-
Publication number: 20200092599
Abstract: A server device and method are provided for use in predictive server-side rendering of scenes based on client-side user input. The server device may include a processor and a storage device holding instructions for an application program executable by the processor to receive, at the application program, a current navigation input in a stream of navigation inputs from a client device over a network, calculate a predicted future navigation input based on the current navigation input and a current application state of the application program, render a future scene based on the predicted future navigation input to a rendering surface, and send the rendering surface to the client device over the network.
Type: Application
Filed: November 25, 2019
Publication date: March 19, 2020
Applicant: Microsoft Technology Licensing, LLC
Inventors: David Chiyuan Chu, Eduardo Alberto Cuervo Laffaye, Johannes Peter Kopf, Alastair Wolman, Yury Degtyarev, Kyungmin Lee, Sergey Grizan
-
Patent number: 10499090
Abstract: Systems, methods, and non-transitory computer readable media are configured to detect a concept reflected in a first media content item to which a user is provided access. It is determined that the concept has a threshold level of relevance to the user. The concept is associated with an element that upon selection causes a transition to a second media content item to which the user is provided access, the second media content item reflecting the concept. The element is presented in the first media content item for the user.
Type: Grant
Filed: December 30, 2016
Date of Patent: December 3, 2019
Assignee: Facebook, Inc.
Inventors: John Samuel Barnett, Johannes Peter Kopf
-
Patent number: 10491941
Abstract: A server device and method are provided for use in predictive server-side rendering of scenes based on client-side user input. The server device may include a processor and a storage device holding instructions for an application program executable by the processor to receive, at the application program, a current navigation input in a stream of navigation inputs from a client device over a network, calculate a predicted future navigation input based on the current navigation input and a current application state of the application program, render a future scene based on the predicted future navigation input to a rendering surface, and send the rendering surface to the client device over the network.
Type: Grant
Filed: August 30, 2017
Date of Patent: November 26, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: David Chiyuan Chu, Eduardo Alberto Cuervo Laffaye, Johannes Peter Kopf, Alastair Wolman, Yury Degtyarev, Kyungmin Lee, Sergey Grizan
-
Patent number: 10425582
Abstract: An image processing system generates 360-degree stabilized videos with higher robustness, speed, and smoothing ability using a hybrid 3D-2D stabilization model. The image processing system first receives input video data (e.g., 360-degree video data) for rotation stabilization. After tracking feature points through the input video data, the image processing system determines key frames and estimates rotations of key frames using 3D reasoning based on the tracked feature points. The image processing system also optimizes inner frames between key frames using a 2D analysis based on the estimated key frame rotations. After the 3D reasoning and the 2D analysis, the image processing system may reapply a smoothed version of the raw rotations to preserve desirable rotations included in the original input video data, and generates a stabilized version of the input video data (e.g., a 360-degree stabilized video).
Type: Grant
Filed: August 25, 2016
Date of Patent: September 24, 2019
Assignee: Facebook, Inc.
Inventor: Johannes Peter Kopf
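The "reapply a smoothed version of raw rotations" step can be sketched with windowed quaternion averaging, a common approximation when neighboring rotations are close. The window size and the averaging-plus-renormalization scheme are assumptions; the patent does not specify the smoother.

```python
import numpy as np

def smooth_rotations(quats, window=9):
    """Smooth a sequence of unit quaternions by windowed averaging and
    renormalization (valid approximation for nearby rotations; the
    patent's exact smoother is not specified).
    quats: (N, 4) array of unit quaternions, one per frame."""
    quats = np.array(quats, dtype=float)  # copy; we may flip signs below
    # Hemisphere alignment: q and -q are the same rotation, so flip signs
    # to keep consecutive quaternions in the same half-space for averaging.
    for i in range(1, len(quats)):
        if np.dot(quats[i], quats[i - 1]) < 0:
            quats[i] = -quats[i]
    out = np.empty_like(quats)
    half = window // 2
    for i in range(len(quats)):
        lo, hi = max(0, i - half), min(len(quats), i + half + 1)
        q = quats[lo:hi].mean(axis=0)
        out[i] = q / np.linalg.norm(q)    # project back onto unit sphere
    return out
```

Applying the smoothed rotation per frame removes jitter while preserving the intentional camera motion that survives the low-pass filter.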
-
Publication number: 20180300851
Abstract: A reactive profile picture brings a profile image to life by displaying short video segments of the target user expressing a relevant emotion in reaction to an action by a viewing user that relates to content associated with the target user in an online system such as a social media web site. The viewing user therefore experiences a real-time reaction in a manner similar to a face-to-face interaction. The reactive profile picture can be automatically generated from either a video input of the target user or from a single input image of the target user.
Type: Application
Filed: April 13, 2018
Publication date: October 18, 2018
Inventors: Hadar Elor, Michael F. Cohen, Johannes Peter Kopf
-
Publication number: 20180302612
Abstract: To enable better sharing and preservation of immersive experiences, a graphics system reconstructs a three-dimensional scene from a set of images of the scene taken from different vantage points. The system processes each image to extract depth information therefrom and then stitches the images (both color and depth information) into a multi-layered panorama that includes at least front and back surface layers. The front and back surface layers are then merged to remove redundancies and create connections between neighboring pixels that are likely to represent the same object, while removing connections between neighboring pixels that are not. The resulting layered panorama with depth information can be rendered using a virtual reality (VR) system, a mobile device, or other computing and display platforms using standard rendering techniques, to enable three-dimensional viewing of the scene.
Type: Application
Filed: June 26, 2018
Publication date: October 18, 2018
Inventors: Johannes Peter Kopf, Lars Peter Johannes Hedman, Richard Szeliski
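The connect-or-disconnect decision between neighboring pixels can be sketched as a depth-continuity test: keep a link only when the two depths are close enough to plausibly lie on the same surface. The threshold form and its default value are assumptions for illustration; the abstract only says connections are created or removed based on whether neighbors likely represent the same object.

```python
import numpy as np

def connect_neighbors(depth, max_disparity=0.05):
    """Decide which horizontally and vertically adjacent pixels of a
    layered panorama stay connected, based on depth continuity
    (illustrative threshold test).
    depth: (H, W) per-pixel depth map.
    Returns (right, down): bool arrays of shape (H, W-1) and (H-1, W)."""
    right = np.abs(depth[:, 1:] - depth[:, :-1]) < max_disparity
    down = np.abs(depth[1:, :] - depth[:-1, :]) < max_disparity
    return right, down
```

Disconnected links become open edges in the layered mesh, which is what lets a back-surface layer show through when the viewpoint moves.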
-
Publication number: 20180300534
Abstract: A reactive profile picture brings a profile image to life by displaying short video segments of the target user expressing a relevant emotion in reaction to an action by a viewing user that relates to content associated with the target user in an online system such as a social media web site. The viewing user therefore experiences a real-time reaction in a manner similar to a face-to-face interaction. The reactive profile picture can be automatically generated from either a video input of the target user or from a single input image of the target user.
Type: Application
Filed: April 13, 2018
Publication date: October 18, 2018
Inventors: Hadar Elor, Michael F. Cohen, Johannes Peter Kopf
-
Patent number: 10038894
Abstract: To enable better sharing and preservation of immersive experiences, a graphics system reconstructs a three-dimensional scene from a set of images of the scene taken from different vantage points. The system processes each image to extract depth information therefrom and then stitches the images (both color and depth information) into a multi-layered panorama that includes at least front and back surface layers. The front and back surface layers are then merged to remove redundancies and create connections between neighboring pixels that are likely to represent the same object, while removing connections between neighboring pixels that are not. The resulting layered panorama with depth information can be rendered using a virtual reality (VR) system, a mobile device, or other computing and display platforms using standard rendering techniques, to enable three-dimensional viewing of the scene.
Type: Grant
Filed: April 17, 2017
Date of Patent: July 31, 2018
Assignee: Facebook, Inc.
Inventors: Johannes Peter Kopf, Lars Peter Johannes Hedman, Richard Szeliski