Patents by Inventor Tobias RICK

Tobias RICK has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240104872
    Abstract: Various implementations provide a view of a 3D environment including a portal for viewing a stereo item (e.g., a photo or video) positioned a distance behind the portal. One or more visual effects are provided based on the texture of one or more portions of the stereo item, e.g., the texture at cutoff or visible edges of the stereo item. The effects change the appearance of the stereo item or of the portal itself, e.g., mitigating visual comfort issues by minimizing window violations or otherwise enhancing the viewing experience. Other implementations provide a view of a 3D environment including an immersive view of a stereo item without using a portal. In that case, a visual effect may be provided to partially obscure the surrounding 3D environment.
    Type: Application
    Filed: September 21, 2023
    Publication date: March 28, 2024
    Inventors: Bryce L. Schmidtchen, Bryan Cline, Charles O. Goddard, Michael I. Weinstein, Tsao-Wei Huang, Tobias Rick, Vedant Saran, Alexander Menzies, Alexandre Da Veiga
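
As an illustration of the kind of edge effect described in the abstract above, the following is a minimal sketch, not the patented method, of fading a stereo layer's texture toward the edges where the portal cuts it off, one common way to soften stereo window violations. The image representation, feather width, and function name are assumptions for illustration only.

```python
import numpy as np

def feather_portal_edges(rgba: np.ndarray, feather_px: int = 24) -> np.ndarray:
    """Fade the alpha of one eye's stereo layer toward its cutoff edges.

    rgba: H x W x 4 float array in [0, 1] for one eye's view of the stereo item.
    feather_px: assumed width of the fade region, in pixels.
    """
    h, w, _ = rgba.shape
    ys = np.arange(h)[:, None]
    xs = np.arange(w)[None, :]
    # Distance (in pixels) from each pixel to the nearest image border.
    edge_dist = np.minimum(np.minimum(ys, h - 1 - ys), np.minimum(xs, w - 1 - xs))
    # Ramp alpha from 0 at the border up to 1 at feather_px and beyond.
    falloff = np.clip(edge_dist / float(feather_px), 0.0, 1.0)
    out = rgba.copy()
    out[..., 3] *= falloff
    return out
```
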
  • Publication number: 20240096013
    Abstract: Various implementations disclosed herein include devices, systems, and methods that generate a three-dimensional (3D) model based on a selected subset of images and depth data corresponding to each image of the subset. For example, a process may include acquiring sensor data during movement of a device in a physical environment that includes an object, the sensor data including images of the physical environment captured via a camera on the device, selecting a subset of the images by assessing them for motion-based defects using device motion and depth data, and generating a 3D model of the object based on the selected subset of the images and the depth data corresponding to each image of the selected subset.
    Type: Application
    Filed: November 22, 2023
    Publication date: March 21, 2024
    Inventors: Rafael Felipe Veiga Saracchini, Tobias Rick, Zachary Z. Becker
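
The frame-selection step in the abstract above lends itself to a small sketch. The following is a hypothetical, simplified version: it scores each frame by a motion-blur proxy derived from device motion (gyroscope rate times exposure) and mean depth, and keeps frames below a threshold. The Frame fields, focal length, and thresholds are assumptions, not details from the publication.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    image_id: int
    angular_speed: float   # rad/s from device motion (e.g., gyroscope); assumed available
    exposure_s: float      # exposure time in seconds
    mean_depth_m: float    # mean scene depth for this frame; assumed available

def select_sharp_frames(frames: List[Frame],
                        max_score: float = 2.0,
                        focal_px: float = 1500.0) -> List[Frame]:
    """Keep frames whose predicted motion blur is small.

    Approximates blur (in pixels) as focal_px * angular_speed * exposure_s and
    penalizes frames captured very close to the object, where translation
    contributes more blur. All constants here are illustrative assumptions.
    """
    selected = []
    for f in frames:
        blur_px = focal_px * f.angular_speed * f.exposure_s
        # Closer frames amplify translational blur; scale the score up for near content.
        score = blur_px * (1.0 + 1.0 / max(f.mean_depth_m, 0.1))
        if score <= max_score:
            selected.append(f)
    return selected
```
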
  • Patent number: 11935187
    Abstract: Various implementations disclosed herein include devices, systems, and methods that generate a three-dimensional (3D) model based on a selected subset of images and depth data corresponding to each image of the subset. For example, a process may include acquiring sensor data during movement of a device in a physical environment that includes an object, the sensor data including images of the physical environment captured via a camera on the device, selecting a subset of the images by assessing them for motion-based defects using device motion and depth data, and generating a 3D model of the object based on the selected subset of the images and the depth data corresponding to each image of the selected subset.
    Type: Grant
    Filed: January 10, 2023
    Date of Patent: March 19, 2024
    Inventors: Rafael Saracchini, Tobias Rick, Zachary Z. Becker
  • Publication number: 20240062488
    Abstract: Various implementations disclosed herein include devices, systems, and methods that generate a three-dimensional (3D) model of an object based on images and tracked positions of a device during acquisition of the images. For example, a process may include acquiring sensor data during movement of the device in a physical environment that includes an object, the sensor data including images of the physical environment acquired via a camera on the device, identifying the object in at least some of the images, tracking positions of the device during acquisition of the images based on identifying the object in those images, the positions locating the device with respect to a coordinate system defined by the position and orientation of the object, and generating a 3D model of the object based on the images and the positions of the device during their acquisition.
    Type: Application
    Filed: November 1, 2023
    Publication date: February 22, 2024
    Inventors: Thorsten Gernoth, Chen Huang, Onur C. Hamsici, Shuo Feng, Hao Tang, Tobias Rick
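
The abstract above describes tracking device positions relative to a coordinate system defined by the object's position and orientation. A minimal sketch of that re-referencing step, assuming poses are available as 4x4 homogeneous transforms, might look like this; the function and representation are illustrative, not the patented implementation.

```python
from typing import List
import numpy as np

def world_to_object_poses(device_poses_world: List[np.ndarray],
                          object_pose_world: np.ndarray) -> List[np.ndarray]:
    """Re-express device poses in a coordinate system anchored to the object.

    Each pose is a 4x4 homogeneous transform. The object's pose in the world
    defines the new origin and axes, so each device pose becomes
    T_object_device = inv(T_world_object) @ T_world_device.
    """
    object_from_world = np.linalg.inv(object_pose_world)
    return [object_from_world @ pose for pose in device_poses_world]
```
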
  • Patent number: 11875455
    Abstract: Various implementations disclosed herein include devices, systems, and methods that generate a three-dimensional (3D) model based on a selected subset of images and depth data corresponding to each image of the subset. For example, a process may include acquiring sensor data during movement of a device in a physical environment that includes an object, the sensor data including images of the physical environment captured via a camera on the device, selecting a subset of the images by assessing them for motion-based defects using device motion and depth data, and generating a 3D model of the object based on the selected subset of the images and the depth data corresponding to each image of the selected subset.
    Type: Grant
    Filed: January 10, 2023
    Date of Patent: January 16, 2024
    Inventors: Rafael Saracchini, Tobias Rick, Zachary Z. Becker
  • Publication number: 20240007607
    Abstract: Various implementations disclosed herein include devices, systems, and methods that determine and provide a transition (optionally including a transition effect) between different types of views of three-dimensional (3D) content. For example, a process may include obtaining a 3D content item, providing a first view of the 3D content item within a 3D environment, determining to transition from the first view to a second view of the 3D content item based on a criterion, and providing the second view of the 3D content item within the 3D environment, where the left eye view and the right eye view of the second view are based on at least one of the 3D content item's left eye content and right eye content.
    Type: Application
    Filed: September 13, 2023
    Publication date: January 4, 2024
    Inventors: Alexandre Da Veiga, Alexander Menzies, Tobias Rick
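
A hypothetical sketch of the view transition the abstract above describes: a criterion (here, viewer distance, which is an assumption, since the publication does not specify the criterion) triggers the switch, and a smoothstep weight drives an optional cross-fade effect between the first and second views.

```python
def should_transition(viewer_distance_m: float, threshold_m: float = 1.0) -> bool:
    """Example criterion (assumed): switch to the second, more immersive view
    when the viewer moves within threshold_m of the 3D content item."""
    return viewer_distance_m < threshold_m

def transition_blend(elapsed_s: float, duration_s: float = 0.5) -> float:
    """Smoothstep weight in [0, 1] for cross-fading between views:
    0 shows only the first view, 1 only the second."""
    t = min(max(elapsed_s / duration_s, 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)
```
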
  • Publication number: 20230419593
    Abstract: Various implementations disclosed herein include devices, systems, and methods that present views of media objects using different viewing states determined based on context. In some implementations, a view of a 3D environment is presented. Then, a context associated with viewing one or more media objects within the 3D environment is determined, the media objects being associated with data for providing an appearance of depth within them. Based on the context, a viewing state is determined for viewing a media object of the one or more media objects within the 3D environment, the viewing state defining whether the media object will be presented as a planar object or with depth within the media object. In accordance with a determination that the viewing state is a first viewing state, the media object is presented within the 3D environment using its associated data for providing the appearance of depth.
    Type: Application
    Filed: September 11, 2023
    Publication date: December 28, 2023
    Inventors: Alexandre Da Veiga, Tobias Rick, Timothy R. Pease
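
To make the context-driven choice in the abstract above concrete, here is a hedged sketch that picks a viewing state from a few plausible context signals. The signals, thresholds, and state names are invented for illustration; the publication does not specify them.

```python
from dataclasses import dataclass

@dataclass
class ViewingContext:
    # Hypothetical context signals; the publication does not name these exact inputs.
    distance_m: float        # viewer distance to the media object
    off_axis_deg: float      # angle between the viewer and the object's facing direction
    objects_in_view: int     # how many media objects are currently visible

def choose_viewing_state(ctx: ViewingContext) -> str:
    """Return "depth" to present the media object using its depth data, else "planar"."""
    if ctx.objects_in_view > 1:
        return "planar"          # e.g., a gallery of items is shown flat
    if ctx.off_axis_deg > 30.0 or ctx.distance_m > 3.0:
        return "planar"          # oblique or distant viewing falls back to 2D
    return "depth"
```
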
  • Publication number: 20230403386
    Abstract: Various implementations disclosed herein include devices, systems, and methods that provide a view of a three-dimensional (3D) environment that includes a projection of a 3D image, such as multi-directional stereo image or video content. For example, a process may include obtaining a 3D image comprising a stereoscopic image pair with left eye content corresponding to a left eye viewpoint and right eye content corresponding to a right eye viewpoint, generating a projection of the 3D image within a 3D environment by projecting portions of the 3D image to form a shape within the 3D environment, the shape based on an angle of view of the 3D image, where the 3D environment includes additional content separate from the 3D image, and providing a view of the 3D environment including the projection of the 3D image.
    Type: Application
    Filed: August 28, 2023
    Publication date: December 14, 2023
    Inventors: Alexandre Da Veiga, Tobias Rick
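
The abstract above describes projecting the 3D image onto a shape whose extent follows the image's angle of view. A minimal sketch of generating such a shape, assuming a sphere section parameterized by horizontal and vertical field of view, is shown below; the mesh resolution, radius, and orientation are arbitrary illustrative choices rather than details from the publication.

```python
import math

def partial_sphere_mesh(h_fov_deg: float, v_fov_deg: float,
                        radius: float = 2.0, cols: int = 32, rows: int = 16):
    """Vertices and UVs for a sphere section whose angular extent matches the
    image's angle of view (e.g., 180 x 90 degrees), centered on the -Z axis."""
    h_fov = math.radians(h_fov_deg)
    v_fov = math.radians(v_fov_deg)
    vertices, uvs = [], []
    for r in range(rows + 1):
        v = r / rows
        phi = (v - 0.5) * v_fov            # elevation angle
        for c in range(cols + 1):
            u = c / cols
            theta = (u - 0.5) * h_fov      # azimuth angle
            x = radius * math.cos(phi) * math.sin(theta)
            y = radius * math.sin(phi)
            z = -radius * math.cos(phi) * math.cos(theta)
            vertices.append((x, y, z))
            uvs.append((u, v))             # texture coordinates into the 3D image
    return vertices, uvs
```
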
  • Publication number: 20230336865
    Abstract: The present disclosure generally relates to techniques and user interfaces for capturing media, displaying a preview of media, displaying a recording indicator, displaying a camera user interface, and/or displaying previously captured media.
    Type: Application
    Filed: November 22, 2022
    Publication date: October 19, 2023
    Inventors: Alexandre DA VEIGA, Lee S. Broughton, Angel Suet Yan CHEUNG, Stephen O. LEMAY, Chia Yang LIN, Behkish J. MANZARI, Ivan MARKOVIC, Alexander MENZIES, Aaron MORING, Jonathan RAVASZ, Tobias RICK, Bryce L. SCHMIDTCHEN, William A. SORRENTINO, III
  • Patent number: 11580692
    Abstract: Various implementations disclosed herein include devices, systems, and methods that generate a three-dimensional (3D) model based on a selected subset of images and depth data corresponding to each image of the subset. For example, a process may include acquiring sensor data during movement of a device in a physical environment that includes an object, the sensor data including images of the physical environment captured via a camera on the device, selecting a subset of the images by assessing them for motion-based defects using device motion and depth data, and generating a 3D model of the object based on the selected subset of the images and the depth data corresponding to each image of the selected subset.
    Type: Grant
    Filed: February 9, 2021
    Date of Patent: February 14, 2023
    Assignee: Apple Inc.
    Inventors: Rafael Saracchini, Tobias Rick, Zachary Z. Becker
  • Patent number: 11217021
    Abstract: A mixed reality system that includes a head-mounted display (HMD) that provides 3D virtual views of a user's environment augmented with virtual content. The HMD may include sensors that collect information about the user's environment (e.g., video, depth information, lighting information, etc.), and sensors that collect information about the user (e.g., the user's expressions, eye movement, hand gestures, etc.). The sensors provide the information as inputs to a controller that renders frames including virtual content based at least in part on the inputs from the sensors. The HMD may display the frames generated by the controller to provide a 3D virtual view including the virtual content and a view of the user's environment for viewing by the user.
    Type: Grant
    Filed: March 21, 2019
    Date of Patent: January 4, 2022
    Assignee: Apple Inc.
    Inventors: Ricardo J. Motta, Brett D. Miller, Tobias Rick, Manohar B. Srikanth
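
The abstract above outlines a sensors-to-controller-to-display pipeline. The loop below is a schematic sketch of that data flow only; env_sensors, user_sensors, controller, and display are hypothetical placeholder interfaces, not APIs from the patent or any real SDK.

```python
def run_hmd_pipeline(env_sensors, user_sensors, controller, display):
    """Schematic render loop: environment and user sensor data feed a controller
    that renders frames with virtual content, which the HMD then displays."""
    while display.is_active():
        env = env_sensors.read()      # e.g., video, depth, lighting information
        user = user_sensors.read()    # e.g., gaze, hand gestures, expressions
        frame = controller.render(environment=env, user=user)
        display.present(frame)        # 3D virtual view: environment plus virtual content
```
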
  • Publication number: 20210279967
    Abstract: Various implementations disclosed herein include devices, systems, and methods that generate a three-dimensional (3D) model of an object based on images and tracked positions of a device during acquisition of the images. For example, a process may include acquiring sensor data during movement of the device in a physical environment that includes an object, the sensor data including images of the physical environment acquired via a camera on the device, identifying the object in at least some of the images, tracking positions of the device during acquisition of the images based on identifying the object in those images, the positions locating the device with respect to a coordinate system defined by the position and orientation of the object, and generating a 3D model of the object based on the images and the positions of the device during their acquisition.
    Type: Application
    Filed: February 19, 2021
    Publication date: September 9, 2021
    Inventors: Thorsten Gernoth, Chen Huang, Onur C. Hamsici, Shuo Feng, Hao Tang, Tobias Rick
  • Publication number: 20210264664
    Abstract: Various implementations disclosed herein include devices, systems, and methods that generate a three-dimensional (3D) model based on a selected subset of images and depth data corresponding to each image of the subset. For example, a process may include acquiring sensor data during movement of a device in a physical environment that includes an object, the sensor data including images of the physical environment captured via a camera on the device, selecting a subset of the images by assessing them for motion-based defects using device motion and depth data, and generating a 3D model of the object based on the selected subset of the images and the depth data corresponding to each image of the selected subset.
    Type: Application
    Filed: February 9, 2021
    Publication date: August 26, 2021
    Inventors: Rafael Saracchini, Tobias Rick, Zachary Z. Becker
  • Patent number: 11044398
    Abstract: A light field panorama system in which a user holding a mobile device performs a gesture to capture images of a scene from different positions. Additional information, for example position and orientation information, may also be captured. The images and information may be processed to determine metadata including the relative positions of the images and depth information for the images. The images and metadata may be stored as a light field panorama. The light field panorama may be processed by a rendering engine to render different 3D views of the scene to allow a viewer to explore the scene from different positions and angles with six degrees of freedom. Using a rendering and viewing system such as a mobile device or head-mounted display, the viewer may see behind or over objects in the scene, zoom in or out on the scene, or view different parts of the scene.
    Type: Grant
    Filed: September 25, 2019
    Date of Patent: June 22, 2021
    Assignee: Apple Inc.
    Inventors: Gabriel D. Molina, Ricardo J. Motta, Gary L. Vondran, Jr., Dan Lelescu, Tobias Rick, Brett Miller
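
One small piece of the rendering described in the abstract above, choosing which captured images contribute to a requested viewpoint, can be sketched as a nearest-neighbor selection over the stored capture positions. This is a simplified stand-in, not the patented renderer; the warping and blending that would use the stored depth metadata are omitted.

```python
import math
from typing import List, Tuple

def nearest_views(captured_positions: List[Tuple[float, float, float]],
                  viewer_position: Tuple[float, float, float],
                  k: int = 4) -> List[int]:
    """Indices of the k captured images closest to the requested viewpoint.

    A renderer could warp and blend these views (using their stored depth) to
    synthesize the novel view; this sketch covers only the selection step.
    """
    order = sorted(range(len(captured_positions)),
                   key=lambda i: math.dist(captured_positions[i], viewer_position))
    return order[:k]
```
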
  • Publication number: 20200106959
    Abstract: A light field panorama system in which a user holding a mobile device performs a gesture to capture images of a scene from different positions. Additional information, for example position and orientation information, may also be captured. The images and information may be processed to determine metadata including the relative positions of the images and depth information for the images. The images and metadata may be stored as a light field panorama. The light field panorama may be processed by a rendering engine to render different 3D views of the scene to allow a viewer to explore the scene from different positions and angles with six degrees of freedom. Using a rendering and viewing system such as a mobile device or head-mounted display, the viewer may see behind or over objects in the scene, zoom in or out on the scene, or view different parts of the scene.
    Type: Application
    Filed: September 25, 2019
    Publication date: April 2, 2020
    Applicant: Apple Inc.
    Inventors: Gabriel D. Molina, Ricardo J. Motta, Gary L. Vondran, Jr., Dan Lelescu, Tobias Rick, Brett Miller
  • Publication number: 20190221044
    Abstract: A mixed reality system that includes a head-mounted display (HMD) that provides 3D virtual views of a user's environment augmented with virtual content. The HMD may include sensors that collect information about the user's environment (e.g., video, depth information, lighting information, etc.), and sensors that collect information about the user (e.g., the user's expressions, eye movement, hand gestures, etc.). The sensors provide the information as inputs to a controller that renders frames including virtual content based at least in part on the inputs from the sensors. The HMD may display the frames generated by the controller to provide a 3D virtual view including the virtual content and a view of the user's environment for viewing by the user.
    Type: Application
    Filed: March 21, 2019
    Publication date: July 18, 2019
    Applicant: Apple Inc.
    Inventors: Ricardo J. Motta, Brett D. Miller, Tobias Rick, Manohar B. Srikanth
  • Patent number: 10193549
    Abstract: In one aspect, a modular sensing apparatus will be described. The modular sensing apparatus includes a flexible substrate and multiple sensors. The flexible substrate is reconfigurable into different shapes that conform to differently shaped structures. The multiple sensors are positioned on the substrate. Various embodiments relate to software, devices and/or systems that involve or communicate with the modular sensing apparatus.
    Type: Grant
    Filed: December 29, 2015
    Date of Patent: January 29, 2019
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Nan-Wei Gong, Tobias Rick, Arun Rakesh Yoganandan, Henry Holtzman, Jae Woo Chung, Kumi Akiyoshi
  • Publication number: 20180082482
    Abstract: A mixed reality system that includes a head-mounted display (HMD) that provides 3D virtual views of a user's environment augmented with virtual content. The HMD may include sensors that collect information about the user's environment (e.g., video, depth information, lighting information, etc.), and sensors that collect information about the user (e.g., the user's expressions, eye movement, hand gestures, etc.). The sensors provide the information as inputs to a controller that renders frames including virtual content based at least in part on the inputs from the sensors. The HMD may display the frames generated by the controller to provide a 3D virtual view including the virtual content and a view of the user's environment for viewing by the user.
    Type: Application
    Filed: September 21, 2017
    Publication date: March 22, 2018
    Applicant: Apple Inc.
    Inventors: Ricardo J. Motta, Brett D. Miller, Tobias Rick, Manohar B. Srikanth
  • Publication number: 20170187377
    Abstract: In one aspect, a modular sensing apparatus will be described. The modular sensing apparatus includes a flexible substrate and multiple sensors. The flexible substrate is reconfigurable into different shapes that conform to differently shaped structures. The multiple sensors are positioned on the substrate. Various embodiments relate to software, devices and/or systems that involve or communicate with the modular sensing apparatus.
    Type: Application
    Filed: December 29, 2015
    Publication date: June 29, 2017
    Inventors: Nan-Wei Gong, Tobias Rick, Arun Rakesh Yoganandan, Henry Holtzman, Jae Woo Chung, Kumi Akiyoshi
  • Publication number: 20140035805
    Abstract: A Spatial Operating Environment (SOE) with markerless gestural control includes a sensor coupled to a processor that runs numerous applications. A gestural interface application executes on the processor. The gestural interface application receives data from the sensor that corresponds to a hand of a user detected by the sensor, and tracks the hand by generating images from the data and associating blobs in the images with tracks of the hand. The gestural interface application detects a pose of the hand by classifying each blob as corresponding to an object shape. The gestural interface application generates a gesture signal in response to a gesture comprising the pose and the tracks, and controls the applications with the gesture signal.
    Type: Application
    Filed: June 4, 2013
    Publication date: February 6, 2014
    Inventors: David MINNEN, Alan BROWNING, Peter HAWKES, Tobias RICK, Miguel Sanchez VALDES, Alessandro VALLI, Dan CHAK, Paul YARIN
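
The tracking step in the abstract above, associating blobs in the sensor images with hand tracks, can be illustrated with a greedy nearest-centroid matcher. This is a hedged sketch: the distance threshold, greedy strategy, and data structures are assumptions rather than details from the publication, and pose classification is not shown.

```python
import math
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Track:
    track_id: int
    centroids: List[Tuple[float, float]] = field(default_factory=list)

def associate_blobs(tracks: Dict[int, Track],
                    blob_centroids: List[Tuple[float, float]],
                    max_jump_px: float = 80.0,
                    next_id: int = 0) -> int:
    """Greedily associate detected blobs with existing hand tracks.

    Each blob extends the closest track if it lies within max_jump_px of that
    track's last centroid; otherwise it starts a new track. Returns the next
    unused track id so the caller can reuse it on the following frame.
    """
    for blob in blob_centroids:
        best_id, best_d = None, max_jump_px
        for tid, track in tracks.items():
            if not track.centroids:
                continue
            d = math.dist(track.centroids[-1], blob)
            if d < best_d:
                best_id, best_d = tid, d
        if best_id is None:
            tracks[next_id] = Track(next_id, [blob])
            next_id += 1
        else:
            tracks[best_id].centroids.append(blob)
    return next_id
```
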