Patents by Inventor Adrian P. Lindberg

Adrian P. Lindberg has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240104843
    Abstract: In some embodiments, a computer system facilitates depth conflict mitigation for a virtual object that is in contact with one or more physical objects in a three-dimensional environment by reducing visual prominence of one or more portions of the virtual object. In some embodiments, a computer system adjusts the visibility of one or more virtual objects in a three-dimensional environment by applying a visual effect to the one or more virtual objects in response to detecting one or more portions of a user. In some embodiments, a computer system modifies visual prominence in accordance with a level of engagement with a virtual object.
    Type: Application
    Filed: September 22, 2023
    Publication date: March 28, 2024
    Inventors: Christopher D. McKenzie, Benjamin Hylak, Conner J. Brooks, Adrian P. Lindberg, Bryce L. Schmidtchen
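The abstract above describes lowering the visual prominence of the portions of a virtual object that come into contact with real geometry. Below is a minimal sketch of one way such attenuation could work, assuming per-pixel depth maps for the virtual object and the physical scene; the array names, the feathering radius, and the use of a distance transform are illustrative choices, not details from the application.

```python
# Hypothetical sketch: fade out virtual pixels that sit at or behind real surfaces.
import numpy as np
from scipy.ndimage import distance_transform_edt

def attenuate_depth_conflicts(virtual_rgba, virtual_depth, physical_depth,
                              feather_px=8, min_alpha=0.2):
    """Reduce the alpha of virtual pixels whose depth conflicts with physical geometry."""
    conflict = virtual_depth >= physical_depth        # virtual content intersects a real object
    if not conflict.any():
        return virtual_rgba.astype(np.float32)

    # Distance (in pixels) to the nearest conflicting pixel, so prominence
    # falls off smoothly around the contact region instead of cutting hard.
    dist = distance_transform_edt(~conflict)
    falloff = np.clip(dist / feather_px, 0.0, 1.0)    # 0 at the contact region, 1 far from it

    out = virtual_rgba.astype(np.float32).copy()
    out[..., 3] *= min_alpha + (1.0 - min_alpha) * falloff
    return out
```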
  • Publication number: 20240104686
    Abstract: Techniques are disclosed herein for implementing a novel, low latency, guidance map-free video matting system, e.g., for use in extended reality (XR) platforms. The techniques may be designed to work with low resolution auxiliary inputs (e.g., binary segmentation masks) and to generate alpha mattes (e.g., alpha mattes configured to segment out any object(s) of interest, such as human hands, from a captured image) in near real-time and in a computationally efficient manner. Further, in a domain-specific setting, the system can function on a captured image stream alone, i.e., it would not require any auxiliary inputs, thereby reducing computational costs without compromising on visual quality and user comfort. Once an alpha matte has been generated, various alpha-aware graphical processing operations may be performed on the captured images according to the generated alpha mattes.
    Type: Application
    Filed: September 19, 2023
    Publication date: March 28, 2024
    Inventors: Srinidhi Aravamudhan, Adrian P. Lindberg, Eshan Verma, Jaya Vijetha Gattupalli, Mingshan Wang, Ranjit Desai, Vinay Palakkode
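The matting entry above turns a low-resolution binary segmentation mask into a full-resolution soft alpha matte and then applies alpha-aware processing. The sketch below, assuming OpenCV and NumPy, illustrates only that data flow; a simple blur stands in for the learned matting model, and the function and parameter names are hypothetical.

```python
# Hypothetical sketch: low-res binary mask -> soft alpha matte -> alpha-aware composite.
import numpy as np
import cv2

def matte_and_composite(frame_bgr, low_res_mask, background_bgr, soften_px=5.0):
    h, w = frame_bgr.shape[:2]
    # Upsample the auxiliary binary mask to the captured frame's resolution.
    mask = cv2.resize(low_res_mask.astype(np.float32), (w, h),
                      interpolation=cv2.INTER_LINEAR)
    # Stand-in for the matting network: soften the hard mask into an alpha matte.
    alpha = np.clip(cv2.GaussianBlur(mask, (0, 0), soften_px), 0.0, 1.0)[..., None]
    # Example alpha-aware operation: composite the segmented object over a new background.
    out = alpha * frame_bgr.astype(np.float32) + (1.0 - alpha) * background_bgr.astype(np.float32)
    return out.astype(np.uint8), alpha
```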
  • Patent number: 11922588
    Abstract: To reduce the amount of bandwidth needed to share 3D map images between mobile devices, according to some embodiments, a user's mobile device (i.e., a host device) may identify its origin in a 3D map and a current virtual camera position relative to the origin based on the physical position of the mobile device. The mobile device may send both the origin and the virtual camera position to another mobile device (i.e., a client device) for use in rendering a corresponding image. Separately, the client device may download the 3D map images from a server, e.g., in preparation for a meeting. In this manner, the host device may send the origin to the client device once, as well as send a data stream of the current virtual camera position for use in accessing the corresponding 3D map images at the client device.
    Type: Grant
    Filed: July 1, 2022
    Date of Patent: March 5, 2024
    Assignee: Apple Inc.
    Inventors: Nathan L. Fillhardt, Syed Mohsin Hasan, Adrian P. Lindberg
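The same abstract appears in several related entries below: the host sends the map origin once, then streams only compact virtual-camera poses, while the heavyweight 3D map imagery is fetched from a server. A pose update of a handful of floats is only tens of bytes, which is the source of the bandwidth saving. Here is a minimal sketch of the message shapes such a scheme might use; every field and method name is an assumption for illustration, not taken from the patent.

```python
# Hypothetical sketch of the host -> client messages: one origin, then a pose stream.
from dataclasses import dataclass

@dataclass
class MapOrigin:                  # sent once per sharing session
    map_id: str
    latitude: float
    longitude: float
    altitude_m: float

@dataclass
class CameraPoseUpdate:           # streamed continuously; only a handful of floats
    timestamp_ms: int
    position_m: tuple              # (x, y, z) in metres, relative to MapOrigin
    yaw_deg: float
    pitch_deg: float

def render_on_client(origin: MapOrigin, pose: CameraPoseUpdate, tile_store):
    """Client side: look up pre-downloaded 3D map imagery for the streamed pose."""
    # tile_store is whatever cache the client filled from the map server beforehand;
    # its image_for() method is a placeholder for that lookup.
    return tile_store.image_for(origin, pose)
```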
  • Patent number: 11783550
    Abstract: Implementations of the subject technology provide for image composition for extended reality systems. Image composition may include combining virtual content from virtual images with physical content from images captured by one or more cameras. The virtual content and the physical content can be combined to form a composite image using depth information for the virtual content and the physical content. An adjustment mask may be generated to indicate edges or boundaries between virtual and physical content at which artifact correction for the composite image can be performed.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: October 10, 2023
    Assignee: Apple Inc.
    Inventors: Daniele Casaburo, Adrian P. Lindberg
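The composition entry above combines virtual and camera imagery using depth and derives an adjustment mask at the virtual/physical boundaries, where artifact correction is applied. The sketch below shows one plausible reading of that pipeline in NumPy; the simple neighbour-difference edge test is an illustrative stand-in, not the patented method.

```python
# Hypothetical sketch: depth-based compositing plus a boundary "adjustment" mask.
import numpy as np

def composite_with_adjustment_mask(virtual_rgb, virtual_depth, camera_rgb, camera_depth):
    # Per pixel, show whichever surface is closer to the camera.
    physical_in_front = camera_depth < virtual_depth
    composite = np.where(physical_in_front[..., None], camera_rgb, virtual_rgb)

    # Adjustment mask: pixels where the occlusion decision flips relative to a
    # neighbour, i.e. the virtual/physical boundary where artifacts tend to appear.
    edge = np.zeros_like(physical_in_front)
    edge[:, 1:] |= physical_in_front[:, 1:] != physical_in_front[:, :-1]
    edge[1:, :] |= physical_in_front[1:, :] != physical_in_front[:-1, :]
    return composite, edge
```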
  • Publication number: 20230319296
    Abstract: A method is provided that includes receiving content data captured by a sensor and receiving a context signal representing a user context. The received content data is scaled using a trained model, wherein the context signal is an input to the trained model, and the scaled content data is provided for presentation to a user.
    Type: Application
    Filed: August 16, 2022
    Publication date: October 5, 2023
    Inventors: Ranjit Desai, Adrian P. Lindberg, Kaushik Raghunath, Vinay Palakkode
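The abstract above is deliberately broad: captured content and a user-context signal both feed a trained scaling model, and the scaled result is presented. The sketch below only pins down that interface; the model itself and the meaning of the context signal are left abstract, and all names are hypothetical.

```python
# Hypothetical sketch of the claimed data flow: (content, context) -> trained model -> scaled content.
from typing import Protocol
import numpy as np

class ScalingModel(Protocol):
    def __call__(self, content: np.ndarray, context: np.ndarray) -> np.ndarray: ...

def scale_for_presentation(content: np.ndarray, context_signal: np.ndarray,
                           model: ScalingModel) -> np.ndarray:
    """Scale sensor-captured content with a model that also conditions on user context."""
    return model(content, context_signal)
```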
  • Patent number: 11636656
    Abstract: Various implementations disclosed herein include devices, systems, and methods that create additional depth frames when a depth camera runs at a lower frame rate than a light intensity camera. Rather than upconverting the depth frames by simply repeating a previous depth camera frame, additional depth frames are created by adjusting some of the depth values of a prior frame based on the RGB camera data (e.g., by “dragging” depths from their positions in the prior depth frame to new positions for a new frame). Specifically, a contour image is generated, and changes in the contour image are used to determine how to adjust (e.g., drag) the depth values for the additional depth frames. The contour image may be based on a mask (e.g., occlusion masks identifying where a hand occludes a virtual cube).
    Type: Grant
    Filed: October 1, 2021
    Date of Patent: April 25, 2023
    Assignee: Apple Inc.
    Inventors: Daniele Casaburo, Adrian P. Lindberg
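Both this patent and 11151798 below describe synthesizing extra depth frames between depth-camera captures by "dragging" depth values according to changes in a contour image derived from a mask. The following much-simplified sketch illustrates that idea: each contour pixel in the new frame borrows the depth of the nearest contour pixel in the previous frame. The mask-based contour and the brute-force nearest-neighbour search are illustrative simplifications, not the patented algorithm.

```python
# Hypothetical sketch: upconvert depth by dragging old contour depths to new contour positions.
import numpy as np

def contour(mask):
    """Boundary pixels of a binary mask (pixels that differ from a 4-neighbour)."""
    edge = np.zeros(mask.shape, dtype=bool)
    edge[:, 1:] |= mask[:, 1:] != mask[:, :-1]
    edge[1:, :] |= mask[1:, :] != mask[:-1, :]
    return edge

def upconvert_depth(prev_depth, prev_mask, new_mask):
    prev_pts = np.argwhere(contour(prev_mask))
    new_pts = np.argwhere(contour(new_mask))
    out = prev_depth.copy()
    if len(prev_pts) == 0:
        return out
    for y, x in new_pts:
        # "Drag" the depth of the closest old contour point to this new contour position.
        nearest = np.argmin(np.abs(prev_pts - (y, x)).sum(axis=1))
        out[y, x] = prev_depth[tuple(prev_pts[nearest])]
    return out
```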
  • Publication number: 20220335699
    Abstract: To reduce the amount of bandwidth needed to share 3D map images between mobile devices, according to some embodiments, a user's mobile device (i.e., a host device) may identify its origin in a 3D map and a current virtual camera position relative to the origin based on the physical position of the mobile device. The mobile device may send both the origin and the virtual camera position to another mobile device (i.e., a client device) for use in rendering a corresponding image. Separately, the client device may download the 3D map images from a server, e.g., in preparation for a meeting. In this manner, the host device may send the origin to the client device once, as well as send a data stream of the current virtual camera position for use in accessing the corresponding 3D map images at the client device.
    Type: Application
    Filed: July 1, 2022
    Publication date: October 20, 2022
    Applicant: Apple Inc.
    Inventors: Nathan L. Fillhardt, Syed Mohsin Hasan, Adrian P. Lindberg
  • Patent number: 11393174
    Abstract: To reduce the amount of bandwidth needed to share 3D map images between mobile devices, according to some embodiments, a user's mobile device (i.e., a host device) may identify its origin in a 3D map and a current virtual camera position relative to the origin based on the physical position of the mobile device. The mobile device may send both the origin and the virtual camera position to another mobile device (i.e., a client device) for use in rendering a corresponding image. Separately, the client device may download the 3D map images from a server, e.g., in preparation for a meeting. In this manner, the host device may send the origin to the client device once, as well as send a data stream of the current virtual camera position for use in accessing the corresponding 3D map images at the client device.
    Type: Grant
    Filed: September 4, 2020
    Date of Patent: July 19, 2022
    Assignee: Apple Inc.
    Inventors: Nathan L. Fillhardt, Syed Mohsin Hasan, Adrian P. Lindberg
  • Publication number: 20220084289
    Abstract: Implementations of the subject technology provide for image composition for extended reality systems. Image composition may include combining virtual content from virtual images with physical content from images captured by one or more cameras. The virtual content and the physical content can be combined to form a composite image using depth information for the virtual content and the physical content. An adjustment mask may be generated to indicate edges or boundaries between virtual and physical content at which artifact correction for the composite image can be performed.
    Type: Application
    Filed: February 26, 2021
    Publication date: March 17, 2022
    Inventors: Daniele Casaburo, Adrian P. Lindberg
  • Patent number: 11151798
    Abstract: Various implementations disclosed herein include devices, systems, and methods that create additional depth frames when a depth camera runs at a lower frame rate than a light intensity camera. Rather than upconverting the depth frames by simply repeating a previous depth camera frame, additional depth frames are created by adjusting some of the depth values of a prior frame based on the RGB camera data (e.g., by “dragging” depths from their positions in the prior depth frame to new positions for a new frame). Specifically, a contour image (e.g., identifying interior and exterior outlines of a hand with respect to a virtual cube that the hand occludes) is generated based on a mask (e.g., occlusion masks identifying where the hand occludes the virtual cube). Changes in the contour image are used to determine how to adjust (e.g., drag) the depth values for the additional depth frames.
    Type: Grant
    Filed: November 5, 2020
    Date of Patent: October 19, 2021
    Assignee: Apple Inc.
    Inventors: Daniele Casaburo, Adrian P. Lindberg
  • Publication number: 20200402319
    Abstract: To reduce the amount of bandwidth needed to share 3D map images between mobile devices, according to some embodiments, a user's mobile device (i.e., a host device) may identify its origin in a 3D map and a current virtual camera position relative to the origin based on the physical position of the mobile device. The mobile device may send both the origin and the virtual camera position to another mobile device (i.e., a client device) for use in rendering a corresponding image. Separately, the client device may download the 3D map images from a server, e.g., in preparation for a meeting. In this manner, the host device may send the origin to the client device once, as well as send a data stream of the current virtual camera position for use in accessing the corresponding 3D map images at the client device.
    Type: Application
    Filed: September 4, 2020
    Publication date: December 24, 2020
    Inventors: Nathan L. Fillhardt, Syed Mohsin Hasan, Adrian P. Lindberg
  • Patent number: 10777007
    Abstract: To reduce the amount of bandwidth needed to share 3D map images between mobile devices, according to some embodiments, a user's mobile device (i.e., a host device) may identify its origin in a 3D map and a current virtual camera position relative to the origin based on the physical position of the mobile device. The mobile device may send both the origin and the virtual camera position to another mobile device (i.e., a client device) for use in rendering a corresponding image. Separately, the client device may download the 3D map images from a server, e.g., in preparation for a meeting. In this manner, the host device may send the origin to the client device once, as well as send a data stream of the current virtual camera position for use in accessing the corresponding 3D map images at the client device.
    Type: Grant
    Filed: January 10, 2018
    Date of Patent: September 15, 2020
    Assignee: Apple Inc.
    Inventors: Nathan L. Fillhardt, Syed Mohsin Hasan, Adrian P. Lindberg
  • Patent number: 10643373
    Abstract: Various embodiments of the disclosure pertain to an augmented or virtual reality interface for interacting with maps displayed from a virtual camera perspective on a mobile device. Instead of manipulating the position of the virtual camera using a touchscreen interface, some embodiments allow a spatial location of the mobile device to control the position of the virtual camera. For example, a user can tilt the mobile device to obtain different angles of the virtual camera. As another example, the user can move the mobile device vertically to change the height of the virtual camera, e.g., to a higher altitude above the ground.
    Type: Grant
    Filed: May 16, 2018
    Date of Patent: May 5, 2020
    Assignee: Apple Inc.
    Inventors: Nathan L. Fillhardt, Adrian P. Lindberg, Vincent P. Arroyo, Justin M. Strawn
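This patent (and its earlier publication 20180365883 at the end of this listing) maps the physical pose of the device onto the map's virtual camera: tilting changes the viewing angle and raising or lowering the device changes the camera's altitude. A minimal sketch of such a mapping follows; the gain factors and the VirtualCamera fields are illustrative assumptions.

```python
# Hypothetical sketch: derive the map's virtual camera from the device's physical pose.
from dataclasses import dataclass

@dataclass
class VirtualCamera:
    altitude_m: float
    pitch_deg: float
    heading_deg: float

def camera_from_device_pose(device_pitch_deg, device_heading_deg, device_height_m,
                            base_altitude_m=100.0, height_gain=50.0):
    return VirtualCamera(
        altitude_m=base_altitude_m + height_gain * device_height_m,  # lift the device -> higher view
        pitch_deg=device_pitch_deg,                                  # tilt the device -> tilt the camera
        heading_deg=device_heading_deg,                              # turn the device -> rotate the view
    )
```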
  • Publication number: 20190371072
    Abstract: Some implementations involve, on a computing device having a processor, a memory, and an image sensor, obtaining an image of a physical environment using the image sensor. Various implementations detect a depiction of a physical environment object in the image, and determine a 3D location of the object in a 3D space based on the depiction of the object in the image and a 3D model of the physical environment object. Various implementations determine an occlusion based on the 3D location of the object and a 3D location of a virtual object in the 3D space. A CGR experience is then displayed based on the occlusion, where at least a portion of the object or the virtual object is occluded by the other.
    Type: Application
    Filed: May 31, 2019
    Publication date: December 5, 2019
    Inventors: Adrian P. Lindberg, Amritpal Singh Saini, Stefan Auer
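The application above determines a physical object's 3D location from its depiction in a camera image plus a known 3D model, then decides which of the physical and virtual objects occludes the other. The sketch below shows only a coarse, object-level version of that final occlusion decision once both objects have positions in the same 3D space; per-pixel occlusion and the model-fitting step are out of scope, and the angular tolerance is an invented parameter.

```python
# Hypothetical sketch: object-level occlusion test given camera and object positions.
import numpy as np

def occluder(camera_pos, physical_center, virtual_center, angle_tol_deg=5.0):
    to_phys = np.asarray(physical_center, float) - camera_pos
    to_virt = np.asarray(virtual_center, float) - camera_pos
    cos_angle = np.dot(to_phys, to_virt) / (np.linalg.norm(to_phys) * np.linalg.norm(to_virt))
    if np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))) > angle_tol_deg:
        return None   # not along the same line of sight, so neither occludes the other
    return "physical" if np.linalg.norm(to_phys) < np.linalg.norm(to_virt) else "virtual"
```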
  • Publication number: 20190102943
    Abstract: To reduce the amount of bandwidth needed to share 3D map images between mobile devices, according to some embodiments, a user's mobile device (i.e., a host device) may identify its origin in a 3D map and a current virtual camera position relative to the origin based on the physical position of the mobile device. The mobile device may send both the origin and the virtual camera position to another mobile device (i.e., a client device) for use in rendering a corresponding image. Separately, the client device may download the 3D map images from a server, e.g., in preparation for a meeting. In this manner, the host device may send the origin to the client device once, as well as send a data stream of the current virtual camera position for use in accessing the corresponding 3D map images at the client device.
    Type: Application
    Filed: January 10, 2018
    Publication date: April 4, 2019
    Inventors: Nathan L. Fillhardt, Syed Mohsin Hasan, Adrian P. Lindberg
  • Publication number: 20180365883
    Abstract: Various embodiments of the disclosure pertain to an augmented or virtual reality interface for interacting with maps displayed from a virtual camera perspective on a mobile device. Instead of manipulating the position of the virtual camera using a touchscreen interface, some embodiments allow a spatial location of the mobile device to control the position of the virtual camera. For example, a user can tilt the mobile device to obtain different angles of the virtual camera. As another example, the user can move the mobile device vertically to change the height of the virtual camera, e.g., to a higher altitude above the ground.
    Type: Application
    Filed: May 16, 2018
    Publication date: December 20, 2018
    Inventors: Nathan L. Fillhardt, Adrian P. Lindberg, Vincent P. Arroyo, Justin M. Strawn