Patents by Inventor Adrian P. Lindberg
Adrian P. Lindberg has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240104843
Abstract: In some embodiments, a computer system facilitates depth conflict mitigation for a virtual object that is in contact with one or more physical objects in a three-dimensional environment by reducing visual prominence of one or more portions of the virtual object. In some embodiments, a computer system adjusts the visibility of one or more virtual objects in a three-dimensional environment by applying a visual effect to the one or more virtual objects in response to detecting one or more portions of a user. In some embodiments, a computer system modifies visual prominence in accordance with a level of engagement with a virtual object.
Type: Application
Filed: September 22, 2023
Publication date: March 28, 2024
Inventors: Christopher D. McKenzie, Benjamin Hylak, Conner J. Brooks, Adrian P. Lindberg, Bryce L. Schmidtchen
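A minimal sketch of the prominence-reduction idea in Python. All names, the linear feathering rule, and the default radius are illustrative assumptions, not the claimed method: portions of a virtual object near a contact with a physical object fade out, and the result also scales with the user's engagement level.

```python
def portion_opacity(distance_to_contact_m, feather_radius_m=0.05, engagement=1.0):
    """Opacity for one portion of a virtual object (illustrative rule):
    portions within feather_radius_m of a contact with a physical object
    fade out linearly, mitigating the depth conflict, and the result is
    scaled by the user's engagement level in [0, 1]."""
    if distance_to_contact_m >= feather_radius_m:
        proximity_fade = 1.0  # far from any contact: fully visible
    else:
        proximity_fade = distance_to_contact_m / feather_radius_m
    return proximity_fade * engagement
```

A renderer would evaluate such a function per portion (or per vertex) each frame, so the fade tracks the contact as the object moves.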
-
Publication number: 20240104686
Abstract: Techniques are disclosed herein for implementing a novel, low-latency, guidance-map-free video matting system, e.g., for use in extended reality (XR) platforms. The techniques may be designed to work with low-resolution auxiliary inputs (e.g., binary segmentation masks) and to generate alpha mattes (e.g., alpha mattes configured to segment out any object(s) of interest, such as human hands, from a captured image) in near real time and in a computationally efficient manner. Further, in a domain-specific setting, the system can function on a captured image stream alone, i.e., it would not require any auxiliary inputs, thereby reducing computational costs without compromising on visual quality and user comfort. Once an alpha matte has been generated, various alpha-aware graphical processing operations may be performed on the captured images according to the generated alpha mattes.
Type: Application
Filed: September 19, 2023
Publication date: March 28, 2024
Inventors: Srinidhi Aravamudhan, Adrian P. Lindberg, Eshan Verma, Jaya Vijetha Gattupalli, Mingshan Wang, Ranjit Desai, Vinay Palakkode
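The matting model itself is not described here, but the "alpha-aware graphical processing" step it feeds is standard alpha compositing, sketched below for a single channel (a simplification, not the patented pipeline):

```python
def alpha_composite(fg, bg, alpha):
    """Standard alpha-aware blend applied once a matte exists:
    out = alpha * fg + (1 - alpha) * bg, per pixel.
    fg, bg, alpha are equal-length lists of floats in [0, 1]."""
    return [a * f + (1.0 - a) * b for f, b, a in zip(fg, bg, alpha)]
```

With a matte that segments out, say, a hand, this blend keeps hand pixels (alpha near 1) from the camera image while showing rendered content elsewhere.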
-
Patent number: 11922588
Abstract: To reduce the amount of bandwidth needed to share 3D map images between mobile devices, according to some embodiments, a user's mobile device (i.e., a host device) may identify its origin in a 3D map and a current virtual camera position relative to the origin based on the physical position of the mobile device. The mobile device may send both the origin and the virtual camera position to another mobile device (i.e., a client device) for use in rendering a corresponding image. Separately, the client device may download the 3D map images from a server, e.g., in preparation for a meeting. In this manner, the host device may send the origin to the client device once, as well as send a data stream of the current virtual camera position for use in accessing the corresponding 3D map images at the client device.
Type: Grant
Filed: July 1, 2022
Date of Patent: March 5, 2024
Assignee: Apple Inc.
Inventors: Nathan L. Fillhardt, Syed Mohsin Hasan, Adrian P. Lindberg
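The bandwidth saving comes from streaming only a small relative pose rather than map imagery. A toy Python sketch of the two message shapes and the client-side resolution (all type and field names are hypothetical, invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class MapOrigin:
    """Sent from host to client once per session (hypothetical message)."""
    map_id: str
    x: float
    y: float
    z: float

@dataclass
class CameraPose:
    """Streamed continuously; position relative to the shared origin."""
    dx: float
    dy: float
    dz: float

def resolve_camera(origin, pose):
    """Client side: absolute virtual-camera position, used to look up the
    3D map images the client already downloaded from the server, so no
    image data ever travels between the two devices."""
    return (origin.x + pose.dx, origin.y + pose.dy, origin.z + pose.dz)
```

Each streamed `CameraPose` is a handful of floats, versus megabytes for rendered map frames.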
-
Patent number: 11783550
Abstract: Implementations of the subject technology provide for image composition for extended reality systems. Image composition may include combining virtual content from virtual images with physical content from images captured by one or more cameras. The virtual content and the physical content can be combined to form a composite image using depth information for the virtual content and the physical content. An adjustment mask may be generated to indicate edges or boundaries between virtual and physical content at which artifact correction for the composite image can be performed.
Type: Grant
Filed: February 26, 2021
Date of Patent: October 10, 2023
Assignee: Apple Inc.
Inventors: Daniele Casaburo, Adrian P. Lindberg
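A one-scanline Python sketch of depth-based composition with a boundary mask (the function and variable names are illustrative; the actual adjustment-mask construction is not specified here):

```python
def depth_composite(virtual_px, physical_px, virtual_depth, physical_depth):
    """Per-pixel depth test over a scanline: show whichever content is
    nearer the camera, and record an adjustment mask at virtual/physical
    transitions, where edge-artifact correction could later be applied."""
    out, mask = [], []
    prev_use_virtual = None
    for v, p, vd, pd in zip(virtual_px, physical_px, virtual_depth, physical_depth):
        use_virtual = vd < pd  # virtual content wins when it is closer
        out.append(v if use_virtual else p)
        # Mark pixels where the winning layer changes: a boundary.
        mask.append(prev_use_virtual is not None and use_virtual != prev_use_virtual)
        prev_use_virtual = use_virtual
    return out, mask
```

The mask then drives a second pass (e.g., feathering or color correction) only where artifacts are likely, instead of over the whole frame.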
-
Publication number: 20230319296
Abstract: A method is provided that includes receiving content data captured by a sensor and receiving a context signal representing a user context. The received content data is scaled using a trained model, wherein the context signal is an input to the trained model, and the scaled content data is provided for presentation to a user.
Type: Application
Filed: August 16, 2022
Publication date: October 5, 2023
Inventors: Ranjit Desai, Adrian P. Lindberg, Kaushik Raghunath, Vinay Palakkode
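Structurally, this is a model conditioned on two inputs: the captured content and a context signal. A hedged Python sketch of that call shape, with a toy stand-in for the trained model (everything here is assumed for illustration; the real model and context signals are not described):

```python
def scale_for_context(samples, context_signal, model):
    """Hypothetical wrapper: a trained model maps (sample, context) to a
    scaled sample; 'model' is any callable of that shape."""
    return [model(s, context_signal) for s in samples]

def toy_model(sample, context):
    """Toy stand-in for the trained model: boost readings in low light."""
    return sample * 2.0 if context == "low_light" else sample
```

The point of the wrapper is that the same captured data can be presented differently as the user's context changes, without re-capturing it.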
-
Patent number: 11636656
Abstract: Various implementations disclosed herein include devices, systems, and methods that create additional depth frames where a depth camera runs at a lower frame rate than a light intensity camera. Rather than upconverting the depth frames by simply repeating a previous depth camera frame, additional depth frames are created by adjusting some of the depth values of a prior frame based on the RGB camera data (e.g., by "dragging" depths from their positions in the prior depth frame to new positions for a new frame). Specifically, a contour image is generated, and changes in the contour image are used to determine how to adjust (e.g., drag) the depth values for the additional depth frames. The contour image may be based on a mask (e.g., occlusion masks identifying where the hand occludes the virtual cube).
Type: Grant
Filed: October 1, 2021
Date of Patent: April 25, 2023
Assignee: Apple Inc.
Inventors: Daniele Casaburo, Adrian P. Lindberg
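A deliberately simplified 1-D Python sketch of the idea. The neighbour-dragging rule below is an assumption made for illustration, not the patented algorithm; it only shows the shape of the computation (stale depth frame + contour change in, adjusted depth frame out):

```python
def upconvert_depth_frame(prev_depth, prev_contour, new_contour):
    """1-D sketch of creating an extra depth frame between depth-camera
    captures: wherever the RGB-derived contour changed since the last
    depth frame, drag a neighbouring depth value into that position
    instead of simply repeating the stale frame verbatim."""
    new_depth = list(prev_depth)
    for i in range(len(prev_depth)):
        if prev_contour[i] != new_contour[i]:  # contour moved at this pixel
            src = i - 1 if i > 0 else i + 1    # pull depth from a neighbour
            new_depth[i] = prev_depth[src]
    return new_depth
```

This is why the result beats naive frame repetition: depth edges follow the higher-frame-rate RGB contours instead of lagging behind them.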
-
Publication number: 20220335699
Abstract: To reduce the amount of bandwidth needed to share 3D map images between mobile devices, according to some embodiments, a user's mobile device (i.e., a host device) may identify its origin in a 3D map and a current virtual camera position relative to the origin based on the physical position of the mobile device. The mobile device may send both the origin and the virtual camera position to another mobile device (i.e., a client device) for use in rendering a corresponding image. Separately, the client device may download the 3D map images from a server, e.g., in preparation for a meeting. In this manner, the host device may send the origin to the client device once, as well as send a data stream of the current virtual camera position for use in accessing the corresponding 3D map images at the client device.
Type: Application
Filed: July 1, 2022
Publication date: October 20, 2022
Applicant: Apple Inc.
Inventors: Nathan L. Fillhardt, Syed Mohsin Hasan, Adrian P. Lindberg
-
Patent number: 11393174
Abstract: To reduce the amount of bandwidth needed to share 3D map images between mobile devices, according to some embodiments, a user's mobile device (i.e., a host device) may identify its origin in a 3D map and a current virtual camera position relative to the origin based on the physical position of the mobile device. The mobile device may send both the origin and the virtual camera position to another mobile device (i.e., a client device) for use in rendering a corresponding image. Separately, the client device may download the 3D map images from a server, e.g., in preparation for a meeting. In this manner, the host device may send the origin to the client device once, as well as send a data stream of the current virtual camera position for use in accessing the corresponding 3D map images at the client device.
Type: Grant
Filed: September 4, 2020
Date of Patent: July 19, 2022
Assignee: Apple Inc.
Inventors: Nathan L. Fillhardt, Syed Mohsin Hasan, Adrian P. Lindberg
-
Publication number: 20220084289
Abstract: Implementations of the subject technology provide for image composition for extended reality systems. Image composition may include combining virtual content from virtual images with physical content from images captured by one or more cameras. The virtual content and the physical content can be combined to form a composite image using depth information for the virtual content and the physical content. An adjustment mask may be generated to indicate edges or boundaries between virtual and physical content at which artifact correction for the composite image can be performed.
Type: Application
Filed: February 26, 2021
Publication date: March 17, 2022
Inventors: Daniele Casaburo, Adrian P. Lindberg
-
Patent number: 11151798
Abstract: Various implementations disclosed herein include devices, systems, and methods that create additional depth frames where a depth camera runs at a lower frame rate than a light intensity camera. Rather than upconverting the depth frames by simply repeating a previous depth camera frame, additional depth frames are created by adjusting some of the depth values of a prior frame based on the RGB camera data (e.g., by "dragging" depths from their positions in the prior depth frame to new positions for a new frame). Specifically, a contour image (e.g., identifying interior and exterior outlines of a hand with respect to a virtual cube that the hand occludes) is generated based on a mask (e.g., occlusion masks identifying where the hand occludes the virtual cube). Changes in the contour image are used to determine how to adjust (e.g., drag) the depth values for the additional depth frames.
Type: Grant
Filed: November 5, 2020
Date of Patent: October 19, 2021
Assignee: Apple Inc.
Inventors: Daniele Casaburo, Adrian P. Lindberg
-
Publication number: 20200402319
Abstract: To reduce the amount of bandwidth needed to share 3D map images between mobile devices, according to some embodiments, a user's mobile device (i.e., a host device) may identify its origin in a 3D map and a current virtual camera position relative to the origin based on the physical position of the mobile device. The mobile device may send both the origin and the virtual camera position to another mobile device (i.e., a client device) for use in rendering a corresponding image. Separately, the client device may download the 3D map images from a server, e.g., in preparation for a meeting. In this manner, the host device may send the origin to the client device once, as well as send a data stream of the current virtual camera position for use in accessing the corresponding 3D map images at the client device.
Type: Application
Filed: September 4, 2020
Publication date: December 24, 2020
Inventors: Nathan L. Fillhardt, Syed Mohsin Hasan, Adrian P. Lindberg
-
Patent number: 10777007
Abstract: To reduce the amount of bandwidth needed to share 3D map images between mobile devices, according to some embodiments, a user's mobile device (i.e., a host device) may identify its origin in a 3D map and a current virtual camera position relative to the origin based on the physical position of the mobile device. The mobile device may send both the origin and the virtual camera position to another mobile device (i.e., a client device) for use in rendering a corresponding image. Separately, the client device may download the 3D map images from a server, e.g., in preparation for a meeting. In this manner, the host device may send the origin to the client device once, as well as send a data stream of the current virtual camera position for use in accessing the corresponding 3D map images at the client device.
Type: Grant
Filed: January 10, 2018
Date of Patent: September 15, 2020
Assignee: Apple Inc.
Inventors: Nathan L. Fillhardt, Syed Mohsin Hasan, Adrian P. Lindberg
-
Patent number: 10643373
Abstract: Various embodiments of the disclosure pertain to an augmented or virtual reality interface for interacting with maps displayed from a virtual camera perspective on a mobile device. Instead of manipulating the position of the virtual camera using a touchscreen interface, some embodiments allow a spatial location of the mobile device to control the position of the virtual camera. For example, a user can tilt the mobile device to obtain different angles of the virtual camera. As another example, the user can move the mobile device vertically to change the height of the virtual camera, e.g., to a higher altitude above the ground.
Type: Grant
Filed: May 16, 2018
Date of Patent: May 5, 2020
Assignee: Apple Inc.
Inventors: Nathan L. Fillhardt, Adrian P. Lindberg, Vincent P. Arroyo, Justin M. Strawn
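A minimal Python sketch of such a device-pose-to-virtual-camera mapping. The function name, the linear gain, and the default values are assumptions for illustration only:

```python
def virtual_camera_from_device(pitch_deg, device_height_m,
                               base_altitude_m=100.0, gain_m_per_m=50.0):
    """Hypothetical mapping from the phone's physical pose to the map's
    virtual camera: tilting the device sets the viewing angle, and raising
    it increases the camera's altitude above the ground, scaled by a gain
    so small hand motions cover large map distances."""
    return {
        "pitch_deg": pitch_deg,  # tilt maps directly to camera angle
        "altitude_m": base_altitude_m + device_height_m * gain_m_per_m,
    }
```

Raising the phone half a metre with the defaults above lifts the virtual camera 25 m, which is the kind of amplification that makes the interaction practical.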
-
Publication number: 20190371072
Abstract: Some implementations involve, on a computing device having a processor, a memory, and an image sensor, obtaining an image of a physical environment using the image sensor. Various implementations detect a depiction of a physical environment object in the image, and determine a 3D location of the object in a 3D space based on the depiction of the object in the image and a 3D model of the physical environment object. Various implementations determine an occlusion based on the 3D location of the object and a 3D location of a virtual object in the 3D space. A CGR experience is then displayed based on the occlusion, where at least a portion of the object or the virtual object is occluded by the other.
Type: Application
Filed: May 31, 2019
Publication date: December 5, 2019
Inventors: Adrian P. Lindberg, Amritpal Singh Saini, Stefan Auer
-
Publication number: 20190102943
Abstract: To reduce the amount of bandwidth needed to share 3D map images between mobile devices, according to some embodiments, a user's mobile device (i.e., a host device) may identify its origin in a 3D map and a current virtual camera position relative to the origin based on the physical position of the mobile device. The mobile device may send both the origin and the virtual camera position to another mobile device (i.e., a client device) for use in rendering a corresponding image. Separately, the client device may download the 3D map images from a server, e.g., in preparation for a meeting. In this manner, the host device may send the origin to the client device once, as well as send a data stream of the current virtual camera position for use in accessing the corresponding 3D map images at the client device.
Type: Application
Filed: January 10, 2018
Publication date: April 4, 2019
Inventors: Nathan L. Fillhardt, Syed Mohsin Hasan, Adrian P. Lindberg
-
Publication number: 20180365883
Abstract: Various embodiments of the disclosure pertain to an augmented or virtual reality interface for interacting with maps displayed from a virtual camera perspective on a mobile device. Instead of manipulating the position of the virtual camera using a touchscreen interface, some embodiments allow a spatial location of the mobile device to control the position of the virtual camera. For example, a user can tilt the mobile device to obtain different angles of the virtual camera. As another example, the user can move the mobile device vertically to change the height of the virtual camera, e.g., to a higher altitude above the ground.
Type: Application
Filed: May 16, 2018
Publication date: December 20, 2018
Inventors: Nathan L. Fillhardt, Adrian P. Lindberg, Vincent P. Arroyo, Justin M. Strawn