Patents by Inventor Alexandre da Veiga
Alexandre da Veiga has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12206830
Abstract: Embodiments disclosed herein are directed to devices, systems, and methods for separately processing an image stream to generate a first set of images for display by a set of displays and a second set of images for storage or transfer as part of a media capture event. Specifically, a first set of images from an image stream may be used to generate a first set of transformed images, and these transformed images may be displayed on a set of displays. A second set of images may be selected from the image stream in response to a capture request associated with a media capture event, and the second set of images may be used to generate a second set of transformed images. A set of output images may be generated from the second set of transformed images, and the set of output images may be stored or transmitted for later viewing.
Type: Grant
Filed: May 10, 2024
Date of Patent: January 21, 2025
Assignee: Apple Inc.
Inventors: Alexander Menzies, Tobias Rick, Alexandre Da Veiga, Bryce L. Schmidtchen, Vedant Saran, Bryan Cline, Michael I. Weinstein, Tsao-Wei Huang
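The abstract above describes routing a single image stream into two independently transformed sets: one transformed for immediate display, and one selected on a capture request and transformed separately for storage or transfer. A minimal sketch of that idea (all names and transforms here are hypothetical, not the patented implementation):

```python
def process_stream(frames, capture_requested, display_transform, capture_transform):
    """Route one image stream into two independently transformed sets."""
    # First set: every frame is transformed for display on the device's displays.
    displayed = [display_transform(f) for f in frames]
    # Second set: frames are selected only when a capture request arrives,
    # then transformed separately to produce output images for storage/transfer.
    captured = [capture_transform(f) for f in frames] if capture_requested else []
    return displayed, captured
```

The key point the sketch illustrates is that the two paths share a source but not a transform chain, so display latency and capture quality can be tuned independently.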
-
Patent number: 12198280
Abstract: In an exemplary process, a set of parameters corresponding to characteristics of a physical setting of a user is obtained. Based on the parameters, at least one display placement value and a fixed boundary location corresponding to the physical setting are obtained. In accordance with a determination that the at least one display placement value satisfies a display placement criterion, a virtual display is displayed at the fixed boundary location corresponding to the physical setting.
Type: Grant
Filed: December 20, 2022
Date of Patent: January 14, 2025
Assignee: Apple Inc.
Inventors: Timothy R. Pease, Alexandre Da Veiga, David H. Huang, Peng Liu, Robert K. Molholm
-
Publication number: 20240414308
Abstract: Various implementations disclosed herein include devices, systems, and methods that dynamically apply a 3D effect to a 2D asset. For example, a process may obtain an image depicting two-dimensional (2D) content. The process may further determine to apply a three-dimensional (3D) effect to the image via a head-mounted device (HMD). The process may further, in accordance with determining to apply the 3D effect to the image, present a view of a 3D environment including the image. The image may be positioned at a location within the 3D environment, and the view may depict the image using the 3D effect.
Type: Application
Filed: May 31, 2024
Publication date: December 12, 2024
Inventors: Tobias Rick, Alexandre Da Veiga, Alexander Menzies, Vladlen Koltun, Vicki M. Murley, Dean Jackson, Chelsea E. Pugh, Alexa Rockwell
-
Publication number: 20240404177
Abstract: Devices, systems, and methods that provide a view of a three-dimensional (3D) environment (e.g., a viewer's room) with a portal providing views of another user's (e.g., a sender's) background environment during a communication session. For example, a process at a first electronic device may include presenting a view of a first 3D environment. Data representing a second 3D environment, based at least in part on sensor data captured in a physical environment of a second electronic device, may be obtained. Portal content based on the data representing the second 3D environment and a viewpoint within the first 3D environment may be determined. A portal with the portal content may be displayed in the view of the first 3D environment, where the portal content depicts a portion of the second 3D environment viewed through the portal from the viewpoint.
Type: Application
Filed: May 31, 2024
Publication date: December 5, 2024
Inventors: Alexandre Da Veiga, Wei Wang, Jeffrey S. Norris
-
Publication number: 20240406365
Abstract: Embodiments disclosed herein are directed to devices, systems, and methods for separately processing an image stream to generate a first set of images for display by a set of displays and a second set of images for storage or transfer as part of a media capture event. Specifically, a first set of images from an image stream may be used to generate a first set of transformed images, and these transformed images may be displayed on a set of displays. A second set of images may be selected from the image stream in response to a capture request associated with a media capture event, and the second set of images may be used to generate a second set of transformed images. A set of output images may be generated from the second set of transformed images, and the set of output images may be stored or transmitted for later viewing.
Type: Application
Filed: May 10, 2024
Publication date: December 5, 2024
Inventors: Alexander Menzies, Tobias Rick, Alexandre Da Veiga, Bryce L. Schmidtchen, Vedant Saran, Bryan Cline, Michael I. Weinstein, Tsao-Wei Huang
-
Publication number: 20240355081
Abstract: Various implementations provide a view of a 3D environment including a portal for viewing a stereo item (e.g., a photo or video) positioned a distance behind the portal. One or more visual effects are provided based on the texture of one or more portions of the stereo item, e.g., texture at cutoff or visible edges of the stereo item. The effects change the appearance of the stereo item or the portal itself, e.g., mitigating visual comfort issues by minimizing window violations or otherwise enhancing the viewing experience. Various implementations provide a view of a 3D environment including an immersive view of a stereo item without using a portal. Such a visual effect may be provided to partially obscure the surrounding 3D environment.
Type: Application
Filed: July 1, 2024
Publication date: October 24, 2024
Inventors: Bryce L. Schmidtchen, Bryan Cline, Charles O. Goddard, Michael I. Weinstein, Tsao-Wei Huang, Tobias Rick, Vedant Saran, Alexander Menzies, Alexandre Da Veiga
-
Publication number: 20240339027
Abstract: A method includes receiving a signal that indicates a location of a control device that is configured to change an operating state of a controlled device. The method also includes identifying a first visible device and a second visible device in one or more images, matching the first visible device with the control device based on a location of the first visible device matching the location of the control device, matching the second visible device with the controlled device, pairing the control device with a host device, and controlling the control device using the host device to change the operating state of the controlled device.
Type: Application
Filed: June 17, 2024
Publication date: October 10, 2024
Inventors: Jeffrey S. Norris, Bruno M. Sommer, Alexandre Da Veiga
-
Publication number: 20240257478
Abstract: Various implementations disclosed herein include devices, systems, and methods that provide an XR environment with a viewing portal for viewing an added content item positioned behind the viewing portal. Some implementations determine a first position for a viewing portal within a 3D coordinate system corresponding to an environment, determine a second position for a content item within the 3D coordinate system, where the second position is opposite a front surface of the viewing portal, and determine a viewpoint of an electronic device within the 3D coordinate system. In some implementations, a portion of the content item visible through the viewing portal from the viewpoint is identified, where the portion is identified based on the first position, the second position, and the viewpoint. Then, a view of the identified portion of the content item is provided in the environment.
Type: Application
Filed: March 19, 2024
Publication date: August 1, 2024
Inventors: Alexandre Da Veiga, Timothy T. Chong, Timothy R. Pease
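The abstract above identifies the visible portion of content from the portal position, the content position, and the viewpoint. The geometry can be illustrated in a 2D cross-section: the visible span of the content plane is found by projecting the portal's edges from the viewpoint. This is a hedged sketch of that geometric idea only, with hypothetical names and a viewpoint assumed at z = 0:

```python
def visible_interval(viewpoint_x, portal_x0, portal_x1, portal_z, content_z):
    """2D cross-section: project the portal edges from the viewpoint onto the
    content plane to find which horizontal span of the content is visible.
    Assumes the viewpoint sits at z = 0 and 0 < portal_z < content_z."""
    scale = content_z / portal_z
    # Each portal edge, seen from the viewpoint, maps to a point on the content plane.
    e0 = viewpoint_x + (portal_x0 - viewpoint_x) * scale
    e1 = viewpoint_x + (portal_x1 - viewpoint_x) * scale
    return min(e0, e1), max(e0, e1)
```

Moving the viewpoint to one side shifts the visible span the other way, which is why the identified portion depends on all three positions.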
-
Patent number: 12051170
Abstract: Various implementations provide a view of a 3D environment including a portal for viewing a stereo item (e.g., a photo or video) positioned a distance behind the portal. One or more visual effects are provided based on the texture of one or more portions of the stereo item, e.g., texture at cutoff or visible edges of the stereo item. The effects change the appearance of the stereo item or the portal itself, e.g., mitigating visual comfort issues by minimizing window violations or otherwise enhancing the viewing experience. Various implementations provide a view of a 3D environment including an immersive view of a stereo item without using a portal. Such a visual effect may be provided to partially obscure the surrounding 3D environment.
Type: Grant
Filed: September 21, 2023
Date of Patent: July 30, 2024
Assignee: Apple Inc.
Inventors: Bryce L. Schmidtchen, Bryan Cline, Charles O. Goddard, Michael I. Weinstein, Tsao-Wei Huang, Tobias Rick, Vedant Saran, Alexander Menzies, Alexandre Da Veiga
-
Patent number: 12039859
Abstract: A method includes receiving one or more signals that each indicate a device type for a respective remote device, identifying one or more visible devices in one or more images, matching a first device from the one or more visible devices with a first signal from the one or more signals based on a device type of the first device matching a device type for the first signal and based on a visible output of the first device, pairing the first device with a second device, and controlling a function of the first device using the second device.
Type: Grant
Filed: November 3, 2022
Date of Patent: July 16, 2024
Assignee: Apple Inc.
Inventors: Jeffrey S. Norris, Bruno M. Sommer, Alexandre Da Veiga
-
Publication number: 20240233097
Abstract: Various implementations disclosed herein include devices, systems, and methods that generate 2D views of a 3D environment using a 3D point cloud, where the cloud points selected for each view are based on a low-resolution 3D mesh. In some implementations, a 3D point cloud of a physical environment is obtained, the 3D point cloud including points each having a 3D location and representing an appearance of a portion of the physical environment. Then, a 3D mesh is obtained corresponding to the 3D point cloud, and a 2D view of the 3D point cloud from a viewpoint is generated using a subset of the points of the 3D point cloud, where the subset of points is selected based on the 3D mesh.
Type: Application
Filed: February 21, 2024
Publication date: July 11, 2024
Inventors: Long H. Ngo, Alexandre Da Veiga
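The abstract above does not specify how the mesh drives point selection; one plausible reading is that the low-resolution mesh acts as an occlusion proxy, so only points at or in front of the mesh surface along each view ray are kept. The sketch below assumes that interpretation; `mesh_depth_along` is a hypothetical stand-in for a depth buffer rendered from the mesh:

```python
import math

def select_visible_points(points, viewpoint, mesh_depth_along, eps=0.05):
    """Keep only cloud points not occluded by the low-resolution mesh.
    `mesh_depth_along(direction)` returns the mesh surface distance along
    a unit view ray from the viewpoint (a stand-in for a mesh depth buffer)."""
    visible = []
    for p in points:
        d = [pi - vi for pi, vi in zip(p, viewpoint)]
        dist = math.sqrt(sum(c * c for c in d))
        direction = tuple(c / dist for c in d)
        # A point survives if it is not behind the mesh surface (within a tolerance).
        if dist <= mesh_depth_along(direction) + eps:
            visible.append(p)
    return visible
```

Using the coarse mesh instead of the full cloud for this visibility test is what keeps per-view selection cheap.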
-
Publication number: 20240221337
Abstract: Various implementations disclosed herein include devices, systems, and methods that provide 3D content presented over time (e.g., a video of 3D point-based frames), where the 3D content includes only content of interest, e.g., showing just one particular person, the floor near that person, and objects that the person is near or interacting with. The presented content may be stabilized within the viewer's environment, for example, by removing content changes corresponding to movement of the capturing device.
Type: Application
Filed: March 13, 2024
Publication date: July 4, 2024
Inventors: Alexandre Da Veiga, Alexander Menzies, Michael I. Weinstein, Vedant Saran
-
Publication number: 20240203055
Abstract: Various implementations disclosed herein include devices, systems, and methods that generate a 3D representation of a physical environment by generating a point cloud and selectively replacing some of the points. For example, points representing flat surfaces (e.g., walls and ceilings) may be replaced with planar elements that are "painted" using image data. In contrast, points representing non-flat portions (e.g., furniture, curtains, wall hangings, etc.) are left as points in the model or altered in a different way. Selectively altering the 3D representation may provide (a) a cleaner feeling and/or (b) a more compact 3D representation for more efficient and faster communication and/or rendering.
Type: Application
Filed: February 21, 2024
Publication date: June 20, 2024
Inventors: Long H. Ngo, Alexandre Da Veiga
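The core operation described above is separating plane inliers from the rest of the cloud so the inliers can be replaced by a compact planar element. A minimal sketch, assuming the plane has already been detected (its point and normal are given) and using a simple point-to-plane distance test; all names are illustrative:

```python
import numpy as np

def replace_planar_points(points, plane_point, plane_normal, tol=0.02):
    """Split a cloud into inliers of a detected flat surface (summarized as a
    single planar element to be texture-painted) and the remaining points."""
    pts = np.asarray(points, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)
    # Unsigned distance of each point from the plane.
    dist = np.abs((pts - np.asarray(plane_point, dtype=float)) @ n)
    inliers, kept = pts[dist <= tol], pts[dist > tol]
    planar_element = {"origin": np.asarray(plane_point, dtype=float),
                      "normal": n, "num_replaced": len(inliers)}
    return planar_element, kept
```

Replacing many near-coplanar points with one plane record is what yields the more compact representation the abstract mentions.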
-
Publication number: 20240202944
Abstract: Various implementations provide a method for determining position data of a first device relative to a three-dimensional (3D) representation during a communication session. For example, a 3D representation is determined by a first device to correspond to the current physical environment of the first device. A spatial relationship between the 3D representation and the current physical environment is then determined. Position data corresponding to a position of the first device relative to the 3D representation is then determined based on the location of the first device in the current physical environment and the spatial relationship between the 3D representation and the current physical environment. The position data is then provided during a communication session between the first device and a second device, with a view of the 3D representation, including a representation of a user of the first device, presented to a user of the second device.
Type: Application
Filed: March 1, 2024
Publication date: June 20, 2024
Inventors: Bruno M. Sommer, Alexandre Da Veiga, Long H. Ngo, Sebastian P. Herscher
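The step of combining the device's location in its physical environment with the spatial relationship between that environment and the 3D representation is, at bottom, a composition of coordinate-frame transforms. A hedged sketch with 4x4 homogeneous matrices (the frame names and helpers are hypothetical):

```python
import numpy as np

def translation(tx, ty, tz):
    """Build a 4x4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = (tx, ty, tz)
    return m

def pose_in_representation(device_pose_world, world_to_representation):
    """Express the device's physical-environment pose in the 3D representation's
    frame by composing it with the scanned spatial relationship."""
    return world_to_representation @ device_pose_world
```

The resulting pose is what can be shared in the session so the second device can place the first user's representation inside its view of the 3D representation.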
-
Publication number: 20240104872
Abstract: Various implementations provide a view of a 3D environment including a portal for viewing a stereo item (e.g., a photo or video) positioned a distance behind the portal. One or more visual effects are provided based on the texture of one or more portions of the stereo item, e.g., texture at cutoff or visible edges of the stereo item. The effects change the appearance of the stereo item or the portal itself, e.g., mitigating visual comfort issues by minimizing window violations or otherwise enhancing the viewing experience. Various implementations provide a view of a 3D environment including an immersive view of a stereo item without using a portal. Such a visual effect may be provided to partially obscure the surrounding 3D environment.
Type: Application
Filed: September 21, 2023
Publication date: March 28, 2024
Inventors: Bryce L. Schmidtchen, Bryan Cline, Charles O. Goddard, Michael I. Weinstein, Tsao-Wei Huang, Tobias Rick, Vedant Saran, Alexander Menzies, Alexandre Da Veiga
-
Patent number: 11941818
Abstract: Various implementations disclosed herein include devices, systems, and methods that determine a 3D location of an edge based on image and depth data. This involves determining a 2D location of a line segment corresponding to an edge of an object based on a light-intensity image, determining a 3D location of a plane based on depth values (e.g., by sampling depth near the edge on both sides of the edge and fitting a plane to the sampled points), and determining a 3D location of the line segment based on the plane (e.g., by projecting the line segment onto the plane). The devices, systems, and methods may involve classifying an edge as a particular edge type (e.g., fold, cliff, plane) and detecting the edge based on such classification.
Type: Grant
Filed: March 3, 2021
Date of Patent: March 26, 2024
Assignee: Apple Inc.
Inventors: Vedant Saran, Alexandre Da Veiga
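The two geometric steps named in the abstract, fitting a plane to depth samples and projecting the line segment onto it, can be sketched directly. This is an illustrative least-squares version (the real method may differ; function names are hypothetical):

```python
import numpy as np

def fit_plane(samples):
    """Least-squares plane through 3D depth samples taken near the edge:
    returns (centroid, unit normal)."""
    pts = np.asarray(samples, dtype=float)
    centroid = pts.mean(axis=0)
    # The plane normal is the right singular vector with the smallest
    # singular value of the centered sample matrix.
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[-1]

def project_onto_plane(point, centroid, normal):
    """Snap a noisy 3D line-segment endpoint onto the fitted plane."""
    p = np.asarray(point, dtype=float)
    return p - np.dot(p - centroid, normal) * normal
```

Projecting both endpoints of the 2D-detected segment through this step yields the segment's 3D location on the fitted surface.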
-
Publication number: 20240062413
Abstract: A method includes obtaining first pass-through image data characterized by a first pose. The method includes obtaining respective pixel characterization vectors for pixels in the first pass-through image data. The method includes identifying a feature of an object within the first pass-through image data in accordance with a determination that pixel characterization vectors for the feature satisfy a feature confidence threshold. The method includes displaying the first pass-through image data and an AR display marker that corresponds to the feature. The method includes obtaining second pass-through image data characterized by a second pose. The method includes transforming the AR display marker to a position associated with the second pose in order to track the feature. The method includes displaying the second pass-through image data and maintaining display of the AR display marker that corresponds to the feature of the object based on the transformation.
Type: Application
Filed: October 26, 2023
Publication date: February 22, 2024
Inventors: Jeffrey S. Norris, Alexandre Da Veiga, Bruno M. Sommer, Ye Cong, Tobias Eble, Moinul Khan, Nicolas Bonnier, Hao Pan
-
Publication number: 20240037886
Abstract: Various implementations disclosed herein include devices, systems, and methods that generate and share/transmit a 3D representation of a physical environment during a communication session. Some of the elements (e.g., points) of the 3D representation may be replaced to improve the quality and/or efficiency of the modeling and transmitting processes. A user's device may provide a view and/or feedback during a scan of the physical environment during the communication session to facilitate accurate understanding of what is being transmitted. Additional information, e.g., a second representation of a portion of the physical environment, may also be transmitted during a communication session. The second representation may represent an aspect (e.g., more detail, photo-quality images, live content, etc.) of a portion not represented by the 3D representation.
Type: Application
Filed: October 16, 2023
Publication date: February 1, 2024
Inventors: Shih-Sang Chiu, Alexandre Da Veiga, David H. Huang, Jonathan Perron, Jordan A. Cazamias
-
Publication number: 20240040099
Abstract: Various implementations disclosed herein include devices, systems, and methods that modify a video to enable replay of the video with a depth-based effect based on the gaze of a user during capture or playback of the video. For example, an example process may include tracking a gaze direction during a capture or a playback of a video, where the video includes a sequence of frames including depictions of a three-dimensional (3D) environment, identifying a portion of the 3D environment depicted in the video based on the gaze direction, determining a depth of the identified portion of the 3D environment, and modifying the video to enable replay of the video with a depth-based effect based on the depth of the identified portion.
Type: Application
Filed: October 5, 2023
Publication date: February 1, 2024
Inventors: Alexandre Da Veiga, Timothy T. Chong
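A depth-based effect keyed to the gazed-at depth can be pictured as a per-pixel strength map: zero at the depth of the identified portion and increasing with depth difference (e.g., to drive a blur). A hedged toy sketch, with a depth map represented as a pixel-to-depth dictionary and all names hypothetical:

```python
def depth_effect_strength(depth_map, gaze_pixel, falloff=0.5):
    """Per-pixel strength for a depth-based effect (e.g., defocus blur):
    0.0 at the depth the user gazed at, ramping up to 1.0 as a pixel's
    depth diverges from that focus depth."""
    focus_depth = depth_map[gaze_pixel]
    return {px: min(1.0, abs(d - focus_depth) / falloff)
            for px, d in depth_map.items()}
```

Storing the focus depth per frame alongside the video is one way the modified video could replay the effect without re-tracking gaze.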
-
Publication number: 20240007607
Abstract: Various implementations disclosed herein include devices, systems, and methods that determine and provide a transition (optionally including a transition effect) between different types of views of three-dimensional (3D) content. For example, an example process may include obtaining a 3D content item, providing a first view of the 3D content item within a 3D environment, determining to transition from the first view to a second view of the 3D content item based on a criterion, and providing the second view of the 3D content item within the 3D environment, where the left eye view and the right eye view of the 3D content item are based on at least one of the left eye content and the right eye content.
Type: Application
Filed: September 13, 2023
Publication date: January 4, 2024
Inventors: Alexandre Da Veiga, Alexander Menzies, Tobias Rick