Patents by Inventor Alexandre da Veiga

Alexandre da Veiga has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240104872
    Abstract: Various implementations provide a view of a 3D environment including a portal for viewing a stereo item (e.g., a photo or video) positioned a distance behind the portal. One or more visual effects are provided based on texture of one or more portions of the stereo item, e.g., texture at cutoff or visible edges of the stereo item. The effects change the appearance of the stereo item or the portal itself, e.g., improving visual comfort by minimizing window violations or otherwise enhancing the viewing experience. Various implementations provide a view of a 3D environment including an immersive view of a stereo item without using a portal. Such a visual effect may be provided to partially obscure the surrounding 3D environment.
    Type: Application
    Filed: September 21, 2023
    Publication date: March 28, 2024
    Inventors: Bryce L. Schmidtchen, Bryan Cline, Charles O. Goddard, Michael I. Weinstein, Tsao-Wei Huang, Tobias Rick, Vedant Saran, Alexander Menzies, Alexandre Da Veiga
  • Patent number: 11941818
    Abstract: Various implementations disclosed herein include devices, systems, and methods that determine a 3D location of an edge based on image and depth data. This involves determining a 2D location of a line segment corresponding to an edge of an object based on a light-intensity image, determining a 3D location of a plane based on depth values (e.g., based on sampling depth near the edge/on both sides of the edge and fitting a plane to the sampled points), and determining a 3D location of the line segment based on the plane (e.g., by projecting the line segment onto the plane). The devices, systems, and methods may involve classifying an edge as a particular edge type (e.g., fold, cliff, plane) and detecting the edge based on such classification.
    Type: Grant
    Filed: March 3, 2021
    Date of Patent: March 26, 2024
    Assignee: Apple Inc.
    Inventors: Vedant Saran, Alexandre Da Veiga
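The plane-fit-and-project step described in this abstract can be sketched as follows. This is a minimal illustration in Python with NumPy, assuming depth samples near the edge have already been back-projected into 3D; the function names and sample values are hypothetical, not taken from the patent.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points: returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value is the normal.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def project_onto_plane(p, centroid, normal):
    """Orthogonal projection of point p onto the fitted plane."""
    return p - np.dot(p - centroid, normal) * normal

# Depth samples taken on both sides of a detected 2D edge,
# back-projected into 3D (values here are illustrative).
samples = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0],
                    [0.0, 1.0, 1.0], [1.0, 1.0, 1.0]])
centroid, normal = fit_plane(samples)

# Project a 3D endpoint of the detected line segment onto the plane.
endpoint = np.array([0.5, 0.5, 1.2])
projected = project_onto_plane(endpoint, centroid, normal)
# The projected endpoint lies on the fitted z = 1 plane.
```

Projecting both endpoints of the 2D-detected segment this way yields the 3D line segment the abstract describes.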
  • Publication number: 20240062413
    Abstract: A method includes obtaining first pass-through image data characterized by a first pose. The method includes obtaining respective pixel characterization vectors for pixels in the first pass-through image data. The method includes identifying a feature of an object within the first pass-through image data in accordance with a determination that pixel characterization vectors for the feature satisfy a feature confidence threshold. The method includes displaying the first pass-through image data and an AR display marker that corresponds to the feature. The method includes obtaining second pass-through image data characterized by a second pose. The method includes transforming the AR display marker to a position associated with the second pose in order to track the feature. The method includes displaying the second pass-through image data and maintaining display of the AR display marker that corresponds to the feature of the object based on the transformation.
    Type: Application
    Filed: October 26, 2023
    Publication date: February 22, 2024
    Inventors: Jeffrey S. Norris, Alexandre Da Veiga, Bruno M. Sommer, Ye Cong, Tobias Eble, Moinul Khan, Nicolas Bonnier, Hao Pan
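The tracking step above re-expresses the marker's position under the second pose. A minimal sketch of that transform, assuming the common convention that a pose (R, t) maps camera coordinates to world coordinates; the names and values are illustrative, not taken from the application.

```python
import numpy as np

def world_to_camera(point_world, R, t):
    """Re-express a world-frame point in a camera frame whose pose (R, t)
    maps camera coordinates to world: x_world = R @ x_cam + t."""
    return R.T @ (point_world - t)

# A marker anchored at a fixed world position stays on the tracked
# feature as the device moves from the first pose to the second.
marker_world = np.array([1.0, 2.0, 3.0])
second_pose = (np.eye(3), np.array([1.0, 0.0, 0.0]))  # device moved 1 m in x
marker_in_second_view = world_to_camera(marker_world, *second_pose)
```

Rendering the marker at `marker_in_second_view` over the second pass-through frame keeps it visually attached to the feature.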
  • Publication number: 20240037886
    Abstract: Various implementations disclosed herein include devices, systems, and methods that generate and share/transmit a 3D representation of a physical environment during a communication session. Some of the elements (e.g., points) of the 3D representation may be replaced to improve the quality and/or efficiency of the modeling and transmitting processes. A user's device may provide a view and/or feedback during a scan of the physical environment during the communication session to facilitate accurate understanding of what is being transmitted. Additional information, e.g., a second representation of a portion of the physical environment, may also be transmitted during a communication session. The second representation may represent an aspect (e.g., more detail, photo-quality images, live content, etc.) of a portion not represented by the 3D representation.
    Type: Application
    Filed: October 16, 2023
    Publication date: February 1, 2024
    Inventors: Shih-Sang Chiu, Alexandre Da Veiga, David H. Huang, Jonathan Perron, Jordan A. Cazamias
  • Publication number: 20240040099
    Abstract: Various implementations disclosed herein include devices, systems, and methods that modify a video to enable replay of the video with a depth-based effect based on gaze of a user during capture or playback of the video. For example, an example process may include tracking a gaze direction during a capture or a playback of a video, wherein the video includes a sequence of frames including depictions of a three-dimensional (3D) environment, identifying a portion of the 3D environment depicted in the video based on the gaze direction, determining a depth of the identified portion of the 3D environment, and modifying the video to enable replay of the video with a depth-based effect based on the depth of the identified portion.
    Type: Application
    Filed: October 5, 2023
    Publication date: February 1, 2024
    Inventors: Alexandre Da Veiga, Timothy T. Chong
  • Publication number: 20240005622
    Abstract: Various implementations disclosed herein include devices, systems, and methods that provide a communication session in which a first device receives and uses streamed avatar data to render views that include a time-varying avatar, e.g., video content of some or all of another user sent from the other user's device during the communication session. In order to efficiently use resources (e.g., power, bandwidth, etc.), some implementations adapt the avatar provision process (e.g., video framerate, image resolution, etc.) based on user context, e.g., whether the viewer is looking at the avatar, whether the avatar is within the viewer's foveal region, or whether the avatar is within the viewer's field of view.
    Type: Application
    Filed: June 21, 2023
    Publication date: January 4, 2024
    Inventors: Hayden J. Lee, Connor A. Smith, Alexandre Da Veiga, Leanid Vouk, Sebastian P. Herscher
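A context-adaptive provision policy like the one described can be sketched as a simple tiering function. The tiers, thresholds, and parameter names below are hypothetical, chosen only to illustrate reducing framerate and resolution when the avatar is outside the viewer's gaze or field of view.

```python
def avatar_stream_params(in_fov, in_fovea, gaze_on_avatar):
    """Pick (framerate_hz, resolution_px) for the streamed avatar based on
    viewer context. The tiers below are illustrative, not from the filing."""
    if not in_fov:
        return 5, 128     # avatar off-screen: minimal keep-alive stream
    if in_fovea or gaze_on_avatar:
        return 60, 1024   # under direct or foveal gaze: full quality
    return 30, 512        # visible but peripheral: reduced quality
```

The sending device would re-evaluate this policy as the viewer's gaze and field of view change, trading avatar fidelity for power and bandwidth.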
  • Publication number: 20240007607
    Abstract: Various implementations disclosed herein include devices, systems, and methods that determine and provide a transition (optionally including a transition effect) between different types of views of three-dimensional (3D) content. For example, an example process may include obtaining a 3D content item, providing a first view of the 3D content item within a 3D environment, determining to transition from the first view to a second view of the 3D content item based on a criterion, and providing the second view of the 3D content item within the 3D environment, where the left eye view and the right eye view of the 3D content item are based on at least one of the left eye content and the right eye content.
    Type: Application
    Filed: September 13, 2023
    Publication date: January 4, 2024
    Inventors: Alexandre Da Veiga, Alexander Menzies, Tobias Rick
  • Publication number: 20230419593
    Abstract: Various implementations disclosed herein include devices, systems, and methods that present views of media objects using different viewing states determined based on context. In some implementations, a view of a 3D environment is presented. Then, a context associated with viewing one or more media objects within the 3D environment is determined, the media objects associated with data for providing an appearance of depth within the one or more media objects. Based on the context, a viewing state is determined for viewing a media object of the one or more media objects within the 3D environment, the viewing state defining whether the media object will be presented as a planar object or with depth within the media object. In accordance with a determination that the viewing state is a first viewing state, the media object is presented within the 3D environment using its associated data for providing the appearance of depth.
    Type: Application
    Filed: September 11, 2023
    Publication date: December 28, 2023
    Inventors: Alexandre Da Veiga, Tobias Rick, Timothy R. Pease
  • Publication number: 20230419625
    Abstract: Various implementations provide a representation of at least a portion of a user within a three-dimensional (3D) environment other than the user's physical environment. Based on detecting a condition, a representation of another object of the user's physical environment is shown to provide context. As examples, a representation of a sitting surface may be shown based on detecting that the user is sitting down, representations of a table and coffee cup may be shown based on detecting that the user is reaching out to pick up a coffee cup, a representation of a second user may be shown based on detecting a voice or the user turning his attention towards a moving object or sound, and a depiction of a puppy may be shown when the puppy's bark is detected.
    Type: Application
    Filed: September 13, 2023
    Publication date: December 28, 2023
    Inventors: Shih-Sang Chiu, Alexandre Da Veiga, David H. Huang, Jonathan Perron, Jordan A. Cazamias
  • Publication number: 20230403386
    Abstract: Various implementations disclosed herein include devices, systems, and methods that provide a view of a three-dimensional (3D) environment that includes a projection of a 3D image, such as a multi-directional stereo image or video content. For example, an example process may include obtaining a three-dimensional (3D) image including a stereoscopic image pair including left eye content corresponding to a left eye viewpoint and right eye content corresponding to a right eye viewpoint, generating a projection of the 3D image within a 3D environment by projecting portions of the 3D image to form a shape within the 3D environment, the shape based on an angle of view of the 3D image, where the 3D environment includes additional content separate from the 3D image, and providing a view of the 3D environment including the projection of the 3D image.
    Type: Application
    Filed: August 28, 2023
    Publication date: December 14, 2023
    Inventors: Alexandre Da Veiga, Tobias Rick
  • Patent number: 11830214
    Abstract: A method includes obtaining first pass-through image data characterized by a first pose. The method includes obtaining respective pixel characterization vectors for pixels in the first pass-through image data. The method includes identifying a feature of an object within the first pass-through image data in accordance with a determination that pixel characterization vectors for the feature satisfy a feature confidence threshold. The method includes displaying the first pass-through image data and an AR display marker that corresponds to the feature. The method includes obtaining second pass-through image data characterized by a second pose. The method includes transforming the AR display marker to a position associated with the second pose in order to track the feature. The method includes displaying the second pass-through image data and maintaining display of the AR display marker that corresponds to the feature of the object based on the transformation.
    Type: Grant
    Filed: May 29, 2019
    Date of Patent: November 28, 2023
    Assignee: APPLE INC.
    Inventors: Jeffrey S. Norris, Alexandre Da Veiga, Bruno M. Sommer, Ye Cong, Tobias Eble, Moinul Khan, Nicolas Bonnier, Hao Pan
  • Publication number: 20230344973
    Abstract: Various implementations disclosed herein include devices, systems, and methods that modify audio of played-back AV content based on context. In some implementations, audio-visual content of a physical environment is obtained, and the audio-visual content includes visual content and audio content that includes a plurality of audio portions corresponding to the visual content. In some implementations, a context for presenting the audio-visual content is determined, and a temporal relationship between one or more audio portions of the plurality of audio portions and the visual content is determined based on the context. Then, synthesized audio-visual content is presented based on the temporal relationship.
    Type: Application
    Filed: June 28, 2023
    Publication date: October 26, 2023
    Inventors: Shai Messingher Lang, Alexandre Da Veiga, Spencer H. Ray, Symeon Delikaris Manias
  • Publication number: 20230336865
    Abstract: The present disclosure generally relates to techniques and user interfaces for capturing media, displaying a preview of media, displaying a recording indicator, displaying a camera user interface, and/or displaying previously captured media.
    Type: Application
    Filed: November 22, 2022
    Publication date: October 19, 2023
    Inventors: Alexandre DA VEIGA, Lee S. Broughton, Angel Suet Yan CHEUNG, Stephen O. LEMAY, Chia Yang LIN, Behkish J. MANZARI, Ivan MARKOVIC, Alexander MENZIES, Aaron MORING, Jonathan RAVASZ, Tobias RICK, Bryce L. SCHMIDTCHEN, William A. SORRENTINO, III
  • Patent number: 11783499
    Abstract: In accordance with some embodiments, a technique is described that enables an electronic device with a camera to automatically gather and generate the requisite data from the real-world environment, allowing the device to quickly and efficiently determine and provide accurate measurements of physical spaces and/or objects within that environment.
    Type: Grant
    Filed: April 19, 2021
    Date of Patent: October 10, 2023
    Assignee: Apple Inc.
    Inventor: Alexandre da Veiga
  • Publication number: 20230298278
    Abstract: Various implementations disclosed herein include devices, systems, and methods that determine how to present a three-dimensional (3D) photo in an extended reality (XR) environment (e.g., in 3D, 2D, blurry, or not at all) based on viewing position of a user active in the XR environment relative to a placement of the 3D photo in the XR environment. In some implementations, at an electronic device having a processor, a 3D photo that is an incomplete 3D representation created based on one or more images captured by an image capture device is obtained. In some implementations, a viewing position of the electronic device relative to a placement position of the 3D photo is determined, and a presentation mode for the 3D photo is determined based on the viewing position. In some implementations, the 3D photo is provided at the placement position based on the presentation mode in the XR environment.
    Type: Application
    Filed: November 4, 2022
    Publication date: September 21, 2023
    Inventors: Alexandre DA VEIGA, Jeffrey S. NORRIS, Madhurani R. SAPRE, Spencer H. RAY
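A viewing-position-based mode choice like the one described might be sketched as follows. The distance and angle thresholds, the mode names (the abstract also mentions a blurry mode), and the use of the photo's capture normal are all assumptions for illustration, not details from the application.

```python
import numpy as np

def presentation_mode(view_pos, photo_pos, photo_normal,
                      max_angle_deg=30.0, max_dist=3.0):
    """Choose how to present an incomplete 3D photo ("3d", "2d", or
    "hidden") from the viewer's position relative to its placement."""
    to_viewer = view_pos - photo_pos
    dist = np.linalg.norm(to_viewer)
    if dist > max_dist:
        return "hidden"  # too far away to present at all
    cos_angle = np.dot(to_viewer / dist, photo_normal)
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    if angle <= max_angle_deg:
        return "3d"      # near the capture viewpoint: parallax holds up
    return "2d"          # oblique view: fall back to a flat presentation
```

The intuition is that an incomplete 3D reconstruction only looks convincing from near the original capture viewpoint, so the mode degrades gracefully as the viewer moves away from it.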
  • Publication number: 20230298281
    Abstract: In an exemplary process, a set of parameters corresponding to characteristics of a physical setting of a user is obtained. Based on the parameters, at least one display placement value and a fixed boundary location corresponding to the physical setting are obtained. In accordance with a determination that the at least one display placement value satisfies a display placement criterion, a virtual display is displayed at the fixed boundary location corresponding to the physical setting.
    Type: Application
    Filed: December 20, 2022
    Publication date: September 21, 2023
    Inventors: Timothy R. PEASE, Alexandre DA VEIGA, David H. Y. HUANG, Peng LIU, Robert K. MOLHOLM
  • Patent number: 11763477
    Abstract: Various implementations disclosed herein include devices, systems, and methods that determine a distance between a portion (e.g., the top) of a person's head and the floor below as an estimate of the person's height. For example, an example process may include determining a first position on a head of a person in a three-dimensional (3D) coordinate system, the first position determined based on detecting a feature on the head based on a two-dimensional (2D) image of the person in a physical environment, determining a second position on a floor below the first position in the 3D coordinate system, and estimating a height based on determining a distance between the first position and the second position.
    Type: Grant
    Filed: February 23, 2021
    Date of Patent: September 19, 2023
    Assignee: Apple Inc.
    Inventors: Alexandre Da Veiga, George J. Hito, Timothy R. Pease
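The height estimate described reduces to measuring the distance from the detected top-of-head position to the floor point directly below it. A minimal sketch, assuming a known gravity-aligned up vector and a point on the floor plane (all names and values are illustrative):

```python
import numpy as np

def estimate_height(head_top, floor_point, up):
    """Distance from the top-of-head point to the floor plane,
    measured along a unit, gravity-aligned up vector."""
    return float(np.dot(head_top - floor_point, up))

head_top = np.array([0.1, 1.75, 0.3])    # detected top of head, metres
floor_point = np.array([0.0, 0.0, 0.0])  # any point on the floor plane
up = np.array([0.0, 1.0, 0.0])           # gravity-aligned up direction
height = estimate_height(head_top, floor_point, up)  # 1.75 m
```

Measuring along the up vector rather than taking a raw point-to-point distance keeps the estimate correct even when the floor point is not exactly beneath the head.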
  • Patent number: 11763479
    Abstract: Various implementations disclosed herein include devices, systems, and methods that provide measurements of objects based on a location of a surface of the objects. An exemplary process may include obtaining a three-dimensional (3D) representation of a physical environment that was generated based on depth data and light intensity image data, generating a 3D bounding box corresponding to an object in the physical environment based on the 3D representation, determining a class of the object based on the 3D semantic data, determining a location of a surface of the object based on the class of the object, the location determined by identifying a plane within the 3D bounding box having semantics in the 3D semantic data satisfying surface criteria for the object, and providing a measurement of the object, the measurement of the object determined based on the location of the surface of the object.
    Type: Grant
    Filed: December 1, 2022
    Date of Patent: September 19, 2023
    Assignee: Apple Inc.
    Inventors: Amit Jain, Aditya Sankar, Qi Shan, Alexandre Da Veiga, Shreyas V. Joshi
  • Publication number: 20230290078
    Abstract: Various implementations use object information to facilitate a communication session. Some implementations create a dense reconstruction (e.g., a point cloud or triangular mesh) of a physical environment, for example, using light intensity images and depth sensor data. Less data-intensive object information is also created to represent the physical environment for more efficient storage, editing, sharing, and use. In some implementations, the object information includes object attribute and location information. In some implementations, a 2D floorplan or other 2D representation provides object locations, and metadata (e.g., object type, texture, heights, dimensions, etc.) provides object attributes. The object location and attribute information may be used, during a communication session, to generate a 3D graphical environment that is representative of the physical environment.
    Type: Application
    Filed: November 22, 2022
    Publication date: September 14, 2023
    Inventors: Bradley W. PEEBLER, Alexandre DA VEIGA
  • Patent number: 11756229
    Abstract: Systems and methods for localization for mobile devices are described. Some implementations may include accessing motion data captured using one or more motion sensors; determining, based on the motion data, a coarse localization, wherein the coarse localization includes a first estimate of position; obtaining one or more feature point maps, wherein the feature point maps are associated with a position of the coarse localization; accessing images captured using one or more image sensors; determining, based on the images, a fine localization pose by localizing into a feature point map of the one or more feature point maps, wherein the fine localization pose includes a second estimate of position and an estimate of orientation; generating, based on the fine localization pose, a virtual object image including a view of a virtual object; and displaying the virtual object image.
    Type: Grant
    Filed: October 4, 2021
    Date of Patent: September 12, 2023
    Assignee: APPLE INC.
    Inventors: Bruno M. Sommer, Alexandre Da Veiga
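The coarse-to-fine flow in this last abstract can be sketched as a small pipeline. `maps_near` and `refine_pose` below are hypothetical stand-ins for map retrieval and visual pose solving (e.g., a PnP-style solve against a feature point map); nothing here is an API from the patent.

```python
def localize(motion_data, images, maps_near, refine_pose):
    """Coarse-to-fine localization sketch."""
    # Coarse: a first position estimate from motion sensors alone.
    coarse = motion_data["position_estimate"]
    # Retrieve feature point maps associated with that coarse position.
    for feature_map in maps_near(coarse):
        # Fine: localize into the map using camera images.
        pose = refine_pose(images, feature_map)
        if pose is not None:
            return pose  # position estimate + orientation estimate
    return None  # no candidate map matched
```

The returned fine pose would then be used to render the virtual object image from the correct viewpoint, as the abstract's final steps describe.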