Patents by Inventor Alexandre da Veiga
Alexandre da Veiga has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Publication number: 20240037886
  Abstract: Various implementations disclosed herein include devices, systems, and methods that generate and share/transmit a 3D representation of a physical environment during a communication session. Some of the elements (e.g., points) of the 3D representation may be replaced to improve the quality and/or efficiency of the modeling and transmitting processes. A user's device may provide a view and/or feedback during a scan of the physical environment during the communication session to facilitate accurate understanding of what is being transmitted. Additional information, e.g., a second representation of a portion of the physical environment, may also be transmitted during a communication session. The second representation may represent an aspect (e.g., more details, photo quality images, live, etc.) of a portion not represented by the 3D representation.
  Type: Application
  Filed: October 16, 2023
  Publication date: February 1, 2024
  Inventors: Shih-Sang Chiu, Alexandre Da Veiga, David H. Huang, Jonathan Perron, Jordan A. Cazamias
- Publication number: 20240040099
  Abstract: Various implementations disclosed herein include devices, systems, and methods that modify a video to enable replay of the video with a depth-based effect based on gaze of a user during capture or playback of the video. For example, an example process may include tracking a gaze direction during a capture or a playback of a video, wherein the video includes a sequence of frames including depictions of a three-dimensional (3D) environment, identifying a portion of the 3D environment depicted in the video based on the gaze direction, determining a depth of the identified portion of the 3D environment, and modifying the video to enable replay of the video with a depth-based effect based on the depth of the identified portion.
  Type: Application
  Filed: October 5, 2023
  Publication date: February 1, 2024
  Inventors: Alexandre Da Veiga, Timothy T. Chong
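The per-frame loop this abstract describes (track gaze, find the gazed-at portion, look up its depth, apply a depth-based effect) can be sketched roughly as follows. The frame/depth-map structures and the distance-based weighting are illustrative assumptions, not the patented implementation:

```python
def focal_depth_for_frame(depth_map, gaze_xy):
    """Look up the scene depth at the gazed-at pixel.

    depth_map: 2D list of per-pixel depths (meters), indexed depth_map[y][x].
    gaze_xy:   (x, y) pixel coordinates from the tracked gaze direction.
    """
    x, y = gaze_xy
    return depth_map[y][x]

def depth_based_weight(pixel_depth, focal_depth, falloff=1.0):
    """Weight in (0, 1]: 1 at the focal depth, smaller farther from it.

    A playback effect (e.g. blur strength) could scale with 1 - weight, so
    content near the gazed-at depth stays sharp while the rest is softened.
    """
    return 1.0 / (1.0 + falloff * abs(pixel_depth - focal_depth))

# Example: gaze rests on a pixel 2.0 m away; a pixel at 5.0 m gets less weight.
depth_map = [[2.0, 5.0],
             [2.0, 5.0]]
focal = focal_depth_for_frame(depth_map, (0, 1))
near_w = depth_based_weight(2.0, focal)   # gazed-at depth stays sharp
far_w = depth_based_weight(5.0, focal)    # farther content is softened
```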
- Publication number: 20240007607
  Abstract: Various implementations disclosed herein include devices, systems, and methods that determine and provide a transition (optionally including a transition effect) between different types of views of three-dimensional (3D) content. For example, an example process may include obtaining a 3D content item, providing a first view of the 3D content item within a 3D environment, determining to transition from the first view to a second view of the 3D content item based on a criterion, and providing the second view of the 3D content item within the 3D environment, where the left eye view and the right eye view of the 3D content item are based on at least one of the left eye content and the right eye content.
  Type: Application
  Filed: September 13, 2023
  Publication date: January 4, 2024
  Inventors: Alexandre Da Veiga, Alexander Menzies, Tobias Rick
- Publication number: 20240005622
  Abstract: Various implementations disclosed herein include devices, systems, and methods that provide a communication session in which a first device receives and uses streamed avatar data to render views that include a time-varying avatar, e.g., video content of some or all of another user sent from the other user's device during the communication session. In order to efficiently use resources (e.g., power, bandwidth, etc.), some implementations adapt the avatar provision process (e.g., video framerate, image resolution, etc.) based on user context, e.g., whether the viewer is looking at the avatar, whether the avatar is within the viewer's foveal region, or whether the avatar is within the viewer's field of view.
  Type: Application
  Filed: June 21, 2023
  Publication date: January 4, 2024
  Inventors: Hayden J. Lee, Connor A. Smith, Alexandre Da Veiga, Leanid Vouk, Sebastian P. Herscher
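The context-based adaptation the abstract mentions amounts to a policy that maps viewer context to stream parameters. A minimal sketch, with tiers and values that are purely illustrative assumptions rather than anything the filing specifies:

```python
def avatar_stream_params(in_field_of_view, in_foveal_region, being_looked_at):
    """Pick a (framerate_hz, resolution) pair from viewer context.

    Tiered policy: spend bandwidth and power only where the viewer can
    actually perceive the detail. Tier boundaries are illustrative.
    """
    if not in_field_of_view:
        return (5, "low")       # avatar off-screen: keep it barely warm
    if in_foveal_region or being_looked_at:
        return (60, "high")     # under direct scrutiny: full fidelity
    return (30, "medium")       # peripheral vision: mid fidelity

# Example: the avatar sits inside the viewer's foveal region.
params = avatar_stream_params(True, True, False)
```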
- Publication number: 20230419593
  Abstract: Various implementations disclosed herein include devices, systems, and methods that present views of media objects using different viewing states determined based on context. In some implementations, a view of a 3D environment is presented. Then, a context associated with viewing one or more media objects within the 3D environment is determined, the media objects associated with data for providing an appearance of depth within the one or more media objects. Based on the context, a viewing state is determined for viewing a media object of the one or more media objects within the 3D environment, the viewing state defining whether the media object will be presented as a planar object or with depth within the media object. In accordance with a determination that the viewing state is a first viewing state, the media object is presented within the 3D environment using its associated data for providing the appearance of depth.
  Type: Application
  Filed: September 11, 2023
  Publication date: December 28, 2023
  Inventors: Alexandre Da Veiga, Tobias Rick, Timothy R. Pease
- Publication number: 20230419625
  Abstract: Various implementations provide a representation of at least a portion of a user within a three-dimensional (3D) environment other than the user's physical environment. Based on detecting a condition, a representation of another object of the user's physical environment is shown to provide context. As examples, a representation of a sitting surface may be shown based on detecting that the user is sitting down, representations of a table and coffee cup may be shown based on detecting that the user is reaching out to pick up a coffee cup, a representation of a second user may be shown based on detecting a voice or the user turning his attention towards a moving object or sound, and a depiction of a puppy may be shown when the puppy's bark is detected.
  Type: Application
  Filed: September 13, 2023
  Publication date: December 28, 2023
  Inventors: Shih-Sang Chiu, Alexandre Da Veiga, David H. Huang, Jonathan Perron, Jordan A. Cazamias
- Publication number: 20230403386
  Abstract: Various implementations disclosed herein include devices, systems, and methods that provide a view of a three-dimensional (3D) environment that includes a projection of a 3D image, such as a multi-directional stereo image or video content. For example, an example process may include obtaining a three-dimensional (3D) image including a stereoscopic image pair including left eye content corresponding to a left eye viewpoint and right eye content corresponding to a right eye viewpoint, generating a projection of the 3D image within a 3D environment by projecting portions of the 3D image to form a shape within the 3D environment, the shape based on an angle of view of the 3D image, where the 3D environment includes additional content separate from the 3D image, and providing a view of the 3D environment including the projection of the 3D image.
  Type: Application
  Filed: August 28, 2023
  Publication date: December 14, 2023
  Inventors: Alexandre Da Veiga, Tobias Rick
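The phrase "the shape based on an angle of view of the 3D image" suggests a projection surface whose extent matches the image's coverage. One hedged way to picture that is sampling a horizontal arc of the surface; the geometry below is an illustrative sketch, not the filing's actual projection:

```python
import math

def projection_shape_points(angle_of_view_deg, radius=1.0, steps=5):
    """Sample a horizontal arc of the partial shell onto which a
    multi-directional image could be projected.

    The arc spans the image's angle of view, centered in front of a viewer
    at the origin: a 180-degree image yields a half-shell, 360 a full ring.
    Returns (x, z) points at the given radius.
    """
    half = math.radians(angle_of_view_deg) / 2.0
    pts = []
    for i in range(steps):
        theta = -half + (2.0 * half) * i / (steps - 1)
        pts.append((radius * math.sin(theta), radius * math.cos(theta)))
    return pts

# Endpoints of a 180-degree arc lie to the viewer's left and right.
arc = projection_shape_points(180.0, steps=3)
```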
- Patent number: 11830214
  Abstract: A method includes obtaining first pass-through image data characterized by a first pose. The method includes obtaining respective pixel characterization vectors for pixels in the first pass-through image data. The method includes identifying a feature of an object within the first pass-through image data in accordance with a determination that pixel characterization vectors for the feature satisfy a feature confidence threshold. The method includes displaying the first pass-through image data and an AR display marker that corresponds to the feature. The method includes obtaining second pass-through image data characterized by a second pose. The method includes transforming the AR display marker to a position associated with the second pose in order to track the feature. The method includes displaying the second pass-through image data and maintaining display of the AR display marker that corresponds to the feature of the object based on the transformation.
  Type: Grant
  Filed: May 29, 2019
  Date of Patent: November 28, 2023
  Assignee: APPLE INC.
  Inventors: Jeffrey S. Norris, Alexandre Da Veiga, Bruno M. Sommer, Ye Cong, Tobias Eble, Moinul Khan, Nicolas Bonnier, Hao Pan
- Publication number: 20230344973
  Abstract: Various implementations disclosed herein include devices, systems, and methods that modify audio of played-back AV content based on context in accordance with some implementations. In some implementations, audio-visual content of a physical environment is obtained, and the audio-visual content includes visual content and audio content that includes a plurality of audio portions corresponding to the visual content. In some implementations, a context for presenting the audio-visual content is determined, and a temporal relationship between one or more audio portions of the plurality of audio portions and the visual content is determined based on the context. Then, synthesized audio-visual content is presented based on the temporal relationship.
  Type: Application
  Filed: June 28, 2023
  Publication date: October 26, 2023
  Inventors: Shai Messingher Lang, Alexandre Da Veiga, Spencer H. Ray, Symeon Delikaris Manias
- Publication number: 20230336865
  Abstract: The present disclosure generally relates to techniques and user interfaces for capturing media, displaying a preview of media, displaying a recording indicator, displaying a camera user interface, and/or displaying previously captured media.
  Type: Application
  Filed: November 22, 2022
  Publication date: October 19, 2023
  Inventors: Alexandre DA VEIGA, Lee S. Broughton, Angel Suet Yan CHEUNG, Stephen O. LEMAY, Chia Yang LIN, Behkish J. MANZARI, Ivan MARKOVIC, Alexander MENZIES, Aaron MORING, Jonathan RAVASZ, Tobias RICK, Bryce L. SCHMIDTCHEN, William A. SORRENTINO, III
- Patent number: 11783499
  Abstract: In accordance with some embodiments, a technique that enables an electronic device with a camera to automatically gather and generate requisite data from the real-world environment to allow the electronic device to quickly and efficiently determine and provide accurate measurements of physical spaces and/or objects within the real-world environment is described.
  Type: Grant
  Filed: April 19, 2021
  Date of Patent: October 10, 2023
  Assignee: Apple Inc.
  Inventor: Alexandre da Veiga
- Publication number: 20230298278
  Abstract: Various implementations disclosed herein include devices, systems, and methods that determine how to present a three-dimensional (3D) photo in an extended reality (XR) environment (e.g., in 3D, 2D, blurry, or not at all) based on viewing position of a user active in the XR environment relative to a placement of the 3D photo in the XR environment. In some implementations, at an electronic device having a processor, a 3D photo that is an incomplete 3D representation created based on one or more images captured by an image capture device is obtained. In some implementations, a viewing position of the electronic device relative to a placement position of the 3D photo is determined, and a presentation mode for the 3D photo is determined based on the viewing position. In some implementations, the 3D photo is provided at the placement position based on the presentation mode in the XR environment.
  Type: Application
  Filed: November 4, 2022
  Publication date: September 21, 2023
  Inventors: Alexandre DA VEIGA, Jeffrey S. NORRIS, Madhurani R. SAPRE, Spencer H. RAY
- Publication number: 20230298281
  Abstract: In an exemplary process, a set of parameters corresponding to characteristics of a physical setting of a user is obtained. Based on the parameters, at least one display placement value and a fixed boundary location corresponding to the physical setting are obtained. In accordance with a determination that the at least one display placement value satisfies a display placement criterion, a virtual display is displayed at the fixed boundary location corresponding to the physical setting.
  Type: Application
  Filed: December 20, 2022
  Publication date: September 21, 2023
  Inventors: Timothy R. PEASE, Alexandre DA VEIGA, David H. Y. HUANG, Peng LIU, Robert K. MOLHOLM
- Patent number: 11763479
  Abstract: Various implementations disclosed herein include devices, systems, and methods that provide measurements of objects based on a location of a surface of the objects. An exemplary process may include obtaining a three-dimensional (3D) representation of a physical environment that was generated based on depth data and light intensity image data, generating a 3D bounding box corresponding to an object in the physical environment based on the 3D representation, determining a class of the object based on the 3D semantic data, determining a location of a surface of the object based on the class of the object, the location determined by identifying a plane within the 3D bounding box having semantics in the 3D semantic data satisfying surface criteria for the object, and providing a measurement of the object, the measurement of the object determined based on the location of the surface of the object.
  Type: Grant
  Filed: December 1, 2022
  Date of Patent: September 19, 2023
  Assignee: Apple Inc.
  Inventors: Amit Jain, Aditya Sankar, Qi Shan, Alexandre Da Veiga, Shreyas V. Joshi
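The class-driven surface lookup in this abstract (pick the plane inside the bounding box whose semantics satisfy the class's surface criteria, then measure from it) can be sketched as below. The per-class criteria table, plane records, and measurement are illustrative assumptions, not the granted claims:

```python
# Each candidate plane found inside the object's 3D bounding box carries the
# dominant semantic label of its points and its height above the floor.
def locate_surface(candidate_planes, object_class):
    """Pick the plane whose semantics satisfy the surface criteria for the
    object's class (the criteria table here is an illustrative stand-in)."""
    surface_label = {"table": "tabletop", "chair": "seat"}
    wanted = surface_label[object_class]
    for plane in candidate_planes:
        if plane["label"] == wanted:
            return plane
    return None

def measure_object(candidate_planes, object_class):
    """Report one measurement: floor to the object's detected surface."""
    surface = locate_surface(candidate_planes, object_class)
    return None if surface is None else surface["height_m"]

# A table whose semantic planes include legs and a tabletop at 0.74 m.
planes = [{"label": "leg", "height_m": 0.2},
          {"label": "tabletop", "height_m": 0.74}]
height = measure_object(planes, "table")
```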
- Patent number: 11763477
  Abstract: Various implementations disclosed herein include devices, systems, and methods that determine a distance between a portion (e.g., top) of a person's head and the floor below as an estimate of the person's height. For example, an example process may include determining a first position on a head of a person in a three-dimensional (3D) coordinate system, the first position determined based on detecting a feature on the head based on a two-dimensional (2D) image of the person in a physical environment, determining a second position on a floor below the first position in the 3D coordinate system, and estimating a height based on determining a distance between the first position and the second position.
  Type: Grant
  Filed: February 23, 2021
  Date of Patent: September 19, 2023
  Assignee: Apple Inc.
  Inventors: Alexandre Da Veiga, George J. Hito, Timothy R. Pease
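The final step of this process is simple geometry: with the floor point directly below the head point, the distance between the two positions reduces to a vertical difference. A minimal sketch, assuming a y-up coordinate system (the coordinate convention is an assumption, not stated in the abstract):

```python
def estimate_height(head_xyz, floor_y):
    """Estimate a person's height as the distance between the detected
    top-of-head point and the floor point below it.

    head_xyz: (x, y, z) of the head feature in a y-up 3D coordinate system.
    floor_y:  y of the floor directly below that point.
    """
    # The floor point shares x and z with the head point, so the distance
    # between the two positions reduces to a difference in y.
    return head_xyz[1] - floor_y

# Head feature detected 1.75 m above a floor at y = 0.
height_m = estimate_height((0.3, 1.75, -2.0), 0.0)
```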
- Publication number: 20230290078
  Abstract: Various implementations use object information to facilitate a communication session. Some implementations create a dense reconstruction (e.g., a point cloud or triangular mesh) of a physical environment, for example, using light intensity images and depth sensor data. Less data-intensive object information is also created to represent the physical environment for more efficient storage, editing, sharing, and use. In some implementations, the object information includes object attribute and location information. In some implementations, a 2D floorplan or other 2D representation provides object locations, and metadata (e.g., object type, texture, heights, dimensions, etc.) provides object attributes. The object location and attribute information may be used, during a communication session, to generate a 3D graphical environment that is representative of the physical environment.
  Type: Application
  Filed: November 22, 2022
  Publication date: September 14, 2023
  Inventors: Bradley W. PEEBLER, Alexandre DA VEIGA
- Patent number: 11756229
  Abstract: Systems and methods for localization for mobile devices are described. Some implementations may include accessing motion data captured using one or more motion sensors; determining, based on the motion data, a coarse localization, wherein the coarse localization includes a first estimate of position; obtaining one or more feature point maps, wherein the feature point maps are associated with a position of the coarse localization; accessing images captured using one or more image sensors; determining, based on the images, a fine localization pose by localizing into a feature point map of the one or more feature point maps, wherein the fine localization pose includes a second estimate of position and an estimate of orientation; generating, based on the fine localization pose, a virtual object image including a view of a virtual object; and displaying the virtual object image.
  Type: Grant
  Filed: October 4, 2021
  Date of Patent: September 12, 2023
  Assignee: APPLE INC.
  Inventors: Bruno M. Sommer, Alexandre Da Veiga
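The coarse-then-fine pipeline this abstract enumerates can be sketched as a two-stage function: a rough position from motion sensors selects which feature point maps to try, and images then refine that into a full pose. All helper callables and the cell-based map lookup below are illustrative stand-ins, not a real API:

```python
def region_key(xy, cell=10.0):
    """Bucket a coarse (x, y) position into a map-lookup grid cell."""
    return (int(xy[0] // cell), int(xy[1] // cell))

def localize(motion_data, images, maps_by_region, coarse_fn, fine_fn):
    """Two-stage localization sketch.

    coarse_fn(motion_data) -> (x, y)         rough position estimate
    fine_fn(images, fmap)  -> pose or None   refined position + orientation
    """
    coarse_xy = coarse_fn(motion_data)
    # Fetch only the feature point maps associated with the coarse position.
    candidates = maps_by_region.get(region_key(coarse_xy), [])
    for fmap in candidates:
        pose = fine_fn(images, fmap)
        if pose is not None:
            return pose            # position + orientation for rendering
    return None                    # could not localize into any nearby map

# Demo: one map stored for the cell containing the coarse estimate.
maps = {(0, 0): ["map_a"]}
pose = localize(
    "imu_samples", "camera_frames", maps,
    coarse_fn=lambda m: (3.0, 4.0),
    fine_fn=lambda im, f: {"pos": (3.1, 4.2), "yaw": 0.5} if f == "map_a" else None,
)
```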
- Publication number: 20230281933
  Abstract: Various implementations disclosed herein include devices, systems, and methods that create a 3D video, which includes determining first adjustments (e.g., first transforms) to video frames (e.g., one or more RGB images and depth images per frame) to align content in a coordinate system to remove the effects of capturing camera motion. Various implementations disclosed herein include devices, systems, and methods that play back a 3D video, which includes determining second adjustments (e.g., second transforms) to remove the effects of movement of a viewing electronic device relative to a viewing environment during playback of the 3D video. Some implementations distinguish static content and moving content of the video frames to play back only moving objects or facilitate concurrent playback of multiple spatially related 3D videos. The 3D video may include images, audio, or 3D video of a video-capture-device user.
  Type: Application
  Filed: November 10, 2022
  Publication date: September 7, 2023
  Inventors: Timothy R. PEASE, Alexandre DA VEIGA, Benjamin H. BOESEL, David H. HUANG, Jonathan PERRON, Shih-Sang CHIU, Spencer H. RAY
- Patent number: 11748445
  Abstract: This disclosure relates to managing a feature point map. The managing can include adding feature points to the feature point map using feature points captured by multiple devices and/or by a single device at different times. The resulting feature point map may be referred to as a global feature point map, storing feature points from multiple feature point maps. The global feature point map may be stored in a global feature point database accessible by a server so that the server may provide the global feature point map to devices requesting feature point maps for particular locations. In such examples, the global feature point map may allow a user to localize in locations without having to scan for feature points in the locations. In this manner, the user can localize in locations the user has never previously visited.
  Type: Grant
  Filed: April 29, 2020
  Date of Patent: September 5, 2023
  Assignee: Apple Inc.
  Inventors: Bruno M. Sommer, Alexandre da Veiga
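The core of this abstract is an accumulation step: points captured by multiple devices (or one device over time) are folded into a single global map. A minimal sketch, where maps are plain point lists and the radius-based duplicate check is an illustrative stand-in for whatever matching the real system performs:

```python
def _close(p, q, r):
    """True if 3D points p and q are within distance r of each other."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) <= r * r

def merge_into_global(global_map, device_map, dedupe_radius=0.05):
    """Add a device's feature points to the global feature point map,
    skipping points that duplicate an existing point within a small radius.

    Both maps are lists of (x, y, z) points; returns the updated global map.
    """
    for pt in device_map:
        if not any(_close(pt, q, dedupe_radius) for q in global_map):
            global_map.append(pt)
    return global_map

# Two devices scanned overlapping areas; the shared point is stored once.
g = merge_into_global([], [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)])
g = merge_into_global(g, [(1.0, 0.0, 0.001), (2.0, 0.0, 0.0)])
```

A server-side global feature point database could then answer location queries by returning the merged points near a requested position, sparing devices a fresh scan.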
- Patent number: 11733956
  Abstract: In one implementation, a method of providing display device sharing and interactivity in simulated reality is performed at a first electronic device including one or more processors and a non-transitory memory. The method includes obtaining a gesture input to a first display device in communication with the first electronic device from a first user, where the first display device includes a first display. The method further includes transmitting a representation of the first display to a second electronic device in response to obtaining the gesture input. The method additionally includes receiving an input message directed to the first display device from the second electronic device, where the input message includes an input directive obtained by the second electronic device from a second user. The method also includes transmitting the input message to the first display device for execution by the first display device.
  Type: Grant
  Filed: September 3, 2019
  Date of Patent: August 22, 2023
  Assignee: APPLE INC.
  Inventors: Bruno M. Sommer, Alexandre Da Veiga, Ioana Negoita