Patents by Inventor Alexandre da Veiga
Alexandre da Veiga has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

Publication number: 20230298278
Abstract: Various implementations disclosed herein include devices, systems, and methods that determine how to present a three-dimensional (3D) photo in an extended reality (XR) environment (e.g., in 3D, 2D, blurry, or not at all) based on the viewing position of a user active in the XR environment relative to a placement of the 3D photo in the XR environment. In some implementations, at an electronic device having a processor, a 3D photo that is an incomplete 3D representation created based on one or more images captured by an image capture device is obtained. In some implementations, a viewing position of the electronic device relative to a placement position of the 3D photo is determined, and a presentation mode for the 3D photo is determined based on the viewing position. In some implementations, the 3D photo is provided at the placement position based on the presentation mode in the XR environment.
Type: Application
Filed: November 4, 2022
Publication date: September 21, 2023
Inventors: Alexandre DA VEIGA, Jeffrey S. NORRIS, Madhurani R. SAPRE, Spencer H. RAY

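The process above selects among 3D, 2D, blurred, or hidden presentation depending on where the viewer stands relative to the placed 3D photo. Below is a minimal Python sketch of that kind of decision rule; the function name, the angle test, and every threshold are illustrative assumptions, not values or logic taken from the application.

```python
import math
from enum import Enum

class PresentationMode(Enum):
    FULL_3D = "3d"
    FLAT_2D = "2d"
    BLURRED = "blurred"
    HIDDEN = "hidden"

def choose_presentation_mode(viewer_pos, photo_pos, photo_normal,
                             max_3d_distance=2.0, max_2d_distance=6.0,
                             max_view_angle_deg=60.0):
    """Pick a presentation mode from the viewer's position relative to the photo.

    viewer_pos, photo_pos: (x, y, z) tuples in a shared XR coordinate system.
    photo_normal: unit vector the 3D photo "faces"; viewing the incomplete
    reconstruction from far off-axis would expose holes, so we fall back.
    All thresholds are made-up illustrative values.
    """
    to_viewer = tuple(v - p for v, p in zip(viewer_pos, photo_pos))
    distance = math.sqrt(sum(c * c for c in to_viewer))
    if distance == 0:
        return PresentationMode.HIDDEN
    unit = tuple(c / distance for c in to_viewer)
    cos_angle = sum(u * n for u, n in zip(unit, photo_normal))
    angle_deg = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))

    if angle_deg > max_view_angle_deg:
        # Viewer is far off-axis: show a degraded (blurred) version instead.
        return PresentationMode.BLURRED
    if distance <= max_3d_distance:
        return PresentationMode.FULL_3D
    if distance <= max_2d_distance:
        return PresentationMode.FLAT_2D
    return PresentationMode.HIDDEN

# Viewer 1.5 m in front of the photo, looking straight at it -> FULL_3D.
print(choose_presentation_mode((0.0, 0.0, 1.5), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))
```
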
Publication number: 20230298281
Abstract: In an exemplary process, a set of parameters corresponding to characteristics of a physical setting of a user is obtained. Based on the parameters, at least one display placement value and a fixed boundary location corresponding to the physical setting are obtained. In accordance with a determination that the at least one display placement value satisfies a display placement criterion, a virtual display is displayed at the fixed boundary location corresponding to the physical setting.
Type: Application
Filed: December 20, 2022
Publication date: September 21, 2023
Inventors: Timothy R. PEASE, Alexandre DA VEIGA, David H. Y. HUANG, Peng LIU, Robert K. MOLHOLM

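The abstract describes deriving a placement value for a fixed boundary location and only displaying the virtual display when that value satisfies a criterion. The sketch below illustrates one way such a rule could look; the scoring formula, the `BoundaryCandidate` fields, and the threshold are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class BoundaryCandidate:
    """A fixed boundary location in the physical setting (e.g., a wall segment)."""
    name: str
    width_m: float            # unobstructed width available for a virtual display
    distance_m: float         # distance from the user's position
    occluded_fraction: float  # 0.0 (clear) .. 1.0 (fully blocked by furniture)

def placement_value(c: BoundaryCandidate) -> float:
    """Illustrative scoring: wide, nearby, unobstructed boundaries score higher."""
    return c.width_m * (1.0 - c.occluded_fraction) / max(c.distance_m, 0.1)

def choose_display_location(candidates, threshold=0.8):
    """Return the best candidate whose placement value satisfies the criterion."""
    scored = sorted(candidates, key=placement_value, reverse=True)
    best = scored[0] if scored else None
    if best is not None and placement_value(best) >= threshold:
        return best
    return None  # criterion not satisfied: do not anchor the virtual display

room = [
    BoundaryCandidate("north wall", width_m=2.5, distance_m=2.0, occluded_fraction=0.1),
    BoundaryCandidate("bookshelf wall", width_m=1.0, distance_m=1.5, occluded_fraction=0.7),
]
print(choose_display_location(room))  # -> the north wall candidate
```
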
Publication number: 20230290078
Abstract: Various implementations use object information to facilitate a communication session. Some implementations create a dense reconstruction (e.g., a point cloud or triangular mesh) of a physical environment, for example, using light intensity images and depth sensor data. Less data-intensive object information is also created to represent the physical environment for more efficient storage, editing, sharing, and use. In some implementations, the object information includes object attribute and location information. In some implementations, a 2D floorplan or other 2D representation provides object locations, and metadata (e.g., object type, texture, heights, dimensions, etc.) provides object attributes. The object location and attribute information may be used, during a communication session, to generate a 3D graphical environment that is representative of the physical environment.
Type: Application
Filed: November 22, 2022
Publication date: September 14, 2023
Inventors: Bradley W. PEEBLER, Alexandre DA VEIGA

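The key idea is a lightweight object representation (2D footprints plus attribute metadata) that can be expanded into a 3D environment during a communication session. A rough Python sketch of that data flow follows; the field names and the simple box-extrusion step are assumptions made for illustration, not the claimed method.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FloorplanObject:
    """Compact object information: a 2D footprint plus attribute metadata."""
    object_type: str
    footprint: List[Tuple[float, float]]  # polygon in floorplan coordinates (meters)
    height_m: float
    texture: str = "default"

def extrude_to_proxies(objects):
    """Turn the lightweight object information into simple 3D proxies.

    Each footprint polygon becomes a vertical prism: the 2D vertices are lifted
    to z=0 and z=height_m. A real renderer would build textured meshes; here we
    just return vertex lists to show the flow from 2D + attributes to 3D.
    """
    proxies = []
    for obj in objects:
        floor = [(x, y, 0.0) for x, y in obj.footprint]
        ceiling = [(x, y, obj.height_m) for x, y in obj.footprint]
        proxies.append({"type": obj.object_type, "texture": obj.texture,
                        "vertices": floor + ceiling})
    return proxies

table = FloorplanObject("table", [(0, 0), (1.2, 0), (1.2, 0.8), (0, 0.8)],
                        height_m=0.75, texture="oak")
proxies = extrude_to_proxies([table])
print(proxies[0]["type"], len(proxies[0]["vertices"]))  # table 8
```
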
Publication number: 20230281933
Abstract: Various implementations disclosed herein include devices, systems, and methods that create a 3D video, which includes determining first adjustments (e.g., first transforms) to video frames (e.g., one or more RGB images and depth images per frame) to align content in a coordinate system and remove the effects of capturing-camera motion. Various implementations disclosed herein include devices, systems, and methods that play back a 3D video, which includes determining second adjustments (e.g., second transforms) to remove the effects of movement of a viewing electronic device relative to a viewing environment during playback of the 3D video. Some implementations distinguish static content and moving content of the video frames to play back only moving objects or facilitate concurrent playback of multiple spatially related 3D videos. The 3D video may include images, audio, or 3D video of a video-capture-device user.
Type: Application
Filed: November 10, 2022
Publication date: September 7, 2023
Inventors: Timothy R. PEASE, Alexandre DA VEIGA, Benjamin H. BOESEL, David H. HUANG, Jonathan PERRON, Shih-Sang CHIU, Spencer H. RAY

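Two kinds of adjustments are described: capture-time transforms that factor out the capturing camera's motion, and playback-time transforms that factor out the viewing device's motion. The sketch below shows the idea with translation-only poses; real transforms would include rotation as well, and all names here are illustrative rather than taken from the application.

```python
def stabilize_frame(points_camera, camera_pose_world):
    """First adjustment (capture time): map frame content from camera space into
    a shared world coordinate system so capturing-camera motion is removed.

    points_camera: list of (x, y, z) points expressed relative to the camera.
    camera_pose_world: camera position (tx, ty, tz) for that frame. Rotation is
    omitted to keep the sketch short; a real pipeline would apply it too.
    """
    tx, ty, tz = camera_pose_world
    return [(x + tx, y + ty, z + tz) for x, y, z in points_camera]

def reproject_for_viewer(points_world, viewer_pose_world):
    """Second adjustment (playback time): express the stabilized content relative
    to the viewing device, so the video stays put as the viewer moves."""
    vx, vy, vz = viewer_pose_world
    return [(x - vx, y - vy, z - vz) for x, y, z in points_world]

# One captured frame: two points seen while the capture camera sat at (0, 0, 1).
frame = [(0.1, 0.0, 2.0), (-0.1, 0.0, 2.0)]
world = stabilize_frame(frame, (0.0, 0.0, 1.0))
print(reproject_for_viewer(world, (0.0, 1.6, 0.0)))  # viewer standing elsewhere
```
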
Publication number: 20230262406
Abstract: Various implementations disclosed herein include devices, systems, and methods that display visual content as part of a 3D environment and add audio corresponding to the visual content. The audio may be spatialized to be from one or more audio source locations within the 3D environment. For example, a video may be presented on a virtual surface within an extended reality (XR) environment while audio associated with the video is spatialized to sound as if it is produced from an audio source location corresponding to that virtual surface. How the audio is provided may be determined based on the position of the viewer (e.g., the user or his/her device) relative to the presented visual content.
Type: Application
Filed: December 12, 2022
Publication date: August 17, 2023
Inventors: Shai Messingher Lang, Alexandre Da Veiga, Spencer H. Ray, Symeon Delikaris Manias

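The abstract ties spatialized audio to an audio source location on the visual content and to the viewer's position. A toy Python spatializer follows to make the geometry concrete; the inverse-distance rolloff and equal-power pan are generic audio techniques chosen for illustration, not the method claimed in the application.

```python
import math

def spatialize_gain(viewer_pos, viewer_yaw_rad, source_pos, ref_distance=1.0):
    """Very small spatializer sketch: distance attenuation plus equal-power pan.

    viewer_yaw_rad: direction the listener faces in the scene's XZ plane.
    Returns (left_gain, right_gain). Real spatial audio would use HRTFs.
    """
    dx = source_pos[0] - viewer_pos[0]
    dz = source_pos[2] - viewer_pos[2]
    distance = max(math.hypot(dx, dz), 1e-6)
    attenuation = min(1.0, ref_distance / distance)  # inverse-distance rolloff

    # Angle of the source relative to where the listener is facing.
    azimuth = math.atan2(dx, dz) - viewer_yaw_rad
    pan = max(-1.0, min(1.0, math.sin(azimuth)))     # -1 = hard left, +1 = hard right
    left = attenuation * math.cos((pan + 1.0) * math.pi / 4.0)
    right = attenuation * math.sin((pan + 1.0) * math.pi / 4.0)
    return left, right

# Video surface one meter ahead and slightly to the listener's right.
print(spatialize_gain((0.0, 0.0, 0.0), 0.0, (0.5, 0.0, 1.0)))
```
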
Publication number: 20230094061
Abstract: Various implementations disclosed herein include devices, systems, and methods that provide measurements of objects based on a location of a surface of the objects. An exemplary process may include obtaining a three-dimensional (3D) representation of a physical environment that was generated based on depth data and light intensity image data, generating a 3D bounding box corresponding to an object in the physical environment based on the 3D representation, determining a class of the object based on the 3D semantic data, determining a location of a surface of the object based on the class of the object, the location determined by identifying a plane within the 3D bounding box having semantics in the 3D semantic data satisfying surface criteria for the object, and providing a measurement of the object, the measurement of the object determined based on the location of the surface of the object.
Type: Application
Filed: December 1, 2022
Publication date: March 30, 2023
Inventors: Amit Jain, Aditya Sankar, Qi Shan, Alexandre Da Veiga, Shreyas V. Joshi

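The surface to measure is located by applying class-specific surface criteria inside the object's 3D bounding box. The sketch below substitutes a simple per-class height fraction for those criteria; the fractions, class names, and function names are invented for illustration only.

```python
# Hypothetical per-class surface criteria: which horizontal plane inside the
# object's 3D bounding box counts as "the surface" to measure (e.g., a table's
# top, a chair's seat). The fractions are made-up illustrative values.
SURFACE_HEIGHT_FRACTION = {
    "table": 1.0,    # surface of interest is the top of the box
    "chair": 0.45,   # the seat plane sits partway up the box
    "bed": 0.55,
}

def surface_height(bbox_min_z, bbox_max_z, object_class):
    """Locate the surface plane inside the bounding box for a classified object."""
    fraction = SURFACE_HEIGHT_FRACTION.get(object_class, 1.0)
    return bbox_min_z + fraction * (bbox_max_z - bbox_min_z)

def measure_surface_above_floor(bbox_min_z, bbox_max_z, object_class, floor_z=0.0):
    """The reported measurement: how high the object's surface sits above the floor."""
    return surface_height(bbox_min_z, bbox_max_z, object_class) - floor_z

print(measure_surface_above_floor(0.0, 0.74, "table"))  # 0.74 m
print(measure_surface_above_floor(0.0, 0.90, "chair"))  # 0.405 m
```
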
Publication number: 20230055232
Abstract: A method includes receiving one or more signals that each indicate a device type for a respective remote device, identifying one or more visible devices in one or more images, matching a first device from the one or more visible devices with a first signal from the one or more signals based on a device type of the first device matching a device type for the first signal and based on a visible output of the first device, pairing the first device with a second device, and controlling a function of the first device using the second device.
Type: Application
Filed: November 3, 2022
Publication date: February 23, 2023
Inventors: Jeffrey S. Norris, Bruno M. Sommer, Alexandre Da Veiga

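Matching here hinges on two cues: the device type advertised in the received signal and a visible output observed in the camera images. A small Python sketch of that matching step follows; the blink-pattern representation and all field names are assumptions, and real pairing logic would live in the platform's wireless stack.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RemoteSignal:
    device_type: str       # e.g., "smart_lamp", advertised in the wireless signal
    blink_pattern: str     # output the device was asked to flash, e.g., "1011"

@dataclass
class VisibleDevice:
    device_type: str       # type inferred from the camera images
    observed_pattern: str  # blink pattern actually observed in the images

def match_visible_to_signal(visible, signals) -> Optional[RemoteSignal]:
    """Match a camera-detected device to a signal by type plus visible output."""
    for signal in signals:
        if (signal.device_type == visible.device_type
                and signal.blink_pattern == visible.observed_pattern):
            return signal
    return None

lamp_signal = RemoteSignal("smart_lamp", blink_pattern="1011")
seen = VisibleDevice("smart_lamp", observed_pattern="1011")
matched = match_visible_to_signal(seen, [lamp_signal])
if matched is not None:
    print("pairing with", matched.device_type)  # pairing and control would follow
```
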
Patent number: 11574485
Abstract: Various implementations disclosed herein include devices, systems, and methods that obtain a three-dimensional (3D) representation of a physical environment that was generated based on depth data and light intensity image data, generate a 3D bounding box corresponding to an object in the physical environment based on the 3D representation, classify the object based on the 3D bounding box and the 3D semantic data, and display a measurement of the object, where the measurement of the object is determined using one of a plurality of class-specific neural networks selected based on the classifying of the object.
Type: Grant
Filed: January 14, 2021
Date of Patent: February 7, 2023
Assignee: Apple Inc.
Inventors: Amit Jain, Aditya Sankar, Qi Shan, Alexandre Da Veiga, Shreyas V. Joshi

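Here the classification step routes the object to one of several class-specific estimators. In the sketch below, ordinary functions stand in for the class-specific neural networks so the dispatch logic stays visible; the per-class formulas and names are placeholders, not the patented models.

```python
# The class label routes the bounding box to a class-specific estimator. A real
# implementation would dispatch to per-class neural networks; plain functions
# stand in for them here.
def measure_table(bbox):   # bbox = (width, depth, height) in meters
    return {"surface_height_m": bbox[2]}

def measure_chair(bbox):
    return {"seat_height_m": 0.45 * bbox[2]}  # illustrative fraction, not from the patent

MEASUREMENT_HEADS = {"table": measure_table, "chair": measure_chair}

def measure_object(object_class, bbox):
    head = MEASUREMENT_HEADS.get(object_class)
    if head is None:
        return {"bounding_box_m": bbox}  # fall back to raw box extents
    return head(bbox)

print(measure_object("table", (1.2, 0.8, 0.74)))
print(measure_object("plant", (0.3, 0.3, 1.1)))
```
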
Patent number: 11532227
Abstract: A method includes obtaining a location and a device type for one or more remote devices, and identifying one or more visible devices in one or more images, the one or more visible devices having a location and a device type. The method also includes matching a first visible device from the one or more visible devices with a first remote device from the one or more remote devices based on a location and a device type of the first visible device matching a location and a device type of the first remote device, obtaining a user input, and controlling a function of the first remote device based on the user input.
Type: Grant
Filed: December 21, 2021
Date of Patent: December 20, 2022
Assignee: Apple Inc.
Inventors: Jeffrey S. Norris, Bruno M. Sommer, Alexandre Da Veiga

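Matching in this claim uses agreement on both reported location and device type. The sketch below shows one plausible form of that check with a distance tolerance; the tolerance value and dictionary layout are illustrative assumptions rather than details from the patent.

```python
import math

def match_by_location_and_type(visible_devices, remote_devices, max_distance_m=0.5):
    """Pair camera-detected devices with reported remote devices when both the
    device type and the estimated location agree (within a tolerance).

    Each device is a dict with "type" and "position" ((x, y, z) in a shared
    coordinate frame). The tolerance is an illustrative value.
    """
    matches = []
    for visible in visible_devices:
        for remote in remote_devices:
            if visible["type"] != remote["type"]:
                continue
            gap = math.dist(visible["position"], remote["position"])
            if gap <= max_distance_m:
                matches.append((visible, remote))
                break
    return matches

seen = [{"type": "speaker", "position": (1.0, 0.0, 2.0)}]
reported = [{"type": "speaker", "position": (1.1, 0.0, 2.2)},
            {"type": "tv", "position": (4.0, 1.0, 0.0)}]
print(len(match_by_location_and_type(seen, reported)))  # 1
```
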
Publication number: 20220368875
Abstract: Various implementations disclosed herein include devices, systems, and methods that modify audio of played-back audio-visual (AV) content based on context. In some implementations, audio-visual content of a physical environment is obtained, and the audio-visual content includes visual content and audio content that includes a plurality of audio portions corresponding to the visual content. In some implementations, a context for presenting the audio-visual content is determined, and a temporal relationship between one or more audio portions of the plurality of audio portions and the visual content is determined based on the context. Then, synthesized audio-visual content is presented based on the temporal relationship.
Type: Application
Filed: May 27, 2022
Publication date: November 17, 2022
Inventors: Shai Messingher Lang, Alexandre Da Veiga, Spencer H. Ray, Symeon Delikaris Manias

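The temporal relationship between audio portions and visual content is chosen from the presentation context. The sketch below illustrates the idea with two invented context policies ("replay" and "summary"); neither policy nor any of the field names is taken from the application.

```python
def schedule_audio(portions, context):
    """Assign each audio portion a start time relative to the visual content.

    portions: list of dicts with "name", "duration_s", and "capture_offset_s",
    in capture order. "replay" keeps the original timing; "summary" packs the
    portions back-to-back so a shortened visual gets continuous audio.
    """
    schedule = []
    if context == "replay":
        for p in portions:
            schedule.append((p["name"], p["capture_offset_s"]))
    elif context == "summary":
        t = 0.0
        for p in portions:
            schedule.append((p["name"], t))
            t += p["duration_s"]
    return schedule

clips = [{"name": "greeting", "duration_s": 2.0, "capture_offset_s": 0.0},
         {"name": "narration", "duration_s": 5.0, "capture_offset_s": 8.0}]
print(schedule_audio(clips, "summary"))  # [('greeting', 0.0), ('narration', 2.0)]
```
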
Patent number: 11410328
Abstract: This disclosure relates to maintaining a feature point map. The maintaining can include selectively updating feature points in the feature point map based on an assigned classification of the feature points. For example, when a feature point is assigned a first classification, the feature point is updated whenever information indicates that the feature point should be updated. In such an example, when the feature point is assigned a second classification different from the first classification, the feature point is not updated even when information indicates that the feature point should be updated. A classification can be assigned to a feature point using a classification system on one or more pixels of an image corresponding to the feature point.
Type: Grant
Filed: April 29, 2020
Date of Patent: August 9, 2022
Assignee: Apple Inc.
Inventors: Bruno M. Sommer, Alexandre da Veiga

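Update behavior depends on the classification assigned to each feature point: one class accepts updates, the other keeps its stored value. A compact Python sketch of that policy follows; the `Stability` labels and dictionary layout are assumptions made for illustration.

```python
from enum import Enum

class Stability(Enum):
    DYNAMIC = 1   # first classification: update whenever new information arrives
    STATIC = 2    # second classification: resist churn in the map

def maybe_update(feature_point, new_observation):
    """Selectively update a feature point based on its assigned classification.

    feature_point: dict with "position" and "stability" (a Stability value).
    A DYNAMIC point takes every update; a STATIC point keeps its stored
    position so transient observations do not erode the map.
    """
    if feature_point["stability"] is Stability.DYNAMIC:
        feature_point["position"] = new_observation
    return feature_point

wall_corner = {"position": (1.0, 2.0, 0.0), "stability": Stability.STATIC}
print(maybe_update(wall_corner, (1.3, 2.1, 0.0))["position"])  # unchanged
```
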
Patent number: 11381797
Abstract: Various implementations disclosed herein include devices, systems, and methods that modify audio of played-back audio-visual (AV) content based on context. In some implementations, audio-visual content of a physical environment is obtained, and the audio-visual content includes visual content and audio content that includes a plurality of audio portions corresponding to the visual content. In some implementations, a context for presenting the audio-visual content is determined, and a temporal relationship between one or more audio portions of the plurality of audio portions and the visual content is determined based on the context. Then, synthesized audio-visual content is presented based on the temporal relationship.
Type: Grant
Filed: June 25, 2021
Date of Patent: July 5, 2022
Assignee: Apple Inc.
Inventors: Shai Messingher Lang, Alexandre Da Veiga, Spencer H. Ray, Symeon Delikaris Manias

Publication number: 20220114882
Abstract: A method includes obtaining a location and a device type for one or more remote devices, and identifying one or more visible devices in one or more images, the one or more visible devices having a location and a device type. The method also includes matching a first visible device from the one or more visible devices with a first remote device from the one or more remote devices based on a location and a device type of the first visible device matching a location and a device type of the first remote device, obtaining a user input, and controlling a function of the first remote device based on the user input.
Type: Application
Filed: December 21, 2021
Publication date: April 14, 2022
Inventors: Jeffrey S. Norris, Bruno M. Sommer, Alexandre Da Veiga

Publication number: 20220101058
Abstract: Systems and methods for localization for mobile devices are described. Some implementations may include accessing motion data captured using one or more motion sensors; determining, based on the motion data, a coarse localization, wherein the coarse localization includes a first estimate of position; obtaining one or more feature point maps, wherein the feature point maps are associated with a position of the coarse localization; accessing images captured using one or more image sensors; determining, based on the images, a fine localization pose by localizing into a feature point map of the one or more feature point maps, wherein the fine localization pose includes a second estimate of position and an estimate of orientation; generating, based on the fine localization pose, a virtual object image including a view of a virtual object; and displaying the virtual object image.
Type: Application
Filed: October 4, 2021
Publication date: March 31, 2022
Inventors: Bruno M. Sommer, Alexandre Da Veiga

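Localization proceeds coarse-to-fine: motion sensors narrow the search to feature point maps near a rough position estimate, and image-based localization into one of those maps yields a full pose. The sketch below mirrors that two-step flow; the map records and the `match_images` callable are stand-ins for the real components, and all names are illustrative.

```python
def localize(motion_fix, feature_point_maps, match_images):
    """Coarse-to-fine localization sketch.

    motion_fix: rough (x, y) position estimated from motion sensors.
    feature_point_maps: list of dicts with "id", "center" (x, y), and "radius_m".
    match_images: callable(map_id) -> (x, y, heading) or None, standing in for
    image-based localization into a feature point map.
    """
    # Coarse step: keep only maps that cover the motion-derived position.
    nearby = [m for m in feature_point_maps
              if (m["center"][0] - motion_fix[0]) ** 2
               + (m["center"][1] - motion_fix[1]) ** 2 <= m["radius_m"] ** 2]
    # Fine step: try to localize into each candidate map using the images.
    for candidate in nearby:
        pose = match_images(candidate["id"])
        if pose is not None:
            return pose  # position + orientation; a virtual object view can now be rendered
    return None

maps = [{"id": "lobby", "center": (0.0, 0.0), "radius_m": 10.0}]
print(localize((1.0, 2.0), maps, lambda map_id: (1.2, 2.1, 0.3)))
```
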
Patent number: 11210932
Abstract: A method includes identifying remote devices, at a host device, based on received signals that indicate locations and device types for the remote devices. The method also includes identifying visible devices in images of a location and matching a first visible device to a first remote device. The first visible device is matched with the first remote device based on presence of the first visible device within a search area of the images; the search area of the images is determined based on the location for the first remote device; the first visible device is matched with the first remote device based on the device type for the first remote device; and the first visible device is matched with the first remote device based on a machine-recognizable indicator that is output by the first visible device. The method also includes pairing the first remote device with the host device.
Type: Grant
Filed: May 19, 2020
Date of Patent: December 28, 2021
Assignee: Apple Inc.
Inventors: Jeffrey S. Norris, Bruno M. Sommer, Alexandre Da Veiga

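The distinguishing detail here is the search area: the remote device's reported location constrains where in the images a matching visible device may appear, and a machine-recognizable indicator confirms the match. A small sketch of those two checks follows; the pixel margin, indicator format, and function names are invented for illustration.

```python
def search_area(projected_xy, image_size, margin_px=80):
    """Build a pixel search window around where the remote device's reported
    location projects into the image. The margin is an illustrative value that
    absorbs error in both the reported location and the projection."""
    x, y = projected_xy
    w, h = image_size
    return (max(0, x - margin_px), max(0, y - margin_px),
            min(w, x + margin_px), min(h, y + margin_px))

def candidate_matches(detection, area, expected_type, expected_indicator):
    """A visible device matches only if it lies inside the search area, has the
    right device type, and shows the machine-recognizable indicator."""
    x0, y0, x1, y1 = area
    dx, dy = detection["pixel"]
    return (x0 <= dx <= x1 and y0 <= dy <= y1
            and detection["type"] == expected_type
            and detection["indicator"] == expected_indicator)

area = search_area((640, 360), (1280, 720))
seen = {"pixel": (655, 340), "type": "speaker", "indicator": "QR:device-42"}
print(candidate_matches(seen, area, "speaker", "QR:device-42"))  # True -> pair
```
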
Publication number: 20210349676
Abstract: In one implementation, a method of providing display device sharing and interactivity in simulated reality is performed at a first electronic device including one or more processors and a non-transitory memory. The method includes obtaining a gesture input to a first display device in communication with the first electronic device from a first user, where the first display device includes a first display. The method further includes transmitting a representation of the first display to a second electronic device in response to obtaining the gesture input. The method additionally includes receiving an input message directed to the first display device from the second electronic device, where the input message includes an input directive obtained by the second electronic device from a second user. The method also includes transmitting the input message to the first display device for execution by the first display device.
Type: Application
Filed: September 3, 2019
Publication date: November 11, 2021
Inventors: Bruno M. Sommer, Alexandre Da Veiga, Ioana Negoita

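The flow is a short message exchange: a gesture triggers sharing a representation of the first display, the second device returns input messages, and those messages are forwarded to the display for execution. The toy classes below trace that exchange; every class, method, and field name is hypothetical and the transport layer is omitted entirely.

```python
from dataclasses import dataclass

@dataclass
class InputMessage:
    """Input directive captured on the second user's device and sent back to the
    device that owns the shared display."""
    source_user: str
    directive: str          # e.g., "scroll_down"; purely illustrative

class SharingSession:
    """Toy version of the sharing flow: device A shares its display with device B,
    B sends input messages, and A forwards them to the display for execution."""
    def __init__(self, display):
        self.display = display           # object exposing execute(directive)
        self.remote_inputs = []

    def on_share_gesture(self):
        # A gesture on the first device triggers sending a display representation.
        return {"kind": "display_representation", "pixels": "<frame placeholder>"}

    def on_input_message(self, message: InputMessage):
        self.remote_inputs.append(message)
        self.display.execute(message.directive)   # forwarded for execution

class FakeDisplay:
    def execute(self, directive):
        print("display executing:", directive)

session = SharingSession(FakeDisplay())
session.on_share_gesture()
session.on_input_message(InputMessage("second_user", "scroll_down"))
```
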
Publication number: 20210321070
Abstract: Various implementations disclosed herein include devices, systems, and methods that modify audio of played-back audio-visual (AV) content based on context. In some implementations, audio-visual content of a physical environment is obtained, and the audio-visual content includes visual content and audio content that includes a plurality of audio portions corresponding to the visual content. In some implementations, a context for presenting the audio-visual content is determined, and a temporal relationship between one or more audio portions of the plurality of audio portions and the visual content is determined based on the context. Then, synthesized audio-visual content is presented based on the temporal relationship.
Type: Application
Filed: June 25, 2021
Publication date: October 14, 2021
Inventors: Shai Messingher Lang, Alexandre Da Veiga, Spencer H. Ray, Symeon Delikaris Manias

Patent number: 11138472
Abstract: Systems and methods for localization for mobile devices are described. Some implementations may include accessing motion data captured using one or more motion sensors; determining, based on the motion data, a coarse localization, wherein the coarse localization includes a first estimate of position; obtaining one or more feature point maps, wherein the feature point maps are associated with a position of the coarse localization; accessing images captured using one or more image sensors; determining, based on the images, a fine localization pose by localizing into a feature point map of the one or more feature point maps, wherein the fine localization pose includes a second estimate of position and an estimate of orientation; generating, based on the fine localization pose, a virtual object image including a view of a virtual object; and displaying the virtual object image.
Type: Grant
Filed: September 19, 2019
Date of Patent: October 5, 2021
Assignee: Apple Inc.
Inventors: Bruno M. Sommer, Alexandre da Veiga

Publication number: 20210295548
Abstract: Various implementations disclosed herein include devices, systems, and methods that determine a distance between a portion (e.g., the top) of a person's head and the floor below as an estimate of the person's height. An example process may include determining a first position on a head of a person in a three-dimensional (3D) coordinate system, the first position determined based on detecting a feature on the head based on a two-dimensional (2D) image of the person in a physical environment, determining a second position on a floor below the first position in the 3D coordinate system, and estimating a height based on determining a distance between the first position and the second position.
Type: Application
Filed: February 23, 2021
Publication date: September 23, 2021
Inventors: Alexandre Da Veiga, George J. Hito, Timothy R. Pease

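The height estimate is simply the vertical distance from a 3D point on the head to the floor point directly beneath it. A few lines of Python make the computation explicit; the z-up axis convention and the function name are assumptions of the sketch, not details from the application.

```python
def estimate_height(head_position, floor_height):
    """Estimate a person's height as the vertical distance from the top of the
    head to the floor directly below it.

    head_position: (x, y, z) of the detected head feature in a 3D coordinate
    system whose z axis points up; floor_height: z of the floor plane under
    that point.
    """
    x, y, z = head_position
    floor_point = (x, y, floor_height)   # the "second position" below the head
    return z - floor_point[2]

print(estimate_height((0.4, 1.1, 1.78), 0.0))  # 1.78 m
```
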
Publication number: 20210241477
Abstract: In accordance with some embodiments, a technique is described that enables an electronic device with a camera to automatically gather and generate the requisite data from the real-world environment, allowing the electronic device to quickly and efficiently determine and provide accurate measurements of physical spaces and/or objects within the real-world environment.
Type: Application
Filed: April 19, 2021
Publication date: August 5, 2021
Inventor: Alexandre da VEIGA
