Patents by Inventor Daniel Ulbricht
Daniel Ulbricht has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11978248
Abstract: Implementations disclosed herein provide systems and methods that match a current relationship model associated with a user's current environment to a prior relationship model for a prior environment to determine that the user is in the same environment. The current relationship model is compared with the prior relationship model based on matching characteristics of the current relationship model with characteristics of the prior relationship model.
Type: Grant
Filed: March 17, 2023
Date of Patent: May 7, 2024
Assignee: Apple Inc.
Inventors: Angela Blechschmidt, Alexander S. Polichroniadis, Daniel Ulbricht
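The matching described in this abstract can be pictured with a minimal sketch. This is an illustrative toy, not the patented method: relationship models are modeled here as sets of hypothetical (subject, relation, object) triples, and two environments are judged the same when enough characteristics overlap.

```python
# Toy relationship models as sets of (subject, relation, object) triples.
def match_score(current, prior):
    """Fraction of the current model's characteristics found in the prior model."""
    if not current:
        return 0.0
    return len(current & prior) / len(current)

def same_environment(current, prior, threshold=0.7):
    """Declare the environments the same when the overlap clears a threshold."""
    return match_score(current, prior) >= threshold

current_model = {("couch", "faces", "tv"), ("lamp", "next_to", "couch"),
                 ("rug", "under", "table")}
prior_model = {("couch", "faces", "tv"), ("lamp", "next_to", "couch"),
               ("rug", "under", "table"), ("plant", "on", "shelf")}
```

The triple names and the threshold are assumptions for illustration only; the patent does not specify the representation of a "characteristic".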
-
Publication number: 20240144590
Abstract: In an exemplary process, a speech input including a referenced virtual object is received. Based on the speech input, a first reference set is obtained. The first reference set is then compared to a plurality of second reference sets. Based on the comparison, a second reference set from the plurality of second reference sets is obtained. The second reference set may be identified based on a matching score between the first reference set and the second reference set. An object is then identified based on the second reference set. Based on the identified object, the referenced virtual object is displayed.
Type: Application
Filed: February 25, 2022
Publication date: May 2, 2024
Inventors: Alkeshkumar M. PATEL, Saurabh ADYA, Shruti BHARGAVA, Angela BLECHSCHMIDT, Vikas R. NAIR, Alexander S. POLICHRONIADIS, Kendal SANDRIDGE, Daniel ULBRICHT, Hong YU
-
Patent number: 11972607
Abstract: In one implementation, a method of generating a plane hypothesis is performed by a device including one or more processors, non-transitory memory, and a scene camera. The method includes obtaining an image of a scene including a plurality of pixels. The method includes obtaining a plurality of points of a point cloud based on the image of the scene. The method includes obtaining an object classification set based on the image of the scene. Each element of the object classification set includes a plurality of pixels respectively associated with a corresponding object in the scene. The method includes detecting a plane within the scene by identifying a subset of the plurality of points of the point cloud that correspond to a particular element of the object classification set.
Type: Grant
Filed: February 18, 2023
Date of Patent: April 30, 2024
Assignee: APPLE INC.
Inventors: Daniel Ulbricht, Angela Blechschmidt, Mohammad Haris Baig, Tanmay Batra, Eshan Verma, Amit Kumar Kc
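The core move in this abstract — use a per-pixel object classification to select the point-cloud points belonging to one object, then hypothesize a plane from that subset — can be sketched briefly. This is a hypothetical simplification, not the patented implementation; the point cloud, labels, and the flat-plane fit are all assumptions.

```python
def points_for_label(point_cloud, pixel_labels, target_label):
    """point_cloud maps pixel -> (x, y, z); pixel_labels maps pixel -> class."""
    return [p for pix, p in point_cloud.items()
            if pixel_labels.get(pix) == target_label]

def horizontal_plane_hypothesis(points, tol=0.05):
    """Fit z = const to the subset and accept if all points lie within `tol`."""
    zs = [z for _, _, z in points]
    z0 = sum(zs) / len(zs)
    inliers = all(abs(z - z0) <= tol for z in zs)
    return z0, inliers

# Three "table" pixels at table height, one "lamp" pixel that must be excluded.
cloud = {(0, 0): (0.0, 0.0, 0.75), (0, 1): (0.1, 0.5, 0.74),
         (1, 0): (0.6, 0.0, 0.76), (5, 5): (0.3, 0.3, 1.80)}
labels = {(0, 0): "table", (0, 1): "table", (1, 0): "table", (5, 5): "lamp"}
```

A real system would fit an arbitrarily oriented plane (e.g. via least squares or RANSAC); the horizontal fit here just keeps the example short.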
-
Patent number: 11943679
Abstract: Location mapping and navigation user interfaces may be generated and presented via mobile computing devices. A mobile device may detect its location and orientation using internal systems, and may capture image data using a device camera. The mobile device also may retrieve map information from a map server corresponding to the current location of the device. Using the image data captured at the device, the current location data, and the corresponding local map information, the mobile device may determine or update a current orientation reading for the device. Location errors and updated location data also may be determined for the device, and a map user interface may be generated and displayed on the mobile device using the updated device orientation and/or location data.
Type: Grant
Filed: July 8, 2022
Date of Patent: March 26, 2024
Assignee: Apple Inc.
Inventors: Robert William Mayor, Isaac T. Miller, Adam S. Howell, Vinay R. Majjigi, Oliver Ruepp, Daniel Ulbricht, Oleg Naroditsky, Christian Lipski, Sean P. Cier, Hyojoon Bae, Saurabh Godha, Patrick J. Coleman
-
Publication number: 20240013487
Abstract: In one implementation, a method includes: identifying a plurality of plot-effectuators and a plurality of environmental elements within a scene associated with a portion of video content; determining one or more spatial relationships between the plurality of plot-effectuators and the plurality of environmental elements within the scene; synthesizing a representation of the scene based at least in part on the one or more spatial relationships; extracting a plurality of action sequences corresponding to the plurality of plot-effectuators based at least in part on the portion of the video content; and generating a corresponding synthesized reality (SR) reconstruction of the scene by driving a plurality of digital assets, associated with the plurality of plot-effectuators, within the representation of the scene according to the plurality of action sequences.
Type: Application
Filed: July 11, 2022
Publication date: January 11, 2024
Inventors: Ian M. Richter, Daniel Ulbricht, Jean-Daniel E. Nahmias, Omar Elafifi, Peter Meier
-
Publication number: 20230394773
Abstract: Generating a virtual representation of an interaction includes determining a potential user interaction with a physical object in a physical environment, determining an object type associated with the physical object, and obtaining an object-centric affordance region for the object type, wherein the object-centric affordance region indicates, for each of one or more regions of the object type, a likelihood of user contact. The object-centric affordance region is mapped to a geometry of the physical object to obtain an instance-specific affordance region, which is used to render the virtual representation of the interaction with the physical object.
Type: Application
Filed: June 7, 2023
Publication date: December 7, 2023
Inventors: Angela Blechschmidt, Gefen Kohavi, Daniel Ulbricht
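The two-step structure in this abstract — a per-type affordance map keyed by region, then a re-keying onto one instance's geometry — can be sketched as follows. All names, regions, and probabilities here are hypothetical illustrations, not values or structures from the application.

```python
# Object-centric affordances: per region of an object *type*, a likelihood
# of user contact (values are invented for illustration).
TYPE_AFFORDANCES = {
    "mug": {"handle": 0.8, "rim": 0.15, "body": 0.05},
}

def instance_affordance(object_type, instance_regions):
    """Map type-level likelihoods onto one instance's geometry ids."""
    type_map = TYPE_AFFORDANCES[object_type]
    return {geom: type_map[region]
            for region, geom in instance_regions.items() if region in type_map}

def likely_contact_region(affordance):
    """Pick the geometry region the user is most likely to touch."""
    return max(affordance, key=affordance.get)

# A specific mug whose regions were matched to mesh pieces during scanning.
mug_instance = {"handle": "mesh_17", "rim": "mesh_18", "body": "mesh_19"}
```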
-
Publication number: 20230351644
Abstract: In one implementation, a method includes: obtaining a user input to view SR content associated with video content; if the video content includes a first scene when the user input was detected: obtaining first SR content for a first time period of the video content associated with the first scene; obtaining a task associated with the first scene; and causing presentation of the first SR content and a first indication of the task associated with the first scene; and if the video content includes a second scene when the user input was detected: obtaining second SR content for a second time period of the video content associated with the second scene; obtaining a task associated with the second scene; and causing presentation of the second SR content and a second indication of the task associated with the second scene.
Type: Application
Filed: June 28, 2023
Publication date: November 2, 2023
Inventors: Ian M. Richter, Daniel Ulbricht, Jean-Daniel E. Nahmias, Omar Elafifi, Peter Meier
-
Patent number: 11783558
Abstract: Various implementations disclosed herein include devices, systems, and methods that use object relationships represented in a scene graph to adjust the position of objects. For example, an example process may include obtaining a three-dimensional (3D) representation of a physical environment that was generated based on sensor data obtained during a scanning process, detecting positions of a set of objects in the physical environment based on the 3D representation, generating a scene graph for the 3D representation of the physical environment based on the detected positions of the set of objects, wherein the scene graph represents the set of objects and relationships between the objects, and determining a refined 3D representation of the physical environment by refining the position of at least one object in the set of objects based on the scene graph and an alignment rule associated with a relationship in the scene graph.
Type: Grant
Filed: February 23, 2022
Date of Patent: October 10, 2023
Assignee: Apple Inc.
Inventors: Angela Blechschmidt, Daniel Ulbricht, Alexander S. Polichroniadis
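The refinement step this abstract describes — apply an alignment rule attached to a scene-graph relationship to nudge a detected position — can be sketched in a few lines. This is an invented toy, not the patented method; the "under" relation and its centering rule are assumptions.

```python
def refine_positions(positions, scene_graph, rules):
    """Apply each relationship's alignment rule to the subject's position."""
    refined = dict(positions)
    for subj, relation, obj in scene_graph:
        rule = rules.get(relation)
        if rule:
            refined[subj] = rule(refined[subj], refined[obj])
    return refined

def center_under(subject_pos, object_pos):
    # Alignment rule: keep the subject's height, snap x/y to the object above it.
    return (object_pos[0], object_pos[1], subject_pos[2])

# A rug detected slightly off-center from the table it sits under.
positions = {"rug": (1.2, 0.9, 0.0), "table": (1.0, 1.0, 0.75)}
graph = [("rug", "under", "table")]
refined = refine_positions(positions, graph, {"under": center_under})
```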
-
Patent number: 11783552
Abstract: In one implementation, a method of including a person in a CGR experience or excluding the person from the CGR experience is performed by a device including one or more processors, non-transitory memory, and a scene camera. The method includes, while presenting a CGR experience, capturing an image of a scene; detecting, in the image of the scene, a person; and determining an identity of the person. The method includes determining, based on the identity of the person, whether to include the person in the CGR experience or exclude the person from the CGR experience. The method includes presenting the CGR experience based on the determination.
Type: Grant
Filed: December 21, 2021
Date of Patent: October 10, 2023
Assignee: APPLE INC.
Inventors: Daniel Ulbricht, Amit Kumar K C, Angela Blechschmidt, Chen-Yu Lee, Eshan Verma, Mohammad Haris Baig, Tanmay Batra
-
Patent number: 11727606
Abstract: In one implementation, a method includes: obtaining a user input to view SR content associated with video content; if the video content includes a first scene when the user input was detected: obtaining first SR content for a first time period of the video content associated with the first scene; obtaining a task associated with the first scene; and causing presentation of the first SR content and a first indication of the task associated with the first scene; and if the video content includes a second scene when the user input was detected: obtaining second SR content for a second time period of the video content associated with the second scene; obtaining a task associated with the second scene; and causing presentation of the second SR content and a second indication of the task associated with the second scene.
Type: Grant
Filed: March 29, 2022
Date of Patent: August 15, 2023
Assignee: APPLE INC.
Inventors: Ian M. Richter, Daniel Ulbricht, Jean-Daniel E. Nahmias, Omar Elafifi, Peter Meier
-
Patent number: 11710283
Abstract: Various implementations disclosed herein include devices, systems, and methods that enable faster and more efficient real-time physical object recognition, information retrieval, and updating of a CGR environment. In some implementations, the CGR environment is provided at a first device based on a classification of the physical object, image or video data including the physical object is transmitted by the first device to a second device, and the CGR environment is updated by the first device based on a response associated with the physical object received from the second device.
Type: Grant
Filed: October 22, 2021
Date of Patent: July 25, 2023
Assignee: Apple Inc.
Inventors: Eshan Verma, Daniel Ulbricht, Angela Blechschmidt, Mohammad Haris Baig, Chen-Yu Lee, Tanmay Batra
-
Publication number: 20230206623
Abstract: In one implementation, a method of generating a plane hypothesis is performed by a device including one or more processors, non-transitory memory, and a scene camera. The method includes obtaining an image of a scene including a plurality of pixels. The method includes obtaining a plurality of points of a point cloud based on the image of the scene. The method includes obtaining an object classification set based on the image of the scene. Each element of the object classification set includes a plurality of pixels respectively associated with a corresponding object in the scene. The method includes detecting a plane within the scene by identifying a subset of the plurality of points of the point cloud that correspond to a particular element of the object classification set.
Type: Application
Filed: February 18, 2023
Publication date: June 29, 2023
Inventors: Daniel Ulbricht, Angela Blechschmidt, Mohammad Haris Baig, Tanmay Batra, Eshan Verma, Amit Kumar KC
-
Patent number: 11647260
Abstract: In one implementation, consumption of media content (such as video, audio, or text) is supplemented with an immersive synthesized reality (SR) map based on the media content. In various implementations described herein, the SR map includes a plurality of SR environment representations which, when selected by a user, cause display of a corresponding SR environment.
Type: Grant
Filed: April 26, 2022
Date of Patent: May 9, 2023
Assignee: APPLE INC.
Inventors: Ian M. Richter, Daniel Ulbricht, Eshan Verma
-
Patent number: 11640708
Abstract: Implementations disclosed herein provide systems and methods that match a current scene graph associated with a user's current environment to a prior scene graph for a prior environment to determine that the user is in the same environment. The current scene graph is compared with the prior scene graph based on matching the objects within the current scene graph with objects within the prior scene graph.
Type: Grant
Filed: April 12, 2021
Date of Patent: May 2, 2023
Assignee: Apple Inc.
Inventors: Angela Blechschmidt, Alexander S. Polichroniadis, Daniel Ulbricht
-
Patent number: 11610414
Abstract: A machine learning model is trained and used to perform a computer vision task such as semantic segmentation or normal direction prediction. The model uses a current image of a physical setting and input generated from three dimensional (3D) anchor points that store information determined from prior assessments of the physical setting. The 3D anchor points store previously-determined computer vision task information for the physical setting for particular 3D point locations in a 3D world space, e.g., an x, y, z coordinate system that is independent of image capture device pose. For example, 3D anchor points may store previously-determined semantic labels or normal directions for 3D points identified by simultaneous localization and mapping (SLAM) processes. The 3D anchor points are stored and used to generate input for the machine learning model as the model continues to reason about future images of the physical setting.
Type: Grant
Filed: February 20, 2020
Date of Patent: March 21, 2023
Assignee: Apple Inc.
Inventors: Mohammad Haris Baig, Angela Blechschmidt, Daniel Ulbricht
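The anchor-point idea in this abstract — semantic labels pinned to pose-independent world coordinates, consulted again on later passes over the scene — can be sketched with a nearest-anchor lookup. This is an invented simplification, not the patented model input; the anchors, radius, and labels are all assumptions.

```python
import math

# 3D anchor points: (world-space coordinate, previously inferred label).
ANCHORS = [((0.0, 0.0, 0.8), "table"), ((2.0, 1.0, 0.4), "chair")]

def prior_label(point, anchors, radius=0.5):
    """Return the stored label of the nearest anchor within `radius`, else None."""
    best, best_d = None, radius
    for anchor_pt, label in anchors:
        d = math.dist(point, anchor_pt)
        if d <= best_d:
            best, best_d = label, d
    return best
```

In the patent, such stored information feeds the model as an extra input rather than replacing its prediction; the lookup here only shows why world-space keys survive changes in camera pose.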
-
Patent number: 11610397
Abstract: In one implementation, a method of generating a plane hypothesis is performed by a device including one or more processors, non-transitory memory, and a scene camera. The method includes obtaining an image of a scene including a plurality of pixels. The method includes obtaining a plurality of points of a point cloud based on the image of the scene. The method includes obtaining an object classification set based on the image of the scene. Each element of the object classification set includes a plurality of pixels respectively associated with a corresponding object in the scene. The method includes detecting a plane within the scene by identifying a subset of the plurality of points of the point cloud that correspond to a particular element of the object classification set.
Type: Grant
Filed: September 13, 2021
Date of Patent: March 21, 2023
Assignee: APPLE INC.
Inventors: Daniel Ulbricht, Angela Blechschmidt, Mohammad Haris Baig, Tanmay Batra, Eshan Verma, Amit Kumar KC
-
Publication number: 20230048501
Abstract: An exemplary process obtains sensor data corresponding to a physical environment including one or more physical objects. A physical property of the one or more physical objects is determined based on the sensor data. A presentation mode associated with a knowledge domain is determined. An extended reality environment including a view of the physical environment and a visualization selected based on the determined presentation mode is provided. The visualization includes virtual content associated with the knowledge domain. The virtual content is provided based on display characteristics specified by the presentation mode that depend upon the physical property of the one or more objects.
Type: Application
Filed: August 3, 2022
Publication date: February 16, 2023
Inventors: Meghan C. Welles, Stacey L. Matthias, Daniel Ulbricht, Jim J. Tilander, Mariano Merchante, Sarune Baceviciute
-
Publication number: 20220345849
Abstract: Location mapping and navigation user interfaces may be generated and presented via mobile computing devices. A mobile device may detect its location and orientation using internal systems, and may capture image data using a device camera. The mobile device also may retrieve map information from a map server corresponding to the current location of the device. Using the image data captured at the device, the current location data, and the corresponding local map information, the mobile device may determine or update a current orientation reading for the device. Location errors and updated location data also may be determined for the device, and a map user interface may be generated and displayed on the mobile device using the updated device orientation and/or location data.
Type: Application
Filed: July 8, 2022
Publication date: October 27, 2022
Applicant: Apple Inc.
Inventors: Robert William Mayor, Isaac T. Miller, Adam S. Howell, Vinay R. Majjigi, Oliver Ruepp, Daniel Ulbricht, Oleg Naroditsky, Christian Lipski, Sean P. Cier, Hyojoon Bae, Saurabh Godha, Patrick J. Coleman
-
Patent number: 11468275
Abstract: A machine learning (ML) model is trained and used to produce a probability distribution associated with a computer vision task. The ML model uses a prior probability distribution associated with a particular image capture condition determined based on sensor data. For example, given that an image was captured by an image capture device at a particular height above the floor and angle relative to the vertical world axis, a prior probability distribution for that particular image capture device condition can be used in performing a computer vision task on the image. Accordingly, the machine learning model is given the image as input as well as the prior probability distribution for the particular image capture device condition. The use of the prior probability distribution can improve the accuracy, efficiency, or effectiveness of the ML model for the computer vision task.
Type: Grant
Filed: January 16, 2020
Date of Patent: October 11, 2022
Assignee: Apple Inc.
Inventors: Angela Blechschmidt, Daniel Ulbricht, Mohammad Haris Baig
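One simple way to picture how a capture-condition prior can sharpen a vision model's output is a Bayesian product-and-normalize. This sketch is only an illustration of that general statistical idea, not the patented training or inference procedure; the classes, heights, and probabilities are invented.

```python
def apply_prior(likelihood, prior):
    """Element-wise product of class likelihoods and a prior, renormalized."""
    post = {c: likelihood[c] * prior.get(c, 0.0) for c in likelihood}
    total = sum(post.values())
    return {c: v / total for c, v in post.items()}

# Hypothetical priors keyed by capture condition: a camera held low to the
# ground is far more likely to be imaging a floor or rug than a ceiling.
PRIORS_BY_HEIGHT = {"low": {"floor": 0.6, "rug": 0.35, "ceiling": 0.05}}

likelihood = {"floor": 0.4, "rug": 0.3, "ceiling": 0.3}
posterior = apply_prior(likelihood, PRIORS_BY_HEIGHT["low"])
```

In the patent the prior is fed to the model as an additional input rather than composed afterwards; the post-hoc product here just shows the direction of the effect.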
-
Patent number: 11430238
Abstract: In one implementation, a method of generating a confidence value for a result from a primary task is performed at an image processing system. The method includes obtaining, by a feature extractor portion of a neural network, a set of feature maps for an image data frame; generating a contextual information vector associated with the image data frame based on results from one or more auxiliary tasks performed on the set of feature maps by an auxiliary task sub-network portion of the neural network; performing, by a primary task sub-network portion of the neural network, a primary task on the set of feature maps for the image data frame in order to generate a primary task result; and generating a confidence value based on the contextual information vector, wherein the confidence value corresponds to a reliability metric for the primary task result.
Type: Grant
Filed: April 2, 2020
Date of Patent: August 30, 2022
Assignee: Apple Inc.
Inventors: Angela Blechschmidt, Mohammad Haris Baig, Daniel Ulbricht
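The data flow in this abstract — auxiliary-task outputs flattened into a contextual vector, then mapped to a scalar confidence for the primary result — can be sketched with a small logistic scorer. Everything here is hypothetical: the auxiliary tasks, their outputs, and the weights stand in for the sub-networks the patent describes.

```python
import math

def contextual_vector(aux_results):
    """Flatten auxiliary-task outputs into one vector (sorted for determinism)."""
    return [v for task in sorted(aux_results) for v in aux_results[task]]

def confidence(context, weights, bias=0.0):
    """Logistic score over the contextual vector; weights are stand-ins for
    the learned confidence head in the patent."""
    z = bias + sum(w * x for w, x in zip(weights, context))
    return 1.0 / (1.0 + math.exp(-z))

# Two invented auxiliary tasks whose outputs form the contextual vector.
aux = {"depth_stats": [0.9, 0.1], "scene_type": [1.0, 0.0]}
ctx = contextual_vector(aux)
score = confidence(ctx, weights=[1.0, -1.0, 2.0, -2.0])
```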