Patents by Inventor Angela Blechschmidt

Angela Blechschmidt has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11978248
    Abstract: Implementations disclosed herein provide systems and methods that match a current relationship model associated with a user's current environment to a prior relationship model for a prior environment to determine that the user is in the same environment. The current relationship model is compared with the prior relationship model based on matching characteristics of the current relationship model with characteristics of the prior relationship model.
    Type: Grant
    Filed: March 17, 2023
    Date of Patent: May 7, 2024
    Assignee: Apple Inc.
    Inventors: Angela Blechschmidt, Alexander S. Polichroniadis, Daniel Ulbricht
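The matching idea in the abstract of 11978248 can be pictured with a small sketch. This is an illustrative approximation only, not the patented implementation: the relationship model is reduced to a set of (subject, relation, object) triples, and the names (RelationshipModel, match_environment) and the threshold are hypothetical.

```python
# Minimal sketch (not the patented method): match a "relationship model" of the
# current environment against stored prior models by comparing shared
# characteristics, modeled here as (subject, relation, object) triples.
from dataclasses import dataclass, field

@dataclass
class RelationshipModel:
    environment_id: str
    triples: set[tuple[str, str, str]] = field(default_factory=set)

def similarity(current: RelationshipModel, prior: RelationshipModel) -> float:
    """Jaccard overlap of relationship triples as a matching score."""
    if not current.triples and not prior.triples:
        return 0.0
    return len(current.triples & prior.triples) / len(current.triples | prior.triples)

def match_environment(current, priors, threshold=0.6):
    """Return the prior environment whose model best matches, if any."""
    best = max(priors, key=lambda p: similarity(current, p), default=None)
    if best is not None and similarity(current, best) >= threshold:
        return best.environment_id
    return None

# Example: the current room contains a lamp on a desk and a chair near the desk.
current = RelationshipModel("unknown", {("lamp", "on", "desk"), ("chair", "near", "desk")})
office = RelationshipModel("office", {("lamp", "on", "desk"), ("chair", "near", "desk"), ("rug", "under", "desk")})
kitchen = RelationshipModel("kitchen", {("bowl", "on", "counter")})
print(match_environment(current, [office, kitchen]))  # -> "office"
```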
  • Publication number: 20240144590
    Abstract: In an exemplary process, a speech input including a referenced virtual object is received. Based on the speech input, a first reference set is obtained. The first reference set is then compared to a plurality of second reference sets. Based on the comparison, a second reference set from the plurality of second reference sets is obtained. The second reference set may be identified based on a matching score between the first reference set and the second reference set. An object is then identified based on the second reference set. Based on the identified object, the referenced virtual object is displayed.
    Type: Application
    Filed: February 25, 2022
    Publication date: May 2, 2024
    Inventors: Alkeshkumar M. PATEL, Saurabh ADYA, Shruti BHARGAVA, Angela BLECHSCHMIDT, Vikas R. NAIR, Alexander S. POLICHRONIADIS, Kendal SANDRIDGE, Daniel ULBRICHT, Hong YU
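A rough sketch of the reference-set comparison described in 20240144590: attributes extracted from the speech input form a first reference set, each candidate object contributes a second reference set, and the object with the highest matching score is selected. The function names and the score definition below are assumptions for illustration, not Apple's implementation.

```python
# Minimal sketch (hypothetical names): resolve a spoken reference such as
# "the red mug on the table" by scoring candidate reference sets against the
# attributes extracted from the speech input.
def matching_score(query_refs: set[str], candidate_refs: set[str]) -> float:
    """Fraction of query attributes covered by a candidate's reference set."""
    if not query_refs:
        return 0.0
    return len(query_refs & candidate_refs) / len(query_refs)

def resolve_reference(query_refs, candidates):
    """candidates: mapping of object id -> its reference set of attributes."""
    best_id, best_score = None, 0.0
    for obj_id, refs in candidates.items():
        score = matching_score(query_refs, refs)
        if score > best_score:
            best_id, best_score = obj_id, score
    return best_id

query = {"mug", "red", "on-table"}
scene = {
    "obj-1": {"mug", "red", "on-table", "ceramic"},
    "obj-2": {"mug", "blue", "on-shelf"},
}
print(resolve_reference(query, scene))  # -> "obj-1"
```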
  • Patent number: 11972607
    Abstract: In one implementation, a method of generating a plane hypothesis is performed by a device including one or more processors, non-transitory memory, and a scene camera. The method includes obtaining an image of a scene including a plurality of pixels. The method includes obtaining a plurality of points of a point cloud based on the image of the scene. The method includes obtaining an object classification set based on the image of the scene. Each element of the object classification set includes a plurality of pixels respectively associated with a corresponding object in the scene. The method includes detecting a plane within the scene by identifying a subset of the plurality of points of the point cloud that correspond to a particular element of the object classification set.
    Type: Grant
    Filed: February 18, 2023
    Date of Patent: April 30, 2024
    Assignee: Apple Inc.
    Inventors: Daniel Ulbricht, Angela Blechschmidt, Mohammad Haris Baig, Tanmay Batra, Eshan Verma, Amit Kumar Kc
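The plane-hypothesis abstract of 11972607 (and its related publications below) pairs a point cloud with a per-object segmentation of the image. The sketch below shows that pairing in miniature, assuming NumPy and a simple least-squares plane fit; the function names and mask representation are illustrative, not the patented method.

```python
# Minimal sketch: gather the point-cloud points whose source pixels belong to one
# element of the object classification set (e.g. the "table" mask) and fit a
# plane to them by least squares.
import numpy as np

def fit_plane(points: np.ndarray):
    """Fit n.x + d = 0 to an (N, 3) array of points via SVD; returns (n, d)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                      # direction of least variance
    d = -normal @ centroid
    return normal, d

def plane_hypothesis(point_cloud, pixel_of_point, object_mask):
    """Keep only points whose pixel lies inside the object's mask, then fit."""
    selected = np.array([
        p for p, (u, v) in zip(point_cloud, pixel_of_point) if object_mask[v, u]
    ])
    if len(selected) < 3:
        return None
    return fit_plane(selected)

# Toy example: points sampled from the plane z = 1 that project into a 2x2 mask.
pts = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0], [1.0, 1.0, 1.0]])
pix = [(0, 0), (1, 0), (0, 1), (1, 1)]
mask = np.ones((2, 2), dtype=bool)
print(plane_hypothesis(pts, pix, mask))  # normal ~ (0, 0, +/-1), d ~ -/+1
```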
  • Publication number: 20230410540
    Abstract: Systems and processes for locating objects and environmental mapping are provided. For example, a device may receive an input including a reference to a first object, wherein the input includes a request for a location of the first object, and the first object is located in a physical environment. The device may determine a location of the first object, and in response to receiving the input, if first criteria are met, provide a description of the location, wherein a criterion of the one or more first criteria is met when the location is available, and the description includes a relationship between the first object and a reference within the physical environment. If second criteria are met, the device may forgo providing the description of the location, wherein a criterion of the one or more second criteria is met when the location is not available.
    Type: Application
    Filed: September 23, 2022
    Publication date: December 21, 2023
    Inventors: Afshin DEHGHAN, Angela BLECHSCHMIDT, Yuanzheng GONG, Feng TANG, Yang YANG, Zhihao ZHU, Monica Laura ZUENDORF
  • Publication number: 20230394773
    Abstract: Generating a virtual representation of an interaction includes determining a potential user interaction with a physical object in a physical environment, determining an object type associated with the physical object, and obtaining an object-centric affordance region for the object type, wherein the object-centric affordance region indicates, for each of one or more regions of the object type, a likelihood of user contact. The object-centric affordance region is mapped to a geometry of the physical object to obtain an instance-specific affordance region, which is used to render the virtual representation of the interaction with the physical object.
    Type: Application
    Filed: June 7, 2023
    Publication date: December 7, 2023
    Inventors: Angela Blechschmidt, Gefen Kohavi, Daniel Ulbricht
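A toy sketch of the affordance-region mapping in 20230394773, under the assumption that the object-centric region can be expressed as per-canonical-region contact likelihoods and that each instance vertex is tagged with its canonical region; the data structures and names are hypothetical.

```python
# Minimal sketch: an object-centric affordance region stores, per canonical
# region of an object type, a likelihood of user contact. Mapping it onto one
# instance's geometry yields an instance-specific region that can drive
# rendering of a hand-object interaction.
import numpy as np

# Canonical affordance for the object type "mug": contact is most likely on the handle.
TYPE_AFFORDANCE = {"handle": 0.8, "rim": 0.15, "body": 0.05}

def instance_affordance(region_of_vertex: list[str]) -> np.ndarray:
    """Assign each instance vertex the likelihood of its canonical region."""
    return np.array([TYPE_AFFORDANCE.get(r, 0.0) for r in region_of_vertex])

def likely_contact_point(vertices: np.ndarray, region_of_vertex: list[str]):
    """Pick the instance vertex with the highest contact likelihood."""
    weights = instance_affordance(region_of_vertex)
    return vertices[int(np.argmax(weights))]

verts = np.array([[0.10, 0.00, 0.05], [0.00, 0.00, 0.12], [0.00, 0.00, 0.06]])
regions = ["handle", "rim", "body"]
print(likely_contact_point(verts, regions))  # -> the handle vertex
```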
  • Patent number: 11783558
    Abstract: Various implementations disclosed herein include devices, systems, and methods that use object relationships represented in the scene graph to adjust the position of objects. For example, an example process may include obtaining a three-dimensional (3D) representation of a physical environment that was generated based on sensor data obtained during a scanning process, detecting positions of a set of objects in the physical environment based on the 3D representation, generating a scene graph for the 3D representation of the physical environment based on the detected positions of the set of objects, wherein the scene graph represents the set of objects and relationships between the objects, and determining a refined 3D representation of the physical environment by refining the position of at least one object in the set of objects based on the scene graph and an alignment rule associated with a relationship in the scene graph.
    Type: Grant
    Filed: February 23, 2022
    Date of Patent: October 10, 2023
    Assignee: Apple Inc.
    Inventors: Angela Blechschmidt, Daniel Ulbricht, Alexander S. Polichroniadis
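One way to picture the scene-graph-driven refinement in 11783558 is an alignment rule that snaps an object flush against a wall whenever the graph contains an "against" relationship. The sketch below is illustrative only; the relation vocabulary, wall representation, and function names are assumptions.

```python
# Minimal sketch: use a scene-graph relationship plus an alignment rule to nudge
# a detected object's position, here by projecting it onto a wall plane offset
# by half the object's depth.
import numpy as np

def snap_against_wall(position, wall_point, wall_normal, half_depth):
    """Move the object along the wall normal so it sits flush with the wall."""
    n = wall_normal / np.linalg.norm(wall_normal)
    signed_dist = (position - wall_point) @ n
    return position - (signed_dist - half_depth) * n

def refine(positions, scene_graph, walls, half_depths):
    """scene_graph: list of (object_id, relation, wall_id) edges."""
    refined = dict(positions)
    for obj, relation, wall in scene_graph:
        if relation == "against":
            point, normal = walls[wall]
            refined[obj] = snap_against_wall(refined[obj], point, normal, half_depths[obj])
    return refined

positions = {"sofa": np.array([0.30, 1.0, 0.0])}
walls = {"wall-north": (np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))}
print(refine(positions, [("sofa", "against", "wall-north")], walls, {"sofa": 0.45}))
# The sofa's x-coordinate is pulled from 0.30 to 0.45 so it sits flush against the wall.
```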
  • Patent number: 11783552
    Abstract: In one implementation, a method of including a person in a CGR experience or excluding the person from the CGR experience is performed by a device including one or more processors, non-transitory memory, and a scene camera. The method includes, while presenting a CGR experience, capturing an image of a scene; detecting, in the image of the scene, a person; and determining an identity of the person. The method includes determining, based on the identity of the person, whether to include the person in the CGR experience or exclude the person from the CGR experience. The method includes presenting the CGR experience based on the determination.
    Type: Grant
    Filed: December 21, 2021
    Date of Patent: October 10, 2023
    Assignee: Apple Inc.
    Inventors: Daniel Ulbricht, Amit Kumar K C, Angela Blechschmidt, Chen-Yu Lee, Eshan Verma, Mohammad Haris Baig, Tanmay Batra
  • Publication number: 20230298266
    Abstract: In one implementation, a method of providing a portion of a three-dimensional scene model includes storing, in the non-transitory memory, a three-dimensional scene model of a physical environment including a plurality of points, wherein each of the plurality of points is associated with a set of coordinates in a three-dimensional space, wherein a subset of the plurality of points is associated with a hierarchical data set including a plurality of layers. The method includes receiving, from an objective-effectuator, a request for a portion of the three-dimensional scene model, wherein the portion of the three-dimensional scene model includes less than all of the plurality of points or less than all of the plurality of layers. The method includes obtaining, by the processor from the non-transitory memory, the portion of the three-dimensional scene model. The method includes providing, to the objective-effectuator, the portion of the three-dimensional scene model.
    Type: Application
    Filed: November 29, 2022
    Publication date: September 21, 2023
    Inventors: Payal Jotwani, Angela Blechschmidt
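A compact sketch of the request pattern in 20230298266: the scene model stores per-point data in named layers, and a requester (the "objective-effectuator") receives only the layers and spatial region it asks for. The layer names, region format, and query function below are hypothetical.

```python
# Minimal sketch: a scene model whose points carry hierarchical layers
# (e.g. geometry, semantics, appearance); a query returns a portion containing
# fewer than all points and fewer than all layers.
from dataclasses import dataclass

@dataclass
class ScenePoint:
    xyz: tuple[float, float, float]
    layers: dict    # layer name -> per-point data

def query_portion(points, wanted_layers, region):
    """Return points inside `region` carrying only the requested layers."""
    (xmin, ymin, zmin), (xmax, ymax, zmax) = region
    portion = []
    for p in points:
        x, y, z = p.xyz
        if xmin <= x <= xmax and ymin <= y <= ymax and zmin <= z <= zmax:
            portion.append(ScenePoint(p.xyz, {k: v for k, v in p.layers.items() if k in wanted_layers}))
    return portion

scene = [
    ScenePoint((0.2, 0.1, 0.0), {"geometry": "surfel", "semantics": "floor", "appearance": (90, 80, 70)}),
    ScenePoint((4.0, 1.0, 0.5), {"geometry": "surfel", "semantics": "wall", "appearance": (200, 200, 190)}),
]
# An agent planning navigation only needs semantic labels near the origin.
print(query_portion(scene, {"semantics"}, ((-1, -1, -1), (1, 1, 1))))
```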
  • Patent number: 11710283
    Abstract: Various implementations disclosed herein include devices, systems, and methods that enable faster and more efficient real-time physical object recognition, information retrieval, and updating of a CGR environment. In some implementations, the CGR environment is provided at a first device based on a classification of the physical object, image or video data including the physical object is transmitted by the first device to a second device, and the CGR environment is updated by the first device based on a response associated with the physical object received from the second device.
    Type: Grant
    Filed: October 22, 2021
    Date of Patent: July 25, 2023
    Assignee: Apple Inc.
    Inventors: Eshan Verma, Daniel Ulbricht, Angela Blechschmidt, Mohammad Haris Baig, Chen-Yu Lee, Tanmay Batra
  • Publication number: 20230206623
    Abstract: In one implementation, a method of generating a plane hypothesis is performed by a device including one or more processors, non-transitory memory, and a scene camera. The method includes obtaining an image of a scene including a plurality of pixels. The method includes obtaining a plurality of points of a point cloud based on the image of the scene. The method includes obtaining an object classification set based on the image of the scene. Each element of the object classification set includes a plurality of pixels respectively associated with a corresponding object in the scene. The method includes detecting a plane within the scene by identifying a subset of the plurality of points of the point cloud that correspond to a particular element of the object classification set.
    Type: Application
    Filed: February 18, 2023
    Publication date: June 29, 2023
    Inventors: Daniel Ulbricht, Angela Blechschmidt, Mohammad Haris Baig, Tanmay Batra, Eshan Verma, Amit Kumar KC
  • Patent number: 11640708
    Abstract: Implementations disclosed herein provide systems and methods that match a current scene graph associated with a user's current environment to a prior scene graph for a prior environment to determine that the user is in the same environment. The current scene graph is compared with the prior scene graph based on matching the objects within the current scene graph with objects within the prior scene graph.
    Type: Grant
    Filed: April 12, 2021
    Date of Patent: May 2, 2023
    Assignee: Apple Inc.
    Inventors: Angela Blechschmidt, Alexander S. Polichroniadis, Daniel Ulbricht
  • Patent number: 11610414
    Abstract: A machine learning model is trained and used to perform a computer vision task such as semantic segmentation or normal direction prediction. The model uses a current image of a physical setting and input generated from three dimensional (3D) anchor points that store information determined from prior assessments of the physical setting. The 3D anchor points store previously-determined computer vision task information for the physical setting for particular 3D point locations in a 3D world space, e.g., an x, y, z coordinate system that is independent of image capture device pose. For example, 3D anchor points may store previously-determined semantic labels or normal directions for 3D points identified by simultaneous localization and mapping (SLAM) processes. The 3D anchor points are stored and used to generate input for the machine learning model as the model continues to reason about future images of the physical setting.
    Type: Grant
    Filed: February 20, 2020
    Date of Patent: March 21, 2023
    Assignee: Apple Inc.
    Inventors: Mohammad Haris Baig, Angela Blechschmidt, Daniel Ulbricht
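The 3D anchor points of 11610414 store previously computed labels at world-space coordinates so later frames can reuse them. The sketch below only shows the projection of such anchors into a new frame to build a label-prior channel; the camera convention, helper name, and data layout are assumptions, and the trained model that would consume this channel is omitted.

```python
# Minimal sketch: project labeled world-space anchor points into a new frame to
# form an (H, W) label prior that could accompany the RGB image as model input.
import numpy as np

def anchor_channel(anchors, K, cam_from_world, image_shape):
    """anchors: list of (world_xyz, class_id). Returns an (H, W) prior, -1 = no prior."""
    h, w = image_shape
    prior = np.full((h, w), fill_value=-1, dtype=np.int32)
    for world_xyz, class_id in anchors:
        p_cam = cam_from_world[:3, :3] @ world_xyz + cam_from_world[:3, 3]
        if p_cam[2] <= 0:                                  # behind the camera
            continue
        u, v = (K @ p_cam)[:2] / p_cam[2]
        ui, vi = int(round(u)), int(round(v))
        if 0 <= ui < w and 0 <= vi < h:
            prior[vi, ui] = class_id
    return prior

K = np.array([[100.0, 0, 32], [0, 100.0, 32], [0, 0, 1]])
cam_from_world = np.eye(4)
anchors = [(np.array([0.0, 0.0, 2.0]), 5)]                 # a point labeled in an earlier frame
print(anchor_channel(anchors, K, cam_from_world, (64, 64))[32, 32])  # -> 5
```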
  • Patent number: 11610397
    Abstract: In one implementation, a method of generating a plane hypothesis is performed by a device including one or more processors, non-transitory memory, and a scene camera. The method includes obtaining an image of a scene including a plurality of pixels. The method includes obtaining a plurality of points of a point cloud based on the image of the scene. The method includes obtaining an object classification set based on the image of the scene. Each element of the object classification set includes a plurality of pixels respectively associated with a corresponding object in the scene. The method includes detecting a plane within the scene by identifying a subset of the plurality of points of the point cloud that correspond to a particular element of the object classification set.
    Type: Grant
    Filed: September 13, 2021
    Date of Patent: March 21, 2023
    Assignee: Apple Inc.
    Inventors: Daniel Ulbricht, Angela Blechschmidt, Mohammad Haris Baig, Tanmay Batra, Eshan Verma, Amit Kumar KC
  • Patent number: 11468275
    Abstract: A machine learning (ML) model is trained and used to produce a probability distribution associated with a computer vision task. The ML model uses a prior probability distribution associated with a particular image capture condition determined based on sensor data. For example, given that an image was captured by an image capture device at a particular height above the floor and angle relative to the vertical world axis, a prior probability distribution for that particular image capture device condition can be used in performing a computer vision task on the image. Accordingly, the machine learning model is given the image as input as well as the prior probability distribution for the particular image capture device condition. The use of the prior probability distribution can improve the accuracy, efficiency, or effectiveness of the ML model for the computer vision task.
    Type: Grant
    Filed: January 16, 2020
    Date of Patent: October 11, 2022
    Assignee: Apple Inc.
    Inventors: Angela Blechschmidt, Daniel Ulbricht, Mohammad Haris Baig
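A small sketch of the prior-probability idea in 11468275: sensor data buckets the capture condition (camera height and pitch here), a per-condition class prior is looked up, and it is folded into the classifier's output. The condition buckets, prior values, and classes are invented for illustration and are not from the patent.

```python
# Minimal sketch: combine a per-capture-condition prior with a model's class
# probabilities and renormalize.
import numpy as np

CLASSES = ["floor", "table", "ceiling"]

# Hypothetical priors: a camera held low and pointing down mostly sees floor.
PRIORS = {
    ("low", "down"): np.array([0.70, 0.25, 0.05]),
    ("high", "up"):  np.array([0.05, 0.25, 0.70]),
}

def capture_condition(height_m: float, pitch_deg: float):
    return ("low" if height_m < 1.0 else "high", "down" if pitch_deg < 0 else "up")

def apply_prior(model_probs: np.ndarray, height_m: float, pitch_deg: float) -> np.ndarray:
    prior = PRIORS[capture_condition(height_m, pitch_deg)]
    posterior = model_probs * prior
    return posterior / posterior.sum()

model_probs = np.array([0.4, 0.4, 0.2])   # ambiguous between floor and table
print(dict(zip(CLASSES, apply_prior(model_probs, height_m=0.5, pitch_deg=-30))))
# The low, downward-pointing capture condition pushes the decision toward "floor".
```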
  • Patent number: 11430238
    Abstract: In one implementation, a method of generating a confidence value for a result from a primary task is performed at an image processing system including a neural network. The method includes obtaining, by a feature extractor portion of the neural network, a set of feature maps for an image data frame; generating a contextual information vector associated with the image data frame based on results from one or more auxiliary tasks performed on the set of feature maps by an auxiliary task sub-network portion of the neural network; performing, by a primary task sub-network portion of the neural network, a primary task on the set of feature maps for the image data frame in order to generate a primary task result; and generating a confidence value based on the contextual information vector, wherein the confidence value corresponds to a reliability metric for the primary task result.
    Type: Grant
    Filed: April 2, 2020
    Date of Patent: August 30, 2022
    Assignee: Apple Inc.
    Inventors: Angela Blechschmidt, Mohammad Haris Baig, Daniel Ulbricht
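An illustrative PyTorch sketch of the architecture outlined in 11430238: a shared feature extractor feeds a primary head and auxiliary heads, the auxiliary outputs are concatenated into a contextual information vector, and a small head maps that vector to a confidence value. Layer sizes and module names are arbitrary; this is not the patented network.

```python
# Minimal sketch of a primary task head plus a confidence head driven by
# auxiliary-task context.
import torch
import torch.nn as nn

class PrimaryWithConfidence(nn.Module):
    def __init__(self, num_classes=10, num_aux=3, ctx_dim=16):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.primary = nn.Linear(16, num_classes)
        self.auxiliary = nn.ModuleList([nn.Linear(16, ctx_dim) for _ in range(num_aux)])
        self.confidence = nn.Sequential(nn.Linear(num_aux * ctx_dim, 8), nn.ReLU(),
                                        nn.Linear(8, 1), nn.Sigmoid())

    def forward(self, image):
        feats = self.features(image)                          # (B, 16) pooled features
        primary_result = self.primary(feats)                  # primary task logits
        context = torch.cat([aux(feats) for aux in self.auxiliary], dim=1)
        confidence = self.confidence(context)                 # reliability in (0, 1)
        return primary_result, confidence

model = PrimaryWithConfidence()
logits, conf = model(torch.randn(1, 3, 64, 64))
print(logits.shape, float(conf))   # torch.Size([1, 10]) and a value in (0, 1)
```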
  • Patent number: 11315278
    Abstract: In one implementation, a method of estimating the orientation of an object in an image is performed by a device including one or more processors, non-transitory memory, and a scene camera. The method includes obtaining an image of a scene including a plurality of pixels at a respective plurality of pixel locations and having a respective plurality of pixel values. The method includes determining a first set of pixel locations corresponding to a 2D boundary surrounding an object represented in the image and determining, based on the first set of pixel locations, a second set of pixel locations corresponding to a 3D boundary surrounding the object.
    Type: Grant
    Filed: September 24, 2019
    Date of Patent: April 26, 2022
    Assignee: Apple Inc.
    Inventors: Daniel Ulbricht, Amit Kumar K C, Angela Blechschmidt, Chen-Yu Lee, Eshan Verma, Mohammad Haris Baig, Tanmay Batra
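A geometric sketch of the 2D-to-3D boundary step in 11315278, assuming pinhole intrinsics and a near/far depth estimate for the object; the back-projection helper and depth inputs are simplifications, not the patented procedure.

```python
# Minimal sketch: lift a 2D bounding box to a 3D bounding box by back-projecting
# the box corners at a near and far depth with pinhole intrinsics K.
import numpy as np

def backproject(u, v, depth, K):
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    return np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth])

def box_2d_to_3d(box_2d, near_depth, far_depth, K):
    """box_2d = (umin, vmin, umax, vmax); returns the 8 corners of a 3D box."""
    umin, vmin, umax, vmax = box_2d
    corners = []
    for depth in (near_depth, far_depth):
        for u, v in ((umin, vmin), (umax, vmin), (umax, vmax), (umin, vmax)):
            corners.append(backproject(u, v, depth, K))
    return np.array(corners)

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
corners = box_2d_to_3d((300, 220, 340, 260), near_depth=1.8, far_depth=2.2, K=K)
print(corners.shape)   # (8, 3): a 3D boundary surrounding the detected object
```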
  • Publication number: 20220114796
    Abstract: In one implementation, a method of including a person in a CGR experience or excluding the person from the CGR experience is performed by a device including one or more processors, non-transitory memory, and a scene camera. The method includes, while presenting a CGR experience, capturing an image of a scene; detecting, in the image of the scene, a person; and determining an identity of the person. The method includes determining, based on the identity of the person, whether to include the person in the CGR experience or exclude the person from the CGR experience. The method includes presenting the CGR experience based on the determination.
    Type: Application
    Filed: December 21, 2021
    Publication date: April 14, 2022
    Inventors: Daniel Ulbricht, Amit Kumar K C, Angela Blechschmidt, Chen-Yu Lee, Eshan Verma, Mohammad Haris Baig, Tanmay Batra
  • Patent number: 11295529
    Abstract: In one implementation, a method of including a person in a CGR experience or excluding the person from the CGR experience is performed by a device including one or more processors, non-transitory memory, and a scene camera. The method includes, while presenting a CGR experience, capturing an image of a scene; detecting, in the image of the scene, a person; and determining an identity of the person. The method includes determining, based on the identity of the person, whether to include the person in the CGR experience or exclude the person from the CGR experience. The method includes presenting the CGR experience based on the determination.
    Type: Grant
    Filed: January 15, 2021
    Date of Patent: April 5, 2022
    Assignee: Apple Inc.
    Inventors: Daniel Ulbricht, Amit Kumar K C, Angela Blechschmidt, Chen-Yu Lee, Eshan Verma, Mohammad Haris Baig, Tanmay Batra
  • Publication number: 20220044486
    Abstract: Various implementations disclosed herein include devices, systems, and methods that enable faster and more efficient real-time physical object recognition, information retrieval, and updating of a CGR environment. In some implementations, the CGR environment is provided at a first device based on a classification of the physical object, image or video data including the physical object is transmitted by the first device to a second device, and the CGR environment is updated by the first device based on a response associated with the physical object received from the second device.
    Type: Application
    Filed: October 22, 2021
    Publication date: February 10, 2022
    Inventors: Eshan Verma, Daniel Ulbricht, Angela Blechschmidt, Mohammad Haris Baig, Chen-Yu Lee, Tanmay Batra
  • Publication number: 20210406541
    Abstract: In one implementation, a method of generating a plane hypothesis is performed by a device including one or more processors, non-transitory memory, and a scene camera. The method includes obtaining an image of a scene including a plurality of pixels. The method includes obtaining a plurality of points of a point cloud based on the image of the scene. The method includes obtaining an object classification set based on the image of the scene. Each element of the object classification set includes a plurality of pixels respectively associated with a corresponding object in the scene. The method includes detecting a plane within the scene by identifying a subset of the plurality of points of the point cloud that correspond to a particular element of the object classification set.
    Type: Application
    Filed: September 13, 2021
    Publication date: December 30, 2021
    Inventors: Daniel Ulbricht, Angela Blechschmidt, Mohammad Haris Baig, Tanmay Batra, Eshan Verma, Amit Kumar KC