Patents by Inventor Behrooz Mahasseni

Behrooz Mahasseni has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240038228
    Abstract: In some implementations, a method includes displaying, on a display, an environment that includes a representation of a virtual agent that is associated with a sensory characteristic. In some implementations, the method includes selecting, based on the sensory characteristic associated with the virtual agent, a subset of a plurality of sensors to provide sensor data for the virtual agent. In some implementations, the method includes providing the sensor data captured by the subset of the plurality of sensors to the virtual agent in order to reduce power consumption of the device. In some implementations, the method includes displaying a manipulation of the representation of the virtual agent based on an interpretation of the sensor data by the virtual agent.
    Type: Application
    Filed: July 26, 2023
    Publication date: February 1, 2024
    Inventors: Dan Feng, Behrooz Mahasseni, Bo Morgan, Daniel L. Kovacs, Mu Qiao
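
The abstract above outlines an algorithmic idea: activate only the sensors whose modality matches a virtual agent's sensory characteristic, so the remaining sensors can stay idle and save power. Below is a minimal, hedged sketch of that selection step in Python; the class names, modality strings, and power figures are illustrative assumptions, not part of the filing.

```python
# Hedged sketch only; Sensor, VirtualAgent, and select_sensors are illustrative.
from dataclasses import dataclass, field


@dataclass
class Sensor:
    name: str
    modality: str          # e.g. "visual", "auditory", "depth"
    power_mw: float        # nominal power draw when active

    def read(self) -> dict:
        # Stand-in for a real capture.
        return {"sensor": self.name, "modality": self.modality, "value": 0.0}


@dataclass
class VirtualAgent:
    name: str
    sensory_characteristics: set = field(default_factory=set)

    def interpret(self, samples: list) -> str:
        modalities = sorted({s["modality"] for s in samples})
        return f"{self.name} perceived: {', '.join(modalities) or 'nothing'}"


def select_sensors(agent: VirtualAgent, sensors: list) -> list:
    """Keep only the sensors whose modality the agent can actually perceive."""
    return [s for s in sensors if s.modality in agent.sensory_characteristics]


if __name__ == "__main__":
    all_sensors = [
        Sensor("rgb_camera", "visual", 450.0),
        Sensor("microphone", "auditory", 30.0),
        Sensor("depth_camera", "depth", 600.0),
    ]
    agent = VirtualAgent("dog_agent", sensory_characteristics={"visual"})

    active = select_sensors(agent, all_sensors)          # only the RGB camera
    samples = [s.read() for s in active]
    saved = sum(s.power_mw for s in all_sensors) - sum(s.power_mw for s in active)

    print(agent.interpret(samples))
    print(f"estimated power saved by idling unused sensors: {saved:.0f} mW")
```
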
  • Patent number: 11869144
    Abstract: In some implementations, a device includes one or more sensors, one or more processors and a non-transitory memory. In some implementations, a method includes determining that a first portion of a physical environment is associated with a first saliency value and a second portion of the physical environment is associated with a second saliency value that is different from the first saliency value. In some implementations, the method includes obtaining, via the one or more sensors, environmental data corresponding to the physical environment. In some implementations, the method includes generating, based on the environmental data, a model of the physical environment by modeling the first portion with a first set of modeling features that is a function of the first saliency value and modeling the second portion with a second set of modeling features that is a function of the second saliency value.
    Type: Grant
    Filed: February 23, 2022
    Date of Patent: January 9, 2024
    Assignee: APPLE INC.
    Inventors: Payal Jotwani, Bo Morgan, Behrooz Mahasseni, Bradley W. Peebler, Dan Feng, Mark E. Drummond, Siva Chandra Mouli Sivapurapu
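
As a rough illustration of the idea in the abstract above, the sketch below allocates a per-portion modeling budget (for example, a point count) as a function of a saliency value in [0, 1], so salient portions are modeled with more features than non-salient ones. The mapping, thresholds, and data are assumptions for demonstration only.

```python
# Hedged sketch; the saliency-to-feature mapping and data are assumptions.
import numpy as np


def modeling_features_for(saliency, min_features=64, max_features=4096):
    """Map a saliency value in [0, 1] to a feature budget (e.g. point count)."""
    saliency = float(np.clip(saliency, 0.0, 1.0))
    return int(min_features + saliency * (max_features - min_features))


def build_environment_model(portions):
    """Model each portion with detail that is a function of its saliency."""
    model = []
    for portion in portions:
        budget = modeling_features_for(portion["saliency"])
        pts = portion["points"]
        # Stand-in for real reconstruction: keep only `budget` points.
        idx = np.linspace(0, len(pts) - 1, num=min(budget, len(pts)), dtype=int)
        model.append({"name": portion["name"], "features": pts[idx]})
    return model


rng = np.random.default_rng(0)
portions = [
    {"name": "tabletop (salient)", "saliency": 0.9, "points": rng.normal(size=(10_000, 3))},
    {"name": "far wall (non-salient)", "saliency": 0.1, "points": rng.normal(size=(10_000, 3))},
]
for part in build_environment_model(portions):
    print(f"{part['name']}: {len(part['features'])} modeling features")
```
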
  • Patent number: 11797889
    Abstract: In one implementation, a method for modeling a behavior with synthetic training data. The method includes: obtaining source content that includes an entity performing one or more actions within an environment; generating a first environment characterization vector characterizing the environment; generating a first set of behavioral trajectories associated with the one or more actions of the entity based on the source content and the first characterization vector for the environment; generating a second environment characterization vector for the environment by perturbing the first environment characterization vector; generating a second set of behavioral trajectories associated with one or more potential actions of the entity based on the source content and the second characterization vector for the environment; and training a behavior model for a virtual agent based on the first and second sets of behavioral trajectories in order to imitate the entity.
    Type: Grant
    Filed: December 21, 2021
    Date of Patent: October 24, 2023
    Assignee: APPLE INC.
    Inventors: Edward S. Ahn, Siva Chandra Mouli Sivapurapu, Mark Drummond, Aashi Manglik, Shaun Budhram, Behrooz Mahasseni
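
A hedged sketch of the data-generation loop described above: characterize the environment as a vector, perturb it to obtain a second characterization, generate a set of behavioral trajectories under each, and pool the two sets as synthetic training data. The toy trajectory generator and vector dimensions below are assumptions; a real system would train an imitation model on the pooled set.

```python
# Hedged sketch; the trajectory generator and vector size are toy assumptions.
import numpy as np

rng = np.random.default_rng(42)


def characterize_environment(source_content):
    """First environment characterization vector (assumed 8-dimensional)."""
    return np.asarray(source_content["env_features"], dtype=float)


def perturb(env_vec, scale=0.05):
    """Second characterization vector obtained by perturbing the first."""
    return env_vec + rng.normal(scale=scale, size=env_vec.shape)


def generate_trajectories(env_vec, n=16, steps=32):
    """Toy stand-in for trajectory generation conditioned on the environment."""
    drift = env_vec.mean()
    return np.cumsum(drift + rng.normal(scale=0.1, size=(n, steps)), axis=1)


source = {"env_features": rng.uniform(size=8)}          # stand-in for source content
env_v1 = characterize_environment(source)
env_v2 = perturb(env_v1)

first_set = generate_trajectories(env_v1)               # actions observed in the source
second_set = generate_trajectories(env_v2)              # potential actions under perturbation
training_set = np.concatenate([first_set, second_set], axis=0)

# A real system would fit a behavior model (e.g. behavior cloning) on this pool.
print(f"synthetic training set: {training_set.shape[0]} trajectories x {training_set.shape[1]} steps")
```
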
  • Patent number: 11776192
    Abstract: In one implementation, a method for generating a blended animation. The method includes: obtaining a motion input vector for a current time period; generating a motion output vector and pose information for the current time period based on the motion input vector; selecting an animated motion from a bank of animated motions for the current time period that matches the pose information within a threshold tolerance value; obtaining a blending coefficients vector for the current time period; generating a blended animation for the current time period by blending the motion output vector with the animated motion based on the blending coefficients vector; and generating a reward signal for the blended animation for the current time period.
    Type: Grant
    Filed: January 27, 2023
    Date of Patent: October 3, 2023
    Assignee: APPLE INC.
    Inventors: Behrooz Mahasseni, Aashi Manglik, Mark Drummond, Edward S. Ahn, Shaun Budhram, Siva Chandra Mouli Sivapurapu
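
The abstract above lists a concrete sequence of steps (generate a motion output and pose, select a matching clip from an animation bank within a tolerance, blend with a coefficients vector, emit a reward). The sketch below walks through those steps with toy vectors; the distance metric, tolerance value, and reward heuristic are assumptions, not the claimed method.

```python
# Hedged sketch; distance metric, tolerance, and reward heuristic are assumptions.
import numpy as np


def select_from_bank(pose, bank, tolerance=2.0):
    """Return the banked motion closest to `pose`, if within the tolerance."""
    best, best_dist = None, np.inf
    for clip in bank:
        dist = float(np.linalg.norm(clip - pose))
        if dist < best_dist:
            best, best_dist = clip, dist
    return best if best_dist <= tolerance else None


def blend(motion_out, banked, coeffs):
    """Per-channel convex blend between the generated and banked motion."""
    return coeffs * motion_out + (1.0 - coeffs) * banked


def reward(blended, target_pose):
    """Toy reward signal: negative distance of the blend from the target pose."""
    return -float(np.linalg.norm(blended - target_pose))


rng = np.random.default_rng(7)
pose = rng.normal(size=12)                             # pose info for this time period
motion_output = pose + rng.normal(scale=0.1, size=12)  # generated motion output vector
bank = [pose + rng.normal(scale=0.3, size=12) for _ in range(5)]
coeffs = rng.uniform(size=12)                          # blending coefficients vector

clip = select_from_bank(pose, bank)
if clip is None:
    print("no banked motion within tolerance; falling back to the raw motion output")
else:
    blended = blend(motion_output, clip, coeffs)
    print(f"reward signal for the blended animation: {reward(blended, pose):.3f}")
```
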
  • Patent number: 11710072
    Abstract: In one implementation, a method for inverse reinforcement learning for tailoring virtual agent behaviors to a specific user. The method includes: obtaining an initial behavior model for a virtual agent and an initial state for a virtual environment associated with the virtual agent, wherein the initial behavior model includes one or more tunable parameters; generating, based on the initial behavior model and the initial state for the virtual environment, a first set of behavioral trajectories for the virtual agent; obtaining a second set of behavioral trajectories from a source different from the initial behavior model; and generating an updated behavior model by adjusting at least one of the one or more tunable parameters of the initial behavior model as a function of the first and second sets of behavioral trajectories, wherein at least one of the first and second sets of behavioral trajectories are assigned different weights.
    Type: Grant
    Filed: June 28, 2021
    Date of Patent: July 25, 2023
    Inventors: Behrooz Mahasseni, Mark Drummond
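
A simplified sketch of the parameter-adjustment idea above: roll out trajectories from an initial behavior model, obtain a second trajectory set from another source (for example, user demonstrations), weight the two sets differently, and nudge a tunable parameter so the model's rollouts move toward the weighted target. The scalar parameter, 1-D trajectories, and feature definition are illustrative assumptions.

```python
# Hedged sketch; scalar parameter, 1-D trajectories, and weights are assumptions.
import numpy as np

rng = np.random.default_rng(0)


def rollout(param, n=8, steps=20):
    """Toy behavior model: trajectories drift at a rate set by `param`."""
    return np.cumsum(param + rng.normal(scale=0.05, size=(n, steps)), axis=1)


def feature(trajs):
    """Toy trajectory feature: mean final displacement."""
    return float(trajs[:, -1].mean())


param = 0.2                           # tunable parameter of the initial behavior model
first_set = rollout(param)            # trajectories from the initial behavior model
second_set = rollout(0.8)             # trajectories from a different source (e.g. the user)
w_first, w_second = 0.3, 0.7          # the two sets carry different weights

target = (w_first * feature(first_set) + w_second * feature(second_set)) / (w_first + w_second)

for _ in range(100):
    # Nudge the parameter so rollouts match the weighted trajectory feature.
    param += 0.005 * (target - feature(rollout(param)))

print(f"updated tunable parameter: {param:.3f}")
```
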
  • Publication number: 20230169711
    Abstract: In one implementation, a method for generating a blended animation. The method includes: obtaining a motion input vector for a current time period; generating a motion output vector and pose information for the current time period based on the motion input vector; selecting an animated motion from a bank of animated motions for the current time period that matches the pose information within a threshold tolerance value; obtaining a blending coefficients vector for the current time period; generating a blended animation for the current time period by blending the motion output vector with the animated motion based on the blending coefficients vector; and generating a reward signal for the blended animation for the current time period.
    Type: Application
    Filed: January 27, 2023
    Publication date: June 1, 2023
    Inventors: Behrooz Mahasseni, Aashi Manglik, Mark Drummond, Edward S. Ahn, Shaun Budhram, Siva Chandra Mouli Sivapurapu
  • Publication number: 20230089049
    Abstract: In one implementation, a method of displaying content is performed at a device including a display, one or more processors, and non-transitory memory. The method includes scanning a first physical environment to detect a first physical object in the first physical environment and a second physical object in the first physical environment, wherein the first physical object meets at least one first object criterion and the second physical object meets at least one second object criterion. The method includes displaying, in association with the first physical environment, a virtual object moving along a first path from the first physical object to the second physical object.
    Type: Application
    Filed: June 29, 2022
    Publication date: March 23, 2023
    Inventors: Mark E. Drummond, Daniel L. Kovacs, Shaun D. Budhram, Edward Ahn, Behrooz Mahasseni, Aashi Manglik, Payal Jotwani, Mu Qiao, Bo Morgan, Noah Gamboa, Michael J. Gutensohn, Dan Feng, Siva Chandra Mouli Sivapurapu
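
To make the flow above concrete, the sketch below stubs out scanning and rendering: it picks a first and a second detected object that meet different object criteria and computes a straight-line path for a virtual object to travel from one to the other. Labels, positions, and the path shape are assumptions for illustration.

```python
# Hedged sketch; detection and rendering are stubbed out, labels are assumptions.
import numpy as np


def meets(obj, criterion):
    """Toy object criterion: match on a semantic label."""
    return obj["label"] == criterion


def path_between(a, b, waypoints=10):
    """Straight-line path of `waypoints` points from position a to position b."""
    t = np.linspace(0.0, 1.0, waypoints)[:, None]
    return (1.0 - t) * a + t * b


# Stand-in for the result of scanning the first physical environment.
detections = [
    {"label": "doorway", "position": np.array([0.0, 0.0, 0.0])},
    {"label": "table",   "position": np.array([2.0, 0.0, 1.5])},
    {"label": "chair",   "position": np.array([1.0, 0.0, 2.0])},
]

first = next(o for o in detections if meets(o, "doorway"))    # first object criterion
second = next(o for o in detections if meets(o, "table"))     # second object criterion

for point in path_between(first["position"], second["position"]):
    # A real system would render the virtual object at each waypoint.
    print(point.round(2))
```
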
  • Patent number: 11593982
    Abstract: In one implementation, a method for generating a blended animation. The method includes: obtaining a motion input vector for a current time period; generating a motion output vector and pose information for the current time period based on the motion input vector; selecting an animated motion from a bank of animated motions for the current time period that matches the pose information within a threshold tolerance value; obtaining a blending coefficients vector for the current time period; generating a blended animation for the current time period by blending the motion output vector with the animated motion based on the blending coefficients vector; and generating a reward signal for the blended animation for the current time period.
    Type: Grant
    Filed: December 21, 2021
    Date of Patent: February 28, 2023
    Assignee: APPLE INC.
    Inventors: Behrooz Mahasseni, Aashi Manglik, Mark Drummond, Edward S. Ahn, Shaun Budhram, Siva Chandra Mouli Sivapurapu
  • Patent number: 11574416
    Abstract: A method includes obtaining a set of images that correspond to a person. The method includes generating a body pose model of the person defined by a branched plurality of neural network systems. Each neural network system models a respective portion of the person between a first body-joint and a second body-joint as dependent on an adjacent portion of the person sharing the first body-joint. The method includes providing the set of images of the respective portion to a first one and a second one of the neural network systems, where the first one and the second one correspond to adjacent body portions. The method includes determining, jointly by at least the first one and the second one of the plurality of neural network systems, pose information for the first body-joint and the second body-joint.
    Type: Grant
    Filed: April 28, 2021
    Date of Patent: February 7, 2023
    Assignee: APPLE INC.
    Inventors: Andreas N. Bigontina, Behrooz Mahasseni, Gutemberg B. Guerra Filho, Saumil B. Patel, Stefan Auer
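
A minimal PyTorch sketch of the branched idea described above: one small network models the upper arm (shoulder to elbow) and a second network models the forearm (elbow to wrist) conditioned on the first network's output, since the two portions share the elbow joint; the shared joint is then estimated jointly. The layer sizes, feature dimensions, and averaging step are assumptions, not the patented architecture.

```python
# Hedged PyTorch sketch; layer sizes and the averaging step are assumptions.
import torch
import torch.nn as nn


class LimbNet(nn.Module):
    """Models one body portion between two joints."""

    def __init__(self, in_dim, cond_dim=0, joint_dim=3):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim + cond_dim, 64), nn.ReLU(),
            nn.Linear(64, 2 * joint_dim),      # poses of the two end joints
        )

    def forward(self, feats, cond=None):
        x = feats if cond is None else torch.cat([feats, cond], dim=-1)
        return self.mlp(x)


image_feats = torch.randn(4, 128)              # stub for per-image features

upper_arm = LimbNet(in_dim=128)                # shoulder -> elbow
forearm = LimbNet(in_dim=128, cond_dim=6)      # elbow -> wrist, conditioned on upper arm

upper_pose = upper_arm(image_feats)                  # (batch, 6): shoulder + elbow
fore_pose = forearm(image_feats, cond=upper_pose)    # depends on the shared elbow joint

# The shared elbow joint is estimated jointly by both branches,
# here simply by averaging the two predictions for that joint.
elbow = 0.5 * (upper_pose[:, 3:6] + fore_pose[:, 0:3])
print("jointly estimated elbow pose:", elbow.shape)
```
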
  • Publication number: 20210312662
    Abstract: A method includes obtaining a set of images that correspond to a person. The method includes generating a body pose model of the person defined by a branched plurality of neural network systems. Each neural network system models a respective portion of the person between a first body-joint and a second body-joint as dependent on an adjacent portion of the person sharing the first body-joint. The method includes providing the set of images of the respective portion to a first one and a second one of the neural network systems, where the first one and the second one correspond to adjacent body portions. The method includes determining, jointly by at least the first one and the second one of the plurality of neural network systems, pose information for the first body-joint and the second body-joint.
    Type: Application
    Filed: April 28, 2021
    Publication date: October 7, 2021
    Inventors: Andreas N. Bigontina, Behrooz Mahasseni, Gutemberg B. Guerra Filho, Saumil B. Patel, Stefan Auer
  • Patent number: 11062476
    Abstract: A method includes obtaining a set of images that correspond to a person. The method includes generating a body pose model of the person defined by a branched plurality of neural network systems. Each neural network system models a respective portion of the person between a first body-joint and a second body-joint as dependent on an adjacent portion of the person sharing the first body-joint. The method includes providing the set of images of the respective portion to a first one and a second one of the neural network systems, where the first one and the second one correspond to adjacent body portions. The method includes determining, jointly by at least the first one and the second one of the plurality of neural network systems, pose information for the first body-joint and the second body-joint.
    Type: Grant
    Filed: September 23, 2019
    Date of Patent: July 13, 2021
    Assignee: APPLE INC.
    Inventors: Andreas N. Bigontina, Behrooz Mahasseni, Gutemberg B. Guerra Filho, Saumil B. Patel, Stefan Auer
  • Patent number: 10860859
    Abstract: Detection of activity in video content, and more particularly detecting in video the start and end frames inclusive of an activity along with a classification for the activity, is fundamental for video analytics, including categorizing, searching, indexing, segmentation, and retrieval of videos. Existing activity detection processes rely on a large set of features and classifiers that exhaustively run over every time step of a video at multiple temporal scales, or, as a modest computational improvement, propose segments of the video on which to perform classification. These existing activity detection processes, however, are computationally expensive, particularly when trying to achieve high detection accuracy, and moreover are not configurable for a particular time or computation budget. The present disclosure provides a time- and/or computation-budget-aware method for detecting activity in video that relies on a recurrent neural network implementing a learned policy.
    Type: Grant
    Filed: November 28, 2018
    Date of Patent: December 8, 2020
    Assignee: NVIDIA Corporation
    Inventors: Xiaodong Yang, Pavlo Molchanov, Jan Kautz, Behrooz Mahasseni
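
A rough PyTorch sketch of the budget-aware idea summarized above: a recurrent policy observes a frame, predicts an activity class and start/end offsets, and decides how far to skip ahead, stopping once a fixed observation budget is spent. Feature dimensions, the prediction heads, and the greedy skip choice are assumptions; in the disclosure the policy is learned, for example with reinforcement learning.

```python
# Hedged PyTorch sketch; dimensions and the greedy skip policy are assumptions.
import torch
import torch.nn as nn


class BudgetAwareDetector(nn.Module):
    def __init__(self, feat_dim=64, hidden=128, num_classes=10, max_skip=8):
        super().__init__()
        self.rnn = nn.GRUCell(feat_dim, hidden)
        self.skip_head = nn.Linear(hidden, max_skip)     # learned skip policy
        self.cls_head = nn.Linear(hidden, num_classes)   # activity class
        self.bounds_head = nn.Linear(hidden, 2)          # start/end offsets

    def forward(self, frames, budget):
        """frames: (T, feat_dim). Observes at most `budget` frames."""
        h = frames.new_zeros(1, self.rnn.hidden_size)
        t, detections = 0, []
        for _ in range(budget):
            if t >= frames.shape[0]:
                break
            h = self.rnn(frames[t].unsqueeze(0), h)
            detections.append((t, self.cls_head(h), self.bounds_head(h)))
            # Policy decides how many frames to jump ahead (greedy here;
            # it would be sampled while training the policy).
            t += int(self.skip_head(h).argmax()) + 1
        return detections


video = torch.randn(300, 64)                    # 300 frames of precomputed features
detections = BudgetAwareDetector()(video, budget=20)
print(f"observed {len(detections)} of 300 frames within the computation budget")
```
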
  • Publication number: 20190163978
    Abstract: Detection of activity in video content, and more particularly detecting in video the start and end frames inclusive of an activity along with a classification for the activity, is fundamental for video analytics, including categorizing, searching, indexing, segmentation, and retrieval of videos. Existing activity detection processes rely on a large set of features and classifiers that exhaustively run over every time step of a video at multiple temporal scales, or, as a modest computational improvement, propose segments of the video on which to perform classification. These existing activity detection processes, however, are computationally expensive, particularly when trying to achieve high detection accuracy, and moreover are not configurable for a particular time or computation budget. The present disclosure provides a time- and/or computation-budget-aware method for detecting activity in video that relies on a recurrent neural network implementing a learned policy.
    Type: Application
    Filed: November 28, 2018
    Publication date: May 30, 2019
    Inventors: Xiaodong Yang, Pavlo Molchanov, Jan Kautz, Behrooz Mahasseni