Patents by Inventor Alireza Dirafzoon

Alireza Dirafzoon has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11966701
Abstract: In one embodiment, a method includes rendering a first output image comprising one or more augmented-reality (AR) objects for displays of an AR rendering device of an AR system associated with a first user. The method further includes accessing one or more sensor signals associated with the first user. The one or more sensor signals may be captured by sensors of the AR system. The method further includes detecting a change in a context of the first user with respect to a real-world environment based on the sensor signals. The method further includes rendering a second output image comprising the AR objects for the displays of the AR rendering device. One or more of the AR objects may be adapted based on the detected change in the context of the first user.
    Type: Grant
    Filed: August 2, 2021
    Date of Patent: April 23, 2024
    Assignee: Meta Platforms, Inc.
Inventors: Yiming Pu, Christopher E. Balmes, Gabrielle Catherine Moskey, John Jacob Blakeley, Amy Lawson Bearman, Alireza Dirafzoon, Matthew Dan Feiszli, Ganesh Venkatesh, Babak Damavandi, Jiwen Ren, Chengyuan Yan, Guangqiang Dong
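The flow described in the abstract above (render, sense, detect a context change, re-render with adapted AR objects) can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation; the `ARObject` type, the motion-based heuristic, and the opacity-fade adaptation are all invented here for clarity.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ARObject:
    label: str
    opacity: float  # 1.0 = fully visible

def detect_context_change(sensor_signals: dict) -> bool:
    # Toy heuristic: treat high ambient motion as a context change
    # (e.g. the user starts walking). A real system would fuse many signals.
    return sensor_signals.get("motion", 0.0) > 0.5

def render(objects: list[ARObject], sensor_signals: dict) -> list[ARObject]:
    """Return the AR objects for the next output image, adapting them
    when a change in the user's context is detected."""
    if detect_context_change(sensor_signals):
        # Adapt: fade the AR objects so they are less intrusive while moving.
        return [replace(o, opacity=o.opacity * 0.3) for o in objects]
    return objects

first_frame = [ARObject("clock", 1.0), ARObject("notes", 0.8)]
second_frame = render(first_frame, {"motion": 0.9})
```

The key structural point is the two-pass render: the second output image is produced from the same object set, with per-object adaptation gated on the detected context change.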
  • Patent number: 11580970
    Abstract: A method, an electronic device and computer readable medium for dialogue breakdown detection are provided. The method includes obtaining a verbal input from an audio sensor. The method also includes generating a reply to the verbal input. The method additionally includes identifying a local context from the verbal input and a global context from the verbal input, additional verbal inputs previously received by the audio sensor, and previous replies generated in response to the additional verbal inputs. The method further includes identifying a dialogue breakdown in response to determining that the reply does not correspond to the local context and the global context. In addition, the method includes generating sound corresponding to the reply through a speaker when the dialogue breakdown is not identified.
    Type: Grant
    Filed: March 23, 2020
    Date of Patent: February 14, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: JongHo Shin, Alireza Dirafzoon, Aviral Anshu
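The conjunctive condition in the abstract above (a breakdown is flagged only when the reply matches neither the local nor the global context, and speech output is gated on that check) can be sketched as below. Word overlap stands in for the real context models; every name here is an assumption, not the patented method.

```python
def _overlap(reply: str, context: str) -> bool:
    # Stand-in context match: any shared word between reply and context.
    return bool(set(reply.lower().split()) & set(context.lower().split()))

def detect_breakdown(reply: str, local_context: str, global_context: str) -> bool:
    """Flag a dialogue breakdown only when the reply corresponds to
    neither the local context (the current utterance) nor the global
    context (prior utterances and replies)."""
    return not _overlap(reply, local_context) and not _overlap(reply, global_context)

def respond(reply, local_context, global_context, speak):
    # Sound for the reply is generated (e.g. sent to a speaker) only
    # when no breakdown is detected.
    if not detect_breakdown(reply, local_context, global_context):
        speak(reply)
```

Note the asymmetry: matching either context alone is enough to pass, which makes the detector conservative about interrupting the dialogue.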
  • Publication number: 20220374130
Abstract: In one embodiment, a method includes rendering a first output image comprising one or more augmented-reality (AR) objects for displays of an AR rendering device of an AR system associated with a first user. The method further includes accessing one or more sensor signals associated with the first user. The one or more sensor signals may be captured by sensors of the AR system. The method further includes detecting a change in a context of the first user with respect to a real-world environment based on the sensor signals. The method further includes rendering a second output image comprising the AR objects for the displays of the AR rendering device. One or more of the AR objects may be adapted based on the detected change in the context of the first user.
    Type: Application
    Filed: August 2, 2021
    Publication date: November 24, 2022
    Inventors: Yiming Pu, Christopher E. Balmes, Gabrielle Catherine Moskey, John Jacob Blakeley, Amy Lawson Bearman, Alireza Dirafzoon, Matthew Dan Feiszli, Ganesh Venkatesh, Babak Damavandi, Jiwen Ren, Chengyuan Yan, Guangqiang Dong
  • Publication number: 20220366170
Abstract: In one embodiment, a method includes accessing from a client system associated with a first user sensor signals captured by sensors of the client system, wherein the client system comprises a plurality of sensors, and wherein the sensor signals are accessed from the sensors based on cascading model policies, wherein each cascading model policy utilizes one or more of a respective cost or relevance associated with each sensor, detecting a change in a context of the first user associated with an activity of the first user based on machine-learning models and the sensor signals, wherein the change in the context of the first user satisfies a trigger condition associated with the activity, and responsive to the detected change in the context of the first user, automatically capturing visual data by cameras of the client system.
    Type: Application
    Filed: August 4, 2021
    Publication date: November 17, 2022
    Inventors: Emily Wang, Yilei Li, Amy Lawson Bearman, Alireza Dirafzoon, Ruchir Srivastava
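A cascading sensor policy of the kind the abstract above describes can be sketched as follows: sensors are polled in cost order, escalating to costlier sensors only when cheaper readings are inconclusive, and automatic capture is triggered once the condition fires. The sensor names, costs, and thresholds are all invented for illustration.

```python
# Sensors ordered by a cost/relevance policy: cheapest, most relevant first.
SENSORS = [
    ("accelerometer", 1),
    ("microphone", 5),
    ("camera_preview", 20),
]

def context_changed(readings: dict) -> bool:
    # Toy trigger condition: any reading so far exceeds 0.8.
    return any(v > 0.8 for v in readings.values())

def run_cascade(read_sensor) -> bool:
    """Poll sensors in cost order, escalating only when needed.
    Returns True when the trigger condition is met, which in the
    abstract's terms would start automatic visual capture."""
    readings = {}
    for name, _cost in SENSORS:
        readings[name] = read_sensor(name)
        if context_changed(readings):
            return True   # trigger condition satisfied: capture visual data
        if readings[name] < 0.2:
            return False  # cheap sensor reports "quiet": skip costlier sensors
    return False
```

The early exits are the point of the cascade: most polls are resolved by the cheap sensor alone, so the expensive sensors (and their power cost) are touched only for ambiguous cases.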
  • Patent number: 11106868
Abstract: A method, an electronic device, and a computer readable medium are provided. The method includes identifying a set of observable features associated with one or more users. The method also includes generating latent features from the set of observable features. The method additionally includes sorting the latent features into one or more clusters. Each of the one or more clusters represents verbal utterances of a group of users that share a portion of the latent features. The method further includes generating a language model that corresponds to a specific cluster of the one or more clusters. The language model represents a probability ranking of the verbal utterances that are associated with the group of users of the specific cluster.
    Type: Grant
    Filed: December 20, 2018
    Date of Patent: August 31, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Anil Yadav, Abdul Rafay Khalid, Alireza Dirafzoon, Mohammad Mahdi Moazzami, Pu Song, Zheng Zhou
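The pipeline in the abstract above (observable features → latent features → clusters → a per-cluster language model that ranks utterances by probability) can be sketched as below. The centering "embedding", the sign-based clustering, and the frequency-based ranking are deliberately trivial stand-ins for the learned components; all names are assumptions.

```python
from collections import Counter

def latent_features(observable: list[float]) -> list[float]:
    # Stand-in latent transform: center the observable features.
    mean = sum(observable) / len(observable)
    return [x - mean for x in observable]

def assign_cluster(latent: list[float]) -> int:
    # Toy clustering: bucket users by the sign of the first latent feature.
    return 0 if latent[0] >= 0 else 1

def build_language_model(utterances_by_user, features_by_user, cluster_id):
    """Rank the utterances of users in one cluster by relative frequency,
    yielding the abstract's per-cluster probability ranking."""
    counts = Counter()
    for user, feats in features_by_user.items():
        if assign_cluster(latent_features(feats)) == cluster_id:
            counts.update(utterances_by_user[user])
    total = sum(counts.values())
    if total == 0:
        return {}
    return {utt: n / total for utt, n in counts.most_common()}

features = {"a": [2.0, 0.0], "b": [3.0, 1.0]}
utterances = {"a": ["play music", "play music", "stop"], "b": ["play music"]}
lm = build_language_model(utterances, features, cluster_id=0)
```

Because the model is built only from the users assigned to the given cluster, each cluster gets a ranking shaped by its own group's speech habits rather than the whole population's.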
  • Publication number: 20200321002
    Abstract: A method, an electronic device and computer readable medium for dialogue breakdown detection are provided. The method includes obtaining a verbal input from an audio sensor. The method also includes generating a reply to the verbal input. The method additionally includes identifying a local context from the verbal input and a global context from the verbal input, additional verbal inputs previously received by the audio sensor, and previous replies generated in response to the additional verbal inputs. The method further includes identifying a dialogue breakdown in response to determining that the reply does not correspond to the local context and the global context. In addition, the method includes generating sound corresponding to the reply through a speaker when the dialogue breakdown is not identified.
    Type: Application
    Filed: March 23, 2020
    Publication date: October 8, 2020
    Inventors: JongHo Shin, Alireza Dirafzoon, Aviral Anshu
  • Publication number: 20190279618
Abstract: A method, an electronic device, and a computer readable medium are provided. The method includes identifying a set of observable features associated with one or more users. The method also includes generating latent features from the set of observable features. The method additionally includes sorting the latent features into one or more clusters. Each of the one or more clusters represents verbal utterances of a group of users that share a portion of the latent features. The method further includes generating a language model that corresponds to a specific cluster of the one or more clusters. The language model represents a probability ranking of the verbal utterances that are associated with the group of users of the specific cluster.
    Type: Application
    Filed: December 20, 2018
    Publication date: September 12, 2019
    Inventors: Anil Yadav, Abdul Rafay Khalid, Alireza Dirafzoon, Mohammad Mahdi Moazzami, Pu Song, Zheng Zhou