Patents by Inventor Rahul Nallamothu

Rahul Nallamothu has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240135701
    Abstract: In one embodiment, a method, by one or more computing systems, includes determining, based on frames captured by a camera, that a plurality of participants are located in an environment, locating, within a first frame, a first body region of a first participant of the plurality of participants, detecting, at a first time, appearance information of the first body region of the first participant, calculating, using one or more machine-learning models, a confidence score corresponding to a match between the appearance information of the first participant at the first time and one or more profiles of pre-registered participants, updating, using the one or more machine-learning models, the confidence score based on additional appearance information detected within additional frames, determining whether the updated confidence score is above a predetermined threshold, and, in response to determining that the updated confidence score is above the predetermined threshold, authenticating the first participant.
    Type: Application
    Filed: October 23, 2022
    Publication date: April 25, 2024
    Inventors: Mahdi Salmani Rahimi, Rahul Nallamothu, Samuel Franklin Pepose
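The matching loop this abstract describes can be sketched as follows. The update rule (an exponential moving average), the threshold value, the smoothing factor, and all names are assumptions for illustration; the abstract does not specify the machine-learning models or the update scheme.

```python
# Illustrative sketch: accumulate per-frame match scores into a running
# confidence and authenticate once it crosses a predetermined threshold.

THRESHOLD = 0.8  # assumed "predetermined threshold"
ALPHA = 0.7      # assumed smoothing factor for the moving average

def update_confidence(current: float, frame_score: float, alpha: float = ALPHA) -> float:
    """Blend the existing confidence with a new per-frame match score."""
    return (1 - alpha) * current + alpha * frame_score

def authenticate(frame_scores: list[float], threshold: float = THRESHOLD) -> bool:
    """Authenticate when the running confidence exceeds the threshold."""
    confidence = 0.0
    for score in frame_scores:
        confidence = update_confidence(confidence, score)
        if confidence > threshold:
            return True
    return False

print(authenticate([0.6, 0.9, 0.95]))  # True: later frames lift the confidence
print(authenticate([0.2, 0.3]))        # False: never crosses the threshold
```

In this sketch, additional frames can only be confirmed or rejected over time, mirroring the abstract's idea of updating the score as more appearance information arrives.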
  • Patent number: 11647147
    Abstract: In one embodiment, a method includes an intelligent communication device detecting that a person is visible to a camera of the device, determining a first biometric characteristic of the person discernable by the device, associating the first biometric characteristic with a user identifier unique to the person, determining, while the person is identifiable based on the first biometric characteristic, a second biometric characteristic of the person discernable by the device, and associating the second biometric characteristic with the user identifier.
    Type: Grant
    Filed: December 28, 2020
    Date of Patent: May 9, 2023
    Assignee: Meta Platforms, Inc.
    Inventors: Eric W. Hwang, Saurabh Mitra, Jeffrey Zhang, Alan Liu, Rahul Nallamothu, Samuel Franklin Pepose
  • Publication number: 20220210341
    Abstract: In one embodiment, a method includes an intelligent communication device detecting that a person is visible to a camera of the device, determining a first biometric characteristic of the person discernable by the device, associating the first biometric characteristic with a user identifier unique to the person, determining, while the person is identifiable based on the first biometric characteristic, a second biometric characteristic of the person discernable by the device, and associating the second biometric characteristic with the user identifier.
    Type: Application
    Filed: December 28, 2020
    Publication date: June 30, 2022
    Inventors: Eric W. Hwang, Saurabh Mitra, Jeffrey Zhang, Alan Liu, Rahul Nallamothu, Samuel Franklin Pepose
  • Patent number: 10979669
    Abstract: In one embodiment, a method includes accessing foreground visual data that comprises a set of coordinate points that correspond to a plurality of surface points of a person in an environment; generating a bounding box for the set of coordinate points, wherein the bounding box comprises every coordinate point in the set of coordinate points; providing instructions to collect background visual data for an area in the environment that is outside of the bounding box; and providing the foreground visual data and the background visual data to an intelligent director associated with a computing device.
    Type: Grant
    Filed: October 5, 2018
    Date of Patent: April 13, 2021
    Assignee: Facebook, Inc.
    Inventors: Jason Francis Harrison, Eric W. Hwang, Rahul Nallamothu, Shahid Razzaq
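The bounding-box step in this abstract is a standard computation: the axis-aligned box spanning the minimum and maximum coordinates of the point set, with everything outside it treated as background. The following is a minimal sketch; the 2-D tuples and function names are illustrative assumptions.

```python
# Illustrative sketch: a bounding box that comprises every coordinate point,
# plus a check for the background area outside of it.

def bounding_box(points: list[tuple[int, int]]) -> tuple[int, int, int, int]:
    """Axis-aligned box (x_min, y_min, x_max, y_max) containing every point."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

def outside_box(point: tuple[int, int], box: tuple[int, int, int, int]) -> bool:
    """True when a point lies outside the box, i.e. in the background area."""
    x, y = point
    x0, y0, x1, y1 = box
    return not (x0 <= x <= x1 and y0 <= y <= y1)

surface_points = [(2, 3), (5, 1), (4, 6)]
box = bounding_box(surface_points)
print(box)                       # (2, 1, 5, 6)
print(outside_box((0, 0), box))  # True: collect background data here
```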
  • Patent number: 10659731
    Abstract: In one embodiment, a method includes accessing input data from one or more different input sources. The input sources include: one or more cameras, one or more microphones, and a social graph maintained by a social-networking system. The method includes generating, based on the input data, a current descriptive model for a current audio-video communication session that comprises one or more descriptive characteristics about (1) an environment associated with the current audio-video communication session, (2) one or more people within the environment, or (3) one or more contextual elements associated with the current audio-video communication session. The method also includes generating one or more instructions for the current audio-video communication session that are based on the one or more descriptive characteristics; and sending the one or more instructions to a computing device associated with the one or more cameras and the one or more microphones.
    Type: Grant
    Filed: October 5, 2018
    Date of Patent: May 19, 2020
    Assignee: Facebook, Inc.
    Inventors: Jason Francis Harrison, Eric W. Hwang, Rahul Nallamothu, Shahid Razzaq
  • Patent number: 10523864
    Abstract: In one embodiment, a method includes identifying, from a set of coordinate points that correspond to a plurality of surface points of a person in an environment, a coordinate point that corresponds to a facial feature of the person; generating a facial structure for a face of the person, wherein the facial structure: covers a plurality of facial features of the person; and substantially matches a pre-determined facial structure; generating a body skeletal structure for the person, wherein the body skeletal structure substantially matches a predetermined body skeletal structure and substantially aligns with the generated facial structure in at least one dimension of a two-dimensional coordinate plane; and associating the generated body skeletal structure and facial structure with the person in the environment.
    Type: Grant
    Filed: October 5, 2018
    Date of Patent: December 31, 2019
    Assignee: Facebook, Inc.
    Inventors: Jason Francis Harrison, Eric W. Hwang, Rahul Nallamothu, Shahid Razzaq
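The abstract's requirement that the body skeletal structure "substantially aligns with the generated facial structure in at least one dimension of a two-dimensional coordinate plane" can be sketched as a simple tolerance check. The tolerance value and the choice of reference points are assumptions; the patent does not define "substantially."

```python
# Illustrative sketch: two structures substantially align when their
# coordinates agree within a tolerance along at least one axis.

def substantially_aligned(face_center: tuple[float, float],
                          skeleton_top: tuple[float, float],
                          tolerance: float = 0.05) -> bool:
    """Aligned if the x- or y-coordinates agree within the tolerance."""
    dx = abs(face_center[0] - skeleton_top[0])
    dy = abs(face_center[1] - skeleton_top[1])
    return dx <= tolerance or dy <= tolerance

print(substantially_aligned((0.50, 0.30), (0.52, 0.80)))  # True: x-axis aligns
print(substantially_aligned((0.50, 0.30), (0.70, 0.80)))  # False: neither axis
```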
  • Patent number: 10511808
    Abstract: In one embodiment, a method includes, at a first time during an audio-video communication session: (1) determining that a first participant is located in an environment; (2) locating a first body region of the first participant; (3) generating a first color histogram of the first body region that represents a first distribution of colors of the first body region. The method also includes, at a second time: (1) determining that a second participant is located in the environment; (2) locating a second body region of the second participant; (3) generating a second color histogram of the second body region that represents a second distribution of one or more colors of the second body region. The method also includes determining that the first and second color histograms are substantially alike, and based on the determination, determining that the first participant is the same as the second participant.
    Type: Grant
    Filed: October 5, 2018
    Date of Patent: December 17, 2019
    Assignee: Facebook, Inc.
    Inventors: Jason Francis Harrison, Eric W. Hwang, Rahul Nallamothu, Shahid Razzaq
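The re-identification idea in this abstract, comparing color histograms of two body regions to decide whether two detections are the same participant, can be sketched with a histogram-intersection test. The bin count, the single-channel simplification, and the similarity threshold are assumptions; the abstract only says the histograms must be "substantially alike."

```python
# Illustrative sketch: normalized color histograms compared by histogram
# intersection (sum of per-bin minima, 1.0 for identical distributions).

def color_histogram(pixels: list[int], bins: int = 4) -> list[float]:
    """Normalized histogram of single-channel intensities in [0, 256)."""
    hist = [0.0] * bins
    width = 256 // bins
    for value in pixels:
        hist[min(value // width, bins - 1)] += 1
    return [count / len(pixels) for count in hist]

def substantially_alike(h1: list[float], h2: list[float],
                        threshold: float = 0.8) -> bool:
    """True when the histogram intersection meets the assumed threshold."""
    return sum(min(a, b) for a, b in zip(h1, h2)) >= threshold

h1 = color_histogram([10, 20, 200, 210, 220, 30])   # first body region
h2 = color_histogram([15, 25, 205, 215, 225, 35])   # second body region
print(substantially_alike(h1, h2))  # True: treat as the same participant
```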
  • Publication number: 20190313013
    Abstract: In one embodiment, a method includes identifying, from a set of coordinate points that correspond to a plurality of surface points of a person in an environment, a coordinate point that corresponds to a facial feature of the person; generating a facial structure for a face of the person, wherein the facial structure: covers a plurality of facial features of the person; and substantially matches a pre-determined facial structure; generating a body skeletal structure for the person, wherein the body skeletal structure substantially matches a predetermined body skeletal structure and substantially aligns with the generated facial structure in at least one dimension of a two-dimensional coordinate plane; and associating the generated body skeletal structure and facial structure with the person in the environment.
    Type: Application
    Filed: October 5, 2018
    Publication date: October 10, 2019
    Inventors: Jason Francis Harrison, Eric W. Hwang, Rahul Nallamothu, Shahid Razzaq
  • Publication number: 20190313056
    Abstract: In one embodiment, a method includes, at a first time during an audio-video communication session: (1) determining that a first participant is located in an environment; (2) locating a first body region of the first participant; (3) generating a first color histogram of the first body region that represents a first distribution of colors of the first body region. The method also includes, at a second time: (1) determining that a second participant is located in the environment; (2) locating a second body region of the second participant; (3) generating a second color histogram of the second body region that represents a second distribution of one or more colors of the second body region. The method also includes determining that the first and second color histograms are substantially alike, and based on the determination, determining that the first participant is the same as the second participant.
    Type: Application
    Filed: October 5, 2018
    Publication date: October 10, 2019
    Inventors: Jason Francis Harrison, Eric W. Hwang, Rahul Nallamothu, Shahid Razzaq
  • Publication number: 20190313058
    Abstract: In one embodiment, a method includes accessing input data from one or more different input sources. The input sources include: one or more cameras, one or more microphones, and a social graph maintained by a social-networking system. The method includes generating, based on the input data, a current descriptive model for a current audio-video communication session that comprises one or more descriptive characteristics about (1) an environment associated with the current audio-video communication session, (2) one or more people within the environment, or (3) one or more contextual elements associated with the current audio-video communication session. The method also includes generating one or more instructions for the current audio-video communication session that are based on the one or more descriptive characteristics; and sending the one or more instructions to a computing device associated with the one or more cameras and the one or more microphones.
    Type: Application
    Filed: October 5, 2018
    Publication date: October 10, 2019
    Inventors: Jason Francis Harrison, Eric W. Hwang, Rahul Nallamothu, Shahid Razzaq
  • Publication number: 20190311480
    Abstract: In one embodiment, a method includes accessing foreground visual data that comprises a set of coordinate points that correspond to a plurality of surface points of a person in an environment; generating a bounding box for the set of coordinate points, wherein the bounding box comprises every coordinate point in the set of coordinate points; providing instructions to collect background visual data for an area in the environment that is outside of the bounding box; and providing the foreground visual data and the background visual data to an intelligent director associated with a computing device.
    Type: Application
    Filed: October 5, 2018
    Publication date: October 10, 2019
    Inventors: Jason Francis Harrison, Eric W. Hwang, Rahul Nallamothu, Shahid Razzaq