Patents by Inventor Eric W. Hwang

Eric W. Hwang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11647147
    Abstract: In one embodiment, a method includes an intelligent communication device detecting that a person is visible to a camera of the device, determining a first biometric characteristic of the person discernable by the device, associating the first biometric characteristic with a user identifier unique to the person, determining, while the person is identifiable based on the first biometric characteristic, a second biometric characteristic of the person discernable by the device, and associating the second biometric characteristic with the user identifier.
    Type: Grant
    Filed: December 28, 2020
    Date of Patent: May 9, 2023
    Assignee: Meta Platforms, Inc.
    Inventors: Eric W. Hwang, Saurabh Mitra, Jeffrey Zhang, Alan Liu, Rahul Nallamothu, Samuel Franklin Pepose
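A minimal sketch of the association flow described in the abstract above, not the patented implementation: it assumes biometric characteristics are represented as embedding vectors matched by cosine similarity, and the `BiometricRegistry`, `identify`, `enroll`, and `associate` names are hypothetical.

```python
# Illustrative sketch only: the abstract does not specify how biometric
# characteristics are represented; embeddings + cosine similarity are assumptions.
import uuid
import numpy as np

class BiometricRegistry:
    """Maps user identifiers to the biometric characteristics associated with them."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.records = {}          # user_id -> list of embedding vectors

    def identify(self, embedding):
        """Return the user_id whose stored characteristics best match, if any."""
        best_id, best_score = None, self.threshold
        for user_id, embeddings in self.records.items():
            for stored in embeddings:
                score = float(np.dot(embedding, stored) /
                              (np.linalg.norm(embedding) * np.linalg.norm(stored)))
                if score > best_score:
                    best_id, best_score = user_id, score
        return best_id

    def enroll(self, first_characteristic):
        """Associate a first biometric characteristic with a new user identifier."""
        user_id = str(uuid.uuid4())
        self.records[user_id] = [first_characteristic]
        return user_id

    def associate(self, user_id, second_characteristic):
        """While the person is still identifiable by the first characteristic,
        attach a second characteristic to the same user identifier."""
        self.records[user_id].append(second_characteristic)
```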
  • Patent number: 11636571
    Abstract: A video system processes video frames from a wide angle camera to dewarp the video frames in a manner that preserves conformity of an object of interest. A crop region of a video frame corresponding to the object of interest is determined. An input parameter to a dewarping function is generated based on the detected crop region. The dewarping function is applied to the crop region using the input parameter to generate a dewarped video frame and the dewarped video frame is outputted. The input parameter may be generated in a manner that causes the dewarped video frame to have higher conformity and lower distortion in the region around the object of interest than in a region distant from the object of interest.
    Type: Grant
    Filed: July 2, 2021
    Date of Patent: April 25, 2023
    Assignee: Meta Platforms, Inc.
    Inventors: Josiah Vincent Vivona, Ted Taejin Kim, Eric W. Hwang
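The abstract above (and the earlier grant 11087435 listed further down) turns a detected crop region into an input parameter for a dewarping function. The sketch below is only one plausible reading, assuming an equidistant (f-theta) fisheye model, a virtual perspective camera aimed at the crop center, and nearest-neighbour resampling; none of these choices come from the patent.

```python
# Illustrative sketch only: an equidistant ("f-theta") fisheye model and the
# re-projection below are assumptions, not the patented dewarping function.
import numpy as np

def _rotation_aligning_z_to(t):
    """Rotation matrix taking the +z axis onto unit vector t (Rodrigues formula)."""
    z = np.array([0.0, 0.0, 1.0])
    v, c = np.cross(z, t), float(np.dot(z, t))
    s = np.linalg.norm(v)
    if s < 1e-8:
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + K + K @ K * ((1.0 - c) / s ** 2)

def dewarp_crop(frame, crop, out_size=(640, 480), fov_deg=180.0):
    """Re-project the crop region (x, y, w, h) of a fisheye frame into a
    perspective view centered on the object of interest, so conformity is
    highest near that object.  The view direction and output field of view
    are the 'input parameters' derived from the detected crop region."""
    H, W = frame.shape[:2]
    f = min(W, H) / np.radians(fov_deg)              # equidistant focal length (px/rad)
    cx, cy = crop[0] + crop[2] / 2.0, crop[1] + crop[3] / 2.0
    dx, dy = cx - W / 2.0, cy - H / 2.0
    theta_c, phi_c = np.hypot(dx, dy) / f, np.arctan2(dy, dx)
    out_fov = float(np.clip(max(crop[2], crop[3]) / f,
                            np.radians(20), np.radians(120)))   # crop-sized output FOV

    W_out, H_out = out_size
    f_out = (W_out / 2.0) / np.tan(out_fov / 2.0)
    u, v = np.meshgrid(np.arange(W_out) - W_out / 2.0, np.arange(H_out) - H_out / 2.0)
    rays = np.stack([u / f_out, v / f_out, np.ones_like(u)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

    # Point the virtual perspective camera at the crop center, then map its rays
    # back into fisheye pixel coordinates (r = f * theta for the assumed model).
    target = np.array([np.sin(theta_c) * np.cos(phi_c),
                       np.sin(theta_c) * np.sin(phi_c),
                       np.cos(theta_c)])
    rays = rays @ _rotation_aligning_z_to(target).T
    theta = np.arccos(np.clip(rays[..., 2], -1.0, 1.0))
    phi = np.arctan2(rays[..., 1], rays[..., 0])
    xi = np.clip((W / 2.0 + f * theta * np.cos(phi)).round().astype(int), 0, W - 1)
    yi = np.clip((H / 2.0 + f * theta * np.sin(phi)).round().astype(int), 0, H - 1)
    return frame[yi, xi]                              # nearest-neighbour resample
```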
  • Publication number: 20220210341
    Abstract: In one embodiment, a method includes an intelligent communication device detecting that a person is visible to a camera of the device, determining a first biometric characteristic of the person discernable by the device, associating the first biometric characteristic with a user identifier unique to the person, determining, while the person is identifiable based on the first biometric characteristic, a second biometric characteristic of the person discernable by the device, and associating the second biometric characteristic with the user identifier.
    Type: Application
    Filed: December 28, 2020
    Publication date: June 30, 2022
    Inventors: Eric W. Hwang, Saurabh Mitra, Jeffrey Zhang, Alan Liu, Rahul Nallamothu, Samuel Franklin Pepose
  • Patent number: 11240284
    Abstract: The disclosed computer-implemented method may include (i) receiving a video stream for encoding, (ii) determining that the video stream is associated with an application, (iii) analyzing the video stream to label one or more regions of a frame within the video stream with a semantic category, (iv) determining, based at least in part on the application with which the video stream is associated, a prioritization of the semantic category, and (v) allocating encoding resources to one or more portions of the frame that comprise at least a part of the one or more regions of the frame based at least in part on the prioritization of the semantic category. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: May 28, 2020
    Date of Patent: February 1, 2022
    Assignee: Facebook, Inc.
    Inventors: Jose M. Gonzalez, Eric W. Hwang
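A compact sketch of the kind of allocation the abstract describes, under assumptions the abstract does not make: semantic labels arrive as per-region rectangles, per-application priorities live in a lookup table, and "encoding resources" are expressed as QP offsets. The `APP_PRIORITIES` table and `allocate_qp_offsets` helper are illustrative, not the claimed method.

```python
# Illustrative sketch only: priority tables, QP offsets, and the region format
# are assumptions; the abstract does not specify how resources are allocated.

# Per-application prioritization of semantic categories (higher = more important).
APP_PRIORITIES = {
    "video_call":   {"face": 3, "screen_share": 2, "background": 0},
    "screen_share": {"screen_share": 3, "face": 1, "background": 0},
}

def allocate_qp_offsets(regions, application, base_qp=32, step=4):
    """Map labelled frame regions to encoder QP values: higher-priority
    semantic categories get a lower QP (more bits), lower-priority ones a
    higher QP (fewer bits).

    `regions` is a list of (x, y, w, h, label) tuples produced by the
    semantic labelling stage."""
    priorities = APP_PRIORITIES.get(application, {})
    allocation = []
    for (x, y, w, h, label) in regions:
        priority = priorities.get(label, 0)
        qp = max(base_qp - step * priority, 0)     # more important => lower QP
        allocation.append({"rect": (x, y, w, h), "label": label, "qp": qp})
    return allocation

# Example: in a video call, the face region is encoded at higher quality.
frame_regions = [(120, 80, 200, 240, "face"), (0, 0, 1280, 720, "background")]
print(allocate_qp_offsets(frame_regions, "video_call"))
```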
  • Publication number: 20220021668
    Abstract: A computing device determines availability of device features based on confidence levels associated with predicted identities of an individual within a recognition range of the device. The computing device determines the one or more confidence levels based on captured recognition information including biometric data describing the individual. The computing device determines whether a given action associated with a device feature is available to an individual based on whether the confidence level satisfies authorization criteria corresponding to the action.
    Type: Application
    Filed: July 20, 2020
    Publication date: January 20, 2022
    Inventors: Nikhil Gautam, Eric W. Hwang, Arup Kanjilal
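A minimal sketch of confidence-gated feature availability as described above; the action names, numeric thresholds, and the `predicted_identity` stand-in for the recognizer are all assumptions.

```python
# Illustrative sketch only: the action names, confidence thresholds, and
# recognition pipeline below are assumptions, not the claimed implementation.

# Authorization criteria: minimum identity confidence required per action.
ACTION_CRITERIA = {
    "view_calendar":   0.60,
    "place_call":      0.80,
    "change_settings": 0.95,
}

def predicted_identity(recognition_info):
    """Stand-in for the recognizer: returns (user_id, confidence) from captured
    biometric data such as face or voice features."""
    return recognition_info.get("user_id"), recognition_info.get("confidence", 0.0)

def is_action_available(action, recognition_info):
    """An action tied to a device feature is available only if the confidence
    in the predicted identity satisfies the action's authorization criteria."""
    required = ACTION_CRITERIA.get(action, 1.0)    # unknown actions require certainty
    _, confidence = predicted_identity(recognition_info)
    return confidence >= required

print(is_action_available("place_call", {"user_id": "alice", "confidence": 0.85}))       # True
print(is_action_available("change_settings", {"user_id": "alice", "confidence": 0.85}))  # False
```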
  • Patent number: 11087435
    Abstract: A video system processes video frames from a wide angle camera to dewarp the video frames in a manner that preserves conformity of an object of interest. A crop region of a video frame corresponding to the object of interest is determined. An input parameter to a dewarping function is generated based on the detected crop region. The dewarping function is applied to the crop region using the input parameter to generate a dewarped video frame and the dewarped video frame is outputted. The input parameter may be generated in a manner that causes the dewarped video frame to have higher conformity and lower distortion in the region around the object of interest than in a region distant from the object of interest.
    Type: Grant
    Filed: November 4, 2019
    Date of Patent: August 10, 2021
    Assignee: Facebook, Inc.
    Inventors: Josiah Vincent Vivona, Ted Taejin Kim, Eric W. Hwang
  • Patent number: 10979669
    Abstract: In one embodiment, a method includes accessing foreground visual data that comprises a set of coordinate points that correspond to a plurality of surface points of a person in an environment; generating a bounding box for the set of coordinate points, wherein the bounding box comprises every coordinate point in the set of coordinate points; providing instructions to collect background visual data for an area in the environment that is outside of the bounding box; and providing the foreground visual data and the background visual data to an intelligent director associated with the computing device.
    Type: Grant
    Filed: October 5, 2018
    Date of Patent: April 13, 2021
    Assignee: Facebook, Inc.
    Inventors: Jason Francis Harrison, Eric W. Hwang, Rahul Nallamothu, Shahid Razzaq
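A short sketch of the two steps named in the abstract: computing a bounding box that contains every foreground coordinate point, and marking the area outside it where background visual data is collected. The (x, y) point format and the mask representation are assumptions.

```python
# Illustrative sketch only: point format and mask representation are assumptions.
import numpy as np

def bounding_box(points):
    """Axis-aligned box containing every (x, y) coordinate point of the person."""
    pts = np.asarray(points, dtype=float)
    x_min, y_min = pts.min(axis=0)
    x_max, y_max = pts.max(axis=0)
    return x_min, y_min, x_max, y_max

def background_mask(frame_shape, box):
    """Boolean mask of the area outside the bounding box, i.e. where the
    device is instructed to collect background visual data."""
    h, w = frame_shape[:2]
    x_min, y_min, x_max, y_max = (int(round(v)) for v in box)
    mask = np.ones((h, w), dtype=bool)
    mask[max(y_min, 0):min(y_max + 1, h), max(x_min, 0):min(x_max + 1, w)] = False
    return mask

surface_points = [(410, 120), (395, 300), (470, 512), (360, 505)]
box = bounding_box(surface_points)
mask = background_mask((720, 1280), box)   # True where background data is gathered
```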
  • Patent number: 10915776
    Abstract: Various client devices include displays and one or more image capture devices configured to capture video data. Different users of an online system may authorize client devices to exchange information captured by their respective image capture devices. A user viewing video data captured by another user's client device identifies an object of interest in the video data to the other user's client device. The other user's client device modifies captured video data so a focal point of the captured video data is the object of interest and so the object of interest is magnified in the captured video data. Subsequently, the modified video data is transmitted to the client device of the user viewing the captured video data.
    Type: Grant
    Filed: October 5, 2018
    Date of Patent: February 9, 2021
    Assignee: Facebook, Inc.
    Inventors: Eric W. Hwang, Jason Francis Harrison, Timo Juhani Ahonen
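A simple crop-and-zoom sketch of the re-framing the abstract describes; the magnification factor and nearest-neighbour upscaling are placeholders for whatever modification the claimed device actually applies.

```python
# Illustrative sketch only: the crop-and-zoom below stands in for whatever
# modification the claimed device applies; the zoom factor is an assumption.
import numpy as np

def magnify_object(frame, object_box, zoom=2.0):
    """Re-frame captured video so the object of interest becomes the focal
    point and appears magnified, by cropping around it and enlarging the crop
    back to the original frame size (nearest-neighbour resampling)."""
    h, w = frame.shape[:2]
    x, y, bw, bh = object_box
    cx, cy = x + bw / 2.0, y + bh / 2.0
    crop_w, crop_h = w / zoom, h / zoom
    x0 = int(np.clip(cx - crop_w / 2, 0, w - crop_w))
    y0 = int(np.clip(cy - crop_h / 2, 0, h - crop_h))
    crop = frame[y0:y0 + int(crop_h), x0:x0 + int(crop_w)]
    # Upscale the crop to the original resolution.
    yi = (np.arange(h) * crop.shape[0] / h).astype(int)
    xi = (np.arange(w) * crop.shape[1] / w).astype(int)
    return crop[yi][:, xi]
```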
  • Patent number: 10873697
    Abstract: Multiple users communicate over a network via client devices that include one or more cameras and a display to enable video messaging. At least one of the client devices modifies regions of video data captured by the client device's camera to more prominently identify the people within the video data. To identify a person, the client device disambiguates between actual people and static objects that may appear like people. The client device uses pose models to identify bounding boxes and applies a motion model to determine if a bounding box may include a person based on an amount of movement within the bounding box. If a threshold amount of movement is detected in a bounding box, the client device obtains a higher resolution portion of the scene including the bounding box and classifies whether the bounding box contains a person based on movement within the higher resolution video.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: December 22, 2020
    Assignee: Facebook, Inc.
    Inventors: Anshul Kumar Jain, Abhinav Garlapati, Eric W. Hwang
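A sketch of the motion-gated confirmation step, assuming frame differencing as the motion model and an injected `classify_person` callable as the classifier; the thresholds and the 2x resolution ratio are illustrative.

```python
# Illustrative sketch only: frame differencing as the motion model and an
# injected `classify_person` callable are assumptions about unspecified details.
import numpy as np

def motion_in_box(prev_frame, frame, box):
    """Fraction of pixels inside the bounding box that changed between two
    consecutive color frames."""
    x, y, w, h = box
    prev_roi = prev_frame[y:y + h, x:x + w].astype(np.int16)
    roi = frame[y:y + h, x:x + w].astype(np.int16)
    changed = np.abs(roi - prev_roi).max(axis=-1) > 25      # per-pixel change test
    return float(changed.mean())

def confirm_people(prev_frame, frame, hires_frame, candidate_boxes,
                   classify_person, motion_threshold=0.05, scale=2):
    """Keep only candidate boxes (from the pose model) that both show enough
    motion and are classified as a person on the higher-resolution crop."""
    confirmed = []
    for (x, y, w, h) in candidate_boxes:
        if motion_in_box(prev_frame, frame, (x, y, w, h)) < motion_threshold:
            continue                                        # static object, e.g. a poster
        hires_crop = hires_frame[y * scale:(y + h) * scale, x * scale:(x + w) * scale]
        if classify_person(hires_crop):
            confirmed.append((x, y, w, h))
    return confirmed
```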
  • Patent number: 10848687
    Abstract: Various client devices include displays and one or more image capture devices configured to capture video data. Different users of an online system may authorize client devices to exchange information captured by their respective image capture devices. Additionally, a sending client device captures and transmits video data to a receiving client, while receiving one or more video presentation settings of the receiving client device. The sending client device applies one or more models to the captured video data and compares output from the models to the video presentation settings of the receiving client device. Based on the comparison, the sending client device provides suggested modifications to one or more video presentation settings to the receiving client device. For example, the sending client device provides a suggestion to reorient a display device of the receiving client device.
    Type: Grant
    Filed: October 5, 2018
    Date of Patent: November 24, 2020
    Assignee: Facebook, Inc.
    Inventors: Jason Francis Harrison, Timo Juhani Ahonen, Eric W. Hwang, Belmer Perrella Garcia Negrillo
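A sketch of the comparison-and-suggestion step; the particular settings compared (subject aspect ratio versus display orientation, scene brightness versus display brightness) and the suggestion strings are assumptions, since the abstract only gives display reorientation as an example.

```python
# Illustrative sketch only: which settings are compared and how suggestions are
# phrased are assumptions; the abstract names reorientation as one example.

def suggest_presentation_changes(model_output, receiver_settings):
    """Compare what the sender's models see in the captured video with the
    receiver's reported presentation settings and return suggested changes."""
    suggestions = []

    # Example rule: a tall subject framing suggests a portrait display.
    subject_aspect = model_output.get("subject_aspect_ratio", 1.0)   # width / height
    orientation = receiver_settings.get("display_orientation", "landscape")
    if subject_aspect < 0.8 and orientation == "landscape":
        suggestions.append("reorient display to portrait")
    elif subject_aspect > 1.25 and orientation == "portrait":
        suggestions.append("reorient display to landscape")

    # Example rule: a dark scene suggests raising the receiver's display brightness.
    if model_output.get("scene_brightness", 1.0) < 0.3 and \
            receiver_settings.get("brightness", 1.0) < 0.8:
        suggestions.append("increase display brightness")
    return suggestions

print(suggest_presentation_changes({"subject_aspect_ratio": 0.6},
                                   {"display_orientation": "landscape"}))
```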
  • Patent number: 10838689
    Abstract: In one embodiment, a method includes receiving audio input during an audio-video communication session. The audio input is generated by a first sound source within an environment and a second sound source within the environment. The method includes receiving video input depicting the first sound source and the second sound source in the environment. The method includes identifying the first sound source and the second sound source using the audio input and the video input. The method includes predicting a first engagement metric for the first sound source and a second engagement metric for the second sound source based on the identifying. The method includes processing the audio input to generate an audio output signal based on a comparison of the first engagement metric and the second engagement metric. The method includes providing the audio output signal to a computing device associated with the audio-video communication session.
    Type: Grant
    Filed: September 19, 2019
    Date of Patent: November 17, 2020
    Assignee: Facebook, Inc.
    Inventors: Jason Francis Harrison, Shahid Razzaq, Eric W. Hwang
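A sketch of engagement-weighted audio processing; it treats the engagement metrics as already-computed scores and uses a simple proportional gain rule, which is an assumption about the unspecified processing step.

```python
# Illustrative sketch only: the proportional gain rule stands in for the
# unspecified processing; engagement metrics are assumed to be precomputed scores.

def mix_by_engagement(source_signals, engagement):
    """Weight each identified sound source by its predicted engagement metric,
    amplifying the more engaging source relative to the other(s).

    `source_signals` maps source ids to per-sample lists; `engagement` maps
    the same ids to non-negative scores."""
    total = sum(engagement.values()) or 1.0
    gains = {sid: score / total for sid, score in engagement.items()}
    n = max(len(sig) for sig in source_signals.values())
    output = [0.0] * n
    for sid, signal in source_signals.items():
        for i, sample in enumerate(signal):
            output[i] += gains[sid] * sample
    return output

# The person speaking to the camera (higher engagement) dominates the output signal.
out = mix_by_engagement({"person": [0.2, 0.4, 0.1], "television": [0.5, 0.5, 0.5]},
                        {"person": 0.9, "television": 0.2})
```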
  • Patent number: 10757347
    Abstract: A client device receives video data and displays the video data via a display device. An overlay including content other than the video data is also displayed in a specific area of the display device and at least partially occludes the video data displayed within the specific area of the display device. The client device identifies coordinates of regions of interest within frames of the video data. When the client device determines that at least a threshold amount of a region of interest within the video data is displayed within the specific area of the display device, where the overlay is displayed, for at least a threshold amount of time, the client device increases a transparency of the overlay, repositions the overlay, or otherwise modifies the overlay to prevent the overlay from occluding the region of interest.
    Type: Grant
    Filed: May 8, 2019
    Date of Patent: August 25, 2020
    Assignee: Facebook, Inc.
    Inventors: Lawrence Alan Corwin, Eric W. Hwang, David Kaufman, Alan Liu
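A sketch of the occlusion test and remedies described above; the rectangle format, the area and time thresholds, and the specific remedies (lower alpha, then nudge the overlay downward) are assumptions.

```python
# Illustrative sketch only: rectangle format, thresholds, and the chosen
# remedies (transparency, then repositioning) are assumptions.

def overlap_fraction(roi, overlay):
    """Fraction of the region of interest covered by the overlay rectangle."""
    ax, ay, aw, ah = roi
    bx, by, bw, bh = overlay
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    return (ix * iy) / float(max(aw * ah, 1))

def update_overlay(roi, overlay, occluded_seconds,
                   area_threshold=0.3, time_threshold=2.0):
    """If at least `area_threshold` of the region of interest has been occluded
    for at least `time_threshold` seconds, make the overlay more transparent
    and nudge it out of the way; otherwise leave it unchanged."""
    if overlap_fraction(roi, overlay) >= area_threshold and \
            occluded_seconds >= time_threshold:
        return {"alpha": 0.3, "offset": (0, overlay[3])}    # fade and move down
    return {"alpha": 1.0, "offset": (0, 0)}
```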
  • Patent number: 10659731
    Abstract: In one embodiment, a method includes accessing input data from one or more different input sources. The input sources include: one or more cameras, one or more microphones, and a social graph maintained by a social-networking system. The method includes generating, based on the input data, a current descriptive model for a current audio-video communication session that comprises one or more descriptive characteristics about (1) an environment associated with the current audio-video communication session, (2) one or more people within the environment, or (3) one or more contextual elements associated with the current audio-video communication session. The method also includes generating one or more instructions for the current audio-video communication session that are based on the one or more descriptive characteristics; and sending the one or more instructions to a computing device associated with the one or more cameras and the one or more microphones.
    Type: Grant
    Filed: October 5, 2018
    Date of Patent: May 19, 2020
    Assignee: Facebook, Inc.
    Inventors: Jason Francis Harrison, Eric W. Hwang, Rahul Nallamothu, Shahid Razzaq
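A sketch of turning a descriptive model into device instructions; the `DescriptiveModel` fields and the rules that map them to camera and microphone instructions are hypothetical examples of the kinds of characteristics the abstract lists.

```python
# Illustrative sketch only: the characteristics and the rules that turn them
# into camera/microphone instructions are assumptions about unspecified details.
from dataclasses import dataclass

@dataclass
class DescriptiveModel:
    """Descriptive characteristics of the current audio-video session."""
    room_size: str = "small"                  # environment
    people_count: int = 1                     # people within the environment
    active_speaker_box: tuple = (0, 0, 0, 0)  # contextual element: who is talking

def generate_instructions(model: DescriptiveModel):
    """Turn descriptive characteristics into instructions for the device that
    owns the cameras and microphones."""
    instructions = []
    if model.people_count > 1:
        instructions.append({"camera": "widen_field_of_view"})
    if model.active_speaker_box != (0, 0, 0, 0):
        instructions.append({"camera": "center_on", "target": model.active_speaker_box})
        instructions.append({"microphone": "beamform_toward", "target": model.active_speaker_box})
    if model.room_size == "large":
        instructions.append({"microphone": "increase_gain"})
    return instructions

print(generate_instructions(DescriptiveModel(people_count=2,
                                             active_speaker_box=(320, 180, 80, 120))))
```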
  • Publication number: 20200112690
    Abstract: Various client devices include displays and one or more image capture devices configured to capture video data. Different users of an online system may authorize client devices to exchange information captured by their respective image capture devices. Additionally, a sending client device captures and transmits video data to a receiving client, while receiving one or more video presentation settings of the receiving client device. The sending client device applies one or more models to the captured video data and compares output from the models to the video presentation settings of the receiving client device. Based on the comparison, the sending client device provides suggested modifications to one or more video presentation settings to the receiving client device. For example, the sending client device provides a suggestion to reorient a display device of the receiving client device.
    Type: Application
    Filed: October 5, 2018
    Publication date: April 9, 2020
    Inventors: Jason Francis Harrison, Timo Juhani Ahonen, Eric W. Hwang, Belmer Perrella Garcia Negrillo
  • Publication number: 20200110958
    Abstract: Various client devices include displays and one or more image capture devices configured to capture video data. Different users of an online system may authorize client devices to exchange information captured by their respective image capture devices. A user viewing video data captured by another user's client device identifies an object of interest in the video data to the other user's client device. The other user's client device modifies captured video data so a focal point of the captured video data is the object of interest and so the object of interest is magnified in the captured video data. Subsequently, the modified video data is transmitted to the client device of the user viewing the captured video data.
    Type: Application
    Filed: October 5, 2018
    Publication date: April 9, 2020
    Inventors: Eric W. Hwang, Jason Francis Harrison, Timo Juhani Ahonen
  • Publication number: 20200050420
    Abstract: In one embodiment, a method includes receiving audio input data from a microphone array of at least two microphones. The audio input data is generated by a first sound source at a first location and a second sound source at a second location. The method also includes calculating a first engagement metric for the first sound source and a second engagement metric for the second sound source. The first engagement metric approximates an interest level of a receiving user for the first sound source, and the second engagement metric approximates an interest level from the receiving user for the second sound source. The method also includes determining that the first engagement metric is greater than the second engagement metric, and processing the audio input data to generate an audio output signal. The audio output signal may amplify sound generated by the first sound source relative to the second sound source.
    Type: Application
    Filed: September 19, 2019
    Publication date: February 13, 2020
    Inventors: Jason Francis Harrison, Shahid Razzaq, Eric W. Hwang
  • Patent number: 10523864
    Abstract: In one embodiment, a method includes identifying, from a set of coordinate points that correspond to a plurality of surface points of a person in an environment, a coordinate point that corresponds to a facial feature of the person; generating a facial structure for a face of the person, wherein the facial structure: covers a plurality of facial features of the person; and substantially matches a pre-determined facial structure; generating a body skeletal structure for the person, wherein the body skeletal structure substantially matches a predetermined body skeletal structure and substantially aligns with the generated facial structure in at least one dimension of a two-dimensional coordinate plane; and associating the generated body skeletal structure and facial structure with the person in the environment.
    Type: Grant
    Filed: October 5, 2018
    Date of Patent: December 31, 2019
    Assignee: Facebook, Inc.
    Inventors: Jason Francis Harrison, Eric W. Hwang, Rahul Nallamothu, Shahid Razzaq
  • Patent number: 10511808
    Abstract: In one embodiment, a method includes, at a first time during an audio-video communication session: (1) determining that a first participant is located in an environment; (2) locating a first body region of the first participant; (3) generating a first color histogram of the first body region that represents a first distribution of colors of the first body region. The method also includes, at a second time: (1) determining that a second participant is located in the environment; (2) locating a second body region of the second participant; (3) generating a second color histogram of the second body region that represents a second distribution of one or more colors of the second body region. The method also includes determining that the first and second color histograms are substantially alike, and based on the determination, determining that the first participant is the same as the second participant.
    Type: Grant
    Filed: October 5, 2018
    Date of Patent: December 17, 2019
    Assignee: Facebook, Inc.
    Inventors: Jason Francis Harrison, Eric W. Hwang, Rahul Nallamothu, Shahid Razzaq
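A sketch of the color-histogram comparison; the 8-bins-per-channel histograms, histogram-intersection similarity, and 0.8 threshold are assumptions standing in for the abstract's "substantially alike" test.

```python
# Illustrative sketch only: 8-bins-per-channel histograms and a histogram-
# intersection test are assumptions; the abstract only requires that the two
# histograms be "substantially alike".
import numpy as np

def color_histogram(body_region, bins=8):
    """Normalized per-channel color histogram of a cropped body region (H, W, 3)."""
    hist = [np.histogram(body_region[..., c], bins=bins, range=(0, 256))[0]
            for c in range(3)]
    hist = np.concatenate(hist).astype(float)
    return hist / hist.sum()

def same_participant(region_at_t1, region_at_t2, threshold=0.8):
    """Decide whether the body regions seen at two times belong to the same
    participant by comparing their color distributions (histogram intersection)."""
    h1, h2 = color_histogram(region_at_t1), color_histogram(region_at_t2)
    similarity = np.minimum(h1, h2).sum()      # 1.0 for identical distributions
    return similarity >= threshold
```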
  • Patent number: 10462422
    Abstract: In one embodiment, a method includes receiving audio input data from a microphone array of at least two microphones. The audio input data is generated by a first sound source at a first location and a second sound source at a second location. The method also includes calculating a first engagement metric for the first sound source and a second engagement metric for the second sound source. The first engagement metric approximates an interest level of a receiving user for the first sound source, and the second engagement metric approximates an interest level from the receiving user for the second sound source. The method also includes determining that the first engagement metric is greater than the second engagement metric, and processing the audio input data to generate an audio output signal. The audio output signal may amplify sound generated by the first sound source relative to the second sound source.
    Type: Grant
    Filed: April 9, 2018
    Date of Patent: October 29, 2019
    Assignee: Facebook, Inc.
    Inventors: Jason Francis Harrison, Shahid Razzaq, Eric W. Hwang
  • Publication number: 20190313054
    Abstract: In one embodiment, a method includes receiving audio input data from a microphone array of at least two microphones. The audio input data is generated by a first sound source at a first location and a second sound source at a second location. The method also includes calculating a first engagement metric for the first sound source and a second engagement metric for the second sound source. The first engagement metric approximates an interest level of a receiving user for the first sound source, and the second engagement metric approximates an interest level from the receiving user for the second sound source. The method also includes determining that the first engagement metric is greater than the second engagement metric, and processing the audio input data to generate an audio output signal. The audio output signal may amplify sound generated by the first sound source relative to the second sound source.
    Type: Application
    Filed: April 9, 2018
    Publication date: October 10, 2019
    Inventors: Jason Francis Harrison, Shahid Razzaq, Eric W. Hwang