Patents by Inventor Congxi Lu

Congxi Lu has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230402052
Abstract: An audio signal processing method is disclosed. The method comprises: providing an input audio signal comprising a plurality of input data frames offset from each other by a predetermined frame shift, each input data frame having a predetermined frame length; performing a first windowing process on the plurality of input data frames in sequence with a first window function; performing predetermined signal processing on the input audio signal after the first windowing process, and generating an output audio signal; wherein the output audio signal has a plurality of output data frames each having the predetermined frame length corresponding to the plurality of input data frames of the input audio signal; performing a second windowing process on the plurality of output data frames in sequence with a second window function; and outputting the plurality of output data frames after the second windowing process by superimposing the plurality of output data frames with the predetermined frame shift.
    Type: Application
    Filed: October 8, 2021
    Publication date: December 14, 2023
    Inventors: Congxi LU, Linkai LI, Yufan YUAN, Hongcheng SUN
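    The two-pass windowing described in this abstract is the classic analysis/synthesis overlap-add pattern. A minimal sketch follows, assuming square-root Hann windows for both passes and a 50% frame shift (choices not specified by the abstract); the processing step is an identity placeholder, so interior samples reconstruct exactly:

    ```python
    import math

    FRAME_LEN = 8       # predetermined frame length (illustrative)
    FRAME_SHIFT = 4     # predetermined frame shift (50% overlap)

    # Square-root Hann window used for both the first (analysis) and second
    # (synthesis) windowing passes; their product sums to 1 across
    # overlapping frames (the COLA condition), away from the signal edges.
    window = [math.sqrt(0.5 * (1 - math.cos(2 * math.pi * n / FRAME_LEN)))
              for n in range(FRAME_LEN)]

    def split_frames(signal):
        """Slice the input signal into overlapping input data frames."""
        return [signal[i:i + FRAME_LEN]
                for i in range(0, len(signal) - FRAME_LEN + 1, FRAME_SHIFT)]

    def process(frame):
        """Placeholder for the 'predetermined signal processing' step."""
        return frame  # identity, so we can check reconstruction below

    def overlap_add(frames):
        """Apply the second window to each output frame and superimpose
        the frames at the predetermined frame shift."""
        out = [0.0] * ((len(frames) - 1) * FRAME_SHIFT + FRAME_LEN)
        for k, frame in enumerate(frames):
            for n in range(FRAME_LEN):
                out[k * FRAME_SHIFT + n] += frame[n] * window[n]
        return out

    signal = [math.sin(0.3 * t) for t in range(64)]
    framed = [[s * w for s, w in zip(f, window)] for f in split_frames(signal)]
    output = overlap_add([process(f) for f in framed])
    ```

    With these windows, `output` matches `signal` except within one frame shift of either end, where fewer frames overlap.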
  • Publication number: 20220383558
    Abstract: In one embodiment, a method includes receiving an indication to apply a dynamic mask to an input video, wherein the dynamic mask comprises a plurality of graphical features according to a plurality of mask effects, and wherein the dynamic mask is applied based on one or more user selections specifying one or more of the mask effects, identifying a first user in an input image, and generating an output video comprising the graphical features rendered using the mask effects.
    Type: Application
    Filed: August 8, 2022
    Publication date: December 1, 2022
    Inventors: Maria Luz Caballero, Molly Jane Fowler, Congxi Lu, Charles Joseph Hodgkins, Daniel Moreno Cuellar
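    The dynamic-mask flow above can be sketched as a user selection assembling graphical features that are then rendered onto each frame. All effect names and parameters here are hypothetical, purely for illustration:

    ```python
    # Hypothetical catalog of mask effects; names/params are illustrative.
    EFFECT_CATALOG = {
        "sparkle": {"density": 0.5},
        "color_shift": {"hue_degrees": 30},
    }

    def build_dynamic_mask(selected_effects):
        """Assemble the dynamic mask from the user's selected effects."""
        return [{"effect": name, **EFFECT_CATALOG[name]}
                for name in selected_effects if name in EFFECT_CATALOG]

    def render_video(frames, mask):
        """Attach the mask's graphical features to each output frame."""
        return [{"frame": f, "graphics": mask} for f in frames]

    mask = build_dynamic_mask(["sparkle", "color_shift"])
    output = render_video(["f0", "f1"], mask)
    ```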
  • Patent number: 11443460
    Abstract: In one embodiment, a method includes identifying a first user in an input image, accessing social data of the first user in the input image, where social data comprises information from a social graph of an online social network, selecting, based on social data of the first user in the input image, a mask from a set of masks, where the mask specifies one or more mask effects, and for each of the input images, applying the mask to the input image. The set of masks may comprise masks previously selected by friends of the first user within the online social network. The selected mask may be selected from a lookup table that maps the social data to the selected mask.
    Type: Grant
    Filed: March 5, 2020
    Date of Patent: September 13, 2022
    Assignee: Meta Platforms, Inc.
    Inventors: Maria Luz Caballero, Molly Jane Fowler, Congxi Lu, Charles Joseph Hodgkins, Daniel Alberto Cuellar
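    The selection logic here is a lookup table from social data to a mask, with preference for masks friends have already used. A minimal sketch, with entirely hypothetical table contents:

    ```python
    # Hypothetical lookup from social-graph data to a mask; keys are
    # illustrative, not from the patent.
    MASK_LOOKUP = {
        "birthday": "party_hat_mask",
        "new_job": "confetti_mask",
    }
    FRIEND_MASKS = {"party_hat_mask", "dog_ears_mask"}  # friends' past picks

    def select_mask(social_data, default="plain_mask"):
        """Pick a mask via the lookup table, preferring masks previously
        selected by the user's friends in the online social network."""
        events = social_data.get("life_events", [])
        for event in events:
            mask = MASK_LOOKUP.get(event)
            if mask and mask in FRIEND_MASKS:
                return mask
        for event in events:
            if event in MASK_LOOKUP:
                return MASK_LOOKUP[event]
        return default
    ```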
  • Patent number: 11270688
    Abstract: A deep neural network based audio processing method is provided. The method includes: obtaining a deep neural network based speech extraction model; receiving an audio input object having a speech portion and a non-speech portion, wherein the audio input object includes one or more audio data frames each having a set of audio data samples sampled at a predetermined sampling interval and represented in time domain data format; obtaining a user audiogram and a set of user gain compensation coefficients associated with the user audiogram; and inputting the audio input object and the set of user gain compensation coefficients into the trained speech extraction model to obtain an audio output result represented in time domain data format outputted by the trained speech extraction model, wherein the non-speech portion of the audio input object is at least partially attenuated in or removed from the audio output result.
    Type: Grant
    Filed: July 16, 2020
    Date of Patent: March 8, 2022
    Inventors: Congxi Lu, Linkai Li, Hongcheng Sun, Xinke Liu
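    The data flow above — time-domain frames plus audiogram-derived gain compensation coefficients fed into a trained model — can be sketched as follows. The "model" is a stand-in, not a real trained network, and the half-gain rule used to derive gains is an illustrative assumption:

    ```python
    # Standard audiogram test frequencies (Hz).
    AUDIOGRAM_BANDS_HZ = [250, 500, 1000, 2000, 4000, 8000]

    def gain_coefficients(audiogram_db_hl):
        """Derive linear gain compensation per band from hearing loss
        (dB HL). The half-gain rule here is purely illustrative."""
        return [10 ** ((0.5 * loss) / 20.0) for loss in audiogram_db_hl]

    def speech_extraction_model(frame, gains):
        """Stand-in for the trained speech extraction network: attenuate
        the signal by a fixed factor and apply the mean band gain."""
        mean_gain = sum(gains) / len(gains)
        return [0.9 * s * mean_gain for s in frame]

    audiogram = [20, 25, 30, 40, 50, 55]   # dB HL per band (example values)
    gains = gain_coefficients(audiogram)
    frame = [0.1, -0.2, 0.05, 0.3]         # time-domain samples
    out = speech_extraction_model(frame, gains)
    ```

    Note that both input and output stay in time-domain format, matching the claim's end-to-end framing.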
  • Patent number: 11210854
    Abstract: Systems, methods, and non-transitory computer readable media can determine a placement in a camera view for displaying an augmented reality (AR) advertisement, where the camera view is associated with a computing device. An AR advertisement for a user associated with the computing device can be determined based on attributes associated with the user. Display of the AR advertisement can be caused at the determined placement in the camera view.
    Type: Grant
    Filed: December 20, 2017
    Date of Patent: December 28, 2021
    Assignee: Facebook, Inc.
    Inventors: John Samuel Barnett, Dantley Davis, Congxi Lu, Jonathan Morton, Peter Vajda, Joshua Charles Harris
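    The two decisions in this abstract — where in the camera view to place the ad, and which ad fits the user — can be sketched as two scoring functions. The surface scoring and attribute-overlap criteria are assumptions for illustration only:

    ```python
    def choose_placement(surfaces):
        """Pick the largest, flattest detected surface for the AR ad."""
        return max(surfaces, key=lambda s: s["area"] * s["flatness"])

    def choose_ad(user_attributes, ads):
        """Pick the ad whose target attributes best match the user's."""
        return max(ads, key=lambda ad: len(ad["targets"] & user_attributes))

    surfaces = [{"id": "wall", "area": 4.0, "flatness": 0.9},
                {"id": "table", "area": 1.5, "flatness": 1.0}]
    ads = [{"name": "coffee", "targets": {"coffee", "morning"}},
           {"name": "shoes", "targets": {"running"}}]

    placement = choose_placement(surfaces)     # the "wall" surface
    ad = choose_ad({"running", "music"}, ads)  # the "shoes" ad
    ```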
  • Patent number: 11030440
    Abstract: Systems, methods, and non-transitory computer-readable media can identify a first user depicted in image content captured by a second user. It is determined that the first user should be obscured in the image content based on privacy settings. The image content is modified to obscure the first user.
    Type: Grant
    Filed: December 20, 2017
    Date of Patent: June 8, 2021
    Assignee: Facebook, Inc.
    Inventors: John Samuel Barnett, Dantley Davis, Congxi Lu, Jonathan Morton, Peter Vajda, Joshua Charles Harris
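    A minimal sketch of the privacy check and obscuring step described above. The allow-list privacy model is an assumption, and the blur is simulated by zeroing the depicted user's bounding-box pixels:

    ```python
    def should_obscure(subject_privacy, viewer_id):
        """Obscure the depicted user unless the capturing user is in the
        depicted user's allow list (illustrative privacy model)."""
        return viewer_id not in subject_privacy.get("visible_to", set())

    def obscure_region(image, box):
        """Blank out the region (x0, y0, x1, y1) — a stand-in for a
        real blur or pixelation of the depicted user."""
        x0, y0, x1, y1 = box
        for y in range(y0, y1):
            for x in range(x0, x1):
                image[y][x] = 0
        return image

    image = [[1] * 4 for _ in range(4)]        # toy 4x4 image
    privacy = {"visible_to": {"friend_a"}}
    if should_obscure(privacy, viewer_id="stranger"):
        image = obscure_region(image, (1, 1, 3, 3))
    ```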
  • Publication number: 20210074266
    Abstract: A deep neural network based audio processing method is provided. The method includes: obtaining a deep neural network based speech extraction model; receiving an audio input object having a speech portion and a non-speech portion, wherein the audio input object includes one or more audio data frames each having a set of audio data samples sampled at a predetermined sampling interval and represented in time domain data format; obtaining a user audiogram and a set of user gain compensation coefficients associated with the user audiogram; and inputting the audio input object and the set of user gain compensation coefficients into the trained speech extraction model to obtain an audio output result represented in time domain data format outputted by the trained speech extraction model, wherein the non-speech portion of the audio input object is at least partially attenuated in or removed from the audio output result.
    Type: Application
    Filed: July 16, 2020
    Publication date: March 11, 2021
    Inventors: Congxi LU, Linkai LI, Hongcheng SUN, Xinke LIU
  • Publication number: 20200202579
    Abstract: In one embodiment, a method includes identifying a first user in an input image, accessing social data of the first user in the input image, where social data comprises information from a social graph of an online social network, selecting, based on social data of the first user in the input image, a mask from a set of masks, where the mask specifies one or more mask effects, and for each of the input images, applying the mask to the input image. The set of masks may comprise masks previously selected by friends of the first user within the online social network. The selected mask may be selected from a lookup table that maps the social data to the selected mask.
    Type: Application
    Filed: March 5, 2020
    Publication date: June 25, 2020
    Inventors: Maria Luz Caballero, Molly Jane Fowler, Congxi Lu, Charles Joseph Hodgkins, Daniel Alberto Cuellar
  • Patent number: 10636175
    Abstract: In one embodiment, a method includes identifying an emotion associated with an identified first object in one or more input images, selecting, based on the emotion, a mask from a set of masks, where the mask specifies one or more mask effects, and for each of the input images, applying the mask to the input image. Applying the mask includes generating graphical features based on the identified first object or a second object in the input images according to instructions specified by the mask effects, and incorporating the graphical features into an output image. The emotion may be identified based on graphical features of the identified first object. The graphical features of the identified object may include facial features. The selected mask may be selected from a lookup table that maps the identified emotion to the selected mask.
    Type: Grant
    Filed: December 22, 2016
    Date of Patent: April 28, 2020
    Assignee: Facebook, Inc.
    Inventors: Maria Luz Caballero, Molly Jane Fowler, Congxi Lu, Charles Joseph Hodgkins, Daniel Moreno Cuellar
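    The pipeline in this abstract — facial features to an identified emotion, then a lookup table from emotion to mask — can be sketched as below. The classifier thresholds and mask names are toy stand-ins, not the patented implementation:

    ```python
    # Hypothetical emotion-to-mask lookup table; labels are illustrative.
    EMOTION_TO_MASK = {
        "happy": "sunshine_mask",
        "sad": "rain_cloud_mask",
        "surprised": "stars_mask",
    }

    def identify_emotion(facial_features):
        """Toy stand-in for emotion recognition from facial features."""
        if facial_features.get("mouth_curve", 0) > 0.3:
            return "happy"
        if facial_features.get("eye_openness", 0) > 0.8:
            return "surprised"
        return "sad"

    def apply_mask(image, mask_name):
        """Incorporate the mask's graphical features into the output."""
        return {"base": image, "overlay": mask_name}

    features = {"mouth_curve": 0.6, "eye_openness": 0.5}
    mask = EMOTION_TO_MASK[identify_emotion(features)]  # "sunshine_mask"
    output = apply_mask("input.jpg", mask)
    ```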
  • Patent number: 10452898
    Abstract: Systems, methods, and non-transitory computer-readable media can identify one or more objects depicted in a camera view of a camera application displayed on a display of a user device. An augmented reality overlay is determined based on the one or more objects identified in the camera view. The camera view is modified based on the augmented reality overlay.
    Type: Grant
    Filed: February 1, 2019
    Date of Patent: October 22, 2019
    Assignee: Facebook, Inc.
    Inventors: John Samuel Barnett, Dantley Davis, Congxi Lu, Jonathan Morton, Peter Vajda, Joshua Charles Harris
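    The object-to-overlay step described above reduces to detecting objects in the camera view and mapping each recognized one to an overlay. A sketch with a stubbed detector and an illustrative mapping:

    ```python
    # Hypothetical object-to-overlay mapping; entries are illustrative.
    OVERLAYS = {
        "cup": "steam_animation",
        "dog": "dog_name_tag",
    }

    def detect_objects(camera_view):
        """Stand-in for an object detector over the camera frame."""
        return camera_view.get("objects", [])

    def build_overlay(camera_view):
        """Select overlays for each recognized object in the camera view;
        unrecognized objects are left unmodified."""
        return [OVERLAYS[o] for o in detect_objects(camera_view)
                if o in OVERLAYS]

    view = {"objects": ["cup", "plant", "dog"]}
    overlay = build_overlay(view)  # ["steam_animation", "dog_name_tag"]
    ```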
  • Publication number: 20190171867
    Abstract: Systems, methods, and non-transitory computer-readable media can identify one or more objects depicted in a camera view of a camera application displayed on a display of a user device. An augmented reality overlay is determined based on the one or more objects identified in the camera view. The camera view is modified based on the augmented reality overlay.
    Type: Application
    Filed: February 1, 2019
    Publication date: June 6, 2019
    Inventors: John Samuel Barnett, Dantley Davis, Congxi Lu, Jonathan Morton, Peter Vajda, Joshua Charles Harris
  • Patent number: 10229312
    Abstract: Systems, methods, and non-transitory computer-readable media can identify one or more objects depicted in a camera view of a camera application displayed on a display of a user device. An augmented reality overlay is determined based on the one or more objects identified in the camera view. The camera view is modified based on the augmented reality overlay.
    Type: Grant
    Filed: December 20, 2017
    Date of Patent: March 12, 2019
    Assignee: Facebook, Inc.
    Inventors: John Samuel Barnett, Dantley Davis, Congxi Lu, Jonathan Morton, Peter Vajda, Joshua Charles Harris
  • Publication number: 20180189840
    Abstract: Systems, methods, and non-transitory computer readable media can determine a placement in a camera view for displaying an augmented reality (AR) advertisement, where the camera view is associated with a computing device. An AR advertisement for a user associated with the computing device can be determined based on attributes associated with the user. Display of the AR advertisement can be caused at the determined placement in the camera view.
    Type: Application
    Filed: December 20, 2017
    Publication date: July 5, 2018
    Inventors: John Samuel Barnett, Dantley Davis, Congxi Lu, Jonathan Morton, Peter Vajda, Joshua Charles Harris
  • Publication number: 20180190032
    Abstract: Systems, methods, and non-transitory computer-readable media can identify one or more objects depicted in a camera view of a camera application displayed on a display of a user device. An augmented reality overlay is determined based on the one or more objects identified in the camera view. The camera view is modified based on the augmented reality overlay.
    Type: Application
    Filed: December 20, 2017
    Publication date: July 5, 2018
    Inventors: John Samuel Barnett, Dantley Davis, Congxi Lu, Jonathan Morton, Peter Vajda, Joshua Charles Harris
  • Publication number: 20180190033
    Abstract: Systems, methods, and non-transitory computer readable media can obtain image data from a camera view associated with a computing device, where the image data is associated with an interior space. A portion of the image data for displaying one or more augmented reality (AR) content items can be determined. An AR content item to display in the camera view can be determined. The AR content item can be provided for presentation in the camera view based on the determined portion of the image data.
    Type: Application
    Filed: December 20, 2017
    Publication date: July 5, 2018
    Inventors: John Samuel Barnett, Dantley Davis, Congxi Lu, Jonathan Morton, Peter Vajda, Joshua Charles Harris
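    Determining "a portion of the image data" for AR content can be sketched as scoring candidate regions of the interior-space image; using pixel variance as a flatness proxy is an assumption standing in for a real free-space or plane detector:

    ```python
    def variance(values):
        """Pixel-intensity variance of a candidate region."""
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values) / len(values)

    def pick_region(regions):
        """Choose the visually flattest (lowest-variance) portion of the
        image data to host the AR content item."""
        return min(regions, key=lambda r: variance(r["pixels"]))

    regions = [
        {"name": "wall", "pixels": [10, 10, 11, 10]},
        {"name": "bookshelf", "pixels": [3, 90, 20, 60]},
    ]
    spot = pick_region(regions)  # the near-uniform "wall" region
    ```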
  • Publication number: 20180189552
    Abstract: Systems, methods, and non-transitory computer-readable media can identify a first user depicted in image content captured by a second user. It is determined that the first user should be obscured in the image content based on privacy settings. The image content is modified to obscure the first user.
    Type: Application
    Filed: December 20, 2017
    Publication date: July 5, 2018
    Inventors: John Samuel Barnett, Dantley Davis, Congxi Lu, Jonathan Morton, Peter Vajda, Joshua Charles Harris
  • Publication number: 20180182141
    Abstract: In one embodiment, a method includes identifying an emotion associated with an identified first object in one or more input images, selecting, based on the emotion, a mask from a set of masks, where the mask specifies one or more mask effects, and for each of the input images, applying the mask to the input image. Applying the mask includes generating graphical features based on the identified first object or a second object in the input images according to instructions specified by the mask effects, and incorporating the graphical features into an output image. The emotion may be identified based on graphical features of the identified first object. The graphical features of the identified object may include facial features. The selected mask may be selected from a lookup table that maps the identified emotion to the selected mask.
    Type: Application
    Filed: December 22, 2016
    Publication date: June 28, 2018
    Inventors: Maria Luz Caballero, Molly Jane Fowler, Congxi Lu, Charles Joseph Hodgkins, Daniel Moreno Marrero