Patents by Inventor Congxi Lu
Congxi Lu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230402052
Abstract: An audio signal processing method is disclosed. The method comprises: providing an input audio signal comprising a plurality of input data frames offset from each other by a predetermined frame shift, each input data frame having a predetermined frame length; performing first windowing processing on the plurality of input data frames in sequence with a first window function; performing predetermined signal processing on the input audio signal after the first windowing processing, and generating an output audio signal, wherein the output audio signal has a plurality of output data frames each having the predetermined frame length corresponding to the plurality of input data frames of the input audio signal; performing second windowing processing on the plurality of output data frames in sequence with a second window function; and outputting the plurality of output data frames after the second windowing processing by superimposing the plurality of output data frames with the predetermined frame shift.
Type: Application
Filed: October 8, 2021
Publication date: December 14, 2023
Inventors: Congxi LU, Linkai LI, Yufan YUAN, Hongcheng SUN
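The window/process/window/superimpose pipeline in this abstract is the classic overlap-add scheme, and can be sketched as below. The sqrt-Hann analysis/synthesis window pair and the `process_fn` hook are illustrative assumptions, not details taken from the patent:

```python
import numpy as np

def process_overlap_add(signal, frame_len, frame_shift, process_fn):
    """Sketch of the claimed pipeline: first windowing -> per-frame
    processing -> second windowing -> overlap-add with the frame shift."""
    # A sqrt-Hann pair is a common choice of first/second window functions:
    # their product sums to ~unity at 50% overlap, giving near-perfect
    # reconstruction when process_fn is the identity.
    window = np.sqrt(np.hanning(frame_len))
    n_frames = 1 + (len(signal) - frame_len) // frame_shift
    out = np.zeros(len(signal))
    for i in range(n_frames):
        start = i * frame_shift
        frame = signal[start:start + frame_len] * window  # first windowing
        frame = process_fn(frame)                         # predetermined processing
        out[start:start + frame_len] += frame * window    # second windowing + superposition
    return out
```

With an identity `process_fn` and 50% overlap, interior samples of the output closely match the input, which is the usual sanity check for such analysis/synthesis window pairs.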
-
Publication number: 20220383558
Abstract: In one embodiment, a method includes receiving an indication to apply a dynamic mask to an input video, wherein the dynamic mask comprises a plurality of graphical features according to a plurality of mask effects, and wherein the dynamic mask is applied based on one or more user selections specifying one or more of the mask effects, identifying a first user in an input image, and generating an output video comprising the graphical features rendered using the mask effects.
Type: Application
Filed: August 8, 2022
Publication date: December 1, 2022
Inventors: Maria Luz Caballero, Molly Jane Fowler, Congxi Lu, Charles Joseph Hodgkins, Daniel Moreno Cuellar
-
Patent number: 11443460
Abstract: In one embodiment, a method includes identifying a first user in an input image, accessing social data of the first user in the input image, where social data comprises information from a social graph of an online social network, selecting, based on social data of the first user in the input image, a mask from a set of masks, where the mask specifies one or more mask effects, and for each of the input images, applying the mask to the input image. The set of masks may comprise masks previously selected by friends of the first user within the online social network. The selected mask may be selected from a lookup table that maps the social data to the selected mask.
Type: Grant
Filed: March 5, 2020
Date of Patent: September 13, 2022
Assignee: Meta Platforms, Inc.
Inventors: Maria Luz Caballero, Molly Jane Fowler, Congxi Lu, Charles Joseph Hodgkins, Daniel Alberto Cuellar
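The lookup-table embodiment in this abstract can be sketched as a simple mapping from social-graph signals to masks. The table entries, the `recent_events` and `friend_masks` keys, and the fallback order are all hypothetical placeholders, not taken from the patent:

```python
# Hypothetical mask registry; event names and mask names are illustrative.
MASK_TABLE = {
    "birthday": "confetti_mask",
    "new_job": "briefcase_mask",
    "default": "plain_mask",
}

def select_mask(social_data):
    """Map a user's social-graph data to a mask, as in the claimed
    lookup-table embodiment; falls back to a default mask."""
    # Direct lookup on the user's own social signals.
    for event in social_data.get("recent_events", []):
        if event in MASK_TABLE:
            return MASK_TABLE[event]
    # Masks previously selected by friends are another claimed source.
    friend_masks = social_data.get("friend_masks", [])
    if friend_masks:
        return friend_masks[0]
    return MASK_TABLE["default"]
```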
-
Patent number: 11270688
Abstract: A deep neural network based audio processing method is provided. The method includes: obtaining a deep neural network based speech extraction model; receiving an audio input object having a speech portion and a non-speech portion, wherein the audio input object includes one or more audio data frames each having a set of audio data samples sampled at a predetermined sampling interval and represented in time domain data format; obtaining a user audiogram and a set of user gain compensation coefficients associated with the user audiogram; and inputting the audio input object and the set of user gain compensation coefficients into the trained speech extraction model to obtain an audio output result represented in time domain data format outputted by the trained speech extraction model, wherein the non-speech portion of the audio input object is at least partially attenuated in or removed from the audio output result.
Type: Grant
Filed: July 16, 2020
Date of Patent: March 8, 2022
Inventors: Congxi Lu, Linkai Li, Hongcheng Sun, Xinke Liu
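The input/output contract this abstract describes (time-domain frames in, time-domain frames out, with audiogram-derived gain coefficients conditioning the model) can be sketched as below. The half-gain rule, the energy-threshold stand-in model, and all function names are assumptions for illustration; the patent's trained network is not reproduced here:

```python
import numpy as np

def audiogram_to_gains(audiogram_db):
    """Toy mapping from per-band hearing-loss values (dB HL) to linear
    gain-compensation coefficients, using the half-gain rule heuristic."""
    audiogram_db = np.asarray(audiogram_db, dtype=float)
    return 10.0 ** (audiogram_db / 2.0 / 20.0)

def run_speech_extraction(frames, gains, model):
    """Inference loop matching the claimed I/O: time-domain frames in,
    time-domain frames out, gains passed alongside each frame."""
    return np.stack([model(frame, gains) for frame in frames])

def dummy_model(frame, gains):
    """Stand-in for the trained speech extraction model: attenuates
    low-energy (non-speech-like) frames and applies the mean gain."""
    energy = np.mean(frame ** 2)
    scale = 1.0 if energy > 1e-3 else 0.1
    return frame * scale * np.mean(gains)
```

A real implementation would replace `dummy_model` with the trained network, but the surrounding plumbing (gains as a side input, frame-wise time-domain processing) follows the claim's structure.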
-
Patent number: 11210854
Abstract: Systems, methods, and non-transitory computer readable media can determine a placement in a camera view for displaying an augmented reality (AR) advertisement, where the camera view is associated with a computing device. An AR advertisement for a user associated with the computing device can be determined based on attributes associated with the user. Display of the AR advertisement can be caused at the determined placement in the camera view.
Type: Grant
Filed: December 20, 2017
Date of Patent: December 28, 2021
Assignee: Facebook, Inc.
Inventors: John Samuel Barnett, Dantley Davis, Congxi Lu, Jonathan Morton, Peter Vajda, Joshua Charles Harris
-
Patent number: 11030440
Abstract: Systems, methods, and non-transitory computer-readable media can identify a first user depicted in image content captured by a second user. It is determined that the first user should be obscured in the image content based on privacy settings. The image content is modified to obscure the first user.
Type: Grant
Filed: December 20, 2017
Date of Patent: June 8, 2021
Assignee: Facebook, Inc.
Inventors: John Samuel Barnett, Dantley Davis, Congxi Lu, Jonathan Morton, Peter Vajda, Joshua Charles Harris
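The "modify the image content to obscure the first user" step could be realized by pixelating the detected user's region, sketched below. The bounding-box input and block-averaging approach are assumptions for illustration; the patent does not specify a particular obscuring technique, and a production system would first locate the user with face or body detection:

```python
import numpy as np

def obscure_user(image, bbox, block=8):
    """Pixelate the region of an identified user whose privacy settings
    require obscuring; bbox is (top, left, bottom, right) in pixels."""
    top, left, bottom, right = bbox
    region = image[top:bottom, left:right].copy()
    h, w = region.shape[:2]
    # Replace each block x block tile with its mean color.
    for y in range(0, h, block):
        for x in range(0, w, block):
            region[y:y + block, x:x + block] = region[y:y + block, x:x + block].mean(axis=(0, 1))
    out = image.copy()
    out[top:bottom, left:right] = region
    return out
```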
-
Publication number: 20210074266
Abstract: A deep neural network based audio processing method is provided. The method includes: obtaining a deep neural network based speech extraction model; receiving an audio input object having a speech portion and a non-speech portion, wherein the audio input object includes one or more audio data frames each having a set of audio data samples sampled at a predetermined sampling interval and represented in time domain data format; obtaining a user audiogram and a set of user gain compensation coefficients associated with the user audiogram; and inputting the audio input object and the set of user gain compensation coefficients into the trained speech extraction model to obtain an audio output result represented in time domain data format outputted by the trained speech extraction model, wherein the non-speech portion of the audio input object is at least partially attenuated in or removed from the audio output result.
Type: Application
Filed: July 16, 2020
Publication date: March 11, 2021
Inventors: Congxi LU, Linkai LI, Hongcheng SUN, Xinke LIU
-
Publication number: 20200202579
Abstract: In one embodiment, a method includes identifying a first user in an input image, accessing social data of the first user in the input image, where social data comprises information from a social graph of an online social network, selecting, based on social data of the first user in the input image, a mask from a set of masks, where the mask specifies one or more mask effects, and for each of the input images, applying the mask to the input image. The set of masks may comprise masks previously selected by friends of the first user within the online social network. The selected mask may be selected from a lookup table that maps the social data to the selected mask.
Type: Application
Filed: March 5, 2020
Publication date: June 25, 2020
Inventors: Maria Luz Caballero, Molly Jane Fowler, Congxi Lu, Charles Joseph Hodgkins, Daniel Alberto Cuellar
-
Patent number: 10636175
Abstract: In one embodiment, a method includes identifying an emotion associated with an identified first object in one or more input images, selecting, based on the emotion, a mask from a set of masks, where the mask specifies one or more mask effects, and for each of the input images, applying the mask to the input image. Applying the mask includes generating graphical features based on the identified first object or a second object in the input images according to instructions specified by the mask effects, and incorporating the graphical features into an output image. The emotion may be identified based on graphical features of the identified first object. The graphical features of the identified object may include facial features. The selected mask may be selected from a lookup table that maps the identified emotion to the selected mask.
Type: Grant
Filed: December 22, 2016
Date of Patent: April 28, 2020
Assignee: Facebook, Inc.
Inventors: Maria Luz Caballero, Molly Jane Fowler, Congxi Lu, Charles Joseph Hodgkins, Daniel Moreno Cuellar
-
Patent number: 10452898
Abstract: Systems, methods, and non-transitory computer-readable media can identify one or more objects depicted in a camera view of a camera application displayed on a display of a user device. An augmented reality overlay is determined based on the one or more objects identified in the camera view. The camera view is modified based on the augmented reality overlay.
Type: Grant
Filed: February 1, 2019
Date of Patent: October 22, 2019
Assignee: Facebook, Inc.
Inventors: John Samuel Barnett, Dantley Davis, Congxi Lu, Jonathan Morton, Peter Vajda, Joshua Charles Harris
-
Publication number: 20190171867
Abstract: Systems, methods, and non-transitory computer-readable media can identify one or more objects depicted in a camera view of a camera application displayed on a display of a user device. An augmented reality overlay is determined based on the one or more objects identified in the camera view. The camera view is modified based on the augmented reality overlay.
Type: Application
Filed: February 1, 2019
Publication date: June 6, 2019
Inventors: John Samuel Barnett, Dantley Davis, Congxi Lu, Jonathan Morton, Peter Vajda, Joshua Charles Harris
-
Patent number: 10229312
Abstract: Systems, methods, and non-transitory computer-readable media can identify one or more objects depicted in a camera view of a camera application displayed on a display of a user device. An augmented reality overlay is determined based on the one or more objects identified in the camera view. The camera view is modified based on the augmented reality overlay.
Type: Grant
Filed: December 20, 2017
Date of Patent: March 12, 2019
Assignee: Facebook, Inc.
Inventors: John Samuel Barnett, Dantley Davis, Congxi Lu, Jonathan Morton, Peter Vajda, Joshua Charles Harris
-
Publication number: 20180189840
Abstract: Systems, methods, and non-transitory computer readable media can determine a placement in a camera view for displaying an augmented reality (AR) advertisement, where the camera view is associated with a computing device. An AR advertisement for a user associated with the computing device can be determined based on attributes associated with the user. Display of the AR advertisement can be caused at the determined placement in the camera view.
Type: Application
Filed: December 20, 2017
Publication date: July 5, 2018
Inventors: John Samuel Barnett, Dantley Davis, Congxi Lu, Jonathan Morton, Peter Vajda, Joshua Charles Harris
-
Publication number: 20180190032Abstract: Systems, methods, and non-transitory computer-readable media can identify one or more objects depicted in a camera view of a camera application displayed on a display of a user device. An augmented reality overlay is determined based on the one or more objects identified in the camera view. The camera view is modified based on the augmented reality overlay.Type: ApplicationFiled: December 20, 2017Publication date: July 5, 2018Inventors: John Samuel Barnett, Dantley Davis, Congxi Lu, Jonathan Morton, Peter Vajda, Joshua Charles Harris
-
Publication number: 20180190033
Abstract: Systems, methods, and non-transitory computer readable media can obtain image data from a camera view associated with a computing device, where the image data is associated with an interior space. A portion of the image data for displaying one or more augmented reality (AR) content items can be determined. An AR content item to display in the camera view can be determined. The AR content item can be provided for presentation in the camera view based on the determined portion of the image data.
Type: Application
Filed: December 20, 2017
Publication date: July 5, 2018
Inventors: John Samuel Barnett, Dantley Davis, Congxi Lu, Jonathan Morton, Peter Vajda, Joshua Charles Harris
-
Publication number: 20180189552
Abstract: Systems, methods, and non-transitory computer-readable media can identify a first user depicted in image content captured by a second user. It is determined that the first user should be obscured in the image content based on privacy settings. The image content is modified to obscure the first user.
Type: Application
Filed: December 20, 2017
Publication date: July 5, 2018
Inventors: John Samuel Barnett, Dantley Davis, Congxi Lu, Jonathan Morton, Peter Vajda, Joshua Charles Harris
-
Publication number: 20180182141
Abstract: In one embodiment, a method includes identifying an emotion associated with an identified first object in one or more input images, selecting, based on the emotion, a mask from a set of masks, where the mask specifies one or more mask effects, and for each of the input images, applying the mask to the input image. Applying the mask includes generating graphical features based on the identified first object or a second object in the input images according to instructions specified by the mask effects, and incorporating the graphical features into an output image. The emotion may be identified based on graphical features of the identified first object. The graphical features of the identified object may include facial features. The selected mask may be selected from a lookup table that maps the identified emotion to the selected mask.
Type: Application
Filed: December 22, 2016
Publication date: June 28, 2018
Inventors: Maria Luz Caballero, Molly Jane Fowler, Congxi Lu, Charles Joseph Hodgkins, Daniel Moreno Marrero