Patents by Inventor Chetan Parag Gupta

Chetan Parag Gupta has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230223026
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for generating contextually relevant transcripts of voice recordings based on social networking data. For instance, the disclosed systems receive a voice recording from a user corresponding to a message thread including the user and one or more co-users. The disclosed systems analyze acoustic features of the voice recording to generate transcription-text probabilities. The disclosed systems generate term weights for terms corresponding to objects associated with the user within a social networking system by analyzing user social networking data. Using the contextually aware term weights, the disclosed systems adjust the transcription-text probabilities. Based on the adjusted transcription-text probabilities, the disclosed systems generate a transcript of the voice recording for display within the message thread.
    Type: Application
    Filed: February 22, 2023
    Publication date: July 13, 2023
    Inventors: James Matthew Grichnik, Chetan Parag Gupta, Fuchun Peng, Yinan Zhang, Si Chen
  • Patent number: 11610588
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for generating contextually relevant transcripts of voice recordings based on social networking data. For instance, the disclosed systems receive a voice recording from a user corresponding to a message thread including the user and one or more co-users. The disclosed systems analyze acoustic features of the voice recording to generate transcription-text probabilities. The disclosed systems generate term weights for terms corresponding to objects associated with the user within a social networking system by analyzing user social networking data. Using the contextually aware term weights, the disclosed systems adjust the transcription-text probabilities. Based on the adjusted transcription-text probabilities, the disclosed systems generate a transcript of the voice recording for display within the message thread.
    Type: Grant
    Filed: October 28, 2019
    Date of Patent: March 21, 2023
    Assignee: Meta Platforms, Inc.
    Inventors: James Matthew Grichnik, Chetan Parag Gupta, Fuchun Peng, Yinan Zhang, Si Chen
  • Patent number: 11386607
    Abstract: Systems, methods, and non-transitory computer-readable media can obtain information describing a set of views corresponding to a rendered environment, the views being captured based on a specified virtual camera configuration; determine at least one representation in which information describing the set of views is formatted; and output virtual reality content based at least in part on the at least one representation.
    Type: Grant
    Filed: April 16, 2018
    Date of Patent: July 12, 2022
    Assignee: Meta Platforms, Inc.
    Inventors: Chetan Parag Gupta, Simon Gareth Green
  • Patent number: 10824320
    Abstract: Systems, methods, and non-transitory computer-readable media can determine at least one request to access a content item, wherein the content item was composed using a set of camera feeds that capture at least one scene from a set of different positions. A viewport interface can be provided on a display screen of the computing device through which playback of the content item is presented, the viewport interface being configured to allow a user operating the computing device to virtually navigate the at least one scene by changing i) a direction of the viewport interface relative to the scene or ii) a zoom level of the viewport interface. A navigation indicator can be provided in the viewport interface, the navigation indicator being configured to visually indicate any changes to a respective direction and zoom level of the viewport interface during playback of the content item.
    Type: Grant
    Filed: March 7, 2016
    Date of Patent: November 3, 2020
    Assignee: Facebook, Inc.
    Inventors: Joyce Hsu, Charles Matthew Sutton, Jaime Leonardo Rovira, Anning Hu, Chetan Parag Gupta, Cliff Warren
  • Patent number: 10692187
    Abstract: Systems, methods, and non-transitory computer-readable media can determine that a content item is being presented through a display screen of the computing device. Information describing one or more salient points of interest that appear during presentation of the content item are determined, wherein the salient points of interest are predicted to be of interest to one or more users accessing the content item. The presentation of at least a first salient point of interest is enhanced during presentation of the content item based at least in part on the information.
    Type: Grant
    Filed: April 16, 2017
    Date of Patent: June 23, 2020
    Assignee: Facebook, Inc.
    Inventors: Evgeny V. Kuzyakov, Chetan Parag Gupta, Renbin Peng
  • Patent number: 10445614
    Abstract: Systems, methods, and non-transitory computer-readable media can generate a saliency prediction model for identifying salient points of interest that appear during presentation of content items, provide at least one frame of a content item to the saliency prediction model, and obtain information describing at least a first salient point of interest that appears in the at least one frame from the saliency prediction model, wherein the first salient point of interest is predicted to be of interest to one or more users accessing the content item.
    Type: Grant
    Filed: April 16, 2017
    Date of Patent: October 15, 2019
    Assignee: Facebook, Inc.
    Inventors: Renbin Peng, Evgeny V. Kuzyakov, Chetan Parag Gupta
  • Publication number: 20180302590
    Abstract: Systems, methods, and non-transitory computer-readable media can determine that a content item is being presented through a display screen of the computing device. Information describing one or more salient points of interest that appear during presentation of the content item are determined, wherein the salient points of interest are predicted to be of interest to one or more users accessing the content item. The presentation of at least a first salient point of interest is enhanced during presentation of the content item based at least in part on the information.
    Type: Application
    Filed: April 16, 2017
    Publication date: October 18, 2018
    Inventors: Evgeny V. Kuzyakov, Chetan Parag Gupta, Renbin Peng
  • Publication number: 20180300747
    Abstract: Systems, methods, and non-transitory computer-readable media can present a plurality of content items in a virtual reality content item. Tracking data associated with a plurality of users that access the virtual reality content item can be obtained. An analysis associated with the plurality of content items based on the tracking data can be provided, wherein the analysis indicates one or more attributes associated with the plurality of users.
    Type: Application
    Filed: April 14, 2017
    Publication date: October 18, 2018
    Inventors: Evgeny V. Kuzyakov, Chetan Parag Gupta, Renbin Peng
  • Publication number: 20180300583
    Abstract: Systems, methods, and non-transitory computer-readable media can generate a saliency prediction model for identifying salient points of interest that appear during presentation of content items, provide at least one frame of a content item to the saliency prediction model, and obtain information describing at least a first salient point of interest that appears in the at least one frame from the saliency prediction model, wherein the first salient point of interest is predicted to be of interest to one or more users accessing the content item.
    Type: Application
    Filed: April 16, 2017
    Publication date: October 18, 2018
    Inventors: Renbin Peng, Evgeny V. Kuzyakov, Chetan Parag Gupta
  • Publication number: 20170316806
    Abstract: Systems, methods, and non-transitory computer-readable media can determine at least one request to access a content item, wherein the requested content item was composed using a set of camera feeds that capture one or more scenes from a set of different positions. Information describing an automated viewing mode for navigating at least some of the scenes in the requested content item is obtained. A viewport interface is provided on a display screen of the computing device through which playback of the requested content item is presented. The viewport interface is automatically navigated through at least some of the scenes during playback of the requested content item based at least in part on the automated viewing mode.
    Type: Application
    Filed: May 2, 2016
    Publication date: November 2, 2017
    Inventors: Cliff Warren, Charles Matthew Sutton, Chetan Parag Gupta, Joyce Hsu, Anning Hu, Zeyu Zeng
  • Publication number: 20170255372
    Abstract: Systems, methods, and non-transitory computer-readable media can determine at least one request to access a content item, wherein the content item was composed using a set of camera feeds that capture at least one scene from a set of different positions. A viewport interface can be provided on a display screen of the computing device through which playback of the content item is presented, the viewport interface being configured to allow a user operating the computing device to virtually navigate the at least one scene by changing i) a direction of the viewport interface relative to the scene or ii) a zoom level of the viewport interface. A navigation indicator can be provided in the viewport interface, the navigation indicator being configured to visually indicate any changes to a respective direction and zoom level of the viewport interface during playback of the content item.
    Type: Application
    Filed: March 7, 2016
    Publication date: September 7, 2017
    Inventors: Joyce Hsu, Charles Matthew Sutton, Jaime Leonardo Rovira, Anning Hu, Chetan Parag Gupta, Cliff Warren
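
The abstracts for publication 20230223026 and patent 11610588 describe adjusting transcription-text probabilities using contextual term weights derived from social networking data. The sketch below is a hypothetical illustration of that idea only, assuming a simple word-overlap boost; the actual patented method, its parameters (e.g. the `alpha` boost strength here), and its probability model are not specified at this level of detail in the abstract.

```python
# Hypothetical sketch: re-weighting candidate transcriptions with
# contextual term weights, then renormalizing to probabilities.

def adjust_probabilities(hypotheses, term_weights, alpha=1.0):
    """Boost transcription-text probabilities for terms contextually
    associated with the user, then renormalize.

    hypotheses   -- dict mapping candidate text to acoustic probability
    term_weights -- dict mapping terms to contextual weights in [0, 1]
    alpha        -- assumed parameter controlling boost strength
    """
    adjusted = {}
    for text, prob in hypotheses.items():
        # Average the contextual weights of the words in this candidate.
        weights = [term_weights.get(w, 0.0) for w in text.lower().split()]
        boost = sum(weights) / len(weights) if weights else 0.0
        adjusted[text] = prob * (1.0 + alpha * boost)
    total = sum(adjusted.values())
    return {text: p / total for text, p in adjusted.items()}

# "Ann" is a frequent contact in the user's (hypothetical) social graph,
# so the acoustically weaker hypothesis wins after contextual adjustment.
hypotheses = {"call ann tonight": 0.45, "call anne tonight": 0.55}
ranked = adjust_probabilities(hypotheses, {"ann": 0.9})
```
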
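
Patent 10824320 (and its application 20170255372) describes a navigation indicator that visually reflects changes to the viewport's direction and zoom during playback. A minimal sketch of the state such an indicator might track, assuming a yaw angle in degrees and a 1x-4x zoom range (both assumptions, not details from the abstract):

```python
# Hypothetical sketch of a viewport navigation indicator: it tracks the
# viewport's direction relative to the scene and its zoom level.

class NavigationIndicator:
    def __init__(self, direction_deg=0.0, zoom=1.0):
        self.direction_deg = direction_deg % 360.0
        self.zoom = zoom

    def pan(self, delta_deg):
        # Rotate the viewport relative to the scene, wrapping at 360.
        self.direction_deg = (self.direction_deg + delta_deg) % 360.0

    def set_zoom(self, zoom):
        # Clamp to an assumed 1x-4x zoom range.
        self.zoom = max(1.0, min(4.0, zoom))

    def state(self):
        # The state a UI layer would render as the on-screen indicator.
        return {"direction_deg": self.direction_deg, "zoom": self.zoom}

nav = NavigationIndicator()
nav.pan(-90)       # user drags the viewport to the left
nav.set_zoom(2.5)  # user zooms in
```
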
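
Patent 10445614 (and application 20180300583) describes providing a frame to a saliency prediction model and obtaining a predicted salient point of interest. The sketch below stubs the model with a toy per-region scorer; the region names, frame format, and scoring function are all illustrative stand-ins, since the abstract does not specify the model's architecture.

```python
# Hypothetical sketch: querying a saliency prediction model for the
# most salient point of interest in a single frame.

def saliency_scores(frame):
    """Toy stand-in for a trained model: return a score per labeled
    region. A real system would run a learned model over frame pixels."""
    return dict(frame["regions"])

def most_salient_point(frame):
    scores = saliency_scores(frame)
    region = max(scores, key=scores.get)  # highest-scoring region wins
    return {"region": region, "score": scores[region]}

frame = {"regions": {"stage_left": 0.2, "performer": 0.7, "crowd": 0.1}}
poi = most_salient_point(frame)
```

A downstream presentation layer could then enhance `poi` during playback, as described in patent 10692187.
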
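
Application 20180300747 describes analyzing tracking data from users who access a virtual reality content item. As one hypothetical reading of "an analysis ... based on the tracking data", the sketch below aggregates dwell-time events into per-item viewer statistics; the event fields (`item`, `user`, `dwell_s`) and the chosen metrics are assumptions, not details from the abstract.

```python
# Hypothetical sketch: aggregating viewer tracking events for content
# items inside a VR content item into simple per-item attributes.
from collections import defaultdict

def analyze_tracking(events):
    """events: list of dicts with 'item', 'user', and 'dwell_s' keys.
    Returns unique-viewer count and average dwell time per item."""
    stats = defaultdict(lambda: {"users": set(), "dwell": 0.0, "n": 0})
    for e in events:
        s = stats[e["item"]]
        s["users"].add(e["user"])
        s["dwell"] += e["dwell_s"]
        s["n"] += 1
    return {item: {"unique_viewers": len(s["users"]),
                   "avg_dwell_s": s["dwell"] / s["n"]}
            for item, s in stats.items()}

events = [{"item": "poster_a", "user": "u1", "dwell_s": 4.0},
          {"item": "poster_a", "user": "u2", "dwell_s": 6.0},
          {"item": "poster_b", "user": "u1", "dwell_s": 2.0}]
report = analyze_tracking(events)
```
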
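
Application 20170316806 describes an automated viewing mode that navigates the viewport through scenes during playback without user input. One plausible sketch, assuming the mode is expressed as time-stamped direction keyframes with linear interpolation between them (the abstract does not say how the mode is represented):

```python
# Hypothetical sketch of an automated viewing mode: the viewport
# direction is interpolated between authored keyframes during playback.

def auto_direction(keyframes, t):
    """Linearly interpolate viewport direction (degrees) at time t from
    (time, direction) keyframes sorted by time."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    for (t0, d0), (t1, d1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            frac = (t - t0) / (t1 - t0)
            return d0 + frac * (d1 - d0)
    return keyframes[-1][1]  # hold the final direction after the path ends

# Pan from 0 to 90 degrees over the first five seconds, then hold.
path = [(0.0, 0.0), (5.0, 90.0), (10.0, 90.0)]
```
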