Patents by Inventor Chetan Parag Gupta
Chetan Parag Gupta has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240338171
Abstract: Systems and methods for controlling an electronic device using a worn wrist-wearable device are disclosed. A method includes detecting, by a wrist-wearable device worn by a user, an in-air gesture performed by the user. The wrist-wearable device is communicatively coupled with one or more electronic devices. The method includes, in response to a determination that the in-air gesture is associated with a control command, (i) determining an electronic device of the one or more electronic devices to perform the control command, and (ii) providing instructions to the electronic device selected to perform the control command. The instructions cause the electronic device to perform the control command. The method further includes providing an indication via the wrist-wearable device and/or the one or more electronic devices that the control command was performed.
Type: Application
Filed: April 1, 2024
Publication date: October 10, 2024
Inventors: Swati Goel, Yfat Eyal, Dana Nicole Sasinowski, Chetan Parag Gupta, Ian Sebastian Murphy Bicking, Mina Fahmi
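The claimed flow (detect an in-air gesture, map it to a control command, pick a coupled device to carry it out, then confirm) can be made concrete with a minimal Python sketch. All names here (Device, GestureRouter, on_gesture) and the device-selection rule are illustrative assumptions, not terms from the filing.

```python
# Minimal sketch of the gesture-to-command flow described in the abstract.
# Class and method names are illustrative assumptions, not identifiers
# from the patent.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Device:
    """An electronic device communicatively coupled to the wrist-wearable."""
    name: str
    supported_commands: set[str]
    is_active: bool = False

    def perform(self, command: str) -> bool:
        # Stand-in for sending the command over the real transport (e.g. BLE).
        print(f"{self.name}: executing '{command}'")
        return True


class GestureRouter:
    """Maps in-air gestures to control commands and routes them to a device."""

    def __init__(self, devices: list[Device], gesture_to_command: dict[str, str]):
        self.devices = devices
        self.gesture_to_command = gesture_to_command

    def on_gesture(self, gesture: str) -> Optional[str]:
        command = self.gesture_to_command.get(gesture)
        if command is None:
            return None  # gesture is not associated with a control command
        target = self._select_device(command)
        if target is None:
            return None
        if target.perform(command):
            # Indication that the control command was performed (e.g. haptics
            # on the wrist-wearable or feedback on the target device).
            return f"'{command}' performed on {target.name}"
        return None

    def _select_device(self, command: str) -> Optional[Device]:
        # Prefer the device the user is actively engaged with; otherwise fall
        # back to any coupled device that supports the command.
        candidates = [d for d in self.devices if command in d.supported_commands]
        active = [d for d in candidates if d.is_active]
        return (active or candidates or [None])[0]


if __name__ == "__main__":
    tv = Device("living-room-tv", {"play", "pause", "volume_up"}, is_active=True)
    lamp = Device("desk-lamp", {"toggle_power"})
    router = GestureRouter([tv, lamp], {"pinch": "pause", "wrist_flick": "toggle_power"})
    print(router.on_gesture("pinch"))
```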
-
Publication number: 20240168567
Abstract: The various implementations described herein include methods and systems for power-efficient processing of neuromuscular signals. In one aspect, a method includes: (i) obtaining a first set of neuromuscular signals; (ii) after determining, using a low-power detector, that the first set of neuromuscular signals requires further processing to confirm that a predetermined in-air hand gesture has been performed: (a) processing the first set of neuromuscular signals using a high-power detector; and (b) in accordance with a determination that the processing indicates that the predetermined in-air hand gesture did occur, registering an occurrence of the predetermined in-air hand gesture; (iii) receiving a second set of neuromuscular signals; and (iv) after determining, using the low-power detector and not using the high-power detector, that a different predetermined in-air hand gesture was performed, performing an action in response to the different predetermined in-air hand gesture.
Type: Application
Filed: September 19, 2023
Publication date: May 23, 2024
Inventors: Alexandre Barachant, Bijan Treister, Shan Chu, Igor Gurovski, Chetan Parag Gupta, Tahir Turan Caliskan, Pascal Alexander Bentioulis, Viswanath Sivakumar, Zhong Zhang, Ramzi Elkhater, Maciej Lazarewicz, Per-Erik Bergstrom, Peter Andrew Matsimanis, Chengyuan Yan
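A minimal sketch of the two-stage cascade the abstract describes: a cheap always-on detector screens the signals and only hands ambiguous cases to an expensive detector. The thresholds, features, and function names are illustrative assumptions rather than the patented detectors.

```python
# Two-stage gesture-detection cascade in the spirit of the abstract: a
# low-power stage screens neuromuscular samples, and a high-power stage is
# consulted only when the low-power stage cannot decide on its own.
from typing import Sequence


def low_power_detect(samples: Sequence[float], fire_thresh: float = 0.8,
                     review_thresh: float = 0.4) -> str:
    """Cheap screening stage: mean absolute amplitude against two thresholds."""
    score = sum(abs(s) for s in samples) / max(len(samples), 1)
    if score >= fire_thresh:
        return "gesture"          # confident enough to register on its own
    if score >= review_thresh:
        return "needs_review"     # hand off to the high-power detector
    return "no_gesture"


def high_power_detect(samples: Sequence[float]) -> bool:
    """Expensive confirmation stage (stand-in for a full ML model)."""
    # Illustrative: require sustained activity, not just a single spike.
    active = sum(1 for s in samples if abs(s) > 0.5)
    return active >= len(samples) // 4


def process(samples: Sequence[float]) -> bool:
    """Returns True when an in-air hand gesture occurrence is registered."""
    verdict = low_power_detect(samples)
    if verdict == "gesture":
        return True                        # low-power stage alone sufficed
    if verdict == "needs_review":
        return high_power_detect(samples)  # wake the high-power detector
    return False


if __name__ == "__main__":
    quiet = [0.05, 0.1, 0.02, 0.07]
    ambiguous = [0.6, 0.45, 0.7, 0.2]
    print(process(quiet), process(ambiguous))
```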
-
Publication number: 20230223026
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for generating contextually relevant transcripts of voice recordings based on social networking data. For instance, the disclosed systems receive a voice recording from a user corresponding to a message thread including the user and one or more co-users. The disclosed systems analyze acoustic features of the voice recording to generate transcription-text probabilities. The disclosed systems generate term weights for terms corresponding to objects associated with the user within a social networking system by analyzing user social networking data. Using the contextually aware term weights, the disclosed systems adjust the transcription-text probabilities. Based on the adjusted transcription-text probabilities, the disclosed systems generate a transcript of the voice recording for display within the message thread.
Type: Application
Filed: February 22, 2023
Publication date: July 13, 2023
Inventors: James Matthew Grichnik, Chetan Parag Gupta, Fuchun Peng, Yinan Zhang, Si Chen
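The biasing step, adjusting acoustic transcription probabilities with per-user term weights, can be sketched in a few lines of Python. The multiplicative weighting and renormalization below are illustrative assumptions, not the disclosed model.

```python
# Biasing transcription hypotheses with per-user term weights, in the spirit
# of the abstract. The acoustic probabilities and weighting scheme are
# illustrative assumptions.


def adjust_probabilities(acoustic_probs: dict[str, float],
                         term_weights: dict[str, float]) -> dict[str, float]:
    """Rescale transcription-text probabilities by contextual term weights."""
    boosted = {term: p * term_weights.get(term, 1.0)
               for term, p in acoustic_probs.items()}
    total = sum(boosted.values())
    return {term: p / total for term, p in boosted.items()}


if __name__ == "__main__":
    # Acoustically, "gym" and "Jim" are nearly tied for one audio segment.
    acoustic_probs = {"gym": 0.51, "Jim": 0.49}
    # The user frequently messages a contact named Jim, so that term is
    # weighted up based on their social networking data.
    term_weights = {"Jim": 1.6}
    print(adjust_probabilities(acoustic_probs, term_weights))
```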
-
Patent number: 11610588
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for generating contextually relevant transcripts of voice recordings based on social networking data. For instance, the disclosed systems receive a voice recording from a user corresponding to a message thread including the user and one or more co-users. The disclosed systems analyze acoustic features of the voice recording to generate transcription-text probabilities. The disclosed systems generate term weights for terms corresponding to objects associated with the user within a social networking system by analyzing user social networking data. Using the contextually aware term weights, the disclosed systems adjust the transcription-text probabilities. Based on the adjusted transcription-text probabilities, the disclosed systems generate a transcript of the voice recording for display within the message thread.
Type: Grant
Filed: October 28, 2019
Date of Patent: March 21, 2023
Assignee: Meta Platforms, Inc.
Inventors: James Matthew Grichnik, Chetan Parag Gupta, Fuchun Peng, Yinan Zhang, Si Chen
-
Patent number: 11386607
Abstract: Systems, methods, and non-transitory computer-readable media can obtain information describing a set of views corresponding to a rendered environment, the views being captured based on a specified virtual camera configuration; determine at least one representation in which information describing the set of views is formatted; and output virtual reality content based at least in part on the at least one representation.
Type: Grant
Filed: April 16, 2018
Date of Patent: July 12, 2022
Assignee: Meta Platforms, Inc.
Inventors: Chetan Parag Gupta, Simon Gareth Green
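A minimal sketch of the claimed pipeline: pack a set of rendered views into a chosen representation and emit VR content from it. The cubemap/equirectangular choice and all names are illustrative assumptions, not the representations named in the patent.

```python
# Views rendered from a virtual camera configuration are packed into a chosen
# representation, and VR content is produced from that representation.
from dataclasses import dataclass


@dataclass
class View:
    direction: str        # e.g. "front", "left", "up"
    pixels: bytes         # rendered image data for that camera direction


def choose_representation(views: list[View]) -> str:
    # Six axis-aligned views pack naturally into a cubemap; otherwise fall
    # back to a single equirectangular projection (illustrative rule).
    return "cubemap" if len(views) == 6 else "equirectangular"


def output_vr_content(views: list[View]) -> dict:
    representation = choose_representation(views)
    return {
        "representation": representation,
        "faces": {v.direction: len(v.pixels) for v in views},  # placeholder payload
    }


if __name__ == "__main__":
    views = [View(d, b"\x00" * 16) for d in
             ("front", "back", "left", "right", "up", "down")]
    print(output_vr_content(views))
```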
-
Patent number: 10824320
Abstract: Systems, methods, and non-transitory computer-readable media can determine at least one request to access a content item, wherein the content item was composed using a set of camera feeds that capture at least one scene from a set of different positions. A viewport interface can be provided on a display screen of the computing device through which playback of the content item is presented, the viewport interface being configured to allow a user operating the computing device to virtually navigate the at least one scene by changing i) a direction of the viewport interface relative to the scene or ii) a zoom level of the viewport interface. A navigation indicator can be provided in the viewport interface, the navigation indicator being configured to visually indicate any changes to a respective direction and zoom level of the viewport interface during playback of the content item.
Type: Grant
Filed: March 7, 2016
Date of Patent: November 3, 2020
Assignee: Facebook, Inc.
Inventors: Joyce Hsu, Charles Matthew Sutton, Jaime Leonardo Rovira, Anning Hu, Chetan Parag Gupta, Cliff Warren
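The navigation indicator can be sketched as a small piece of state that tracks the viewport's direction and zoom and reports any change during playback. Field and method names are illustrative assumptions.

```python
# A navigation indicator in the spirit of the abstract: it tracks the
# viewport's direction and zoom level and reports changes so they can be
# visualized during playback.
from dataclasses import dataclass
from typing import Optional


@dataclass
class NavigationIndicator:
    heading_deg: float = 0.0   # direction of the viewport relative to the scene
    zoom: float = 1.0          # current zoom level of the viewport

    def update(self, new_heading_deg: float, new_zoom: float) -> Optional[str]:
        """Return a short description of the change, or None if nothing moved."""
        changes = []
        if new_heading_deg != self.heading_deg:
            changes.append(f"heading {self.heading_deg:.0f} -> {new_heading_deg:.0f} deg")
        if new_zoom != self.zoom:
            changes.append(f"zoom {self.zoom:.1f}x -> {new_zoom:.1f}x")
        self.heading_deg, self.zoom = new_heading_deg % 360, new_zoom
        return ", ".join(changes) if changes else None


if __name__ == "__main__":
    indicator = NavigationIndicator()
    print(indicator.update(45.0, 1.0))   # user pans the viewport
    print(indicator.update(45.0, 2.0))   # user zooms in
    print(indicator.update(45.0, 2.0))   # no change -> None
```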
-
Patent number: 10692187
Abstract: Systems, methods, and non-transitory computer-readable media can determine that a content item is being presented through a display screen of the computing device. Information describing one or more salient points of interest that appear during presentation of the content item is determined, wherein the salient points of interest are predicted to be of interest to one or more users accessing the content item. The presentation of at least a first salient point of interest is enhanced during presentation of the content item based at least in part on the information.
Type: Grant
Filed: April 16, 2017
Date of Patent: June 23, 2020
Assignee: Facebook, Inc.
Inventors: Evgeny V. Kuzyakov, Chetan Parag Gupta, Renbin Peng
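One way to picture the enhancement step: nudge the default viewing direction toward the highest-scoring salient point for the current frame. The blending rule below is an illustrative assumption, not the disclosed enhancement.

```python
# Enhancing a salient point of interest during playback, in the spirit of the
# abstract: given predicted points of interest for a frame, pull the default
# viewing direction toward the most salient one.
from dataclasses import dataclass


@dataclass
class PointOfInterest:
    yaw_deg: float     # horizontal position of the point in the 360 scene
    score: float       # predicted saliency (higher = more likely of interest)


def enhance_frame(default_yaw_deg: float, pois: list[PointOfInterest],
                  pull: float = 0.5) -> float:
    """Blend the default viewing direction toward the top salient point."""
    if not pois:
        return default_yaw_deg
    top = max(pois, key=lambda p: p.score)
    return (1.0 - pull) * default_yaw_deg + pull * top.yaw_deg


if __name__ == "__main__":
    pois = [PointOfInterest(120.0, 0.9), PointOfInterest(300.0, 0.4)]
    print(enhance_frame(default_yaw_deg=0.0, pois=pois))  # pulled toward 120 deg
```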
-
Patent number: 10445614
Abstract: Systems, methods, and non-transitory computer-readable media can generate a saliency prediction model for identifying salient points of interest that appear during presentation of content items, provide at least one frame of a content item to the saliency prediction model, and obtain information describing at least a first salient point of interest that appears in the at least one frame from the saliency prediction model, wherein the first salient point of interest is predicted to be of interest to one or more users accessing the content item.
Type: Grant
Filed: April 16, 2017
Date of Patent: October 15, 2019
Assignee: Facebook, Inc.
Inventors: Renbin Peng, Evgeny V. Kuzyakov, Chetan Parag Gupta
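The prediction interface, frame in, salient points out, can be sketched as follows; the brightness heuristic stands in for a trained saliency model and is purely illustrative.

```python
# Prediction step in the spirit of the abstract: a frame goes into a saliency
# model and the model returns the most salient points of interest. The "model"
# here is just a per-pixel intensity ranking, a stand-in for a trained network.

Frame = list[list[float]]  # 2D grid of pixel intensities in [0, 1]


def predict_salient_points(frame: Frame, top_k: int = 1) -> list[tuple[int, int, float]]:
    """Return up to top_k (row, col, saliency) tuples for the frame."""
    scored = [(r, c, value)
              for r, row in enumerate(frame)
              for c, value in enumerate(row)]
    scored.sort(key=lambda t: t[2], reverse=True)
    return scored[:top_k]


if __name__ == "__main__":
    frame = [
        [0.1, 0.2, 0.1],
        [0.3, 0.9, 0.2],   # bright region -> predicted point of interest
        [0.1, 0.2, 0.1],
    ]
    print(predict_salient_points(frame, top_k=2))
```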
-
Publication number: 20180300747
Abstract: Systems, methods, and non-transitory computer-readable media can present a plurality of content items in a virtual reality content item. Tracking data associated with a plurality of users that access the virtual reality content item can be obtained. An analysis associated with the plurality of content items based on the tracking data can be provided, wherein the analysis indicates one or more attributes associated with the plurality of users.
Type: Application
Filed: April 14, 2017
Publication date: October 18, 2018
Inventors: Evgeny V. Kuzyakov, Chetan Parag Gupta, Renbin Peng
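A minimal sketch of the analysis step: aggregate per-content-item tracking records into view counts and viewer attributes. The record fields and the aggregation are illustrative assumptions.

```python
# Aggregating tracking data from users who viewed a virtual reality content
# item, in the spirit of the abstract: per embedded content item, tally views
# and attributes of the viewers.
from collections import defaultdict


def analyze_tracking(records: list[dict]) -> dict[str, dict]:
    """Aggregate per-content-item view counts and viewer attributes."""
    analysis: dict[str, dict] = defaultdict(
        lambda: {"views": 0, "age_groups": defaultdict(int)})
    for record in records:
        entry = analysis[record["content_item"]]
        entry["views"] += 1
        entry["age_groups"][record["age_group"]] += 1
    return {item: {"views": e["views"], "age_groups": dict(e["age_groups"])}
            for item, e in analysis.items()}


if __name__ == "__main__":
    tracking = [
        {"content_item": "poster_a", "age_group": "18-24"},
        {"content_item": "poster_a", "age_group": "25-34"},
        {"content_item": "poster_b", "age_group": "18-24"},
    ]
    print(analyze_tracking(tracking))
```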
-
Publication number: 20180302590
Abstract: Systems, methods, and non-transitory computer-readable media can determine that a content item is being presented through a display screen of the computing device. Information describing one or more salient points of interest that appear during presentation of the content item is determined, wherein the salient points of interest are predicted to be of interest to one or more users accessing the content item. The presentation of at least a first salient point of interest is enhanced during presentation of the content item based at least in part on the information.
Type: Application
Filed: April 16, 2017
Publication date: October 18, 2018
Inventors: Evgeny V. Kuzyakov, Chetan Parag Gupta, Renbin Peng
-
Publication number: 20180300583
Abstract: Systems, methods, and non-transitory computer-readable media can generate a saliency prediction model for identifying salient points of interest that appear during presentation of content items, provide at least one frame of a content item to the saliency prediction model, and obtain information describing at least a first salient point of interest that appears in the at least one frame from the saliency prediction model, wherein the first salient point of interest is predicted to be of interest to one or more users accessing the content item.
Type: Application
Filed: April 16, 2017
Publication date: October 18, 2018
Inventors: Renbin Peng, Evgeny V. Kuzyakov, Chetan Parag Gupta
-
Publication number: 20170316806
Abstract: Systems, methods, and non-transitory computer-readable media can determine at least one request to access a content item, wherein the requested content item was composed using a set of camera feeds that capture one or more scenes from a set of different positions. Information describing an automated viewing mode for navigating at least some of the scenes in the requested content item is obtained. A viewport interface is provided on a display screen of the computing device through which playback of the requested content item is presented. The viewport interface is automatically navigated through at least some of the scenes during playback of the requested content item based at least in part on the automated viewing mode.
Type: Application
Filed: May 2, 2016
Publication date: November 2, 2017
Inventors: Cliff Warren, Charles Matthew Sutton, Chetan Parag Gupta, Joyce Hsu, Anning Hu, Zeyu Zeng
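An automated viewing mode can be sketched as interpolation over timestamped viewport keyframes, so playback steers the viewport without user input. The keyframe format and the linear interpolation are illustrative assumptions, not the disclosed navigation scheme.

```python
# Automated viewing mode in the spirit of the abstract: instead of the user
# steering the viewport, playback follows a predefined sequence of viewport
# directions keyed by timestamp.
from bisect import bisect_right

# (time_seconds, viewport_yaw_degrees) keyframes describing the automated tour
KEYFRAMES = [(0.0, 0.0), (5.0, 90.0), (10.0, 180.0)]


def viewport_direction(t: float) -> float:
    """Linearly interpolate the viewport direction for playback time t."""
    times = [k[0] for k in KEYFRAMES]
    i = bisect_right(times, t)
    if i == 0:
        return KEYFRAMES[0][1]
    if i >= len(KEYFRAMES):
        return KEYFRAMES[-1][1]
    (t0, y0), (t1, y1) = KEYFRAMES[i - 1], KEYFRAMES[i]
    frac = (t - t0) / (t1 - t0)
    return y0 + frac * (y1 - y0)


if __name__ == "__main__":
    for t in (0.0, 2.5, 7.5, 12.0):
        print(f"t={t:>4}s -> viewport at {viewport_direction(t):.1f} deg")
```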
-
Publication number: 20170255372
Abstract: Systems, methods, and non-transitory computer-readable media can determine at least one request to access a content item, wherein the content item was composed using a set of camera feeds that capture at least one scene from a set of different positions. A viewport interface can be provided on a display screen of the computing device through which playback of the content item is presented, the viewport interface being configured to allow a user operating the computing device to virtually navigate the at least one scene by changing i) a direction of the viewport interface relative to the scene or ii) a zoom level of the viewport interface. A navigation indicator can be provided in the viewport interface, the navigation indicator being configured to visually indicate any changes to a respective direction and zoom level of the viewport interface during playback of the content item.
Type: Application
Filed: March 7, 2016
Publication date: September 7, 2017
Inventors: Joyce Hsu, Charles Matthew Sutton, Jaime Leonardo Rovira, Anning Hu, Chetan Parag Gupta, Cliff Warren