Patents by Inventor Thomas Ploetz
Thomas Ploetz has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12142299
Abstract: The present invention includes a system for improving the visual quality of recorded videos, especially screen recordings such as webinars. The system automatically detects the boundaries of individual tile elements on a screen recording, then performs facial recognition to identify which tiles include a human face and to track that face even when the tiles are repositioned over the course of the video. The system uses liveness detection to determine which tiles are video tiles and which are screenshare tiles, and then automatically shifts the relative position of the video tiles and the screenshare tiles to improve the aesthetic quality of the recorded videos. The system is further capable of automatically integrating a company's color branding or logos into the recorded videos.
Type: Grant
Filed: October 17, 2022
Date of Patent: November 12, 2024
Assignee: SALESTING, INC.
Inventors: Piyush Saggi, Nitesh Chhajwani, Thomas Ploetz
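The repositioning step this abstract describes can be sketched in a few lines of Python. This is a minimal, hypothetical layout routine: the `Tile` type, the quarter-width sidebar, and the "screenshare in a large left pane, video tiles stacked on the right" arrangement are illustrative assumptions, not details taken from the patent.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Tile:
    id: str
    kind: str  # "video" (live camera feed) or "screenshare", per liveness detection

def relayout(tiles: List[Tile], width: int, height: int) -> Dict[str, Tuple[int, int, int, int]]:
    """Shift the relative positions of video and screenshare tiles:
    screenshare tiles fill a large main pane, video tiles stack in a
    right-hand sidebar. Returns tile id -> (x, y, w, h)."""
    shares = [t for t in tiles if t.kind == "screenshare"]
    videos = [t for t in tiles if t.kind == "video"]
    layout: Dict[str, Tuple[int, int, int, int]] = {}
    sidebar_w = width // 4 if videos else 0  # assumed sidebar proportion
    main_w = width - sidebar_w
    # Screenshare tiles split the main pane vertically.
    if shares:
        h = height // len(shares)
        for i, t in enumerate(shares):
            layout[t.id] = (0, i * h, main_w, h)
    # Video tiles stack top-to-bottom in the sidebar.
    if videos:
        h = height // len(videos)
        for i, t in enumerate(videos):
            layout[t.id] = (main_w, i * h, sidebar_w, h)
    return layout
```

A real implementation would run this per segment of the recording, since the patent tracks faces across tiles that move over the course of the video.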
-
Publication number: 20240281462
Abstract: A system or process may generate a summarization of multimedia content by determining one or more salient moments therefrom. Multimedia content may be received and a plurality of frames and audio, visual, and metadata elements associated therewith are extracted from the multimedia content. A plurality of importance sub-scores may be generated for each frame of the multimedia content, each of the plurality of sub-scores being associated with a particular analytical modality. For each frame, the plurality of importance sub-scores associated therewith may be aggregated into an importance score. The frames may be ranked by importance and a plurality of top-ranked frames are identified and determined to satisfy an importance threshold. The plurality of top-ranked frames are sequentially arranged and merged into a plurality of moment candidates that are ranked for importance. A subset of top-ranked moment candidates are merged into a final summarization of the multimedia content.
Type: Application
Filed: April 30, 2024
Publication date: August 22, 2024
Applicant: SalesTing, Inc.
Inventors: Piyush Saggi, Nitesh Chhajwani, Thomas Ploetz
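The pipeline in this abstract (per-modality sub-scores, aggregation, thresholded ranking, merging adjacent frames into moment candidates) can be sketched as follows. The weighted-mean aggregation rule, the modality names, and the merge gap are illustrative assumptions; the abstract does not specify them.

```python
def aggregate(sub_scores, weights=None):
    """Combine per-modality importance sub-scores into one importance
    score. A plain weighted mean is assumed here."""
    weights = weights or {m: 1.0 for m in sub_scores}
    total = sum(weights.values())
    return sum(sub_scores[m] * weights[m] for m in sub_scores) / total

def top_frames(scores, threshold):
    """Indices of frames whose aggregated score satisfies the
    importance threshold, ranked by importance (highest first)."""
    keep = [i for i, s in enumerate(scores) if s >= threshold]
    return sorted(keep, key=lambda i: scores[i], reverse=True)

def merge_moments(frame_indices, gap=1):
    """Sequentially arrange the selected frames and merge runs that
    are within `gap` frames of each other into (start, end) moment
    candidates."""
    if not frame_indices:
        return []
    ordered = sorted(frame_indices)
    moments = [[ordered[0], ordered[0]]]
    for i in ordered[1:]:
        if i - moments[-1][1] <= gap:
            moments[-1][1] = i  # extend the current moment
        else:
            moments.append([i, i])  # start a new moment
    return [tuple(m) for m in moments]
```

A final summarization would then concatenate the top-ranked `(start, end)` moments back into a short clip.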
-
Publication number: 20240127862
Abstract: The present invention includes a system for improving the visual quality of recorded videos, especially screen recordings such as webinars. The system automatically detects the boundaries of individual tile elements on a screen recording, then performs facial recognition to identify which tiles include a human face and to track that face even when the tiles are repositioned over the course of the video. The system uses liveness detection to determine which tiles are video tiles and which are screenshare tiles, and then automatically shifts the relative position of the video tiles and the screenshare tiles to improve the aesthetic quality of the recorded videos. The system is further capable of automatically integrating a company's color branding or logos into the recorded videos.
Type: Application
Filed: October 17, 2022
Publication date: April 18, 2024
Applicant: SalesTing, Inc.
Inventors: Piyush Saggi, Nitesh Chhajwani, Thomas Ploetz
-
Publication number: 20240070187
Abstract: A system or process may generate a summarization of multimedia content by determining one or more salient moments therefrom. Multimedia content may be received and a plurality of frames and audio, visual, and metadata elements associated therewith are extracted from the multimedia content. A plurality of importance sub-scores may be generated for each frame of the multimedia content, each of the plurality of sub-scores being associated with a particular analytical modality. For each frame, the plurality of importance sub-scores associated therewith may be aggregated into an importance score. The frames may be ranked by importance and a plurality of top-ranked frames are identified and determined to satisfy an importance threshold. The plurality of top-ranked frames are sequentially arranged and merged into a plurality of moment candidates that are ranked for importance. A subset of top-ranked moment candidates are merged into a final summarization of the multimedia content.
Type: Application
Filed: November 6, 2023
Publication date: February 29, 2024
Inventors: Piyush Saggi, Nitesh Chhajwani, Thomas Ploetz
-
Patent number: 11836181
Abstract: A system or process may generate a summarization of multimedia content by determining one or more salient moments therefrom. Multimedia content may be received and a plurality of frames and audio, visual, and metadata elements associated therewith are extracted from the multimedia content. A plurality of importance sub-scores may be generated for each frame of the multimedia content, each of the plurality of sub-scores being associated with a particular analytical modality. For each frame, the plurality of importance sub-scores associated therewith may be aggregated into an importance score. The frames may be ranked by importance and a plurality of top-ranked frames are identified and determined to satisfy an importance threshold. The plurality of top-ranked frames are sequentially arranged and merged into a plurality of moment candidates that are ranked for importance. A subset of top-ranked moment candidates are merged into a final summarization of the multimedia content.
Type: Grant
Filed: May 22, 2020
Date of Patent: December 5, 2023
Assignee: SalesTing, Inc.
Inventors: Piyush Saggi, Nitesh Chhajwani, Thomas Ploetz
-
Patent number: 11762474
Abstract: A method including: receiving sound data captured by a wearable device of a user, the sound data indicative of contact between a first portion of the user wearing the wearable device and a second portion of the user; receiving motion data captured by the wearable device, the motion data indicative of at least a movement of the first portion of the user; and determining, by a processor, based at least in part on the sound data and the motion data, a user input associated with the contact between the first portion and the second portion of the user and with the movement of the first portion of the user.
Type: Grant
Filed: September 6, 2018
Date of Patent: September 19, 2023
Assignee: Georgia Tech Research Corporation
Inventors: Cheng Zhang, Gregory D. Abowd, Omer Inan, Pranav Kundra, Thomas Ploetz, Yiming Pu, Thad Eugene Starner, Anandghan Waghmare, Xiaoxuan Wang, Kenneth A. Cunnefare, Qiuyue Xue
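The determining step fuses the two sensor streams to pick a user input. A toy late-fusion rule is sketched below; the gesture names, feature choices (sound energy, acceleration peak), and numeric ranges are all invented for illustration and are not drawn from the patent, which does not disclose its classification details in the abstract.

```python
def classify_input(sound_energy, accel_peak, gestures):
    """Toy fusion rule: each candidate gesture carries an expected
    sound-energy range and acceleration-peak range; return the first
    gesture whose ranges contain both measurements, else None."""
    for name, (e_lo, e_hi), (a_lo, a_hi) in gestures:
        if e_lo <= sound_energy <= e_hi and a_lo <= accel_peak <= a_hi:
            return name
    return None

# Hypothetical gesture table: (name, sound-energy range, accel-peak range).
GESTURES = [
    ("thumb-tap-index", (0.2, 0.6), (0.5, 1.5)),
    ("thumb-swipe-palm", (0.0, 0.2), (0.1, 0.5)),
]
```

Requiring agreement from both modalities is what distinguishes this from a sound-only or motion-only classifier: a loud noise with no matching hand movement (or vice versa) maps to no input.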
-
Publication number: 20220066544
Abstract: An exemplary virtual IMU extraction system and method are disclosed for a human activity recognition (HAR) or classifier system that can estimate inertial measurement unit (IMU) data for a person in video data extracted from public repositories of weakly labeled video content. The exemplary virtual IMU extraction system and method employ an automated processing pipeline (also referred to herein as “IMUTube”) that integrates computer vision and signal processing operations to convert video data of human activity into virtual streams of IMU data, representing accelerometer, gyroscope, or other inertial measurement estimates (acceleration, inertia, motion, orientation, force, velocity, etc.) at different locations on the body. In other embodiments, the automated processing pipeline can be used to generate high-quality virtual accelerometer data from a camera sensor.
Type: Application
Filed: September 1, 2021
Publication date: March 3, 2022
Inventors: Hyeokhyen Kwon, Gregory D. Abowd, Harish Kashyap Haresamudram, Thomas Ploetz, Eu Gen Catherine Tong, Yan Gao, Nicholas Lane
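The core idea of deriving virtual accelerometer data from video can be sketched with a second finite difference over a tracked keypoint's position. This is a deliberate simplification: the actual IMUTube pipeline also handles 3D pose lifting, sensor orientation, and noise modeling, none of which appear here.

```python
def virtual_accel(positions, fps):
    """Estimate a 1-D virtual accelerometer stream from a tracked
    body-keypoint trajectory: acceleration as the second finite
    difference of position over time, a'' ~ (p[i-1] - 2p[i] + p[i+1]) / dt^2.
    `positions` are keypoint coordinates per frame; `fps` is the
    video frame rate."""
    dt = 1.0 / fps
    accel = []
    for i in range(1, len(positions) - 1):
        a = (positions[i - 1] - 2 * positions[i] + positions[i + 1]) / (dt * dt)
        accel.append(a)
    return accel
```

As a sanity check, a keypoint moving at constant velocity yields zero virtual acceleration, while a quadratic trajectory yields a constant value, which is the behavior the finite-difference stencil should reproduce.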
-
Publication number: 20210109598
Abstract: A method including: receiving sound data captured by a wearable device of a user, the sound data indicative of contact between a first portion of the user wearing the wearable device and a second portion of the user; receiving motion data captured by the wearable device, the motion data indicative of at least a movement of the first portion of the user; and determining, by a processor, based at least in part on the sound data and the motion data, a user input associated with the contact between the first portion and the second portion of the user and with the movement of the first portion of the user.
Type: Application
Filed: September 6, 2018
Publication date: April 15, 2021
Inventors: Cheng Zhang, Gregory D. Abowd, Omer Inan, Pranav Kundra, Thomas Ploetz, Yiming Pu, Thad Eugene Starner, Anandghan Waghmare, Xiaoxuan Wang, Kenneth A. Cunnefare, Qiuyue Xue
-
Publication number: 20200372066
Abstract: A system or process may generate a summarization of multimedia content by determining one or more salient moments therefrom. Multimedia content may be received and a plurality of frames and audio, visual, and metadata elements associated therewith are extracted from the multimedia content. A plurality of importance sub-scores may be generated for each frame of the multimedia content, each of the plurality of sub-scores being associated with a particular analytical modality. For each frame, the plurality of importance sub-scores associated therewith may be aggregated into an importance score. The frames may be ranked by importance and a plurality of top-ranked frames are identified and determined to satisfy an importance threshold. The plurality of top-ranked frames are sequentially arranged and merged into a plurality of moment candidates that are ranked for importance. A subset of top-ranked moment candidates are merged into a final summarization of the multimedia content.
Type: Application
Filed: May 22, 2020
Publication date: November 26, 2020
Inventors: Piyush Saggi, Nitesh Chhajwani, Thomas Ploetz