Patents by Inventor Thomas Ploetz

Thomas Ploetz has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240127862
    Abstract: The present invention includes a system for improving the visual quality of recorded videos, especially screen recordings such as webinars. The system automatically detects the boundaries of individual tile elements in a screen recording, then performs facial recognition to identify which tiles contain a human face and to track that face even when the tiles are repositioned over the course of the video. The system uses liveness detection to determine which tiles are video tiles and which are screenshare tiles, and then automatically shifts the relative positions of the video tiles and the screenshare tiles to improve the aesthetic quality of the recorded video. The system is further capable of automatically integrating a company's color branding or logos into the recorded videos.
    Type: Application
    Filed: October 17, 2022
    Publication date: April 18, 2024
    Applicant: SalesTing, Inc.
    Inventors: Piyush Saggi, Nitesh Chhajwani, Thomas Ploetz
  • Publication number: 20240070187
    Abstract: A system or process may generate a summarization of multimedia content by determining one or more salient moments therefrom. Multimedia content may be received and a plurality of frames and audio, visual, and metadata elements associated therewith are extracted from the multimedia content. A plurality of importance sub-scores may be generated for each frame of the multimedia content, each of the plurality of sub-scores being associated with a particular analytical modality. For each frame, the plurality of importance sub-scores associated therewith may be aggregated into an importance score. The frames may be ranked by importance and a plurality of top-ranked frames are identified and determined to satisfy an importance threshold. The plurality of top-ranked frames are sequentially arranged and merged into a plurality of moment candidates that are ranked for importance. A subset of top-ranked moment candidates are merged into a final summarization of the multimedia content.
    Type: Application
    Filed: November 6, 2023
    Publication date: February 29, 2024
    Inventors: Piyush Saggi, Nitesh Chhajwani, Thomas Ploetz
  • Patent number: 11836181
    Abstract: A system or process may generate a summarization of multimedia content by determining one or more salient moments therefrom. Multimedia content may be received and a plurality of frames and audio, visual, and metadata elements associated therewith are extracted from the multimedia content. A plurality of importance sub-scores may be generated for each frame of the multimedia content, each of the plurality of sub-scores being associated with a particular analytical modality. For each frame, the plurality of importance sub-scores associated therewith may be aggregated into an importance score. The frames may be ranked by importance and a plurality of top-ranked frames are identified and determined to satisfy an importance threshold. The plurality of top-ranked frames are sequentially arranged and merged into a plurality of moment candidates that are ranked for importance. A subset of top-ranked moment candidates are merged into a final summarization of the multimedia content.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: December 5, 2023
    Assignee: SalesTing, Inc.
    Inventors: Piyush Saggi, Nitesh Chhajwani, Thomas Ploetz
  • Patent number: 11762474
    Abstract: A method including receiving sound data captured by a wearable device of the user, the sound data indicative of contact between a first portion of the user wearing the wearable device and a second portion of the user wearing the wearable device; receiving motion data captured by the wearable device of the user, the motion data indicative of at least a movement of the first portion of the user wearing the wearable device; and determining, by a processor, based at least in part on the sound data and the motion data, a user input associated with the contact between a first portion of the user wearing the wearable device and a second portion of the user wearing the wearable device and the movement of the first portion of the user wearing the wearable device.
    Type: Grant
    Filed: September 6, 2018
    Date of Patent: September 19, 2023
    Assignee: Georgia Tech Research Corporation
    Inventors: Cheng Zhang, Gregory D. Abowd, Omer Inan, Pranav Kundra, Thomas Ploetz, Yiming Pu, Thad Eugene Starner, Anandghan Waghmare, Xiaoxuan Wang, Kenneth A. Cunnefare, Qiuyue Xue
  • Publication number: 20220066544
    Abstract: An exemplary virtual IMU extraction system and method are disclosed for a human activity recognition (HAR) classifier system that can estimate inertial measurement unit (IMU) data of a person from video data extracted from public repositories of weakly labeled video content. The exemplary virtual IMU extraction system and method of the HAR classifier system employ an automated processing pipeline (also referred to herein as “IMUTube”) that integrates computer vision and signal processing operations to convert video data of human activity into virtual streams of IMU data representing accelerometer, gyroscope, or other inertial measurement unit estimates that can measure acceleration, inertia, motion, orientation, force, velocity, etc. at different locations on the body. In other embodiments, the automated processing pipeline can be used to generate high-quality virtual accelerometer data from a camera sensor.
    Type: Application
    Filed: September 1, 2021
    Publication date: March 3, 2022
    Inventors: Hyeokhyen Kwon, Gregory D. Abowd, Harish Kashyap Haresamudram, Thomas Ploetz, Eu Gen Catherine Tong, Yan Gao, Nicholas Lane
  • Publication number: 20210109598
    Abstract: A method including receiving sound data captured by a wearable device of the user, the sound data indicative of contact between a first portion of the user wearing the wearable device and a second portion of the user wearing the wearable device; receiving motion data captured by the wearable device of the user, the motion data indicative of at least a movement of the first portion of the user wearing the wearable device; and determining, by a processor, based at least in part on the sound data and the motion data, a user input associated with the contact between a first portion of the user wearing the wearable device and a second portion of the user wearing the wearable device and the movement of the first portion of the user wearing the wearable device.
    Type: Application
    Filed: September 6, 2018
    Publication date: April 15, 2021
    Inventors: Cheng Zhang, Gregory D. Abowd, Omer Inan, Pranav Kundra, Thomas Ploetz, Yiming Pu, Thad Eugene Starner, Anandghan Waghmare, Xiaoxuan Wang, Kenneth A. Cunnefare, Qiuyue Xue
  • Publication number: 20200372066
    Abstract: A system or process may generate a summarization of multimedia content by determining one or more salient moments therefrom. Multimedia content may be received and a plurality of frames and audio, visual, and metadata elements associated therewith are extracted from the multimedia content. A plurality of importance sub-scores may be generated for each frame of the multimedia content, each of the plurality of sub-scores being associated with a particular analytical modality. For each frame, the plurality of importance sub-scores associated therewith may be aggregated into an importance score. The frames may be ranked by importance and a plurality of top-ranked frames are identified and determined to satisfy an importance threshold. The plurality of top-ranked frames are sequentially arranged and merged into a plurality of moment candidates that are ranked for importance. A subset of top-ranked moment candidates are merged into a final summarization of the multimedia content.
    Type: Application
    Filed: May 22, 2020
    Publication date: November 26, 2020
    Inventors: Piyush Saggi, Nitesh Chhajwani, Thomas Ploetz
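
The liveness-detection step described in publication 20240127862 can be illustrated with a minimal sketch: a tile whose pixels change substantially between frames is treated as a live video (webcam) tile, while a near-static tile is treated as a screenshare. The tile representation (lists of per-frame pixel values), function names, and threshold here are assumptions for illustration, not details from the patent.

```python
# Hypothetical sketch of liveness-based tile classification: tiles with
# high frame-to-frame pixel change are "video", near-static tiles are
# "screenshare". Threshold and data layout are illustrative assumptions.

def classify_tiles(tiles, change_threshold=0.05):
    labels = {}
    for name, frames in tiles.items():
        # Mean per-pixel change between consecutive frames of this tile.
        diffs = [
            sum(abs(a - b) for a, b in zip(f0, f1)) / len(f0)
            for f0, f1 in zip(frames, frames[1:])
        ]
        motion = sum(diffs) / len(diffs)
        labels[name] = "video" if motion >= change_threshold else "screenshare"
    return labels
```

In the patented system this decision would feed the layout step, which repositions video tiles relative to screenshare tiles.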
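
The summarization pipeline described in publication 20240070187 and patent 11836181 (per-frame sub-scores aggregated into importance scores, thresholding, merging adjacent survivors into moment candidates, then keeping the top-ranked moments) can be sketched in a few lines. All names, the aggregation by mean, and the chronological re-ordering of the final moments are illustrative assumptions, not the patented implementation.

```python
# Illustrative sketch of the claimed summarization flow: each frame has
# one importance sub-score per analytical modality; these are aggregated,
# thresholded, and adjacent surviving frames are merged into "moments".

def summarize(frame_subscores, threshold, top_k):
    # Aggregate each frame's modality sub-scores into one importance score.
    scores = [sum(subs) / len(subs) for subs in frame_subscores]

    # Keep frames whose importance satisfies the threshold.
    kept = [i for i, s in enumerate(scores) if s >= threshold]

    # Merge sequentially adjacent frames into moment candidates.
    moments = []
    for i in kept:
        if moments and i == moments[-1][-1] + 1:
            moments[-1].append(i)
        else:
            moments.append([i])

    # Rank moment candidates by mean importance and keep the top k,
    # then restore chronological order for the final summarization.
    moments.sort(key=lambda m: sum(scores[i] for i in m) / len(m), reverse=True)
    return sorted(moments[:top_k], key=lambda m: m[0])
```

A run over six frames with one sub-score each, a 0.5 threshold, and top_k=2 yields two moments covering the high-importance frame runs.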
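
The sound-plus-motion fusion in patent 11762474 (sound indicating contact between two body parts wearing/near the device, motion indicating the movement that produced it) can be illustrated with a deliberately simplified rule-based detector. The thresholds, the two gesture labels, and the energy/peak features are invented for this sketch; the patent claims the general determination from both modalities, not this specific rule.

```python
# Simplified illustration (not the patented method) of inferring a user
# input from a wearable's sound and motion streams: sound energy gates
# the contact event, peak acceleration disambiguates the gesture.

def detect_input(sound_samples, accel_samples,
                 sound_threshold=0.5, motion_threshold=1.0):
    # Mean absolute amplitude as a crude contact (skin-on-skin) detector.
    sound_energy = sum(abs(s) for s in sound_samples) / len(sound_samples)
    if sound_energy < sound_threshold:
        return None  # no contact event detected

    # Peak acceleration magnitude distinguishes a light tap from a swipe.
    peak_motion = max(abs(a) for a in accel_samples)
    return "swipe" if peak_motion >= motion_threshold else "tap"
```

Requiring both modalities to agree is the key idea: a loud sound without matching motion (or vice versa) is rejected as a false positive.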
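
The core signal-processing idea behind the "IMUTube" pipeline of publication 20220066544 is that once a body joint's position has been tracked across video frames (e.g., by a pose estimator), virtual accelerometer data is obtainable as the second time-derivative of that trajectory. The finite-difference version below is a one-dimensional simplification for illustration; the actual pipeline involves 3D pose lifting, calibration, and distribution adaptation.

```python
# Sketch: virtual accelerometer signal from a tracked joint trajectory.
# positions: per-frame 1-D joint coordinates; fps: video frame rate.

def virtual_accel(positions, fps):
    dt = 1.0 / fps
    # Central second difference: a[t] = (p[t-1] - 2*p[t] + p[t+1]) / dt^2
    return [(positions[t - 1] - 2 * positions[t] + positions[t + 1]) / dt**2
            for t in range(1, len(positions) - 1)]
```

For a trajectory following p = t² at 1 fps, this recovers the constant acceleration of 2 units/s², as expected from calculus.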