Patents by Inventor Paul Kruszewski

Paul Kruszewski has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240028896
    Abstract: An activity classifier system and method that classifies human activities using 2D skeleton data. The system includes a skeleton preprocessor that transforms the 2D skeleton data into transformed skeleton data, the transformed skeleton data comprising scaled, relative joint positions and relative joint velocities. The system also includes a gesture classifier comprising a first recurrent neural network that receives the transformed skeleton data and is trained to identify the most probable of a plurality of gestures. The system also has an action classifier comprising a second recurrent neural network that receives information from the first recurrent neural network and is trained to identify the most probable of a plurality of actions.
    Type: Application
    Filed: September 28, 2023
    Publication date: January 25, 2024
    Inventors: Colin J. Brown, Andrey Tolstikhin, Thomas D. Peters, Dongwook Cho, Maggie Zhang, Paul A. Kruszewski
  • Patent number: 11783183
    Abstract: An activity classifier system and method that classifies human activities using 2D skeleton data. The system includes a skeleton preprocessor that transforms the 2D skeleton data into transformed skeleton data, the transformed skeleton data comprising scaled, relative joint positions and relative joint velocities. The system also includes a gesture classifier comprising a first recurrent neural network that receives the transformed skeleton data and is trained to identify the most probable of a plurality of gestures. The system also has an action classifier comprising a second recurrent neural network that receives information from the first recurrent neural network and is trained to identify the most probable of a plurality of actions.
    Type: Grant
    Filed: February 11, 2021
    Date of Patent: October 10, 2023
    Assignee: HINGE HEALTH, INC.
    Inventors: Colin J. Brown, Andrey Tolstikhin, Thomas D. Peters, Dongwook Cho, Maggie Zhang, Paul A. Kruszewski
  • Publication number: 20220240638
    Abstract: An activity classifier system and method that classifies human activities using 2D skeleton data. The system includes a skeleton preprocessor that transforms the 2D skeleton data into transformed skeleton data, the transformed skeleton data comprising scaled, relative joint positions and relative joint velocities. The system also includes a gesture classifier comprising a first recurrent neural network that receives the transformed skeleton data and is trained to identify the most probable of a plurality of gestures. The system also has an action classifier comprising a second recurrent neural network that receives information from the first recurrent neural network and is trained to identify the most probable of a plurality of actions.
    Type: Application
    Filed: February 11, 2021
    Publication date: August 4, 2022
    Applicant: WRNCH INC.
    Inventors: Colin J. Brown, Andrey Tolstikhin, Thomas D. Peters, Dongwook Cho, Maggie Zhang, Paul A. Kruszewski
  • Publication number: 20210264144
    Abstract: System and method for extracting human pose information from an image, comprising a feature extractor connected to a database and a convolutional neural network (CNN) with a plurality of CNN layers. Said system/method further comprises at least one of the following modules: a 2D body skeleton detector for determining 2D body skeleton information from the human-related image features; a body silhouette detector for determining body silhouette information from the human-related image features; a hand silhouette detector for determining hand silhouette information from the human-related image features; a hand skeleton detector for determining hand skeleton information from the human-related image features; a 3D body skeleton detector for determining 3D body skeleton information from the human-related image features; and a facial keypoints detector for determining facial keypoints from the human-related image features.
    Type: Application
    Filed: June 27, 2019
    Publication date: August 26, 2021
    Applicant: WRNCH INC.
    Inventors: Dongwook Cho, Maggie Zhang, Paul Kruszewski
  • Publication number: 20210161266
    Abstract: An activity classifier system and method that classifies human activities using 2D skeleton data. The system includes a skeleton preprocessor that transforms the 2D skeleton data into transformed skeleton data, the transformed skeleton data comprising scaled, relative joint positions and relative joint velocities. The system also includes a gesture classifier comprising a first recurrent neural network that receives the transformed skeleton data and is trained to identify the most probable of a plurality of gestures. The system also has an action classifier comprising a second recurrent neural network that receives information from the first recurrent neural network and is trained to identify the most probable of a plurality of actions.
    Type: Application
    Filed: February 11, 2021
    Publication date: June 3, 2021
    Applicant: WRNCH INC.
    Inventors: Colin J. Brown, Andrey Tolstikhin, Thomas D. Peters, Dongwook Cho, Maggie Zhang, Paul A. Kruszewski
  • Patent number: 10949658
    Abstract: This disclosure is directed to an activity classifier system for classifying human activities using 2D skeleton data. The system includes a skeleton preprocessor that transforms the 2D skeleton data into transformed skeleton data, the transformed skeleton data comprising scaled, relative joint positions and relative joint velocities. It also includes a gesture classifier comprising a first recurrent neural network that receives the transformed skeleton data and is trained to identify the most probable of a plurality of gestures. There is also an action classifier comprising a second recurrent neural network that receives information from the first recurrent neural network and is trained to identify the most probable of a plurality of actions.
    Type: Grant
    Filed: February 14, 2019
    Date of Patent: March 16, 2021
    Assignee: WRNCH INC.
    Inventors: Colin J. Brown, Andrey Tolstikhin, Thomas D. Peters, Dongwook Cho, Maggie Zhang, Paul A. Kruszewski
  • Publication number: 20190251340
    Abstract: This disclosure is directed to an activity classifier system for classifying human activities using 2D skeleton data. The system includes a skeleton preprocessor that transforms the 2D skeleton data into transformed skeleton data, the transformed skeleton data comprising scaled, relative joint positions and relative joint velocities. It also includes a gesture classifier comprising a first recurrent neural network that receives the transformed skeleton data and is trained to identify the most probable of a plurality of gestures. There is also an action classifier comprising a second recurrent neural network that receives information from the first recurrent neural network and is trained to identify the most probable of a plurality of actions.
    Type: Application
    Filed: February 14, 2019
    Publication date: August 15, 2019
    Applicant: WRNCH Inc.
    Inventors: Colin J. Brown, Andrey Tolstikhin, Thomas D. Peters, Dongwook Cho, Maggie Zhang, Paul A. Kruszewski
  • Publication number: 20120188232
    Abstract: A method and system for providing a computer-simulated environment for displaying a selected mannequin wearing a combination of selected garments. In one aspect, three-dimensional scenes containing mannequin and garment objects are created within a three-dimensional modeling environment, and a simulation is performed using a cloth simulator within the modeling environment to model the construction, draping, and collision of the garment with the mannequin. Rendering frames corresponding to a variety of garments, mannequins, garment dimensions, garment styles, wearing patterns, viewing angles, and other parameters, are then generated from which images can be rendered and displayed in accordance with user requests.
    Type: Application
    Filed: January 13, 2012
    Publication date: July 26, 2012
    Applicant: MY VIRTUAL MODEL INC.
    Inventors: Carlos SALDANHA, Andrea M. Froncioni, Paul A. Kruszewski, Gregory J. Saumier-Finch, Caroline M. Trudeau, Fadi G. Bachaalani, Nader Morcos, Sylvain B. Cote, Patrick R. Guevin, Jean-Francois B. St. Arnaud, Serge Veillet, Louise L. Guay
  • Publication number: 20110273444
    Abstract: A method and system for providing a computer-simulated environment for displaying a selected mannequin wearing a combination of selected garments. In one aspect, three-dimensional scenes containing mannequin and garment objects are created within a three-dimensional modeling environment, and a simulation is performed using a cloth simulator within the modeling environment to model the construction, draping, and collision of the garment with the mannequin. Rendering frames corresponding to a variety of garments, mannequins, garment dimensions, garment styles, wearing patterns, viewing angles, and other parameters, are then generated from which images can be rendered and displayed in accordance with user requests.
    Type: Application
    Filed: April 29, 2011
    Publication date: November 10, 2011
    Applicant: MY VIRTUAL MODEL INC.
    Inventors: Carlos SALDANHA, Andrea M. Froncioni, Paul A. Kruszewski, Gregory J. Saumier-Finch, Caroline M. Trudeau, Fadi G. Bachaalani, Nader Morcos, Sylvain B. Cote, Patrick R. Guevin, Jean-Francois B. St. Arnaud, Serge Veillet, Louise L. Guay
  • Publication number: 20100302275
    Abstract: A method and system for providing a computer-simulated environment for displaying a selected mannequin wearing a combination of selected garments. In one aspect, three-dimensional scenes containing mannequin and garment objects are created within a three-dimensional modeling environment, and a simulation is performed using a cloth simulator within the modeling environment to model the construction, draping, and collision of the garment with the mannequin. Rendering frames corresponding to a variety of garments, mannequins, garment dimensions, garment styles, wearing patterns, viewing angles, and other parameters, are then generated from which images can be rendered and displayed in accordance with user requests.
    Type: Application
    Filed: December 23, 2009
    Publication date: December 2, 2010
    Applicant: My Virtual Model Inc.
    Inventors: Carlos Saldanha, Andrea M. Froncioni, Paul A. Kruszewski, Gregory J. Saumier-Finch, Caroline M. Trudeau, Fadi G. Bachaalani, Nader Morcos, Sylvain B. Cote, Patrick R. Guevin, Jean-Francois B. St. Arnaud, Serge Veillet, Louise L. Guay
  • Patent number: 7663648
    Abstract: A method and system for providing a computer-simulated environment for displaying a selected mannequin wearing a combination of selected garments. In one aspect, three-dimensional scenes containing mannequin and garment objects are created within a three-dimensional modeling environment, and a simulation is performed using a cloth simulator within the modeling environment to model the construction, draping, and collision of the garment with the mannequin. Rendering frames corresponding to a variety of garments, mannequins, garment dimensions, garment styles, wearing patterns, viewing angles, and other parameters, are then generated from which images can be rendered and displayed in accordance with user requests.
    Type: Grant
    Filed: November 12, 1999
    Date of Patent: February 16, 2010
    Assignee: My Virtual Model Inc.
    Inventors: Carlos Saldanha, Andrea M. Froncioni, Paul A. Kruszewski, Gregory J. Saumier-Finch, Caroline M. Trudeau, Fadi G. Bachaalani, Nader Morcos, Sylvain B. Cote, Patrick R. Guevin, Jean-Francois B. St. Arnaud, Serge Veillet, Louise L. Guay
  • Publication number: 20050071306
    Abstract: The method for on-screen animation includes providing a digital world including image object elements and defining autonomous image entities (AIE). Each AIE may represent a character or an object that is characterized by i) attributes defining the AIE relative to the image object elements of the digital world, and ii) behaviours for modifying some of the attributes. Each AIE is associated with animation clips that allow the AIE to be represented in movement in the digital world. Virtual sensors allow the AIE to gather data information about image object elements or other AIE within the digital world. Decision trees are used for processing the data information, resulting in selecting and triggering one of the animation cycles or selecting a new behaviour. A system embodying the above method is also provided. The method and system for on-screen animation of digital entities according to the present invention can be used for creating animation for movies, for video games, and for simulation.
    Type: Application
    Filed: February 3, 2004
    Publication date: March 31, 2005
    Inventors: Paul Kruszewski, Vincent Stephen-Ong, Muthana Kubba, Fred Dorosh, Julianna Lin, Nicolas Leonard, Greg Labute, Cory Kumm, Richard Norton
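The activity-classifier abstracts above describe a concrete preprocessing step: raw 2D joint coordinates become scaled, relative joint positions plus joint velocities before feeding the recurrent networks. A minimal sketch of that transformation, assuming (hypothetically) that positions are taken relative to a root joint, scale is the neck-to-hip distance, and velocities are frame-to-frame deltas of the relative positions; this is an illustration, not the patented implementation:

```python
# Illustrative sketch (not the patented implementation): transform raw 2D
# skeleton frames into scaled, relative joint positions plus joint
# velocities. The joint indices and the torso-based scale are assumptions.

def preprocess_skeleton(frames, root=0, neck=1, hip=8):
    """frames: list of frames, each a list of (x, y) joint coordinates.
    Returns per-frame feature vectors: scaled relative positions followed
    by joint velocities (relative-position deltas between frames)."""
    features = []
    prev_rel = None
    for joints in frames:
        rx, ry = joints[root]
        # Scale by the neck-to-hip distance so features are size-invariant.
        scale = ((joints[neck][0] - joints[hip][0]) ** 2 +
                 (joints[neck][1] - joints[hip][1]) ** 2) ** 0.5 or 1.0
        rel = [((x - rx) / scale, (y - ry) / scale) for x, y in joints]
        # Velocities of *relative* positions: pure body translation is zero.
        vel = ([(0.0, 0.0)] * len(joints) if prev_rel is None
               else [(cx - px, cy - py)
                     for (cx, cy), (px, py) in zip(rel, prev_rel)])
        prev_rel = rel
        features.append([c for pt in rel for c in pt] +
                        [c for pt in vel for c in pt])
    return features
```

Because positions are root-relative, a skeleton that merely translates across the image produces zero velocity features, which is what makes the downstream gesture/action RNNs position-invariant.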
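The pose-extraction entry (publication 20210264144) describes a shared feature extractor feeding several optional detector modules. A minimal sketch of that shared-backbone, multi-head arrangement, with head names and callables that are assumptions for illustration, not the patented design:

```python
# Illustrative sketch (assumed names, not the patented design): one shared
# feature extractor feeds several independent detector heads, so a single
# forward pass can yield skeletons, silhouettes, and facial keypoints.

class PoseEstimator:
    def __init__(self, extractor, heads):
        self.extractor = extractor  # image -> shared human-related features
        self.heads = heads          # head name -> (features -> output)

    def run(self, image, wanted=None):
        feats = self.extractor(image)   # computed once, reused by every head
        names = wanted or self.heads
        return {name: self.heads[name](feats) for name in names}
```

The design choice the abstract implies is that the expensive CNN feature computation happens once, while each head (2D skeleton, body silhouette, hand skeleton, 3D skeleton, facial keypoints) is a cheap module that can be enabled or omitted per request.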
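The virtual-mannequin abstracts (e.g. patent 7663648) describe pre-generating rendering frames across combinations of garment, mannequin, viewing angle, and other parameters, then serving images on request rather than re-running the cloth simulation. A minimal sketch of that precompute-then-lookup pattern; the function and key names are hypothetical:

```python
# Illustrative sketch (hypothetical names): frames are pre-rendered offline
# for every parameter combination, then looked up at request time instead
# of re-running the cloth simulation per user.

import itertools

def prerender_catalog(mannequins, garments, angles, render):
    """render(mannequin, garment, angle) -> frame. Returns a lookup table
    covering the full cross-product of parameters."""
    return {(m, g, a): render(m, g, a)
            for m, g, a in itertools.product(mannequins, garments, angles)}

def serve_frame(catalog, mannequin, garment, angle):
    # Returns None when the combination was never pre-rendered.
    return catalog.get((mannequin, garment, angle))
```

The trade-off is the classic one: simulation cost is paid once offline over the whole parameter cross-product, so interactive requests reduce to a dictionary lookup.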
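The on-screen animation entry (publication 20050071306) describes autonomous image entities that poll virtual sensors and run the readings through a decision tree to select an animation clip or behaviour. A minimal sketch of that sense-decide loop, with the tuple-based tree encoding and sensor signatures being assumptions for the example:

```python
# Illustrative sketch (assumed structures, not the patented system): an
# autonomous image entity (AIE) polls virtual sensors and walks a small
# decision tree to pick its next animation clip.

def decide(tree, readings):
    """tree: ('test', sensor_key, threshold, low_branch, high_branch) nodes
    with ('clip', name) leaves; readings: sensor name -> value."""
    while tree[0] == "test":
        _, key, threshold, low, high = tree
        tree = low if readings[key] < threshold else high
    return tree[1]

class AIE:
    def __init__(self, sensors, tree):
        self.sensors = sensors  # sensor name -> fn(world) -> value
        self.tree = tree

    def step(self, world):
        # Gather sensor data about the digital world, then pick a clip.
        readings = {name: fn(world) for name, fn in self.sensors.items()}
        return decide(self.tree, readings)
```

For example, a "distance to nearest entity" sensor feeding a two-leaf tree would make the AIE trigger a flee clip when something is close and a wander clip otherwise, one decision per animation frame.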