Patents by Inventor Paul Kruszewski
Paul Kruszewski has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240028896
Abstract: An activity classifier system and method that classifies human activities using 2D skeleton data. The system includes a skeleton preprocessor that transforms the 2D skeleton data into transformed skeleton data, the transformed skeleton data comprising scaled, relative joint positions and relative joint velocities. The system also includes a gesture classifier comprising a first recurrent neural network that receives the transformed skeleton data and is trained to identify the most probable of a plurality of gestures. The system also has an action classifier comprising a second recurrent neural network that receives information from the first recurrent neural network and is trained to identify the most probable of a plurality of actions.
Type: Application
Filed: September 28, 2023
Publication date: January 25, 2024
Inventors: Colin J. Brown, Andrey Tolstikhin, Thomas D. Peters, Dongwook Cho, Maggie Zhang, Paul A. Kruszewski
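The preprocessing step the abstract describes can be sketched in a few lines. This is a minimal, illustrative interpretation, not the patent's actual implementation: joints are assumed to be (x, y) pairs, the root joint index and the max-extent normalization are choices made here for the sketch.

```python
from typing import List, Tuple

Joint = Tuple[float, float]

def preprocess_skeletons(frames: List[List[Joint]], root: int = 0) -> List[List[float]]:
    """Transform raw 2D skeletons into scaled, relative joint positions
    plus relative joint velocities, one feature vector per frame.
    Illustrative normalization only; not taken from the patent text."""
    features = []
    prev_rel = None
    for joints in frames:
        rx, ry = joints[root]
        # Positions relative to a root joint (e.g. the pelvis).
        rel = [(x - rx, y - ry) for x, y in joints]
        # Scale by the skeleton's extent so features are size-invariant.
        scale = max(max(abs(x), abs(y)) for x, y in rel) or 1.0
        rel = [(x / scale, y / scale) for x, y in rel]
        # Velocities: difference from the previous frame (zeros for the first).
        if prev_rel is None:
            vel = [(0.0, 0.0)] * len(rel)
        else:
            vel = [(x - px, y - py) for (x, y), (px, py) in zip(rel, prev_rel)]
        prev_rel = rel
        features.append([v for pair in rel + vel for v in pair])
    return features
```

The resulting per-frame vectors are what the two recurrent networks (gesture, then action) would consume as a sequence.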
-
Patent number: 11783183
Abstract: An activity classifier system and method that classifies human activities using 2D skeleton data. The system includes a skeleton preprocessor that transforms the 2D skeleton data into transformed skeleton data, the transformed skeleton data comprising scaled, relative joint positions and relative joint velocities. The system also includes a gesture classifier comprising a first recurrent neural network that receives the transformed skeleton data and is trained to identify the most probable of a plurality of gestures. The system also has an action classifier comprising a second recurrent neural network that receives information from the first recurrent neural network and is trained to identify the most probable of a plurality of actions.
Type: Grant
Filed: February 11, 2021
Date of Patent: October 10, 2023
Assignee: HINGE HEALTH, INC.
Inventors: Colin J. Brown, Andrey Tolstikhin, Thomas D. Peters, Dongwook Cho, Maggie Zhang, Paul A. Kruszewski
-
Publication number: 20220240638
Abstract: An activity classifier system and method that classifies human activities using 2D skeleton data. The system includes a skeleton preprocessor that transforms the 2D skeleton data into transformed skeleton data, the transformed skeleton data comprising scaled, relative joint positions and relative joint velocities. The system also includes a gesture classifier comprising a first recurrent neural network that receives the transformed skeleton data and is trained to identify the most probable of a plurality of gestures. The system also has an action classifier comprising a second recurrent neural network that receives information from the first recurrent neural network and is trained to identify the most probable of a plurality of actions.
Type: Application
Filed: February 11, 2021
Publication date: August 4, 2022
Applicant: WRNCH INC.
Inventors: Colin J. Brown, Andrey Tolstikhin, Thomas D. Peters, Dongwook Cho, Maggie Zhang, Paul A. Kruszewski
-
Publication number: 20210264144
Abstract: System and method for extracting human pose information from an image, comprising a feature extractor connected to a database and a convolutional neural network (CNN) with a plurality of CNN layers. Said system/method further comprises at least one of the following modules: a 2D body skeleton detector for determining 2D body skeleton information from the human-related image features; a body silhouette detector for determining body silhouette information from the human-related image features; a hand silhouette detector for determining hand silhouette information from the human-related image features; a hand skeleton detector for determining hand skeleton information from the human-related image features; a 3D body skeleton detector for determining 3D body skeleton information from the human-related image features; and a facial keypoints detector for determining facial keypoints from the human-related image features.
Type: Application
Filed: June 27, 2019
Publication date: August 26, 2021
Applicant: WRNCH INC.
Inventors: Dongwook Cho, Maggie Zhang, Paul Kruszewski
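The architecture described here, a shared feature extractor feeding several optional detector modules, is a common multi-head pattern. The sketch below shows the control flow only; the backbone stand-in, head names, and outputs are placeholders invented for illustration, not the patent's networks.

```python
from typing import Callable, Dict, List

def feature_extractor(image: List[List[float]]) -> List[float]:
    # Stand-in for the CNN backbone: pool the image into a fixed-size
    # feature vector. The patent uses a multi-layer CNN here.
    flat = [px for row in image for px in row]
    return [sum(flat) / len(flat), max(flat), min(flat)]

# Each detector head consumes the same shared human-related image features.
detector_heads: Dict[str, Callable[[List[float]], Dict]] = {
    "2d_body_skeleton": lambda f: {"joints_2d": f},
    "body_silhouette": lambda f: {"mask_stats": f},
    "hand_skeleton": lambda f: {"hand_joints": f},
    "3d_body_skeleton": lambda f: {"joints_3d": f},
    "facial_keypoints": lambda f: {"landmarks": f},
}

def extract_pose_information(image: List[List[float]], modules: List[str]) -> Dict[str, Dict]:
    """Run the shared feature extractor once, then only the requested
    detector modules on the resulting features."""
    feats = feature_extractor(image)
    return {name: detector_heads[name](feats) for name in modules}
```

The design point is that the expensive backbone runs once per image, while the cheap task-specific heads are selected per request.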
-
Publication number: 20210161266
Abstract: An activity classifier system and method that classifies human activities using 2D skeleton data. The system includes a skeleton preprocessor that transforms the 2D skeleton data into transformed skeleton data, the transformed skeleton data comprising scaled, relative joint positions and relative joint velocities. The system also includes a gesture classifier comprising a first recurrent neural network that receives the transformed skeleton data and is trained to identify the most probable of a plurality of gestures. The system also has an action classifier comprising a second recurrent neural network that receives information from the first recurrent neural network and is trained to identify the most probable of a plurality of actions.
Type: Application
Filed: February 11, 2021
Publication date: June 3, 2021
Applicant: WRNCH INC.
Inventors: Colin J. Brown, Andrey Tolstikhin, Thomas D. Peters, Dongwook Cho, Maggie Zhang, Paul A. Kruszewski
-
Patent number: 10949658
Abstract: This disclosure is directed to an activity classifier system for classifying human activities using 2D skeleton data. The system includes a skeleton preprocessor that transforms the 2D skeleton data into transformed skeleton data, the transformed skeleton data comprising scaled, relative joint positions and relative joint velocities. It also includes a gesture classifier comprising a first recurrent neural network that receives the transformed skeleton data and is trained to identify the most probable of a plurality of gestures. There is also an action classifier comprising a second recurrent neural network that receives information from the first recurrent neural network and is trained to identify the most probable of a plurality of actions.
Type: Grant
Filed: February 14, 2019
Date of Patent: March 16, 2021
Assignee: WRNCH INC.
Inventors: Colin J. Brown, Andrey Tolstikhin, Thomas D. Peters, Dongwook Cho, Maggie Zhang, Paul A. Kruszewski
-
Publication number: 20190251340
Abstract: This disclosure is directed to an activity classifier system for classifying human activities using 2D skeleton data. The system includes a skeleton preprocessor that transforms the 2D skeleton data into transformed skeleton data, the transformed skeleton data comprising scaled, relative joint positions and relative joint velocities. It also includes a gesture classifier comprising a first recurrent neural network that receives the transformed skeleton data and is trained to identify the most probable of a plurality of gestures. There is also an action classifier comprising a second recurrent neural network that receives information from the first recurrent neural network and is trained to identify the most probable of a plurality of actions.
Type: Application
Filed: February 14, 2019
Publication date: August 15, 2019
Applicant: WRNCH Inc.
Inventors: Colin J. Brown, Andrey Tolstikhin, Thomas D. Peters, Dongwook Cho, Maggie Zhang, Paul A. Kruszewski
-
Publication number: 20120188232
Abstract: A method and system for providing a computer-simulated environment for displaying a selected mannequin wearing a combination of selected garments. In one aspect, three-dimensional scenes containing mannequin and garment objects are created within a three-dimensional modeling environment, and a simulation is performed using a cloth simulator within the modeling environment to model the construction, draping, and collision of the garment with the mannequin. Rendering frames corresponding to a variety of garments, mannequins, garment dimensions, garment styles, wearing patterns, viewing angles, and other parameters are then generated, from which images can be rendered and displayed in accordance with user requests.
Type: Application
Filed: January 13, 2012
Publication date: July 26, 2012
Applicant: My Virtual Model Inc.
Inventors: Carlos Saldanha, Andrea M. Froncioni, Paul A. Kruszewski, Gregory J. Saumier-Finch, Caroline M. Trudeau, Fadi G. Bachaalani, Nader Morcos, Sylvain B. Cote, Patrick R. Guevin, Jean-Francois B. St. Arnaud, Serge Veillet, Louise L. Guay
-
Publication number: 20110273444
Abstract: A method and system for providing a computer-simulated environment for displaying a selected mannequin wearing a combination of selected garments. In one aspect, three-dimensional scenes containing mannequin and garment objects are created within a three-dimensional modeling environment, and a simulation is performed using a cloth simulator within the modeling environment to model the construction, draping, and collision of the garment with the mannequin. Rendering frames corresponding to a variety of garments, mannequins, garment dimensions, garment styles, wearing patterns, viewing angles, and other parameters are then generated, from which images can be rendered and displayed in accordance with user requests.
Type: Application
Filed: April 29, 2011
Publication date: November 10, 2011
Applicant: My Virtual Model Inc.
Inventors: Carlos Saldanha, Andrea M. Froncioni, Paul A. Kruszewski, Gregory J. Saumier-Finch, Caroline M. Trudeau, Fadi G. Bachaalani, Nader Morcos, Sylvain B. Cote, Patrick R. Guevin, Jean-Francois B. St. Arnaud, Serge Veillet, Louise L. Guay
-
Publication number: 20100302275
Abstract: A method and system for providing a computer-simulated environment for displaying a selected mannequin wearing a combination of selected garments. In one aspect, three-dimensional scenes containing mannequin and garment objects are created within a three-dimensional modeling environment, and a simulation is performed using a cloth simulator within the modeling environment to model the construction, draping, and collision of the garment with the mannequin. Rendering frames corresponding to a variety of garments, mannequins, garment dimensions, garment styles, wearing patterns, viewing angles, and other parameters are then generated, from which images can be rendered and displayed in accordance with user requests.
Type: Application
Filed: December 23, 2009
Publication date: December 2, 2010
Applicant: My Virtual Model Inc.
Inventors: Carlos Saldanha, Andrea M. Froncioni, Paul A. Kruszewski, Gregory J. Saumier-Finch, Caroline M. Trudeau, Fadi G. Bachaalani, Nader Morcos, Sylvain B. Cote, Patrick R. Guevin, Jean-Francois B. St. Arnaud, Serge Veillet, Louise L. Guay
-
Patent number: 7663648
Abstract: A method and system for providing a computer-simulated environment for displaying a selected mannequin wearing a combination of selected garments. In one aspect, three-dimensional scenes containing mannequin and garment objects are created within a three-dimensional modeling environment, and a simulation is performed using a cloth simulator within the modeling environment to model the construction, draping, and collision of the garment with the mannequin. Rendering frames corresponding to a variety of garments, mannequins, garment dimensions, garment styles, wearing patterns, viewing angles, and other parameters are then generated, from which images can be rendered and displayed in accordance with user requests.
Type: Grant
Filed: November 12, 1999
Date of Patent: February 16, 2010
Assignee: My Virtual Model Inc.
Inventors: Carlos Saldanha, Andrea M. Froncioni, Paul A. Kruszewski, Gregory J. Saumier-Finch, Caroline M. Trudeau, Fadi G. Bachaalani, Nader Morcos, Sylvain B. Cote, Patrick R. Guevin, Jean-Francois B. St. Arnaud, Serge Veillet, Louise L. Guay
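The abstract's key idea is that frames are pre-rendered for combinations of parameters (garment, mannequin, size, viewing angle, and so on), so a display request becomes a parameterized lookup rather than a live cloth simulation. A minimal sketch of that lookup, with key fields and the nearest-angle snap invented here for illustration:

```python
from typing import Dict, Tuple

# Key: (mannequin_id, garment_id, size, view_angle_degrees) - illustrative fields.
RenderKey = Tuple[str, str, str, int]

class RenderFrameStore:
    """Index pre-rendered frames by scene parameters so serving a user
    request is a lookup, not a simulation. Sketch only, not the patent's
    actual data model."""

    def __init__(self) -> None:
        self._frames: Dict[RenderKey, bytes] = {}

    def add_frame(self, key: RenderKey, image: bytes) -> None:
        self._frames[key] = image

    def get_frame(self, mannequin: str, garment: str, size: str, angle: int) -> bytes:
        # Snap the requested angle to the nearest pre-rendered view.
        angles = sorted({k[3] for k in self._frames
                         if k[:3] == (mannequin, garment, size)})
        if not angles:
            raise KeyError("no frames rendered for this combination")
        nearest = min(angles, key=lambda a: abs(a - angle))
        return self._frames[(mannequin, garment, size, nearest)]
```

The expensive work (cloth simulation and rendering) happens once per parameter combination at build time; user-facing requests only hit the index.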
-
Publication number: 20050071306
Abstract: The method for on-screen animation includes providing a digital world including image object elements and defining autonomous image entities (AIE). Each AIE may represent a character or an object and is characterized by i) attributes defining the AIE relative to the image object elements of the digital world, and ii) behaviours for modifying some of the attributes. Each AIE is associated with animation clips that allow representing the AIE in movement in the digital world. Virtual sensors allow the AIE to gather data about image object elements or other AIEs within the digital world. Decision trees process this data, resulting in selecting and triggering one of the animation clips or selecting a new behaviour. A system embodying the above method is also provided. The method and system for on-screen animation of digital entities according to the present invention can be used for creating animation for movies, video games, and simulation.
Type: Application
Filed: February 3, 2004
Publication date: March 31, 2005
Inventors: Paul Kruszewski, Vincent Stephen-Ong, Muthana Kubba, Fred Dorosh, Julianna Lin, Nicolas Leonard, Greg Labute, Cory Kumm, Richard Norton
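The sense/decide loop the abstract describes (virtual sensors gather world data, a decision tree selects an animation clip or a new behaviour) can be sketched as below. The sensor, tree thresholds, and behaviour/clip names are all invented here for illustration; the patent does not specify them.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class AutonomousImageEntity:
    """Illustrative AIE: attributes (position, behaviour, clip), a virtual
    sensor, and a hand-written decision tree over the sensed data."""
    position: float
    behaviour: str = "wander"
    clip: str = "idle"

    def sense(self, world: Dict[str, float]) -> Dict[str, float]:
        # Virtual sensor: distance to the nearest other entity in the world.
        distances = [abs(self.position - p) for p in world.values()]
        return {"nearest": min(distances) if distances else float("inf")}

    def decide(self, percepts: Dict[str, float]) -> None:
        # Decision tree: either trigger an animation clip or switch behaviour.
        if percepts["nearest"] < 1.0:
            self.behaviour = "avoid"
            self.clip = "step_back"
        elif percepts["nearest"] < 5.0:
            self.clip = "walk"
        else:
            self.behaviour = "wander"
            self.clip = "idle"
```

Running `decide(sense(world))` each frame gives every entity autonomous, data-driven clip selection without per-entity scripting, which is the point of the approach for crowd scenes in films and games.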