Patents by Inventor Yu Hen Hu

Yu Hen Hu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11587361
    Abstract: A monitoring or tracking system may include an input port and a controller in communication with the input port. The input port may receive data from a data recorder. The data recorder is optionally part of the monitoring system and in some cases includes at least part of the controller. The controller may be configured to receive data via the input port, determine values for one or more dimensions of a subject performing a task based on the data, and determine a location of a hand of the subject based on the data. Further, the controller may be configured to determine one or both of trunk angle and trunk kinematics based on the received data. The controller may output assessment information via the output port. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: November 6, 2020
    Date of Patent: February 21, 2023
    Assignee: Wisconsin Alumni Research Foundation
    Inventors: Robert G. Radwin, Runyu L. Greene, Xuan Wang, Yu Hen Hu, Nicholas Difranco
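    Illustrative sketch (editorial addition, not the patented method): the abstract above describes deriving trunk angle and trunk kinematics from recorded data. A minimal version of that computation, assuming the recorded data have already been reduced to per-frame shoulder and hip keypoints (keypoint format, sampling rate, and function names are assumptions):

        import numpy as np

        def trunk_angle_deg(shoulder_xy, hip_xy):
            """Angle of the hip-to-shoulder segment from vertical, per frame, in degrees."""
            d = shoulder_xy - hip_xy                      # (T, 2) trunk vectors over T frames
            return np.degrees(np.arctan2(np.abs(d[:, 0]), np.abs(d[:, 1])))

        def trunk_kinematics(angle_deg, fps):
            """Trunk angular velocity (deg/s) and acceleration (deg/s^2) by finite differences."""
            vel = np.gradient(angle_deg) * fps
            acc = np.gradient(vel) * fps
            return vel, acc

        # Synthetic example: a subject bends forward to about 60 degrees and returns.
        T, fps = 120, 30.0
        t = np.linspace(0.0, T / fps, T)
        theta = np.radians(60.0) * np.sin(np.pi * t / t[-1]) ** 2
        hip = np.zeros((T, 2))
        shoulder = hip + 0.5 * np.stack([np.sin(theta), np.cos(theta)], axis=1)
        angle = trunk_angle_deg(shoulder, hip)
        vel, acc = trunk_kinematics(angle, fps)
        print(f"peak trunk angle: {angle.max():.1f} deg, peak angular velocity: {np.abs(vel).max():.1f} deg/s")
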
  • Patent number: 11450148
    Abstract: A monitoring system or tracking system may include an input port and a controller in communication with the input port. The input port may receive video from one or more image capturing devices. The image capturing device is optionally part of the monitoring system and in some cases includes at least part of the controller. The controller may be configured to receive video via the input port and identify a subject within frames of the video relative to a background within the frames. Further, the controller may be configured to identify dimensions, posture, hand location, feet location, twisting position/angle, and/or other parameters of the identified subject in frames of the video and determine when the subject is performing a task. Based on the dimensions and/or other parameters identified or extracted from the video during the task, the controller may output assessment information via the output port. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: May 15, 2020
    Date of Patent: September 20, 2022
    Assignee: Wisconsin Alumni Research Foundation
    Inventors: Robert Radwin, Xuan Wang, Yu Hen Hu
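    Illustrative sketch (editorial addition, not the patented method): the abstract above describes identifying a subject relative to the background in video frames and extracting dimensions and other parameters. A minimal stand-in using frame differencing against a static background; the frame layout, threshold, and function names are assumptions:

        import numpy as np

        def subject_mask(frame, background, thresh=25.0):
            """Foreground mask by absolute differencing against a background model."""
            diff = np.abs(frame.astype(np.float32) - background.astype(np.float32))
            return diff > thresh

        def subject_dimensions(mask):
            """Bounding-box height/width (pixels) and centroid of the foreground region."""
            ys, xs = np.nonzero(mask)
            if ys.size == 0:
                return None
            return {"height_px": int(ys.max() - ys.min() + 1),
                    "width_px": int(xs.max() - xs.min() + 1),
                    "centroid": (float(ys.mean()), float(xs.mean()))}

        # Synthetic example: a bright "subject" standing against a dark background.
        background = np.zeros((240, 320), dtype=np.uint8)
        frame = background.copy()
        frame[60:200, 140:180] = 200
        print(subject_dimensions(subject_mask(frame, background)))
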
  • Publication number: 20220110548
    Abstract: A monitoring or tracking system may include an input port and a controller in communication with the input port. The input port may receive data from a data recorder. The data recorder is optionally part of the monitoring system and in some cases includes at least part of the controller. The controller may be configured to receive data via the input port, the data being related to a subject lifting an object. Using the data, the controller may locate body parts of the subject while the subject is lifting the object and monitor body movements of the subject during the lift. Further, the controller may be configured to determine a value related to a load of the object based on the monitored body movements. The controller may output the determined value and/or lift assessment information for the subject via the output port. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: October 8, 2021
    Publication date: April 14, 2022
    Applicant: Wisconsin Alumni Research Foundation
    Inventors: Robert Radwin, Yin Li, Runyu Greene, Fangzhou Mu, Yu Hen Hu
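    Illustrative sketch (editorial addition; the feature set, model form, and coefficients below are made-up placeholders, not the method claimed in the application): one way to map monitored body movement during a lift to a load-related value is to compute kinematic features of a tracked wrist trajectory and feed them to a simple regression model:

        import numpy as np

        def lift_features(wrist_y, fps):
            """Kinematic features of the vertical wrist trajectory during a lift."""
            vel = np.gradient(wrist_y) * fps
            acc = np.gradient(vel) * fps
            return np.array([np.ptp(wrist_y),        # vertical range of motion (m)
                             np.abs(vel).max(),      # peak lifting speed (m/s)
                             np.abs(acc).max()])     # peak acceleration (m/s^2)

        def estimate_load(features, weights, bias):
            """Placeholder linear mapping from movement features to a load-related value."""
            return float(features @ weights + bias)

        # Synthetic lift: the wrist rises about 0.6 m over 2 seconds at 30 fps.
        fps = 30.0
        t = np.linspace(0.0, 2.0, int(2 * fps))
        wrist_y = 0.3 * (1.0 - np.cos(np.pi * t / 2.0))
        weights, bias = np.array([5.0, 2.0, 0.5]), 1.0   # illustrative coefficients only
        print(f"load-related value: {estimate_load(lift_features(wrist_y, fps), weights, bias):.2f}")
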
  • Publication number: 20210142048
    Abstract: A monitoring or tracking system may include an input port and a controller in communication with the input port. The input port may receive data from a data recorder. The data recorder is optionally part of the monitoring system and in some cases includes at least part of the controller. The controller may be configured to receive data via the input port, determine values for one or more dimensions of a subject performing a task based on the data, and determine a location of a hand of the subject based on the data. Further, the controller may be configured to determine one or both of trunk angle and trunk kinematics based on the received data. The controller may output assessment information via the output port.
    Type: Application
    Filed: November 6, 2020
    Publication date: May 13, 2021
    Applicant: Wisconsin Alumni Research Foundation
    Inventors: Robert G. Radwin, Runyu L. Greene, Xuan Wang, Yu Hen Hu, Nicholas DiFranco
  • Patent number: 10810414
    Abstract: A monitoring system or tracking system may include an input port and a controller in communication with the input port. The input port may receive video from an image capturing device. The image capturing device is optionally part of the monitoring system and in some cases includes at least part of the controller. The controller may be configured to receive video via the input port and identify a subject within frames of the video relative to a background within the frames. Further, the controller may be configured to identify dimensions, posture, hand location, feet location, and/or other parameters of the identified subject in frames of the video and determine when the subject is performing a task. Based on the dimensions and/or other parameters identified or extracted from the video during the task, the controller may output assessment information via the output port.
    Type: Grant
    Filed: July 18, 2018
    Date of Patent: October 20, 2020
    Assignee: Wisconsin Alumni Research Foundation
    Inventors: Robert Radwin, Xuan Wang, Yu Hen Hu, Nicholas Difranco
  • Publication number: 20200279102
    Abstract: A monitoring system or tracking system may include an input port and a controller in communication with the input port. The input port may receive video from one or more image capturing devices. The image capturing device is optionally part of the monitoring system and in some cases includes at least part of the controller. The controller may be configured to receive video via the input port and identify a subject within frames of the video relative to a background within the frames. Further, the controller may be configured to identify dimensions, posture, hand location, feet location, twisting position/angle, and/or other parameters of the identified subject in frames of the video and determine when the subject is performing a task. Based on the dimensions and/or other parameters identified or extracted from the video during the task, the controller may output assessment information via the output port.
    Type: Application
    Filed: May 15, 2020
    Publication date: September 3, 2020
    Applicant: Wisconsin Alumni Research Foundation
    Inventors: Robert Radwin, Xuan Wang, Yu Hen Hu
  • Patent number: 10482613
    Abstract: A monitoring system may include an input port, an output port, and a controller in communication with the input port and the output port. The input port may receive video from an image capturing device. The image capturing device is optionally part of the monitoring system and in some cases includes at least part of the controller. The controller may be configured to receive video via the input port and identify a subject within frames of the video relative to a background within the frames. Further, the controller may be configured to identify dimensions and/or other parameters of the identified subject in frames of the video and determine when the subject is performing a predetermined task. Based on the dimensions and/or other parameters identified or extracted from the video during the predetermined task, the controller may output assessment information via the output port.
    Type: Grant
    Filed: October 6, 2017
    Date of Patent: November 19, 2019
    Assignee: Wisconsin Alumni Research Foundation
    Inventors: Robert Radwin, Xuan Wang, Yu Hen Hu, Nicholas DiFranco
  • Publication number: 20190012794
    Abstract: A monitoring system may include an input port, an output port, and a controller in communication with the input port and the output port. The input port may receive video from an image capturing device. The image capturing device is optionally part of the monitoring system and in some cases includes at least part of the controller. The controller may be configured to receive video via the input port and identify a subject within frames of the video relative to a background within the frames. Further, the controller may be configured to identify dimensions and/or other parameters of the identified subject in frames of the video and determine when the subject is performing a predetermined task. Based on the dimensions and/or other parameters identified or extracted from the video during the predetermined task, the controller may output assessment information via the output port.
    Type: Application
    Filed: October 6, 2017
    Publication date: January 10, 2019
    Applicant: Wisconsin Alumni Research Foundation
    Inventors: Robert Radwin, Xuan Wang, Yu Hen Hu, Nicholas DiFranco
  • Publication number: 20190012531
    Abstract: A monitoring system or tracking system may include an input port and a controller in communication with the input port. The input port may receive video from an image capturing device. The image capturing device is optionally part of the monitoring system and in some cases includes at least part of the controller. The controller may be configured to receive video via the input port and identify a subject within frames of the video relative to a background within the frames. Further, the controller may be configured to identify dimensions, posture, hand location, feet location, and/or other parameters of the identified subject in frames of the video and determine when the subject is performing a task. Based on the dimensions and/or other parameters identified or extracted from the video during the task, the controller may output assessment information via the output port.
    Type: Application
    Filed: July 18, 2018
    Publication date: January 10, 2019
    Applicant: Wisconsin Alumni Research Foundation
    Inventors: Robert Radwin, Xuan Wang, Yu Hen Hu, Nicholas DiFranco
  • Patent number: 9566004
    Abstract: Provided herein are systems and methods that use a video content analysis algorithm to measure and quantify repetitive motion activity of a designated body part, including velocity, acceleration, frequency, and duty cycle, without applying sensors or other instrumentation to the body, for the purpose of preventing repetitive motion injuries. In some embodiments, the video-based direct exposure assessment system uses marker-less video and a video content analysis algorithm. The video content analysis algorithm is able to recognize and identify the pattern of repetitive motion through a process known as cyclic motion analysis. Determination of the cycle pattern provides the parameters needed to determine the body part's activity level. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: November 20, 2012
    Date of Patent: February 14, 2017
    Assignee: KINEVID, LLC.
    Inventors: Robert G. Radwin, Yu Hen Hu, Chia-Hsiung Chen, Thomas Y. Yen
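    Illustrative sketch (editorial addition, not the patented algorithm): the abstract above describes recognizing a repetitive motion pattern and deriving frequency and duty cycle without body-worn sensors. A minimal version assuming a hand position signal has already been tracked from marker-less video (signal source, thresholds, and function names are assumptions):

        import numpy as np

        def cycle_frequency_hz(signal, fps):
            """Dominant repetition frequency from the autocorrelation of the zero-mean signal."""
            x = signal - signal.mean()
            ac = np.correlate(x, x, mode="full")[x.size - 1:]
            ac /= ac[0]
            lags = np.arange(1, ac.size - 1)
            peaks = lags[(ac[1:-1] > ac[:-2]) & (ac[1:-1] > ac[2:])]  # local maxima after lag 0
            return fps / peaks[0] if peaks.size else 0.0

        def duty_cycle(speed, active_thresh):
            """Fraction of time the body part moves faster than an exertion threshold."""
            return float(np.mean(speed > active_thresh))

        # Synthetic example: a 1.5 Hz back-and-forth hand motion sampled at 30 fps.
        fps = 30.0
        t = np.arange(0.0, 10.0, 1.0 / fps)
        pos = np.sin(2.0 * np.pi * 1.5 * t)
        speed = np.abs(np.gradient(pos) * fps)
        print(f"frequency ~ {cycle_frequency_hz(pos, fps):.2f} Hz, duty cycle = {duty_cycle(speed, 4.0):.2f}")
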
  • Patent number: 7646918
    Abstract: An image is analyzed to locate an object appearing in the image. A contour of that object is extracted from the image and normalized. Based on the normalized contour, one or more summation invariant values are determined and compared to templates comprising one or more summation invariants for each of one or more target objects. When the summation invariants for the extracted object sufficiently match the summation invariants determined from an image of a target object, the extracted object is recognized as that target object. The summation invariants can be semi-local summation invariants determined for each point along the normalized contour, based on a number of points neighboring that point on the normalized contour. The semi-local summation invariants are determined as a function of the x and y coordinates of those points. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: January 10, 2006
    Date of Patent: January 12, 2010
    Assignee: Wisconsin Alumni Research Foundation
    Inventors: Wei-Yang Lin, Nigel Boston, Yu Hen Hu
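    Illustrative sketch (editorial addition): the pipeline in the abstract above is contour normalization, per-point semi-local invariant computation, and template matching. The descriptor below is a simple neighborhood sum used as a stand-in for the summation invariants defined in the patent; contour format, window size, and function names are assumptions:

        import numpy as np

        def normalize_contour(points, n_samples=64):
            """Center, scale, and resample a closed contour to a fixed number of points."""
            p = points - points.mean(axis=0)
            p = p / np.abs(p).max()
            idx = np.linspace(0, len(p) - 1, n_samples).astype(int)
            return p[idx]

        def semi_local_descriptor(contour, window=5):
            """Per-point sums of x and y over a neighborhood along the contour (stand-in invariant)."""
            n = len(contour)
            feats = np.empty((n, 2))
            for i in range(n):
                feats[i] = contour[np.arange(i - window, i + window + 1) % n].sum(axis=0)
            return feats

        def match_score(desc_a, desc_b):
            """Lower is better: mean descriptor distance between two contours."""
            return float(np.linalg.norm(desc_a - desc_b, axis=1).mean())

        # Example: a circle matches itself far better than it matches a square.
        t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
        circle = np.stack([np.cos(t), np.sin(t)], axis=1)
        square = np.stack([np.clip(1.5 * np.cos(t), -1, 1), np.clip(1.5 * np.sin(t), -1, 1)], axis=1)
        d_circle = semi_local_descriptor(normalize_contour(circle))
        print("circle vs circle:", match_score(d_circle, semi_local_descriptor(normalize_contour(circle))))
        print("circle vs square:", match_score(d_circle, semi_local_descriptor(normalize_contour(square))))
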
  • Publication number: 20070071325
    Abstract: An image is analyzed to locate an object appearing in the image. A contour of that object is extracted from the image and normalized. Based on the normalized contour, one or more summation invariant values are determined and compared to templates comprising one or more summation invariants for each of one or more target objects. When the summation invariants for the extracted object sufficiently match the summation invariants determined from an image of a target object, the extracted object is recognized as that target object. The summation invariants can be semi-local summation invariants determined for each point along the normalized contour, based on a number of points neighboring that point on the normalized contour. The semi-local summation invariants are determined as a function of the x and y coordinates of those points.
    Type: Application
    Filed: January 10, 2006
    Publication date: March 29, 2007
    Inventors: Wei-Yang Lin, Nigel Boston, Yu Hen Hu
  • Patent number: 6668097
    Abstract: An apparatus for post-processing of decompressed images having ringing artifacts identifies edges of the image that may generate such artifacts and defines zones outside of, but conforming to, those edges in which ringing artifacts are to be expected. These zones may be modified according to a model of the human visual system and then filtered so as to reduce ringing artifacts. The filtered zones are spliced back into the image, reducing ringing artifacts while minimizing unnecessary modification of the image. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: May 8, 2000
    Date of Patent: December 23, 2003
    Assignee: Wisconsin Alumni Research Foundation
    Inventors: Yu Hen Hu, Truong Q. Nguyen, Seyfullah H. Oguz
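    Illustrative sketch (editorial addition, not the patented apparatus): the processing order in the abstract above is edge detection, definition of zones next to (but not on) the edges, filtering inside those zones, and splicing the result back. Thresholds, zone width, and filter choice below are assumptions; the human-visual-system model is omitted:

        import numpy as np

        def edge_map(img, thresh=40.0):
            """Mark pixels whose gradient magnitude indicates a strong edge."""
            gy, gx = np.gradient(img.astype(np.float32))
            return np.hypot(gx, gy) > thresh

        def dilate(mask, radius):
            """Grow a boolean mask by a square neighborhood (simple morphological dilation)."""
            out = mask.copy()
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
            return out

        def box_filter(img, radius=1):
            """Mean filter over a (2*radius+1)^2 window, with wrap-around borders."""
            acc = np.zeros(img.shape, dtype=np.float32)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    acc += np.roll(np.roll(img.astype(np.float32), dy, axis=0), dx, axis=1)
            return acc / (2 * radius + 1) ** 2

        def dering(img, zone_radius=6):
            """Filter only the zone near (but not on) strong edges, then splice it back."""
            edges = edge_map(img)
            zone = dilate(edges, zone_radius) & ~edges
            out = img.astype(np.float32).copy()
            out[zone] = box_filter(img)[zone]
            return out

        # Example: a step edge with a mock oscillation ("ringing") just past it.
        img = np.zeros((64, 64), dtype=np.float32)
        img[:, 32:] = 200.0
        img[:, 33:38] += 30.0 * np.cos(np.arange(5) * np.pi)
        before = np.abs(img[:, 33:38] - 200.0).max()
        after = np.abs(dering(img)[:, 33:38] - 200.0).max()
        print(f"max ringing deviation: before={before:.0f}, after={after:.0f}")
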
  • Patent number: 6304678
    Abstract: A technique for post-processing decoded compressed images to reduce decoding-related artifacts employs a maximum likelihood estimation of an original image f. The decoded image is modeled as a montage of “flat surfaces” of different intensities, where the number of flat surfaces and their intensities are generally different in different regions of the decoded image. The intensity of each pixel is conditionally adjusted to that of a corresponding flat surface in a window region surrounding the pixel. In a general algorithm, the flat surface model is fitted to the observed image by estimating the model parameters using the “k-means” algorithm and a hierarchical clustering algorithm. A cluster similarity measure (CSM) is used to determine the number of intensity clusters, and hence flat surfaces, in the model of a window region surrounding a pixel of interest. The pixel intensity is adjusted to an estimated value which is the mean intensity of the cluster in which the pixel falls. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: May 12, 2000
    Date of Patent: October 16, 2001
    Assignee: The Trustees of Boston University
    Inventors: Seungjoon Yang, Yu Hen Hu, Truong Q. Nguyen, Damon L. Tull
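    Illustrative sketch (editorial addition): the abstract above fits a "flat surface" model in a window around each pixel and snaps the pixel to its cluster's mean intensity. The version below uses a fixed two-cluster 1-D k-means with a simple merge threshold standing in for the cluster similarity measure; window size and thresholds are assumptions:

        import numpy as np

        def kmeans_1d(values, k=2, iters=10):
            """Plain 1-D k-means; returns cluster centers and per-value labels."""
            centers = np.linspace(values.min(), values.max(), k)
            labels = np.zeros(values.size, dtype=int)
            for _ in range(iters):
                labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
                for j in range(k):
                    if np.any(labels == j):
                        centers[j] = values[labels == j].mean()
            return centers, labels

        def flat_surface_filter(img, radius=2, merge_gap=25.0):
            """Snap each pixel to the mean of its intensity cluster in a local window.
            If the two cluster centers are closer than merge_gap, the window is treated
            as a single flat surface (a crude stand-in for the CSM in the abstract)."""
            h, w = img.shape
            out = img.astype(np.float32).copy()
            for y in range(radius, h - radius):
                for x in range(radius, w - radius):
                    win = img[y - radius:y + radius + 1, x - radius:x + radius + 1].astype(np.float32).ravel()
                    centers, labels = kmeans_1d(win)
                    if abs(centers[1] - centers[0]) < merge_gap:
                        out[y, x] = win.mean()                      # one flat surface
                    else:
                        out[y, x] = centers[labels[win.size // 2]]  # cluster of the center pixel
            return out

        # Example: a noisy two-level step image (two flat surfaces).
        rng = np.random.default_rng(0)
        clean = np.zeros((32, 32), dtype=np.float32)
        clean[:, 16:] = 100.0
        noisy = clean + rng.normal(0.0, 5.0, clean.shape)
        restored = flat_surface_filter(noisy)
        inner = (slice(2, -2), slice(2, -2))
        print(f"RMS error before: {np.sqrt(np.mean((noisy - clean)[inner] ** 2)):.2f}, "
              f"after: {np.sqrt(np.mean((restored - clean)[inner] ** 2)):.2f}")
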
  • Patent number: 6101279
    Abstract: A data compression technique combines the benefits of block-wise processing, which allows reduced buffer memory usage and improved speed through parallel processing, with tree-type compression normally associated with wavelet-type compression techniques. Block artifacts in the reconstructed data at the partitions between blocks are minimized by the use of lapped transforms. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: June 5, 1998
    Date of Patent: August 8, 2000
    Assignee: Wisconsin Alumni Research Foundation
    Inventors: Truong Q. Nguyen, Trac D. Tran, Yu Hen Hu
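    Illustrative sketch (editorial addition): the abstract above combines block-wise processing with lapping across block boundaries so that partition artifacts are reduced. The toy codec below uses 50%-overlapped, Hann-windowed blocks with an ordinary DCT as a simple stand-in for the lapped transforms named in the patent; block size, window, and quantization step are assumptions:

        import numpy as np

        def dct_matrix(n):
            """Orthonormal DCT-II basis as an n x n matrix."""
            k = np.arange(n)[:, None]
            m = np.arange(n)[None, :]
            c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
            c[0] /= np.sqrt(2.0)
            return c

        def lapped_codec(signal, block=16, step=8.0):
            """Overlap (50%) + window + DCT + uniform quantization + inverse + overlap-add."""
            hop = block // 2
            window = 0.5 - 0.5 * np.cos(2.0 * np.pi * np.arange(block) / block)  # sums to 1 at 50% overlap
            C = dct_matrix(block)
            x = np.concatenate([np.zeros(hop), signal, np.zeros(block)])          # pad the edges
            out = np.zeros_like(x)
            for start in range(0, len(x) - block + 1, hop):
                seg = x[start:start + block] * window
                coeffs = np.round(C @ seg / step) * step                          # coarse quantization
                out[start:start + block] += C.T @ coeffs                          # inverse DCT + overlap-add
            return out[hop:hop + len(signal)]

        # Example: a smooth ramp survives block-wise coding without hard block seams,
        # because neighboring blocks are cross-faded by the overlapping windows.
        signal = np.linspace(0.0, 100.0, 256)
        recon = lapped_codec(signal)
        print(f"max reconstruction error: {np.abs(recon - signal).max():.2f}")
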