Patents by Inventor Yu Teng Tung

Yu Teng Tung has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11690599
    Abstract: An object volume acquisition method of an ultrasonic image, for a probe of an ultrasonic system, is disclosed.
    Type: Grant
    Filed: April 29, 2021
    Date of Patent: July 4, 2023
    Assignee: Qisda Corporation
    Inventors: Wei-Ting Xiao, Yu-Teng Tung
  • Patent number: 11284855
    Abstract: An ultrasound needle positioning system includes an ultrasound probe and a processor. The ultrasound probe is used to capture a plurality of sets of needle insertion images. Each set of needle insertion images includes a plurality of needle insertion images corresponding to a needle body at a predetermined insertion angle. The processor is coupled to the ultrasound probe, and is used to train a first convolutional neural network according to at least one set of needle insertion images in the plurality of sets of needle insertion images to generate needle body positioning information after the needle body is inserted. The needle body positioning information includes a reference position, a length, and/or a width corresponding to the needle body at at least one predetermined insertion angle.
    Type: Grant
    Filed: January 3, 2020
    Date of Patent: March 29, 2022
    Assignee: Qisda Corporation
    Inventors: Yu-Teng Tung, Wei-Shin Hung
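    Note: An illustrative code sketch of this needle positioning system appears after this listing.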
  • Publication number: 20220087649
    Abstract: An object volume acquisition method of an ultrasonic image, for a probe of an ultrasonic system, is disclosed.
    Type: Application
    Filed: April 29, 2021
    Publication date: March 24, 2022
    Inventors: Wei-Ting Xiao, Yu-Teng Tung
  • Publication number: 20200245969
    Abstract: An ultrasound needle positioning system includes an ultrasound probe and a processor. The ultrasound probe is used to capture a plurality of sets of needle insertion images. Each set of needle insertion images includes a plurality of needle insertion images corresponding to a needle body at a predetermined insertion angle. The processor is coupled to the ultrasound probe, and is used to train a first convolutional neural network according to at least one set of needle insertion images in the plurality of sets of needle insertion images to generate needle body positioning information after the needle body is inserted. The needle body positioning information includes a reference position, a length, and/or a width corresponding to the needle body at at least one predetermined insertion angle.
    Type: Application
    Filed: January 3, 2020
    Publication date: August 6, 2020
    Inventors: Yu-Teng Tung, Wei-Shin Hung
  • Publication number: 20190282205
    Abstract: An ultrasound imaging system includes an ultrasound probe, a filter, a first neural network and a processor. The ultrasound probe generates a target ultrasound image and a plurality of first reference ultrasound images by a plurality of first scanning parameters. The filter filters the target ultrasound image to generate a first filtered ultrasound image. The first neural network filters the target ultrasound image according to the first reference ultrasound images to generate a second filtered ultrasound image. The processor combines the first filtered ultrasound image and the second filtered ultrasound image to form a compound ultrasound image.
    Type: Application
    Filed: December 13, 2018
    Publication date: September 19, 2019
    Inventor: Yu-Teng Tung
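    Note: An illustrative code sketch of this compounding pipeline appears after this listing.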
  • Patent number: 9373028
    Abstract: A handwriting input system includes a handwriting pen, a handwriting plate and a processing unit. The handwriting pen includes a pen body, a first sensing unit and a first communication unit. The first sensing unit senses an action of the pen body to generate sensing data. The handwriting plate includes a touch unit, a display unit and a second communication unit. The touch unit senses a contact trajectory and a contact time while the handwriting pen contacts the handwriting plate. The processing unit is selectively disposed in one of the handwriting pen and the handwriting plate. The processing unit calculates a tilt angle of the pen body according to the sensing data, determines a contact shape according to the tilt angle, determines an ink output amount according to the contact time, and determines a handwriting image according to the contact trajectory, the contact shape and the ink output amount.
    Type: Grant
    Filed: April 17, 2015
    Date of Patent: June 21, 2016
    Assignees: Qisda (Suzhou) Co., Ltd., Qisda Corporation
    Inventor: Yu-Teng Tung
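    Note: An illustrative code sketch of this handwriting processing chain appears after this listing.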
  • Publication number: 20160034752
    Abstract: A handwriting input system includes a handwriting pen, a handwriting plate and a processing unit. The handwriting pen includes a pen body, a first sensing unit and a first communication unit. The first sensing unit senses an action of the pen body to generate sensing data. The handwriting plate includes a touch unit, a display unit and a second communication unit. The touch unit senses a contact trajectory and a contact time while the handwriting pen contacts the handwriting plate. The processing unit is selectively disposed in one of the handwriting pen and the handwriting plate. The processing unit calculates a tilt angle of the pen body according to the sensing data, determines a contact shape according to the tilt angle, determines an ink output amount according to the contact time, and determines a handwriting image according to the contact trajectory, the contact shape and the ink output amount.
    Type: Application
    Filed: April 17, 2015
    Publication date: February 4, 2016
    Inventor: Yu-Teng Tung
  • Publication number: 20080071713
    Abstract: A power saving method for a mobile device is disclosed. Multiple user samples are generated. One behavior vector for each of the user samples is calculated. A neural network system is trained using the user samples and the corresponding behavior vectors. Multiple user events are collected. The user events are transformed to multiple behavior samples using a weighting transformation function. The behavior samples are classified into behavior sample groups. The behavior sample group comprising the most behavior samples is obtained. The behavior vector for the behavior sample group comprising the most behavior samples is calculated. The neural network system is trained using the behavior sample group comprising the most behavior samples and the corresponding behavior vector.
    Type: Application
    Filed: September 14, 2007
    Publication date: March 20, 2008
    Applicants: Qisda Corporation, BenQ Corporation
    Inventor: Yu Teng Tung
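
The sketches below are editorial illustrations of how the systems described in several entries above might be realized in code. Every architecture, formula, and parameter in them is an assumption for illustration only and is not taken from the patents or publications themselves.

For patent 11284855 (and publication 20200245969), a minimal sketch of a convolutional network that regresses needle body positioning information (reference position, length, width) from a needle insertion image, assuming a supervised regression formulation in PyTorch; the patent does not publish network details:

    # Illustrative only: architecture, tensor shapes, and loss are assumptions.
    import torch
    import torch.nn as nn

    class NeedleNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, 4)  # reference position x, y, length, width

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    model = NeedleNet()
    frames = torch.randn(8, 1, 128, 128)   # a batch of needle insertion images
    targets = torch.randn(8, 4)            # positioning information labels
    loss = nn.functional.mse_loss(model(frames), targets)
    loss.backward()                        # one supervised training step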
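For publication 20190282205, a sketch of the compounding pipeline: one branch applies a conventional (non-learned) filter to the target image, another branch lets a small network filter the target image conditioned on reference images acquired with other scanning parameters, and the two filtered images are blended into a compound image. The choice of filter, network shape, and blending weight are assumptions:

    # Illustrative only: the publication does not specify these components.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ReferenceGuidedFilter(nn.Module):
        def __init__(self, n_refs):
            super().__init__()
            # the target and reference frames are stacked along the channel axis
            self.net = nn.Sequential(
                nn.Conv2d(1 + n_refs, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 3, padding=1),
            )

        def forward(self, target, refs):
            return self.net(torch.cat([target, refs], dim=1))

    def conventional_filter(img, k=5):
        # stand-in for the non-learned filtering branch
        return F.avg_pool2d(img, k, stride=1, padding=k // 2)

    target = torch.randn(1, 1, 128, 128)   # target ultrasound image
    refs = torch.randn(1, 3, 128, 128)     # reference images, other scan parameters
    first = conventional_filter(target)
    second = ReferenceGuidedFilter(n_refs=3)(target, refs)
    compound = 0.5 * first + 0.5 * second  # equal-weight compounding (assumed)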
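For patent 9373028 (and publication 20160034752), a sketch of the processing chain: pen sensor data gives a tilt angle, the tilt angle determines an elliptical contact shape, the contact time determines an ink output amount, and the stroke is rendered along the contact trajectory. The formulas and constants are illustrative, not values from the patent:

    # Illustrative only: mappings and thresholds are assumptions.
    import math

    def tilt_angle(ax, ay, az):
        # angle between the pen axis (assumed along the sensor z axis) and vertical
        return math.degrees(math.atan2(math.hypot(ax, ay), az))

    def contact_shape(tilt_deg):
        # a more tilted nib is modeled as a wider, flatter elliptical footprint
        width = 1.0 + tilt_deg / 45.0
        height = max(0.2, 1.0 - tilt_deg / 90.0)
        return width, height

    def ink_amount(contact_time_s, flow_rate=2.0):
        # longer contact deposits more ink, capped at full saturation
        return min(1.0, flow_rate * contact_time_s)

    # one sampled point along the contact trajectory
    width, height = contact_shape(tilt_angle(0.2, 0.1, 0.95))
    print("footprint:", (round(width, 2), round(height, 2)), "ink:", ink_amount(0.3))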
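For publication 20080071713, a sketch of the retraining loop: user events are mapped to behavior samples by a weighting transformation, the samples are clustered into behavior sample groups, and the behavior vector of the largest group is used to retrain the neural network. The weighting, clustering method, and network are stand-ins, since the publication gives no concrete formulas:

    # Illustrative only: feature choices, weights, and models are assumptions.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    events = rng.random((200, 3))            # e.g. screen-on time, key rate, idle gaps
    weights = np.array([0.5, 0.3, 0.2])      # assumed weighting transformation
    behavior_samples = events * weights

    clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit(behavior_samples)
    largest = np.bincount(clusters.labels_).argmax()
    dominant = behavior_samples[clusters.labels_ == largest]
    behavior_vector = dominant.mean(axis=0)  # behavior vector of the largest group

    # retrain the network on the largest behavior sample group and its vector
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=1000, random_state=0)
    net.fit(dominant, np.tile(behavior_vector, (len(dominant), 1)))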