Patents by Inventor Yuanchun Shi

Yuanchun Shi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12154591
    Abstract: An electronic device configured with a microphone, a voice interaction wake-up method executed by such a device, and a computer-readable medium are provided. The electronic device comprises a memory and a central processing unit, wherein the memory stores computer-executable instructions that, when executed by the central processing unit, perform the following operations: analyzing a sound signal collected by the microphone; identifying whether the sound signal contains speech spoken by a user and whether it contains wind noise generated by airflow from that speech hitting the microphone; and, in response to determining that the sound signal contains both the speech and the wind noise, processing the sound signal as speech input by the user.
    Type: Grant
    Filed: May 26, 2020
    Date of Patent: November 26, 2024
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Chun Yu, Yuanchun Shi
  • Patent number: 12112756
    Abstract: An interaction method triggered by a mouth-covering gesture and an intelligent electronic device are provided. The interaction method is applied to an intelligent electronic device equipped with a sensor system for capturing a signal of a user putting one hand over the mouth to make a mouth-covering gesture. The interaction method includes: processing the signal to determine whether the user has put the hand over the mouth to make the mouth-covering gesture; and, when the user has, setting a mouth-covering gesture input mode as the input mode for controlling the interaction, to trigger a control command or another input mode, by executing a program on the intelligent electronic device.
    Type: Grant
    Filed: May 26, 2020
    Date of Patent: October 8, 2024
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Chun Yu, Yuanchun Shi
  • Publication number: 20220319538
    Abstract: An electronic device configured with a microphone, a voice interaction wake-up method executed by such a device, and a computer-readable medium are provided. The electronic device comprises a memory and a central processing unit, wherein the memory stores computer-executable instructions that, when executed by the central processing unit, perform the following operations: analyzing a sound signal collected by the microphone; identifying whether the sound signal contains speech spoken by a user and whether it contains wind noise generated by airflow from that speech hitting the microphone; and, in response to determining that the sound signal contains both the speech and the wind noise, processing the sound signal as speech input by the user.
    Type: Application
    Filed: May 26, 2020
    Publication date: October 6, 2022
    Applicant: TSINGHUA UNIVERSITY
    Inventors: Chun YU, Yuanchun SHI
  • Publication number: 20220319520
    Abstract: An interaction method triggered by a mouth-covering gesture and an intelligent electronic device are provided. The interaction method is applied to an intelligent electronic device equipped with a sensor system for capturing a signal of a user putting one hand over the mouth to make a mouth-covering gesture. The interaction method includes: processing the signal to determine whether the user has put the hand over the mouth to make the mouth-covering gesture; and, when the user has, setting a mouth-covering gesture input mode as the input mode for controlling the interaction, to trigger a control command or another input mode, by executing a program on the intelligent electronic device.
    Type: Application
    Filed: May 26, 2020
    Publication date: October 6, 2022
    Applicant: TSINGHUA UNIVERSITY
    Inventors: Chun YU, Yuanchun SHI
  • Patent number: 11216116
    Abstract: A control method is provided, including: obtaining input information, where the input information includes a capacitance signal and report point coordinates generated when a user performs an operation on a terminal screen; using report point coordinates in a previous frame as report point coordinates in a current frame if it is determined that a capacitance signal in the current frame and a capacitance signal in the previous frame that are in the input information meet a preset condition; or using report point coordinates in a previous frame as report point coordinates in a current frame if it is determined that the report point coordinates in the current frame and report point coordinates in a first frame that are in the input information meet a preset condition.
    Type: Grant
    Filed: October 15, 2018
    Date of Patent: January 4, 2022
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Yuanchun Shi, Chun Yu, Lihang Pan, Xin Yi, Weigang Cai, Siju Wu, Xuan Zhou, Jie Xu
  • Patent number: 10948979
    Abstract: The present application provides methods and devices for determining an action or an action part, and generally relates to the field of wearable devices. A method disclosed herein comprises: in response to detecting a motion of a first part of a body of a user, acquiring, using a photoelectric sensor, target Doppler measurement information of the first part or a second part corresponding to the first part; determining target velocity related information corresponding to the target Doppler measurement information; and determining the first part or the action according to the target velocity related information and reference information, wherein the target velocity related information comprises target blood flow velocity information or target blood flow information. The methods and devices provide a new scheme for recognizing an action and/or an action part.
    Type: Grant
    Filed: December 30, 2019
    Date of Patent: March 16, 2021
    Assignee: BEIJING ZHIGU RUI TUO TECH CO., LTD.
    Inventors: Yuanchun Shi, Yuntao Wang, Chun Yu, Lin Du
  • Publication number: 20200371662
    Abstract: A report point output control method and apparatus includes performing feature detection on a capacitance hot spot to determine an eigenvalue of the capacitance hot spot, determining, based on the eigenvalue, whether the report point matching the capacitance hot spot comes from an odd-form touch, and skipping output of the report point when it was generated by the odd-form touch. The eigenvalue includes at least one of a horizontal span, a longitudinal span, an eccentricity, a barycenter coordinate, a maximum capacitance value, an average shadow length, an upper left shadow area, or a lower right shadow area.
    Type: Application
    Filed: October 15, 2018
    Publication date: November 26, 2020
    Inventors: Yuanchun Shi, Chun Yu, Weijie Xu, Xin Yi, Siju Wu, Xuan Zhou, Jie Xu, Jingjin Xu
  • Publication number: 20200301560
    Abstract: A control method is provided, including: obtaining input information, where the input information includes a capacitance signal and report point coordinates generated when a user performs an operation on a terminal screen; using report point coordinates in a previous frame as report point coordinates in a current frame if it is determined that a capacitance signal in the current frame and a capacitance signal in the previous frame that are in the input information meet a preset condition; or using report point coordinates in a previous frame as report point coordinates in a current frame if it is determined that the report point coordinates in the current frame and report point coordinates in a first frame that are in the input information meet a preset condition.
    Type: Application
    Filed: October 15, 2018
    Publication date: September 24, 2020
    Inventors: Yuanchun SHI, Chun YU, Lihang PAN, Xin YI, Weigang CAI, Siju WU, Xuan ZHOU, Jie XU
  • Publication number: 20200142476
    Abstract: The present application provides methods and devices for determining an action or an action part, and generally relates to the field of wearable devices. A method disclosed herein comprises: in response to detecting a motion of a first part of a body of a user, acquiring, using a photoelectric sensor, target Doppler measurement information of the first part or a second part corresponding to the first part; determining target velocity related information corresponding to the target Doppler measurement information; and determining the first part or the action according to the target velocity related information and reference information, wherein the target velocity related information comprises target blood flow velocity information or target blood flow information. The methods and devices provide a new scheme for recognizing an action and/or an action part.
    Type: Application
    Filed: December 30, 2019
    Publication date: May 7, 2020
    Inventors: Yuanchun SHI, Yuntao WANG, Chun YU, Lin DU
  • Patent number: 10591985
    Abstract: The present application provides methods and devices for determining an action and/or an action part, and generally relates to the field of wearable devices. A method disclosed herein comprises: in response to a first part of a user's body executing an action, acquiring target blood flow information of the first part or a second part corresponding to the first part; and determining the first part and/or the action according to the target blood flow information and reference information. The methods and devices provide a new scheme for recognizing an action and/or an action part.
    Type: Grant
    Filed: January 7, 2016
    Date of Patent: March 17, 2020
    Assignee: BEIJING ZHIGU RUI TUO TECH CO., LTD.
    Inventors: Yuanchun Shi, Yuntao Wang, Chun Yu, Lin Du
  • Patent number: 10521018
    Abstract: Embodiments of the present application disclose a human body-based interaction method and interaction apparatus. The method includes: acquiring phase change information of a second signal; the second signal being formed by a first signal through transmission of at least one transmission medium, the at least one transmission medium including the body of a user; and according to a first corresponding relationship between at least one phase change information and at least one motion and/or posture information of the user, determining motion and/or posture information of the user corresponding to the phase change information. At least one example embodiment of the present application determines motion and/or posture information of a user merely by using phase change information of a signal transmitted through the user's body, so that determination of the motion and/or posture information of the user is more convenient and accurate.
    Type: Grant
    Filed: December 29, 2014
    Date of Patent: December 31, 2019
    Assignee: Beijing Zhigu Rui Tuo Tech Co., Ltd.
    Inventors: Yuanchun Shi, Yuntao Wang, Lin Du
  • Patent number: 10512404
    Abstract: The present application provides methods and devices for determining input information, and generally relates to the field of wearable devices. A method disclosed herein comprises: in response to a first part of a body of a user executing an action, acquiring target blood-flow information about the first part or a second part that corresponds to the first part; and determining input information according to the target blood-flow information and reference information. According to the methods and devices, the body of the user is used as an input interface, thereby increasing the interaction area, which helps to improve input efficiency and user experience.
    Type: Grant
    Filed: January 7, 2016
    Date of Patent: December 24, 2019
    Assignee: BEIJING ZHIGU RUI TUO TECH CO., LTD.
    Inventors: Yuanchun Shi, Yuntao Wang, Chun Yu, Lin Du
  • Patent number: 10292609
    Abstract: The present application provides methods and devices for determining an action and/or an action part, and generally relates to the field of wearable devices. A method disclosed herein comprises: in response to a first part of a user's body executing an action, acquiring target blood flow information of the first part or a second part corresponding to the first part; and determining the first part and/or the action according to the target blood flow information and reference information. The methods and devices disclosed provide a new scheme for recognizing an action and/or an action part.
    Type: Grant
    Filed: January 7, 2016
    Date of Patent: May 21, 2019
    Assignee: BEIJING ZHIGU RUI TUO TECH CO., LTD.
    Inventors: Yuanchun Shi, Yuntao Wang, Chun Yu, Lin Du
  • Patent number: 10261577
    Abstract: The present application provides a method and device for determining input information, and relates to the field of wearable devices. The method comprises: in response to a first part of a body of a user executing an action, acquiring target blood-flow information about the first part or a second part that corresponds to the first part; and determining input information according to the target blood-flow information and reference information. According to the method and device, the body of the user is used as an input interface, thereby increasing the interaction area, which helps to improve input efficiency and user experience.
    Type: Grant
    Filed: January 7, 2016
    Date of Patent: April 16, 2019
    Assignee: Beijing Zhigu Rui Tuo Tech Co., Ltd.
    Inventors: Yuanchun Shi, Yuntao Wang, Chun Yu, Lin Du
  • Publication number: 20180049647
    Abstract: The present application provides methods and devices for determining input information, and generally relates to the field of wearable devices. A method disclosed herein comprises: in response to a first part of a body of a user executing an action, acquiring target blood-flow information about the first part or a second part that corresponds to the first part; and determining input information according to the target blood-flow information and reference information. According to the methods and devices, the body of the user is used as an input interface, thereby increasing the interaction area, which helps to improve input efficiency and user experience.
    Type: Application
    Filed: January 7, 2016
    Publication date: February 22, 2018
    Inventors: Yuanchun SHI, Yuntao WANG, Chun YU, Lin DU
  • Publication number: 20180035903
    Abstract: The present application provides methods and devices for determining an action and/or an action part, and generally relates to the field of wearable devices. A method disclosed herein comprises: in response to a first part of a user's body executing an action, acquiring target blood flow information of the first part or a second part corresponding to the first part; and determining the first part and/or the action according to the target blood flow information and reference information. The methods and devices disclosed provide a new scheme for recognizing an action and/or an action part.
    Type: Application
    Filed: January 7, 2016
    Publication date: February 8, 2018
    Inventors: Yuanchun SHI, Yuntao WANG, Chun YU, Lin DU
  • Publication number: 20180018016
    Abstract: The present application provides methods and devices for determining an action and/or an action part, and generally relates to the field of wearable devices. A method disclosed herein comprises: in response to a first part of a user's body executing an action, acquiring target blood flow information of the first part or a second part corresponding to the first part; and determining the first part and/or the action according to the target blood flow information and reference information. The methods and devices provide a new scheme for recognizing an action and/or an action part.
    Type: Application
    Filed: January 7, 2016
    Publication date: January 18, 2018
    Inventors: Yuanchun SHI, Yuntao WANG, Chun YU, Lin DU
  • Publication number: 20180018015
    Abstract: The present application provides a method and device for determining input information, and relates to the field of wearable devices. The method comprises: in response to a first part of a body of a user executing an action, acquiring target blood-flow information about the first part or a second part that corresponds to the first part; and determining input information according to the target blood-flow information and reference information. According to the method and device, the body of the user is used as an input interface, thereby increasing the interaction area, which helps to improve input efficiency and user experience.
    Type: Application
    Filed: January 7, 2016
    Publication date: January 18, 2018
    Inventors: Yuanchun SHI, Yuntao WANG, Chun YU, Lin DU
  • Publication number: 20170147079
    Abstract: Embodiments of the present application disclose a human body-based interaction method and interaction apparatus. The method includes: acquiring phase change information of a second signal; the second signal being formed by a first signal through transmission of at least one transmission medium, the at least one transmission medium including the body of a user; and according to a first corresponding relationship between at least one phase change information and at least one motion and/or posture information of the user, determining motion and/or posture information of the user corresponding to the phase change information. At least one example embodiment of the present application determines motion and/or posture information of a user merely by using phase change information of a signal transmitted through the user's body, so that determination of the motion and/or posture information of the user is more convenient and accurate.
    Type: Application
    Filed: December 29, 2014
    Publication date: May 25, 2017
    Inventors: Yuanchun SHI, Yuntao WANG, Lin DU
  • Patent number: 8269842
    Abstract: A system and method for using images captured from a digital camera to control navigation through a three-dimensional user interface. The sequence of images may be examined to identify feature points to be tracked through successive frames captured by the camera. A plurality of classifiers may be used to distinguish shift gestures from rotation gestures, based on the expected behavior of feature points in the image when the camera is shifted or rotated. The various classifiers may generate voting values for shift and rotation gestures, and the system can use historical gesture information to assist in categorizing a current gesture.
    Type: Grant
    Filed: June 11, 2008
    Date of Patent: September 18, 2012
    Assignee: Nokia Corporation
    Inventors: Kongqiao Wang, Liang Zhang, Yuanchun Shi
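
The voice-interaction wake-up entries above (patent 12154591 and publication 20220319538) rest on a simple observation: genuine close-range speech produces both voice-band energy and low-frequency wind noise from the speaker's breath hitting the microphone. The sketch below illustrates that co-detection idea only; the band limits, thresholds, and function names (`band_energy`, `is_speech_input`) are illustrative assumptions, not taken from the patents.

```python
import math

def band_energy(samples, rate, lo, hi):
    """Energy of the signal in the frequency band [lo, hi] Hz,
    via a naive O(n^2) DFT (fine for short illustrative frames)."""
    n = len(samples)
    total = 0.0
    for k in range(1, n // 2):
        freq = k * rate / n
        if lo <= freq <= hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            total += re * re + im * im
    return total

def is_speech_input(samples, rate, speech_thresh=1.0, wind_thresh=1.0):
    """Treat the frame as intentional speech input only when BOTH
    voice-band energy and low-frequency wind-noise energy are present.
    Band edges and thresholds are hypothetical."""
    speech = band_energy(samples, rate, 300, 3400)  # assumed voice band
    wind = band_energy(samples, rate, 20, 100)      # assumed breath/wind-noise band
    return speech > speech_thresh and wind > wind_thresh
```

A distant loudspeaker would produce voice-band energy without the wind noise, so the wake-up condition rejects it; this is the intuition the abstract describes, not a production detector.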
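
Patent 11216116 (and publication 20200301560) describes reusing the previous frame's report point when the capacitance signals of adjacent frames, or the drift of the current coordinates from the first frame's, meet a preset condition, suppressing unintended jitter. The abstract does not state the actual conditions, so the frame format, `cap_eps`, and `pos_eps` below are hypothetical placeholders for them.

```python
def stabilize(frames, cap_eps=5.0, pos_eps=3.0):
    """Given frames as (capacitance, (x, y)) pairs, reuse the previous
    frame's report point when either assumed condition holds:
    - the capacitance barely changed between frames, or
    - the coordinates remain close to the first frame's coordinates."""
    out = []
    prev_cap = prev_xy = first_xy = None
    for cap, (x, y) in frames:
        if first_xy is None:
            first_xy = (x, y)
        xy = (x, y)
        if prev_xy is not None:
            cap_close = abs(cap - prev_cap) < cap_eps
            near_first = abs(x - first_xy[0]) + abs(y - first_xy[1]) < pos_eps
            if cap_close or near_first:
                xy = prev_xy  # freeze: reuse previous report point
        out.append(xy)
        prev_cap, prev_xy = cap, xy
    return out
```

With this shape, a finger resting in place reports a stable point, while a deliberate swipe (large capacitance change and large displacement) passes through unchanged.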
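
Several of the wearable-device entries (e.g. patents 10591985 and 10948979) determine the action or action part by comparing measured blood-flow or blood-flow-velocity information against reference information. One plausible reading of "according to ... reference information" is nearest-template matching; the squared-distance metric, the reference dictionary, and the profile names below are all illustrative assumptions, not details from the patents.

```python
def classify_action(velocity_profile, references):
    """Return the name of the reference blood-flow velocity profile
    closest (by squared Euclidean distance) to the measured profile.
    Metric and reference set are assumed, not specified by the patents."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(references, key=lambda name: sqdist(velocity_profile, references[name]))
```

Usage would pair a short window of sensor readings with per-action templates collected during calibration, e.g. `classify_action(window, {"clench_fist": ..., "open_hand": ...})`.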