Patents by Inventor Yuanchun Shi
Yuanchun Shi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12154591
Abstract: An electronic device configured with a microphone, a voice interaction wake-up method executed by such a device, and a computer-readable medium are provided. The electronic device comprises a memory and a central processing unit, the memory storing computer-executable instructions that, when executed by the central processing unit, perform the following operations: analyzing a sound signal collected by the microphone; identifying whether the sound signal contains speech spoken by the user and whether it contains wind noise generated by airflow hitting the microphone as a result of that speech; and, in response to determining that the sound signal contains both, processing the sound signal as speech input by the user.
Type: Grant
Filed: May 26, 2020
Date of Patent: November 26, 2024
Assignee: TSINGHUA UNIVERSITY
Inventors: Chun Yu, Yuanchun Shi
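The claimed wake-up condition, waking only when the signal contains both speech and the wind noise that close-talking airflow produces on the microphone, can be sketched as a simple band-energy heuristic. This is an illustrative stand-in, not the patented implementation; the band limits, thresholds, and sample rate are all assumptions.

```python
import math

def band_energy(frame, sample_rate, lo, hi):
    # Brute-force DFT energy in [lo, hi) Hz -- illustrative, not efficient.
    n = len(frame)
    energy = 0.0
    for k in range(1, n // 2):
        f = k * sample_rate / n
        if lo <= f < hi:
            re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(frame))
            im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(frame))
            energy += re * re + im * im
    return energy

def should_wake(frame, sample_rate=8000):
    # Wind noise from breath hitting the mic concentrates below ~100 Hz, while
    # voiced speech carries energy roughly in 100-1000 Hz (assumed bands).
    total = band_energy(frame, sample_rate, 0, sample_rate / 2) + 1e-12
    wind = band_energy(frame, sample_rate, 0, 100) / total
    speech = band_energy(frame, sample_rate, 100, 1000) / total
    return wind > 0.1 and speech > 0.1  # wake only when both components are present
```

A frame mixing a low-frequency rumble with a voiced tone triggers the wake; a voice signal with no wind component (e.g. speech from across the room) does not, which is the behavior the claim targets.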
-
Patent number: 12112756
Abstract: An interaction method triggered by a mouth-covering gesture and an intelligent electronic device are provided. The method applies to an intelligent electronic device equipped with a sensor system that captures a signal of the user putting one hand over the mouth to make a mouth-covering gesture. The method includes: processing the signal to determine whether the user is making the mouth-covering gesture; and, if so, setting a mouth-covering gesture input mode as the input mode controlling the interaction, to trigger a control command or another input mode, by executing a program on the device.
Type: Grant
Filed: May 26, 2020
Date of Patent: October 8, 2024
Assignee: TSINGHUA UNIVERSITY
Inventors: Chun Yu, Yuanchun Shi
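A minimal sketch of the detect-then-switch-mode flow described above. The sensor features (hand proximity, wrist pitch, acoustic boost from a cupped hand) and every threshold here are hypothetical; the patent does not specify them, and a real detector would likely be learned rather than rule-based.

```python
def is_mouth_covering(proximity_cm, pitch_deg, low_freq_boost_db):
    # Hypothetical heuristic stand-in for the gesture detector:
    # hand close to the face, wrist pitched toward the mouth, and the
    # low-frequency boost a cupped hand adds to the user's own voice.
    # All three thresholds are assumptions, not values from the patent.
    return proximity_cm < 5.0 and 30.0 <= pitch_deg <= 90.0 and low_freq_boost_db > 3.0

def choose_input_mode(proximity_cm, pitch_deg, low_freq_boost_db):
    # When the gesture is detected, switch to a dedicated input mode
    # (e.g. private close-range voice input); otherwise keep the default.
    if is_mouth_covering(proximity_cm, pitch_deg, low_freq_boost_db):
        return "mouth_covering_mode"
    return "default_mode"
```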
-
Publication number: 20220319538
Abstract: An electronic device configured with a microphone, a voice interaction wake-up method executed by such a device, and a computer-readable medium are provided. The electronic device comprises a memory and a central processing unit, the memory storing computer-executable instructions that, when executed by the central processing unit, perform the following operations: analyzing a sound signal collected by the microphone; identifying whether the sound signal contains speech spoken by the user and whether it contains wind noise generated by airflow hitting the microphone as a result of that speech; and, in response to determining that the sound signal contains both, processing the sound signal as speech input by the user.
Type: Application
Filed: May 26, 2020
Publication date: October 6, 2022
Applicant: TSINGHUA UNIVERSITY
Inventors: Chun Yu, Yuanchun Shi
-
Publication number: 20220319520
Abstract: An interaction method triggered by a mouth-covering gesture and an intelligent electronic device are provided. The method applies to an intelligent electronic device equipped with a sensor system that captures a signal of the user putting one hand over the mouth to make a mouth-covering gesture. The method includes: processing the signal to determine whether the user is making the mouth-covering gesture; and, if so, setting a mouth-covering gesture input mode as the input mode controlling the interaction, to trigger a control command or another input mode, by executing a program on the device.
Type: Application
Filed: May 26, 2020
Publication date: October 6, 2022
Applicant: TSINGHUA UNIVERSITY
Inventors: Chun Yu, Yuanchun Shi
-
Patent number: 11216116
Abstract: A control method is provided, including: obtaining input information, where the input information includes a capacitance signal and report point coordinates generated when a user performs an operation on a terminal screen; using report point coordinates in a previous frame as report point coordinates in a current frame if it is determined that a capacitance signal in the current frame and a capacitance signal in the previous frame that are in the input information meet a preset condition; or using report point coordinates in a previous frame as report point coordinates in a current frame if it is determined that the report point coordinates in the current frame and report point coordinates in a first frame that are in the input information meet a preset condition.
Type: Grant
Filed: October 15, 2018
Date of Patent: January 4, 2022
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Yuanchun Shi, Chun Yu, Lihang Pan, Xin Yi, Weigang Cai, Siju Wu, Xuan Zhou, Jie Xu
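The claim's two branches both resolve to the same output, so the logic can be sketched as a single filter that decides, per frame, whether to reuse the previous frame's report point. The epsilon thresholds are illustrative assumptions; the patent only says the signals "meet a preset condition".

```python
import math

def filter_report_point(prev_cap, cur_cap, prev_xy, cur_xy, first_xy,
                        cap_eps=3.0, move_eps=2.0):
    # Branch 1: the capacitance signal barely changed between frames.
    cap_stable = abs(cur_cap - prev_cap) <= cap_eps
    # Branch 2: the current point is still near the first (touch-down) point.
    near_first = math.dist(cur_xy, first_xy) <= move_eps
    # Either condition -> keep reporting the previous frame's coordinates,
    # suppressing cursor jitter from sensor noise.
    return prev_xy if (cap_stable or near_first) else cur_xy
```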
-
Patent number: 10948979
Abstract: The present application provides methods and devices for determining an action or an action part, and generally relates to the field of wearable devices. A method disclosed herein comprises: in response to detecting a motion of a first part of a body of a user, acquiring, using a photoelectric sensor, target Doppler measurement information of the first part or a second part corresponding to the first part; determining target velocity related information corresponding to the target Doppler measurement information; and determining the first part or the action according to the target velocity related information and reference information, wherein the target velocity related information comprises target blood flow velocity information or target blood flow information. The methods and devices provide a new scheme for recognizing an action and/or an action part.
Type: Grant
Filed: December 30, 2019
Date of Patent: March 16, 2021
Assignee: BEIJING ZHIGU RUI TUO TECH CO., LTD.
Inventors: Yuanchun Shi, Yuntao Wang, Chun Yu, Lin Du
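A sketch of the pipeline the abstract outlines: convert a photoelectric Doppler frequency shift into a flow velocity, then match the measured velocity profile against per-action reference profiles. The Doppler relation is the standard one; the sensor geometry, action names, and profile values are illustrative, not from the patent.

```python
def flow_velocity(freq_shift_hz, wavelength_m, angle_cos=1.0):
    # Standard Doppler relation v = delta_f * lambda / (2 * cos(theta));
    # the beam angle (angle_cos) is an assumed sensor-geometry parameter.
    return freq_shift_hz * wavelength_m / (2.0 * angle_cos)

def classify_action(velocity_profile, references):
    # Nearest-neighbour match of a measured blood-flow-velocity profile
    # against per-action references ("reference information" in the claim).
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(references, key=lambda action: sq_dist(velocity_profile, references[action]))
```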
-
Publication number: 20200371662
Abstract: A report point output control method and apparatus includes: performing feature detection on a capacitance hot spot to determine an eigenvalue of the capacitance hot spot; determining, based on the eigenvalue, whether the report point matching the hot spot comes from an odd-form touch; and skipping output of the report point when it was generated by an odd-form touch. The eigenvalue includes at least one of a horizontal span, a longitudinal span, an eccentricity, a barycenter coordinate, a maximum capacitance value, an average shadow length, an upper-left shadow area, or a lower-right shadow area.
Type: Application
Filed: October 15, 2018
Publication date: November 26, 2020
Inventors: Yuanchun Shi, Chun Yu, Weijie Xu, Xin Yi, Siju Wu, Xuan Zhou, Jie Xu, Jingjin Xu
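Three of the listed eigenvalues (horizontal span, longitudinal span, maximum capacitance value) are enough to illustrate the suppression decision. The thresholds below are assumptions for the sketch; the application leaves the decision criteria open.

```python
def hotspot_features(grid, active_threshold=10):
    # Extract a few of the listed eigenvalues from one capacitance frame
    # (the activation threshold is illustrative).
    cells = [(r, c, v) for r, row in enumerate(grid)
             for c, v in enumerate(row) if v > active_threshold]
    rows = [r for r, _, _ in cells]
    cols = [c for _, c, _ in cells]
    return {
        "h_span": max(cols) - min(cols) + 1,   # horizontal span
        "v_span": max(rows) - min(rows) + 1,   # longitudinal span
        "max_cap": max(v for _, _, v in cells),
    }

def is_odd_form_touch(feats, max_span=3, min_cap=30):
    # A palm or grip contact tends to be wide and/or weak compared with a
    # deliberate fingertip press; its report point is then suppressed.
    # Thresholds are assumptions, not values from the application.
    return (feats["h_span"] > max_span or feats["v_span"] > max_span
            or feats["max_cap"] < min_cap)
```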
-
Publication number: 20200301560
Abstract: A control method is provided, including: obtaining input information, where the input information includes a capacitance signal and report point coordinates generated when a user performs an operation on a terminal screen; using report point coordinates in a previous frame as report point coordinates in a current frame if it is determined that a capacitance signal in the current frame and a capacitance signal in the previous frame that are in the input information meet a preset condition; or using report point coordinates in a previous frame as report point coordinates in a current frame if it is determined that the report point coordinates in the current frame and report point coordinates in a first frame that are in the input information meet a preset condition.
Type: Application
Filed: October 15, 2018
Publication date: September 24, 2020
Inventors: Yuanchun Shi, Chun Yu, Lihang Pan, Xin Yi, Weigang Cai, Siju Wu, Xuan Zhou, Jie Xu
-
Publication number: 20200142476
Abstract: The present application provides methods and devices for determining an action or an action part, and generally relates to the field of wearable devices. A method disclosed herein comprises: in response to detecting a motion of a first part of a body of a user, acquiring, using a photoelectric sensor, target Doppler measurement information of the first part or a second part corresponding to the first part; determining target velocity related information corresponding to the target Doppler measurement information; and determining the first part or the action according to the target velocity related information and reference information, wherein the target velocity related information comprises target blood flow velocity information or target blood flow information. The methods and devices provide a new scheme for recognizing an action and/or an action part.
Type: Application
Filed: December 30, 2019
Publication date: May 7, 2020
Inventors: Yuanchun Shi, Yuntao Wang, Chun Yu, Lin Du
-
Patent number: 10591985
Abstract: The present application provides methods and devices for determining an action and/or an action part, and generally relates to the field of wearable devices. A method disclosed herein comprises: in response to a first part of a user's body executing an action, acquiring target blood flow information of the first part or of a second part corresponding to the first part; and determining the first part and/or the action according to the target blood flow information and reference information. The methods and devices provide a new scheme for recognizing an action and/or an action part.
Type: Grant
Filed: January 7, 2016
Date of Patent: March 17, 2020
Assignee: BEIJING ZHIGU RUI TUO TECH CO., LTD.
Inventors: Yuanchun Shi, Yuntao Wang, Chun Yu, Lin Du
-
Patent number: 10521018
Abstract: Embodiments of the present application disclose a human-body-based interaction method and apparatus. The method includes: acquiring phase change information of a second signal, the second signal being formed by a first signal transmitted through at least one transmission medium, the at least one transmission medium including the body of a user; and determining motion and/or posture information of the user corresponding to the phase change information, according to a first corresponding relationship between phase change information and motion and/or posture information of the user. At least one example embodiment determines the user's motion and/or posture merely from the phase change of a signal transmitted through the user's body, making the determination more convenient and accurate.
Type: Grant
Filed: December 29, 2014
Date of Patent: December 31, 2019
Assignee: Beijing Zhigu Rui Tuo Tech Co., Ltd
Inventors: Yuanchun Shi, Yuntao Wang, Lin Du
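The "first corresponding relationship" is, in effect, a mapping from observed phase change to motion/posture. A minimal sketch is a lookup over phase-change ranges; the ranges and posture names below are illustrative placeholders, since the patent does not publish concrete values.

```python
def posture_from_phase(phase_delta_deg, mapping):
    # mapping: {(low_deg, high_deg): posture} -- the claimed corresponding
    # relationship, with hypothetical ranges and posture labels.
    for (lo, hi), posture in mapping.items():
        if lo <= phase_delta_deg < hi:
            return posture
    return None  # phase change outside all calibrated ranges
```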
-
Patent number: 10512404
Abstract: The present application provides methods and devices for determining input information, and generally relates to the field of wearable devices. A method disclosed herein comprises: in response to a first part of a user's body executing an action, acquiring target blood-flow information about the first part or a second part that corresponds to the first part; and determining input information according to the target blood-flow information and reference information. The methods and devices use the body of the user as an input interface, thereby increasing the interaction area, which helps to improve input efficiency and user experience.
Type: Grant
Filed: January 7, 2016
Date of Patent: December 24, 2019
Assignee: BEIJING ZHIGU RUI TUO TECH CO., LTD.
Inventors: Yuanchun Shi, Yuntao Wang, Chun Yu, Lin Du
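Determining input from blood-flow readings amounts to matching a live reading against "reference information" captured per on-body target. A toy sketch, assuming a one-time enrolment step per key; the key names and reading values are hypothetical.

```python
def decode_input(flow_reading, calibration):
    # calibration maps each on-body "key" (e.g. a spot on the forearm) to
    # the reference blood-flow reading captured for it during enrolment.
    # The decoded input is the key whose reference is closest to the live
    # reading (names and values here are illustrative placeholders).
    return min(calibration, key=lambda key: abs(flow_reading - calibration[key]))
```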
-
Patent number: 10292609
Abstract: The present application provides methods and devices for determining an action and/or an action part, and generally relates to the field of wearable devices. A method disclosed herein comprises: in response to a first part of a user's body executing an action, acquiring target blood flow information of the first part or of a second part corresponding to the first part; and determining the first part and/or the action according to the target blood flow information and reference information. The methods and devices provide a new scheme for recognizing an action and/or an action part.
Type: Grant
Filed: January 7, 2016
Date of Patent: May 21, 2019
Assignee: BEIJING ZHIGU RUI TECH CO., LTD.
Inventors: Yuanchun Shi, Yuntao Wang, Chun Yu, Lin Du
-
Patent number: 10261577
Abstract: The present application provides a method and device for determining input information, and relates to the field of wearable devices. The method comprises: in response to a first part of a user's body executing an action, acquiring target blood-flow information about the first part or a second part that corresponds to the first part; and determining input information according to the target blood-flow information and reference information. The method and device use the body of the user as an input interface, thereby increasing the interaction area, which helps to improve input efficiency and user experience.
Type: Grant
Filed: January 7, 2016
Date of Patent: April 16, 2019
Assignee: Beijing Zhigu Rui Tuo Tech Co., Ltd.
Inventors: Yuanchun Shi, Yuntao Wang, Chun Yu, Lin Du
-
Publication number: 20180049647
Abstract: The present application provides methods and devices for determining input information, and generally relates to the field of wearable devices. A method disclosed herein comprises: in response to a first part of a user's body executing an action, acquiring target blood-flow information about the first part or a second part that corresponds to the first part; and determining input information according to the target blood-flow information and reference information. The methods and devices use the body of the user as an input interface, thereby increasing the interaction area, which helps to improve input efficiency and user experience.
Type: Application
Filed: January 7, 2016
Publication date: February 22, 2018
Inventors: Yuanchun Shi, Yuntao Wang, Chun Yu, Lin Du
-
Publication number: 20180035903
Abstract: The present application provides methods and devices for determining an action and/or an action part, and generally relates to the field of wearable devices. A method disclosed herein comprises: in response to a first part of a user's body executing an action, acquiring target blood flow information of the first part or of a second part corresponding to the first part; and determining the first part and/or the action according to the target blood flow information and reference information. The methods and devices provide a new scheme for recognizing an action and/or an action part.
Type: Application
Filed: January 7, 2016
Publication date: February 8, 2018
Inventors: Yuanchun Shi, Yuntao Wang, Chun Yu, Lin Du
-
Publication number: 20180018016
Abstract: The present application provides methods and devices for determining an action and/or an action part, and generally relates to the field of wearable devices. A method disclosed herein comprises: in response to a first part of a user's body executing an action, acquiring target blood flow information of the first part or of a second part corresponding to the first part; and determining the first part and/or the action according to the target blood flow information and reference information. The methods and devices provide a new scheme for recognizing an action and/or an action part.
Type: Application
Filed: January 7, 2016
Publication date: January 18, 2018
Inventors: Yuanchun Shi, Yuntao Wang, Chun Yu, Lin Du
-
Publication number: 20180018015
Abstract: The present application provides a method and device for determining input information, and relates to the field of wearable devices. The method comprises: in response to a first part of a user's body executing an action, acquiring target blood-flow information about the first part or a second part that corresponds to the first part; and determining input information according to the target blood-flow information and reference information. The method and device use the body of the user as an input interface, thereby increasing the interaction area, which helps to improve input efficiency and user experience.
Type: Application
Filed: January 7, 2016
Publication date: January 18, 2018
Inventors: Yuanchun Shi, Yuntao Wang, Chun Yu, Lin Du
-
Publication number: 20170147079
Abstract: Embodiments of the present application disclose a human-body-based interaction method and apparatus. The method includes: acquiring phase change information of a second signal, the second signal being formed by a first signal transmitted through at least one transmission medium, the at least one transmission medium including the body of a user; and determining motion and/or posture information of the user corresponding to the phase change information, according to a first corresponding relationship between phase change information and motion and/or posture information of the user. At least one example embodiment determines the user's motion and/or posture merely from the phase change of a signal transmitted through the user's body, making the determination more convenient and accurate.
Type: Application
Filed: December 29, 2014
Publication date: May 25, 2017
Inventors: Yuanchun Shi, Yuntao Wang, Lin Du
-
Patent number: 8269842
Abstract: A system and method for using images captured from a digital camera to control navigation through a three-dimensional user interface. The sequence of images may be examined to identify feature points to be tracked through successive frames of the images captured by the camera. A plurality of classifiers may be used to discern shift from rotation gestures, based on expected behavior of feature points in the image when the camera is shifted or rotated in position. The various classifiers may generate voting values for shift and rotation gestures, and the system can use historical gesture information to assist in categorizing a current gesture.
Type: Grant
Filed: June 11, 2008
Date of Patent: September 18, 2012
Assignee: Nokia Corporation
Inventors: Kongqiao Wang, Liang Zhang, Yuanchun Shi
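The voting idea can be sketched with two toy classifiers over tracked feature points: under a pure shift, all optical-flow vectors share one direction and one length; under a rotation, their directions diverge and far-from-centre points move further. The specific classifiers and thresholds here are simplified assumptions, and the patent's use of historical gesture information to break ambiguous votes is omitted.

```python
import math

def classify_gesture(prev_pts, cur_pts):
    # prev_pts/cur_pts: matched feature-point positions in two frames.
    moves = [(cx - px, cy - py) for (px, py), (cx, cy) in zip(prev_pts, cur_pts)]
    angles = [math.atan2(dy, dx) for dx, dy in moves]
    mags = [math.hypot(dx, dy) for dx, dy in moves]
    votes = {"shift": 0, "rotation": 0}
    # Classifier 1: direction spread (a shift keeps all vectors parallel).
    votes["shift" if max(angles) - min(angles) < 0.2 else "rotation"] += 1
    # Classifier 2: magnitude spread (a shift keeps all vectors equal length).
    mean_mag = sum(mags) / len(mags)
    votes["shift" if max(mags) - min(mags) < 0.25 * mean_mag + 1e-9 else "rotation"] += 1
    return max(votes, key=votes.get)
```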