Patents by Inventor Chih-Pin Hsiao

Chih-Pin Hsiao has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240149057
    Abstract: A device for treating a user's skin using plasma is provided. The device comprises a plasma generation assembly and a power supply. The plasma generation assembly comprises a discharge electrode including a first surface; a first dielectric material layer provided on the first surface of the discharge electrode; a ground electrode surrounding the discharge electrode; and an insulation member spacing the discharge electrode apart from the ground electrode. The power supply is configured to apply power to the plasma generation assembly so that plasma is generated from the first surface of the discharge electrode to the ground electrode and between the first dielectric material layer and the user's skin.
    Type: Application
    Filed: November 4, 2022
    Publication date: May 9, 2024
    Inventors: HUI-FANG LI, YU-TING LIN, CHUN-HAO CHANG, CHIH-TUNG LIU, CHUN-PING HSIAO, YU-PIN CHENG
  • Patent number: 10809794
    Abstract: An example method is provided in accordance with one implementation of the present disclosure. The method includes identifying an intention of a user of a system in relation to a three-dimensional (3D) virtual object and selecting a 3D navigation mode from a plurality of 3D navigation modes based on the identified user intention. The plurality of 3D navigation modes includes at least a model navigation mode, a simple navigation mode, a driving navigation mode, a reaching navigation mode, and a multi-touch navigation mode. The method further includes transitioning the system to the selected 3D navigation mode.
    Type: Grant
    Filed: December 19, 2014
    Date of Patent: October 20, 2020
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Chih Pin Hsiao, Gregory William Cook, Jishang Wei, Mithra Vankipuram, Nelson L Chang
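
For the navigation-mode selection described in patent 10809794 above, a minimal illustrative sketch follows. It only shows the general idea of mapping an identified user intention to one of the named navigation modes; all labels, thresholds, and function names are hypothetical and are not taken from the patent.

```python
# Illustrative sketch only: map an inferred user intention to one of the
# navigation modes named in the abstract above, then transition to it.
# All intention labels and names are hypothetical, not from the patent.
from enum import Enum, auto


class NavigationMode(Enum):
    MODEL = auto()        # orbit and inspect the 3D model
    SIMPLE = auto()       # basic pan/zoom
    DRIVING = auto()      # first-person traversal of the scene
    REACHING = auto()     # direct manipulation of nearby objects
    MULTI_TOUCH = auto()  # touch-surface gestures


# Hypothetical mapping from an inferred intention label to a navigation mode.
INTENTION_TO_MODE = {
    "inspect_object": NavigationMode.MODEL,
    "overview": NavigationMode.SIMPLE,
    "traverse_scene": NavigationMode.DRIVING,
    "grab_object": NavigationMode.REACHING,
    "touch_input": NavigationMode.MULTI_TOUCH,
}


def select_navigation_mode(intention: str) -> NavigationMode:
    """Select a 3D navigation mode based on the identified user intention."""
    return INTENTION_TO_MODE.get(intention, NavigationMode.SIMPLE)


def transition_to(mode: NavigationMode) -> None:
    """Placeholder for transitioning the system to the selected mode."""
    print(f"Switching to {mode.name} navigation")


transition_to(select_navigation_mode("traverse_scene"))
```
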
  • Patent number: 10359905
    Abstract: An example collaboration system is provided in accordance with one implementation of the present disclosure. The system includes a 3D display displaying a 3D data visualization, at least two hand avatars of two different users, and a view field avatar. The system also includes a plurality of auxiliary computing devices and a behavior analysis engine to perform a behavior analysis of a user. The behavior analysis engine is to: determine an attention engagement level of the user, and determine a pose of the user in relation to the auxiliary computing device. The system further includes an intention analysis engine to determine an intention of the user in relation to the 3D visualization based on the user's attention engagement level and the user's pose, and a collaboration engine to implement an action with the 3D data visualization by using a hand avatar based on the user's intention and an identified gesture.
    Type: Grant
    Filed: December 19, 2014
    Date of Patent: July 23, 2019
    Assignee: ENTIT SOFTWARE LLC
    Inventors: Gregory W. Cook, Chih-Pin Hsiao, Jishang Wei, Mithra Vankipuram
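
The collaboration system in patent 10359905 above chains three engines: behavior analysis, intention analysis, and a collaboration engine driving a hand avatar. The sketch below illustrates that pipeline in schematic form; the class fields, heuristics, and labels are assumptions for illustration, not details from the patent.

```python
# Minimal sketch of the engine pipeline described in the abstract above:
# behavior analysis -> intention analysis -> collaboration action.
# All names, fields, and thresholds are hypothetical.
from dataclasses import dataclass


@dataclass
class BehaviorResult:
    attention_level: float  # assumed scale: 0.0 (disengaged) .. 1.0 (fully engaged)
    pose: str               # e.g. "facing_display" or "facing_tablet"


def analyze_behavior(sensor_data: dict) -> BehaviorResult:
    """Stand-in for the behavior analysis engine."""
    return BehaviorResult(
        attention_level=sensor_data.get("gaze_on_display", 0.0),
        pose=sensor_data.get("pose", "facing_display"),
    )


def infer_intention(behavior: BehaviorResult) -> str:
    """Stand-in for the intention analysis engine."""
    if behavior.attention_level > 0.5 and behavior.pose == "facing_display":
        return "manipulate_3d_visualization"
    return "work_on_auxiliary_device"


def apply_collaboration_action(intention: str, gesture: str) -> str:
    """Stand-in for the collaboration engine driving a hand avatar."""
    if intention == "manipulate_3d_visualization" and gesture == "grab":
        return "hand_avatar: grab selected data point"
    return "hand_avatar: idle"


behavior = analyze_behavior({"gaze_on_display": 0.8, "pose": "facing_display"})
print(apply_collaboration_action(infer_intention(behavior), "grab"))
```
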
  • Patent number: 10275113
    Abstract: An example system is provided in accordance with one implementation of the present disclosure. The system includes a 3D display displaying at least one three-dimensional (3D) visualization, an auxiliary computing device including a multi-touch display and a plurality of sensors, and a behavior analysis engine to perform a behavior analysis of a user by using data from the plurality of sensors. The behavior analysis engine is to: determine an attention engagement level of the user, and determine a pose of the user in relation to the auxiliary computing device. The system further includes an intention analysis engine to determine an intention of the user in relation to the at least one 3D visualization based on the user's attention engagement level and the user's pose, and an interaction mode engine to automatically adjust the system to an interaction mode based on the identified user intention.
    Type: Grant
    Filed: December 19, 2014
    Date of Patent: April 30, 2019
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Chih Pin Hsiao, Gregory William Cook, Jishang Wei, Mithra Vankipuram, Nelson L Chang
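
Patent 10275113 above ends the same analysis pipeline with an interaction mode engine rather than a collaboration engine. The short sketch below illustrates one plausible mode decision based on attention engagement and pose relative to the auxiliary device; the thresholds and mode names are assumptions, not the patent's.

```python
# Illustrative sketch of the interaction-mode decision described above:
# attention engagement plus pose relative to the auxiliary device select an
# interaction mode. Thresholds and labels are hypothetical.

def choose_interaction_mode(attention_on_3d: float, facing_auxiliary: bool) -> str:
    if facing_auxiliary and attention_on_3d < 0.3:
        return "multi_touch_mode"   # user focused on the auxiliary device
    if attention_on_3d >= 0.7:
        return "3d_gesture_mode"    # user engaged with the 3D visualization
    return "hybrid_mode"            # attention split between the two


print(choose_interaction_mode(attention_on_3d=0.8, facing_auxiliary=False))
```
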
  • Publication number: 20180335925
    Abstract: An example system is provided in accordance with one implementation of the present disclosure. The system includes a 3D display displaying at least one three-dimensional (3D) visualization, an auxiliary computing device including a multi-touch display and a plurality of sensors, and a behavior analysis engine to perform a behavior analysis of a user by using data from the plurality of sensors. The behavior analysis engine is to: determine an attention engagement level of the user, and determine a pose of the user in relation to the auxiliary computing device. The system further includes an intention analysis engine to determine an intention of the user in relation to the at least one 3D visualization based on the user's attention engagement level and the user's pose, and an interaction mode engine to automatically adjust the system to an interaction mode based on the identified user intention.
    Type: Application
    Filed: December 19, 2014
    Publication date: November 22, 2018
    Inventors: CHIH PIN HSIAO, GREGORY WILLIAM COOK, JISHANG WEI, MITHRA VANKIPURAM, NELSON L CHANG
  • Publication number: 20170344220
    Abstract: An example collaboration system is provided in accordance with one implementation of the present disclosure. The system includes a 3D display displaying a 3D data visualization, at least two hand avatars of two different users, and a view field avatar. The system also includes a plurality of auxiliary computing devices and a behavior analysis engine to perform a behavior analysis of a user. The behavior analysis engine is to: determine an attention engagement level of the user, and determine a pose of the user in relation to the auxiliary computing device. The system further includes an intention analysis engine to determine an intention of the user in relation to the 3D visualization based on the user's attention engagement level and the user's pose, and a collaboration engine to implement an action with the 3D data visualization by using a hand avatar based on the user's intention and an identified gesture.
    Type: Application
    Filed: December 19, 2014
    Publication date: November 30, 2017
    Inventors: Gregory W. Cook, Chih-Pin Hsiao, Jishang Wei, Mithra Vankipuram
  • Publication number: 20170315615
    Abstract: An example method is provided in accordance with one implementation of the present disclosure. The method includes analyzing data related to at least one detected hand, performing a hand posture analysis to identify a hand posture of the at least one hand and a key point of the at least one hand for the identified hand posture, and performing a hand motion analysis to identify a hand motion by the at least one hand based on the hand posture and the key point. The hand posture is selected from a predefined group of hand postures and the hand motion is selected from a predefined group of hand motions. The method further includes selecting a gesture from a gesture library based on a combination of the hand posture and the hand motion of the at least one hand.
    Type: Application
    Filed: December 19, 2014
    Publication date: November 2, 2017
    Inventors: GREGORY WILLIAM COOK, JISHANG WEI, MITHRA VANKIPURAM, CHIH PIN HSIAO
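
The gesture-selection step in publication 20170315615 above reduces to a lookup keyed on the combination of hand posture and hand motion. The sketch below illustrates that lookup; the posture and motion labels, and the library contents, are hypothetical examples rather than entries from the patent.

```python
# Minimal sketch of the gesture-selection step described in the abstract above:
# a gesture is chosen from a library keyed on (hand posture, hand motion).
# All labels and library entries are hypothetical.
from typing import Optional

GESTURE_LIBRARY = {
    ("open_palm", "swipe_left"): "previous_view",
    ("open_palm", "swipe_right"): "next_view",
    ("pinch", "pull"): "zoom_in",
    ("pinch", "push"): "zoom_out",
    ("fist", "rotate"): "rotate_object",
}


def select_gesture(posture: str, motion: str) -> Optional[str]:
    """Select a gesture for a posture/motion combination, if one is defined."""
    return GESTURE_LIBRARY.get((posture, motion))


print(select_gesture("pinch", "pull"))  # -> "zoom_in"
```
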
  • Publication number: 20170293350
    Abstract: An example method is provided in accordance with one implementation of the present disclosure. The method includes identifying an intention of a user of a system in relation to a three-dimensional (3D) virtual object and selecting a 3D navigation mode from a plurality of 3D navigation modes based on the identified user intention. The plurality of 3D navigation modes includes at least a model navigation mode, a simple navigation mode, a driving navigation mode, a reaching navigation mode, and a multi-touch navigation mode. The method further includes transitioning the system to the selected 3D navigation mode.
    Type: Application
    Filed: December 19, 2014
    Publication date: October 12, 2017
    Inventors: CHIH PIN HSIAO, GREGORY WILLIAM COOK, JISHANG WEI, MITHRA VANKIPURAM, NELSON L. CHANG
  • Publication number: 20160063108
    Abstract: A system and method for facilitating an intelligent query for a graphic pattern are provided. In example embodiments, a plurality of user interfaces is provided at a first device that is communicatively coupled to a second device having a visual analytics system. The plurality of user interfaces provides control of the visual analytics system of the second device at the first device. Sketch inputs are received via a sketch user interface of the plurality of user interfaces. The sketch inputs collectively generate a graphic pattern. A complex query is generated that includes the graphic pattern. The complex query is transmitted to the second device having the visual analytics system that performs a search for data that matches the complex query.
    Type: Application
    Filed: August 28, 2014
    Publication date: March 3, 2016
    Inventors: Chih-Sung Wu, Chih-Pin Hsiao, Jeng-Weei Lin, Na Cheng, Waqas Javed
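
Publication 20160063108 above describes assembling sketch strokes into a graphic pattern, wrapping it in a complex query, and transmitting it to the second device's visual analytics system. The sketch below illustrates one way such a query could be assembled and sent; the JSON structure, field names, and endpoint URL are assumptions for illustration and do not come from the patent.

```python
# Illustrative sketch of the query flow described above: sketch strokes are
# flattened into a graphic pattern, wrapped in a complex query, and posted to
# the analytics system on a second device. All field names and the URL are
# hypothetical.
import json
from urllib import request


def build_complex_query(strokes, metric="cpu_load", time_range=("2014-08-01", "2014-08-28")):
    """Combine sketch strokes into a graphic pattern and wrap it in a query."""
    pattern = [point for stroke in strokes for point in stroke]  # flatten strokes
    return {
        "pattern": pattern,        # the sketched shape to match against data
        "metric": metric,          # which data series to search
        "time_range": time_range,  # window to search within
    }


def send_query(query, url="http://analytics-host.example/query"):
    """Transmit the complex query to the second device's analytics system."""
    req = request.Request(
        url,
        data=json.dumps(query).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:  # would return matching data segments
        return json.load(resp)


query = build_complex_query([[(0, 0.1), (1, 0.4)], [(2, 0.9), (3, 0.5)]])
print(json.dumps(query, indent=2))
```
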