Patents by Inventor Feng-Seng Chu

Feng-Seng Chu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11127181
    Abstract: An avatar facial expression generating system and a method of avatar facial expression generation are provided. In the method, multiple pieces of user data, each related to the sensing result of a user, are obtained from multiple data sources. A first emotion decision is determined from each piece of user data. Whether an emotion collision occurs among the first emotion decisions is then determined; an emotion collision means that the emotion groups corresponding to the first emotion decisions do not match one another. A second emotion decision is determined from one or more emotion groups according to the result of that determination. Each first or second emotion decision is related to one emotion group. A facial expression of an avatar is generated based on the second emotion decision, so that a proper facial expression of the avatar can be presented.
    Type: Grant
    Filed: February 27, 2020
    Date of Patent: September 21, 2021
    Assignee: XRSPACE CO., LTD.
    Inventors: Feng-Seng Chu, Peter Chou
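    The following is a minimal Python sketch of the emotion-collision idea described in the abstract above; the emotion groups, data sources, and majority-vote resolution are illustrative assumptions, not the patented implementation.

    ```python
    # Illustrative sketch only; emotion groups, sources, and the voting rule are hypothetical.
    from collections import Counter

    # Hypothetical emotion groups: each first emotion decision maps to one group.
    EMOTION_GROUPS = {
        "joy": "positive", "surprise": "positive",
        "anger": "negative", "sadness": "negative",
        "neutral": "neutral",
    }

    def first_emotion_decisions(user_data_by_source):
        """One emotion decision per data source (e.g. voice, facial image, text)."""
        # Placeholder: assume each source already carries a classified emotion label.
        return {source: data["emotion"] for source, data in user_data_by_source.items()}

    def has_emotion_collision(decisions):
        """A collision occurs when the decisions fall into more than one emotion group."""
        groups = {EMOTION_GROUPS[e] for e in decisions.values()}
        return len(groups) > 1

    def second_emotion_decision(decisions):
        """Resolve a collision by majority vote over emotion groups, then pick
        the most frequent emotion inside the winning group."""
        if not has_emotion_collision(decisions):
            return Counter(decisions.values()).most_common(1)[0][0]
        group_votes = Counter(EMOTION_GROUPS[e] for e in decisions.values())
        winning_group = group_votes.most_common(1)[0][0]
        in_group = [e for e in decisions.values() if EMOTION_GROUPS[e] == winning_group]
        return Counter(in_group).most_common(1)[0][0]

    user_data = {
        "voice": {"emotion": "anger"},
        "face":  {"emotion": "joy"},
        "text":  {"emotion": "joy"},
    }
    decisions = first_emotion_decisions(user_data)
    print(has_emotion_collision(decisions))    # True: positive vs. negative groups
    print(second_emotion_decision(decisions))  # "joy" drives the avatar's facial expression
    ```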
  • Patent number: 11107293
    Abstract: A head mounted display system includes a scanning unit and a processing unit. The scanning unit is configured to scan a real object in a real environment so as to generate a scanning result. The processing unit is coupled to the scanning unit. The processing unit is configured to identify the real object according to the scanning result of the scanning unit, determine at least one predetermined interactive characteristic according to an identification result of the processing unit, create a virtual object in a virtual environment corresponding to the real object in the real environment according to the scanning result of the scanning unit, and assign the at least one predetermined interactive characteristic to the virtual object in the virtual environment. Therefore, the present disclosure allows a user to manipulate the virtual object in different ways more naturally, which effectively improves the user's interactive experience.
    Type: Grant
    Filed: April 23, 2019
    Date of Patent: August 31, 2021
    Assignee: XRSpace CO., LTD.
    Inventors: Chih-Wen Wang, Chia-Ming Lu, Feng-Seng Chu, Wei-Shuo Chen
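    A small Python sketch of the scan-identify-assign flow described in the abstract above; the object classes and interactive characteristics are hypothetical examples, not the claimed method.

    ```python
    # Illustrative sketch; object classes and characteristics are hypothetical examples.
    from dataclasses import dataclass, field

    # Hypothetical table of predetermined interactive characteristics per object class.
    INTERACTIVE_CHARACTERISTICS = {
        "mug":      ["graspable", "pourable"],
        "keyboard": ["pressable"],
        "chair":    ["sittable", "movable"],
    }

    @dataclass
    class VirtualObject:
        name: str
        mesh: object                      # geometry reconstructed from the scanning result
        characteristics: list = field(default_factory=list)

    def identify(scan_result):
        """Stand-in for the classifier that identifies the scanned real object."""
        return scan_result["label"]

    def create_virtual_object(scan_result):
        label = identify(scan_result)
        obj = VirtualObject(name=label, mesh=scan_result["mesh"])
        # Assign the predetermined interactive characteristics for this identification result.
        obj.characteristics = INTERACTIVE_CHARACTERISTICS.get(label, [])
        return obj

    scan = {"label": "mug", "mesh": None}
    virtual_mug = create_virtual_object(scan)
    print(virtual_mug.characteristics)   # ['graspable', 'pourable']
    ```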
  • Patent number: 11087520
    Abstract: An avatar facial expression generating system and a method of avatar facial expression generation are provided. In the method, user data relating to the sensing result of a user is obtained. A first and a second emotional configuration are determined; the first and second emotional configurations are maintained during a first and a second duration, respectively. A transition emotional configuration, maintained during a third duration, is determined based on the first and second emotional configurations. Facial expressions of an avatar are generated based on the first emotional configuration, the transition emotional configuration, and the second emotional configuration, respectively. The third duration lies between the first duration and the second duration, so that a natural facial expression is presented on the avatar while the emotion changes.
    Type: Grant
    Filed: October 17, 2019
    Date of Patent: August 10, 2021
    Assignee: XRSPACE CO., LTD.
    Inventors: Wei-Zhe Hong, Ming-Yang Kung, Ting-Chieh Lin, Feng-Seng Chu
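    A brief Python sketch of holding, transitioning, and holding emotional configurations over consecutive durations as described in the abstract above; the linear blend and the duration values are assumptions for illustration.

    ```python
    # Illustrative sketch; the blend function, durations, and parameter names are assumptions.

    def blend(cfg_a, cfg_b, t):
        """Linearly interpolate between two emotional configurations (0 <= t <= 1)."""
        return {k: (1 - t) * cfg_a[k] + t * cfg_b[k] for k in cfg_a}

    def expression_at(time_s, first_cfg, second_cfg,
                      first_end=2.0, transition_end=2.5):
        """The first configuration holds during the first duration, the transition
        configuration during the third duration in between, then the second one."""
        if time_s < first_end:
            return first_cfg
        if time_s < transition_end:
            t = (time_s - first_end) / (transition_end - first_end)
            return blend(first_cfg, second_cfg, t)
        return second_cfg

    happy = {"mouth_smile": 1.0, "brow_raise": 0.3}
    angry = {"mouth_smile": 0.0, "brow_raise": 0.9}
    for t in (1.0, 2.25, 3.0):
        print(t, expression_at(t, happy, angry))
    ```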
  • Publication number: 20200342682
    Abstract: A head mounted display system includes a scanning unit and a processing unit. The scanning unit is configured to scan a real object in a real environment so as to generate a scanning result. The processing unit is coupled to the scanning unit. The processing unit is configured to identify the real object according to the scanning result of the scanning unit, determine at least one predetermined interactive characteristic according to an identification result of the processing unit, create a virtual object in a virtual environment corresponding to the real object in the real environment according to the scanning result of the scanning unit, and assign the at least one predetermined interactive characteristic to the virtual object in the virtual environment. Therefore, the present disclosure allows a user to manipulate the virtual object in different ways more naturally, which effectively improves the user's interactive experience.
    Type: Application
    Filed: April 23, 2019
    Publication date: October 29, 2020
    Inventors: Chih-Wen Wang, Chia-Ming Lu, Feng-Seng Chu, Wei-Shuo Chen
  • Patent number: 10732725
    Abstract: A method of interactive display based on gesture recognition includes determining a plurality of gestures corresponding to a plurality of images, interpreting a predetermined combination of gestures among the plurality of gestures as a command, and displaying a scene in response to the command.
    Type: Grant
    Filed: September 25, 2018
    Date of Patent: August 4, 2020
    Assignee: XRSpace CO., LTD.
    Inventors: Peter Chou, Feng-Seng Chu, Yen-Hung Lin, Shih-Hao Ke, Jui-Chieh Chen
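    A minimal Python sketch of interpreting a predetermined combination of recognized gestures as a display command, as described in the abstract above; the gesture labels and command table are made up for illustration.

    ```python
    # Illustrative sketch; the gesture labels and command table are hypothetical.

    # Predetermined combinations of recognized gestures mapped to display commands.
    GESTURE_COMMANDS = {
        ("open", "fist"):         "grab_object",
        ("point", "swipe_left"):  "previous_scene",
        ("point", "swipe_right"): "next_scene",
    }

    def recognize_gestures(images):
        """Stand-in for per-image gesture recognition."""
        return [img["gesture"] for img in images]

    def interpret(gestures):
        """Scan the gesture sequence for any predetermined combination."""
        for i in range(len(gestures) - 1):
            command = GESTURE_COMMANDS.get((gestures[i], gestures[i + 1]))
            if command:
                return command
        return None

    images = [{"gesture": "point"}, {"gesture": "swipe_right"}]
    command = interpret(recognize_gestures(images))
    print(command)   # "next_scene" -> the display renders the scene for this command
    ```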
  • Publication number: 20200193667
    Abstract: An avatar facial expression generating system and a method of avatar facial expression generation are provided. In the method, multiple pieces of user data, each related to the sensing result of a user, are obtained from multiple data sources. A first emotion decision is determined from each piece of user data. Whether an emotion collision occurs among the first emotion decisions is then determined; an emotion collision means that the emotion groups corresponding to the first emotion decisions do not match one another. A second emotion decision is determined from one or more emotion groups according to the result of that determination. Each first or second emotion decision is related to one emotion group. A facial expression of an avatar is generated based on the second emotion decision, so that a proper facial expression of the avatar can be presented.
    Type: Application
    Filed: February 27, 2020
    Publication date: June 18, 2020
    Applicant: XRSPACE CO., LTD.
    Inventors: Feng-Seng Chu, Peter Chou
  • Publication number: 20200186478
    Abstract: A task dispatching method, applied in an edge computing system comprising an edge computing device and a mobile device, is disclosed. The method comprises sending, by the mobile device, a resource inquiry message to the edge computing device through a connection, wherein the connection comprises a wireless connection, a one-way transmission latency of the connection is less than 10 milliseconds, and the resource inquiry message comprises an inquiry of which resource type the edge computing device is equipped with; sending, by the edge computing device, a resource response message corresponding to the resource inquiry message; and determining, by the mobile device, to dispatch the second type of computing task to the second computing device when the resource response message indicates that the edge computing device comprises the second computing device equipped with the second type of processor.
    Type: Application
    Filed: December 10, 2018
    Publication date: June 11, 2020
    Inventors: Peter Chou, Feng-Seng Chu
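    A short Python sketch of the resource inquiry/response exchange and the dispatch decision described in the abstract above; the message fields and resource types are illustrative assumptions.

    ```python
    # Illustrative sketch of the inquiry/response exchange; message fields are assumptions.

    EDGE_RESOURCES = {"processor_types": ["CPU", "GPU"]}   # what the edge device reports

    def resource_inquiry():
        """Mobile device asks which resource types the edge computing device is equipped with."""
        return {"type": "resource_inquiry"}

    def resource_response(inquiry):
        """Edge computing device answers with its equipped processor types."""
        assert inquiry["type"] == "resource_inquiry"
        return {"type": "resource_response", **EDGE_RESOURCES}

    def dispatch_target(task_processor_type, response):
        """Dispatch the task to the edge device only if it has the needed processor."""
        if task_processor_type in response["processor_types"]:
            return "edge"
        return "local"

    response = resource_response(resource_inquiry())
    print(dispatch_target("GPU", response))   # "edge": offload the GPU task over the low-latency link
    ```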
  • Publication number: 20200184675
    Abstract: A positioning method, applied in a reality presenting device, includes collecting a plurality of first images of a real environment and constructing a virtual environment corresponding to the real environment according to the plurality of first images; obtaining, by a reality presenting device, a second image of the real environment; computing an initial virtual position in the virtual environment corresponding to the second image according to the plurality of first images and the second image; and displaying, by the reality presenting device, the virtual environment in a perspective from the initial virtual position at a time which a specific application of the reality presenting device is initiated; wherein the initial virtual position is corresponding to an initial real location in the real environment at which the reality presenting device captures the second image.
    Type: Application
    Filed: December 9, 2018
    Publication date: June 11, 2020
    Inventors: Peter Chou, Shang-Chin Su, Meng-Hau Wu, Feng-Seng Chu
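    A minimal Python sketch of picking an initial virtual position by matching the newly captured second image against the previously collected first images, as described above; the descriptor matching stands in for whatever pose computation the application actually uses.

    ```python
    # Illustrative sketch; descriptor matching stands in for the actual pose computation.
    import math

    # Hypothetical map built from the first images: descriptor -> virtual position.
    KEYFRAMES = [
        {"descriptor": (0.1, 0.9), "virtual_position": (0.0, 0.0, 0.0)},
        {"descriptor": (0.8, 0.2), "virtual_position": (3.0, 0.0, 1.5)},
    ]

    def describe(image):
        """Stand-in for feature extraction on the second image."""
        return image["descriptor"]

    def initial_virtual_position(second_image):
        """Pick the virtual position of the most similar first image."""
        d = describe(second_image)
        best = min(KEYFRAMES, key=lambda k: math.dist(k["descriptor"], d))
        return best["virtual_position"]

    second_image = {"descriptor": (0.75, 0.25)}
    print(initial_virtual_position(second_image))   # starting viewpoint when the application launches
    ```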
  • Patent number: 10678342
    Abstract: A method of virtual user interface interaction based on gesture recognition comprises detecting two hands in a plurality of images, recognizing each hand's gesture, projecting a virtual user interface on an open gesture hand when one hand is recognized with a point gesture and the other hand is recognized with an open gesture, tracking an index fingertip of the point gesture hand, determining whether the index fingertip of the point gesture hand is close to the open gesture hand within a predefined rule, interpreting a movement of the index fingertip of the point gesture hand as a click command when the index fingertip of the point gesture hand is close to the open gesture hand within the predefined rule, and in response to the click command, generating image data with a character object of the virtual user interface object.
    Type: Grant
    Filed: October 21, 2018
    Date of Patent: June 9, 2020
    Assignee: XRSpace CO., LTD.
    Inventors: Peter Chou, Feng-Seng Chu, Yen-Hung Lin, Shih-Hao Ke, Jui-Chieh Chen
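    A compact Python sketch of the fingertip-to-palm proximity rule that turns a point-gesture movement into a click on the projected virtual user interface, as described in the abstract above; the keypoints and distance threshold are assumptions.

    ```python
    # Illustrative sketch; hand keypoints and the proximity rule are assumptions.
    import math

    def is_click(index_fingertip, open_palm_center, threshold=0.03):
        """Treat the point-gesture fingertip touching the open hand as a click
        (distance in metres below a predefined threshold)."""
        return math.dist(index_fingertip, open_palm_center) < threshold

    def handle_frame(gestures, keypoints):
        # Project the virtual user interface only when one hand points and the other is open.
        if set(gestures.values()) != {"point", "open"}:
            return None
        point_hand = next(h for h, g in gestures.items() if g == "point")
        open_hand = next(h for h, g in gestures.items() if g == "open")
        if is_click(keypoints[point_hand]["index_tip"], keypoints[open_hand]["palm"]):
            return "click"                      # caller renders the selected character object
        return None

    gestures = {"left": "open", "right": "point"}
    keypoints = {"right": {"index_tip": (0.10, 0.20, 0.30)},
                 "left":  {"palm": (0.11, 0.20, 0.31)}}
    print(handle_frame(gestures, keypoints))    # "click"
    ```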
  • Publication number: 20200125176
    Abstract: A method of virtual user interface interaction based on gesture recognition comprises detecting two hands in a plurality of images, recognizing each hand's gesture, projecting a virtual user interface on an open gesture hand when one hand is recognized with a point gesture and the other hand is recognized with an open gesture, tracking an index fingertip of the point gesture hand, determining whether the index fingertip of the point gesture hand is close to the open gesture hand within a predefined rule, interpreting a movement of the index fingertip of the point gesture hand as a click command when the index fingertip of the point gesture hand is close to the open gesture hand within the predefined rule, and in response to the click command, generating image data with a character object of the virtual user interface object.
    Type: Application
    Filed: October 21, 2018
    Publication date: April 23, 2020
    Inventors: Peter Chou, Feng-Seng Chu, Yen-Hung Lin, Shih-Hao Ke, Jui-Chieh Chen
  • Patent number: 10606345
    Abstract: A reality interactive responding system, comprising a first server configured to receive first input data from a user and to determine whether the first input data conform to any one of a plurality of variation conditions or not; and a second server coupled to the first server and configured to receive second input data from the first server when the first input data conform to any one of the plurality of variation conditions and to determine a plurality of interactions in response to the first input data from the user; wherein the first input data from the user are related to an action, a facial expression, a gaze, a text, a speech, a gesture, an emotion or a movement generated by the user.
    Type: Grant
    Filed: September 25, 2018
    Date of Patent: March 31, 2020
    Assignee: XRSpace CO., LTD.
    Inventors: Peter Chou, Feng-Seng Chu, Cheng-Wei Lee, Yu-Chen Lai, Chuan-Chang Wang
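    A minimal Python sketch of the two-server flow described in the abstract above, with hypothetical variation conditions and interactions.

    ```python
    # Illustrative sketch; the variation conditions and interactions are hypothetical.

    VARIATION_CONDITIONS = [
        lambda d: d.get("gesture") == "wave",
        lambda d: d.get("emotion") == "happy",
    ]

    def first_server(first_input):
        """Forward the data to the second server only if any variation condition holds."""
        if any(cond(first_input) for cond in VARIATION_CONDITIONS):
            return second_server(first_input)
        return []

    def second_server(second_input):
        """Determine the interactions presented back to the user."""
        interactions = []
        if second_input.get("gesture") == "wave":
            interactions.append("avatar_waves_back")
        if second_input.get("emotion") == "happy":
            interactions.append("avatar_smiles")
        return interactions

    print(first_server({"gesture": "wave", "emotion": "happy"}))
    # ['avatar_waves_back', 'avatar_smiles']
    ```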
  • Publication number: 20200098155
    Abstract: An avatar establishing method applied in an avatar establishing device is provided. The avatar establishing method comprises receiving a picture including a face; obtaining a plurality of initial avatar parameters corresponding to the picture; receiving a plurality of adjustments inputted by a user; obtaining a plurality of adjusted avatar parameters according to the plurality of initial avatar parameters and the plurality of adjustments; and generating an adjusted avatar according to the plurality of adjusted avatar parameters.
    Type: Application
    Filed: September 25, 2018
    Publication date: March 26, 2020
    Inventors: Peter Chou, Feng-Seng Chu, Ting-Chieh Lin, Chuan-Chang Wang
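    A small Python sketch of adjusting initial avatar parameters with user inputs, as described in the abstract above; the parameter names and clamping rule are illustrative assumptions.

    ```python
    # Illustrative sketch; parameter names and the adjustment rule are assumptions.

    def initial_avatar_parameters(picture):
        """Stand-in for the face-analysis step that derives parameters from a picture."""
        return {"face_width": 0.52, "eye_size": 0.40, "nose_length": 0.47}

    def apply_adjustments(initial_params, adjustments):
        """Combine user adjustments with the initial parameters (clamped to [0, 1])."""
        adjusted = dict(initial_params)
        for name, delta in adjustments.items():
            adjusted[name] = min(1.0, max(0.0, adjusted[name] + delta))
        return adjusted

    picture = object()                               # the received picture including a face
    params = initial_avatar_parameters(picture)
    adjusted = apply_adjustments(params, {"eye_size": +0.1, "nose_length": -0.05})
    print(adjusted)                                  # parameters used to generate the adjusted avatar
    ```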
  • Publication number: 20200097067
    Abstract: A reality interactive responding system, comprising a first server configured to receive first input data from a user and to determine whether the first input data conform to any one of a plurality of variation conditions or not; and a second server coupled to the first server and configured to receive second input data from the first server when the first input data conform to any one of the plurality of variation conditions and to determine a plurality of interactions in response to the first input data from the user; wherein the first input data from the user are related to an action, a facial expression, a gaze, a text, a speech, a gesture, an emotion or a movement generated by the user.
    Type: Application
    Filed: September 25, 2018
    Publication date: March 26, 2020
    Inventors: Peter Chou, Feng-Seng Chu, Cheng-Wei Lee, Yu-Chen Lai, Chuan-Chang Wang
  • Publication number: 20200097091
    Abstract: A method of interactive display based on gesture recognition includes determining a plurality of gestures corresponding to a plurality of images, interpreting a predetermined combination of gestures among the plurality of gestures as a command, and displaying a scene in response to the command.
    Type: Application
    Filed: September 25, 2018
    Publication date: March 26, 2020
    Inventors: Peter Chou, Feng-Seng Chu, Yen-Hung Lin, Shih-Hao Ke, Jui-Chieh Chen
  • Publication number: 20200099634
    Abstract: An interactive responding method comprises receiving input data from a user; generating output data according to the input data; retrieving a plurality of attributes from the output data; determining a plurality of interactions corresponding to the plurality of attributes of the output data; and displaying the plurality of interactions via a non-player character; wherein the input data and the output data are related to a text.
    Type: Application
    Filed: September 20, 2018
    Publication date: March 26, 2020
    Inventors: Peter Chou, Feng-Seng Chu, Cheng-Wei Lee
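    A brief Python sketch of retrieving attributes from generated output text and mapping them to interactions shown by a non-player character, as described above; the attribute heuristics and interaction table are hypothetical.

    ```python
    # Illustrative sketch; attribute extraction and the interaction table are hypothetical.

    def generate_output(input_text):
        """Stand-in for the dialogue model that produces the NPC's reply text."""
        return "Great to see you! Let's explore the castle."

    def retrieve_attributes(output_text):
        """Derive simple attributes (tone, intent) from the reply."""
        attributes = []
        if "!" in output_text:
            attributes.append("excited")
        if "let's" in output_text.lower():
            attributes.append("inviting")
        return attributes

    INTERACTIONS = {"excited": "npc_smiles", "inviting": "npc_waves_toward_door"}

    output = generate_output("hello")
    interactions = [INTERACTIONS[a] for a in retrieve_attributes(output)]
    print(output, interactions)    # the NPC displays these interactions alongside the text
    ```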
  • Publication number: 20200090392
    Abstract: A method of facial expression generation by data fusion for a computing device of a virtual reality system is disclosed. The method comprises obtaining facial information of a user from a plurality of data sources, wherein the plurality of data sources includes real-time data detection and data pre-configuration, mapping the facial information to facial expression parameters for simulating a facial geometry model of the user, performing a fusion process according to the facial expression parameters to generate fusing parameters associated with the weighted facial expression parameters, and generating a facial expression of an avatar in the virtual reality system according to the fusing parameters.
    Type: Application
    Filed: September 19, 2018
    Publication date: March 19, 2020
    Inventors: Peter Chou, Feng-Seng Chu, Ting-Chieh Lin, Chuan-Chang Wang
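    A minimal Python sketch of the weighted fusion of facial expression parameters from multiple data sources described in the abstract above; the sources, parameter names, and weights are assumptions.

    ```python
    # Illustrative sketch; the sources, parameters, and weights are hypothetical.

    def fuse(parameters_by_source, weights):
        """Weighted fusion of facial expression parameters from several data sources."""
        total = sum(weights.values())
        fused = {}
        for source, params in parameters_by_source.items():
            w = weights[source] / total
            for name, value in params.items():
                fused[name] = fused.get(name, 0.0) + w * value
        return fused

    parameters = {
        "camera_tracking": {"jaw_open": 0.6, "smile": 0.2},   # real-time data detection
        "preconfigured":   {"jaw_open": 0.3, "smile": 0.8},   # data pre-configuration
    }
    weights = {"camera_tracking": 0.7, "preconfigured": 0.3}
    print(fuse(parameters, weights))   # fusing parameters that drive the avatar's expression
    ```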
  • Publication number: 20200090394
    Abstract: An avatar facial expression generating system and a method of avatar facial expression generation are provided. In the method, user data relating to the sensing result of a user is obtained. A first and a second emotional configuration are determined; the first and second emotional configurations are maintained during a first and a second duration, respectively. A transition emotional configuration, maintained during a third duration, is determined based on the first and second emotional configurations. Facial expressions of an avatar are generated based on the first emotional configuration, the transition emotional configuration, and the second emotional configuration, respectively. The third duration lies between the first duration and the second duration, so that a natural facial expression is presented on the avatar while the emotion changes.
    Type: Application
    Filed: October 17, 2019
    Publication date: March 19, 2020
    Applicant: XRSPACE CO., LTD.
    Inventors: Wei-Zhe Hong, Ming-Yang Kung, Ting-Chieh Lin, Feng-Seng Chu
  • Patent number: 10581273
    Abstract: The present invention provides a mobile apparatus with wireless charging function and an accessory apparatus with wireless charging function that allow a wireless power receiver (PRX) to communicate with a CPU of a mobile device. The mobile apparatus comprises a processing circuit and a wireless power receiver (PRX). The PRX is coupled to the processing circuit and wirelessly connected to a wireless power transmitter (PTX) of a wireless charger for receiving wireless power, wherein the PRX receives a first authentication request signal from the PTX and then sends a second authentication request signal to the processing circuit, the processing circuit sends second authentication information to the PRX, and the PRX sends first authentication information to the PTX.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: March 3, 2020
    Assignee: HTC Corporation
    Inventor: Feng-Seng Chu
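    A short Python sketch of the relayed authentication exchange (PTX to PRX to processing circuit and back) described in the abstract above; the message contents are illustrative assumptions.

    ```python
    # Illustrative sketch of the relayed authentication exchange; message contents are assumptions.

    class ProcessingCircuit:
        def authenticate(self, second_request):
            # The processing circuit (CPU side) answers the relayed request.
            return {"second_auth_info": "device-credential"}

    class WirelessPowerReceiver:              # PRX on the mobile or accessory apparatus
        def __init__(self, processing_circuit):
            self.cpu = processing_circuit

        def on_first_auth_request(self, first_request):
            # 1. PTX -> PRX: first authentication request received over the charging link.
            second_request = {"relayed_from": first_request["charger_id"]}
            # 2. PRX -> processing circuit: second authentication request.
            second_info = self.cpu.authenticate(second_request)
            # 3. Processing circuit -> PRX -> PTX: first authentication information returned to the charger.
            return {"first_auth_info": second_info["second_auth_info"]}

    prx = WirelessPowerReceiver(ProcessingCircuit())
    print(prx.on_first_auth_request({"charger_id": "PTX-01"}))
    ```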
  • Patent number: 10312746
    Abstract: An operating method includes receiving a wireless power signal from a power providing equipment; receiving a request from the power providing equipment; and under a condition that a battery level of a battery of the mobile device is less than a threshold value, transmitting an unavailable message to the power providing equipment, so that the power providing equipment charges the battery of the mobile device by utilizing the wireless power signal according to the unavailable message.
    Type: Grant
    Filed: June 23, 2015
    Date of Patent: June 4, 2019
    Assignee: HTC Corporation
    Inventor: Feng-Seng Chu
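    A minimal Python sketch of replying with an unavailable message when the battery level is below a threshold so that wireless charging continues, as described in the abstract above; the threshold and message format are assumptions.

    ```python
    # Illustrative sketch; the threshold and message format are assumptions.

    BATTERY_THRESHOLD = 20          # percent; below this the mobile device declines the request

    def handle_request(battery_level, request):
        """Reply to the power providing equipment after receiving its request."""
        if battery_level < BATTERY_THRESHOLD:
            # The unavailable message tells the equipment to keep charging the battery
            # with the wireless power signal instead of acting on the request.
            return {"type": "unavailable", "request_id": request["id"]}
        return {"type": "available", "request_id": request["id"]}

    print(handle_request(battery_level=12, request={"id": 7}))   # {'type': 'unavailable', ...}
    ```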
  • Patent number: 10116168
    Abstract: A wireless power transmitter device that includes a transmitter circuit, a transmitter coil, a transmitter communication unit and a transmitter control unit is provided. The transmitter circuit generates a transmitting current. The transmitter coil receives the transmitting current to generate an electromagnetic field to induce a receiving current in a wireless power receiver device. The transmitter communication unit is configured to receive a report of a received power of the wireless power receiver device therefrom. The transmitter control unit receives the report of the received power and determines whether a frequency splitting phenomenon occurs according to the received power. When the frequency splitting phenomenon occurs, the transmitter control unit adjusts at least one of a configuration of the transmitter coil and a configuration of the transmitter circuit, or adjusts a transmitting frequency of the transmitting current.
    Type: Grant
    Filed: September 10, 2015
    Date of Patent: October 30, 2018
    Assignee: HTC Corporation
    Inventor: Feng-Seng Chu
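    A brief Python sketch of detecting frequency splitting from the reported received power and adjusting the transmitting frequency, as described in the abstract above; the detection heuristic and adjustment step are assumptions.

    ```python
    # Illustrative sketch; the detection heuristic and adjustment step are assumptions.

    EXPECTED_POWER_W = 5.0
    SPLIT_DROP_RATIO = 0.6      # received power well below expectation suggests splitting

    def frequency_splitting(received_power_w):
        """Heuristic check based on the receiver's reported received power."""
        return received_power_w < SPLIT_DROP_RATIO * EXPECTED_POWER_W

    def control_loop(received_power_w, tx_frequency_khz):
        if frequency_splitting(received_power_w):
            # Adjust the transmitting frequency (the coil or circuit configuration could be
            # adjusted instead) and re-evaluate on the next received-power report.
            return tx_frequency_khz + 5
        return tx_frequency_khz

    print(control_loop(received_power_w=2.4, tx_frequency_khz=140))   # 145: nudged frequency
    ```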