Patents by Inventor Szu Wen FAN

Szu Wen FAN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240184497
    Abstract: Methods and systems for preventing motion sickness via postural analysis are provided. In response to a determination that a portable electronic device is in motion, that the portable electronic device is being used for visual activities, and that the user of the portable electronic device has an unhealthy posture, a time to implement an intervention is determined. The intervention is subsequently performed at the determined time.
    Type: Application
    Filed: December 6, 2022
    Publication date: June 6, 2024
    Inventors: Szu Wen FAN, Yuan DENG, Juntao YE
  • Patent number: 11794766
    Abstract: The present disclosure describes systems and methods for providing driver assistance. A current vehicle state of a vehicle at a current timestep and an environment map representing an environment of the vehicle at least at the current timestep are obtained. Augmented reality feedback is generated including a virtual vehicle representing a predicted future vehicle state, where the predicted future vehicle state is predicted for a given future timestep based on at least one of the current vehicle state or the current environment. The generated augmented reality feedback including the virtual vehicle is outputted to be displayed by a display device.
    Type: Grant
    Filed: October 14, 2021
    Date of Patent: October 24, 2023
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Taslim Arefin Khan, Szu Wen Fan, Wei Li
  • Patent number: 11797100
    Abstract: Systems and methods for generating a classification of touch events are disclosed. A system may detect a first touch event, the first touch event being effected by a touch effector upon a touch receiver. The system may then determine a first touch gesture associated with the first touch event, determine an orientation associated with the touch effector, determine an orientation of the touch receiver, calculate a first relative orientation between the orientation associated with the touch effector and the orientation of the touch receiver, and generate a classification of the first touch event based on the first touch gesture and the first relative orientation.
    Type: Grant
    Filed: September 23, 2022
    Date of Patent: October 24, 2023
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventor: Szu Wen Fan
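    The classification approach described in the abstract above can be illustrated with a minimal sketch. This is not the patented method itself, only an assumed simplification: orientations are reduced to yaw angles in degrees, and the hypothetical functions `relative_yaw` and `classify_touch` and the bucket thresholds are illustrative choices, not taken from the patent.

    ```python
    def relative_yaw(effector_yaw_deg, receiver_yaw_deg):
        """Signed smallest angle (degrees) from the touch receiver's heading
        to the touch effector's heading, wrapped into [-180, 180)."""
        return (effector_yaw_deg - receiver_yaw_deg + 180.0) % 360.0 - 180.0

    def classify_touch(gesture, effector_yaw_deg, receiver_yaw_deg):
        """Combine the detected gesture with a coarse relative-orientation
        bucket to produce a touch-event classification."""
        rel = relative_yaw(effector_yaw_deg, receiver_yaw_deg)
        if abs(rel) <= 45:
            bucket = "aligned"        # effector roughly matches receiver heading
        elif abs(rel) >= 135:
            bucket = "opposed"        # effector roughly faces the opposite way
        else:
            bucket = "perpendicular"  # effector roughly crosswise to receiver
        return f"{gesture}/{bucket}"
    ```

    A swipe made with a stylus pointed 20 degrees off the tablet's heading would classify as `"swipe/aligned"` under these assumed thresholds; the same gesture with the stylus reversed would classify as `"swipe/opposed"`.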
  • Patent number: 11688148
    Abstract: Methods and systems for selecting an object or location in an extended reality (XR) environment or physical environment are described. A first origin, including a first position and a first direction, and a second origin, including a second position and a second direction, are obtained by at least one sensor. An intersection of a first ray, cast from the first origin, and a second ray, cast from the second origin, is determined. A selected object or selected location is identified, based on the determined intersection. An identification of the selected object or the selected location is outputted.
    Type: Grant
    Filed: September 8, 2022
    Date of Patent: June 27, 2023
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Szu Wen Fan, Taslim Arefin Khan, Wei Li
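    The two-ray intersection step in the abstract above can be sketched with standard closest-point geometry. This is an illustrative sketch, not the patented implementation: real rays rarely intersect exactly, so a common approach (assumed here) is to return the midpoint of the shortest segment between the two rays. The function name `closest_point_between_rays` is hypothetical.

    ```python
    def closest_point_between_rays(p1, d1, p2, d2, eps=1e-9):
        """Midpoint of the shortest segment connecting two rays in 3D.

        p1, p2: ray origins; d1, d2: direction vectors (need not be unit).
        Returns None for (near-)parallel rays or when the closest approach
        lies behind either origin.
        """
        def dot(a, b):
            return sum(x * y for x, y in zip(a, b))

        a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
        w = [x - y for x, y in zip(p1, p2)]   # vector from origin 2 to origin 1
        d, e = dot(d1, w), dot(d2, w)
        denom = a * c - b * b                 # zero when directions are parallel
        if abs(denom) < eps:
            return None
        t1 = (b * e - c * d) / denom          # parameter along ray 1
        t2 = (a * e - b * d) / denom          # parameter along ray 2
        if t1 < 0 or t2 < 0:
            return None                       # intersection is behind a ray origin
        q1 = [p + t1 * v for p, v in zip(p1, d1)]
        q2 = [p + t2 * v for p, v in zip(p2, d2)]
        return [(x + y) / 2 for x, y in zip(q1, q2)]
    ```

    In a selection system like the one described, the returned point could then be tested against object bounding volumes to identify which object the user is indicating.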
  • Patent number: 11640700
    Abstract: Methods and systems for rendering a virtual overlay in an extended reality (XR) environment are described. Image data is received, representing a current field of view (FOV) in the XR environment. A spatial boundary is defined in the current FOV, based on user input. An image label representing a region of interest (ROI) within the defined spatial boundary, and one or more object labels representing one or more objects within the defined spatial boundary are generated. At least one relevant virtual object is identified. The virtual object is relevant to a semantic meaning of the ROI based on the image label and/or the one or more object labels. Identification of the at least one relevant virtual object is outputted, to be renderable in the XR environment.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: May 2, 2023
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Changqing Zou, Yumna Wassaf Akhtar, Szu Wen Fan, Jianpeng Xu, Wei Li
  • Publication number: 20230121388
    Abstract: The present disclosure describes systems and methods for providing driver assistance. A current vehicle state of a vehicle at a current timestep and an environment map representing an environment of the vehicle at least at the current timestep are obtained. Augmented reality feedback is generated including a virtual vehicle representing a predicted future vehicle state, where the predicted future vehicle state is predicted for a given future timestep based on at least one of the current vehicle state or the current environment. The generated augmented reality feedback including the virtual vehicle is outputted to be displayed by a display device.
    Type: Application
    Filed: October 14, 2021
    Publication date: April 20, 2023
    Inventors: Taslim Arefin KHAN, Szu Wen FAN, Wei LI
  • Publication number: 20230013860
    Abstract: Methods and systems for selecting an object or location in an extended reality (XR) environment or physical environment are described. A first origin, including a first position and a first direction, and a second origin, including a second position and a second direction, are obtained by at least one sensor. An intersection of a first ray, cast from the first origin, and a second ray, cast from the second origin, is determined. A selected object or selected location is identified, based on the determined intersection. An identification of the selected object or the selected location is outputted.
    Type: Application
    Filed: September 8, 2022
    Publication date: January 19, 2023
    Inventors: Szu Wen FAN, Taslim Arefin KHAN, Wei LI
  • Patent number: 11475642
    Abstract: Methods and systems for selecting an object or location in an extended reality (XR) environment or physical environment are described. A first origin, including a first position and a first direction, and a second origin, including a second position and a second direction, are obtained by at least one sensor. An intersection of a first ray, cast from the first origin, and a second ray, cast from the second origin, is determined. A selected object or selected location is identified, based on the determined intersection. An identification of the selected object or the selected location is outputted.
    Type: Grant
    Filed: December 18, 2020
    Date of Patent: October 18, 2022
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Szu Wen Fan, Taslim Arefin Khan, Wei Li
  • Publication number: 20220326967
    Abstract: Devices, methods, systems, and media are described for providing an extended screen distributed user interface in an augmented reality environment. GUI layout information for laying out a conventional 2D GUI is processed in order to generate an extended screen DUI for display partially on a 2D display device and partially on one or more virtual screens of an AR environment viewed using an AR display device, such as a head mounted display. GUI elements are laid out in the DUI based on a primary modality of the GUI element (input or output), and/or based on spatial dependencies between GUI elements encoded in the GUI layout information. Methods for switching focus between two software application instances displayed in the DUI are also disclosed.
    Type: Application
    Filed: April 12, 2021
    Publication date: October 13, 2022
    Inventors: Taslim Arefin KHAN, Szu Wen FAN, Changqing ZOU, Jianpeng XU, Wei LI
  • Publication number: 20220277166
    Abstract: Methods and systems for rendering a virtual overlay in an extended reality (XR) environment are described. Image data is received, representing a current field of view (FOV) in the XR environment. A spatial boundary is defined in the current FOV, based on user input. An image label representing a region of interest (ROI) within the defined spatial boundary, and one or more object labels representing one or more objects within the defined spatial boundary are generated. At least one relevant virtual object is identified. The virtual object is relevant to a semantic meaning of the ROI based on the image label and/or the one or more object labels. Identification of the at least one relevant virtual object is outputted, to be renderable in the XR environment.
    Type: Application
    Filed: February 26, 2021
    Publication date: September 1, 2022
    Inventors: Changqing ZOU, Yumna Wassaf AKHTAR, Szu Wen FAN, Jianpeng XU, Wei LI
  • Publication number: 20220198756
    Abstract: Methods and systems for selecting an object or location in an extended reality (XR) environment or physical environment are described. A first origin, including a first position and a first direction, and a second origin, including a second position and a second direction, are obtained by at least one sensor. An intersection of a first ray, cast from the first origin, and a second ray, cast from the second origin, is determined. A selected object or selected location is identified, based on the determined intersection. An identification of the selected object or the selected location is outputted.
    Type: Application
    Filed: December 18, 2020
    Publication date: June 23, 2022
    Inventors: Szu Wen FAN, Taslim Arefin KHAN, Wei LI
  • Patent number: 11327630
    Abstract: Devices, methods, systems, and media are described for selecting virtual objects for user interaction in an extended reality environment. Distant virtual objects are brought closer to the user within a virtual 3D space to situate the selected virtual object in virtual proximity to the user's hand for direct manipulation. A virtual object is selected by the user based on movements of the user's hand and/or head that are correlated or associated with an intent to select a specific virtual object within the virtual 3D space. As the user's hand moves in a way that is consistent with this intent, the virtual object is brought closer to the user's hand within the virtual 3D space. To predict the user's intent, hand and head trajectory data may be compared to a library of kinematic trajectory templates to identify a best-matched trajectory template.
    Type: Grant
    Filed: February 4, 2021
    Date of Patent: May 10, 2022
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Taslim Arefin Khan, Szu Wen Fan, Changqing Zou, Wei Li
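    The template-matching idea in the abstract above (comparing hand/head trajectory data to a library of kinematic trajectory templates) can be illustrated with a minimal nearest-template sketch. This is an assumed simplification, not the patented algorithm: trajectories are equal-length 2D point sequences, and similarity is mean Euclidean distance. The functions `trajectory_distance` and `best_matched_template` are hypothetical names.

    ```python
    import math

    def trajectory_distance(a, b):
        """Mean Euclidean distance between two equal-length point sequences."""
        return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

    def best_matched_template(observed, templates):
        """Return the label of the library template closest to the observed
        trajectory; a real system might use dynamic time warping instead."""
        return min(templates,
                   key=lambda label: trajectory_distance(observed, templates[label]))
    ```

    Given templates for reaching left and reaching right, a noisy rightward hand trajectory matches the rightward template, which the system could then use to predict the intended virtual object.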
  • Publication number: 20210349308
    Abstract: Systems and methods for processing an omnidirectional video (ODV) in virtual reality are provided. The method may include: recording virtual reality field of view (VRFOV) data corresponding to the ODV displayed by a VR display device, where the ODV has a plurality of ODV frames in chronological order, each of the ODV frames including ODV image data and a unique ODV frame timestamp, the VRFOV data representing, for each ODV frame, spatial parameters for a subset of the ODV image data corresponding to a field of view (FOV) presented by the VR display device and an ODV frame identifier for the ODV frame; for each ODV frame in the plurality of ODV frames, extracting the subset of the ODV image data indicated in the VRFOV data to generate a respective regular field of view (RFOV) video frame; and storing the generated RFOV video frames as a video file.
    Type: Application
    Filed: April 29, 2021
    Publication date: November 11, 2021
    Inventors: Szu Wen FAN, Hengguang ZHOU, Qiang XU, Wei LI
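    The per-frame extraction step described in the abstract above can be sketched in miniature. This is an illustrative sketch under assumed data shapes, not the patented method: each ODV frame is modeled as a 2D grid of pixels, and each VRFOV record is assumed to carry a frame identifier plus rectangular crop bounds (the field names `frame_id`, `top`, `left`, `height`, `width` are hypothetical; real spatial parameters for a spherical video would be angular, not rectangular).

    ```python
    def extract_rfov_frames(odv_frames, vrfov_records):
        """Extract the viewed subset of each ODV frame to build a
        regular-field-of-view (RFOV) frame sequence.

        odv_frames: dict mapping frame id -> image as a list of pixel rows
        vrfov_records: per-frame dicts with 'frame_id' and crop bounds
        """
        rfov = []
        for rec in vrfov_records:
            img = odv_frames[rec["frame_id"]]
            t, l = rec["top"], rec["left"]
            h, w = rec["height"], rec["width"]
            # Keep only the region the viewer actually saw in this frame.
            rfov.append([row[l:l + w] for row in img[t:t + h]])
        return rfov
    ```

    The resulting RFOV frames would then be encoded in chronological order into a conventional video file.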