Patents by Inventor Lidan ZHANG

Lidan ZHANG has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240119653
    Abstract: Avatar animation systems disclosed herein provide high quality, real-time avatar animation that is based on the varying countenance of a human face. In some example embodiments, the real-time provision of high quality avatar animation is enabled, at least in part, by a multi-frame regressor that is configured to map information descriptive of facial expressions depicted in two or more images to information descriptive of a single avatar blend shape. The two or more images may be temporally sequential images. This multi-frame regressor implements a machine learning component that generates the high quality avatar animation from information descriptive of a subject's face and/or information descriptive of avatar animation frames previously generated by the multi-frame regressor.
    Type: Application
    Filed: December 19, 2023
    Publication date: April 11, 2024
    Applicant: Tahoe Research, Ltd.
    Inventors: Minje PARK, Tae-Hoon KIM, Myung-Ho JU, Jihyeon YI, Xiaolu SHEN, Lidan ZHANG, Qiang LI
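A minimal illustrative sketch (editorial, not code from the patent) of the multi-frame idea in the abstract above: facial-expression features from several temporally sequential frames are concatenated and mapped by a regressor to a single set of avatar blend-shape weights. The window size, feature dimensions, and the linear model are assumptions for illustration only.

```python
import numpy as np

# Assumed dimensions, for illustration only.
FRAMES_PER_WINDOW = 3      # number of temporally sequential frames fed to the regressor
EXPRESSION_DIM = 64        # per-frame facial-expression feature vector
BLENDSHAPE_DIM = 51        # blend-shape weights produced for one avatar frame

rng = np.random.default_rng(0)

# Stand-in for the learned multi-frame regressor: a single linear map from the
# concatenated per-frame features to one vector of blend-shape weights.
W = rng.standard_normal((BLENDSHAPE_DIM, FRAMES_PER_WINDOW * EXPRESSION_DIM)) * 0.01
b = np.zeros(BLENDSHAPE_DIM)

def regress_blendshapes(frame_features: list[np.ndarray]) -> np.ndarray:
    """Map features from several sequential frames to one avatar blend shape."""
    assert len(frame_features) == FRAMES_PER_WINDOW
    x = np.concatenate(frame_features)            # stack the multi-frame evidence
    weights = W @ x + b
    return np.clip(weights, 0.0, 1.0)             # keep blend-shape weights in [0, 1]

# Example: three consecutive frames' expression features -> one animation frame.
window = [rng.standard_normal(EXPRESSION_DIM) for _ in range(FRAMES_PER_WINDOW)]
print(regress_blendshapes(window).shape)          # (51,)
```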
  • Patent number: 11943710
    Abstract: An electronic device for managing gateways comprises: a processor; and a computer-readable storage medium containing executable instructions which, when executed by the processor, cause the electronic device to: determine whether the current time falls within a specified sleep time interval; in response to determining that the current time falls within the specified sleep time interval, determine whether a first extender node among one or more extender nodes is in an idle connection state, wherein the idle connection state includes the first extender node being not connected to any client, or the first extender node being connected only to a sleeping client; and send a sleep command to the first extender node based at least in part on the idle connection state of the first extender node, wherein the sleep command instructs the first extender node to power off a wireless network interface of the first extender node.
    Type: Grant
    Filed: September 9, 2021
    Date of Patent: March 26, 2024
    Assignee: ARRIS ENTERPRISES LLC
    Inventors: Lidan Chen, Ruilu Zeng, Ju Li, Bo Chen, Yu Zhang
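A rough sketch of the control flow the claim above describes, using assumed helper names: during a configured sleep interval, an extender node with no connected clients (or only sleeping clients) is told to power off its wireless interface.

```python
from dataclasses import dataclass, field
from datetime import time

@dataclass
class Client:
    sleeping: bool

@dataclass
class ExtenderNode:
    name: str
    clients: list[Client] = field(default_factory=list)

    def is_idle(self) -> bool:
        # Idle connection state per the claim: no clients at all,
        # or every connected client is itself sleeping.
        return all(c.sleeping for c in self.clients)

def within_sleep_interval(now: time, start: time, end: time) -> bool:
    # Handles sleep intervals that cross midnight (e.g. 23:00 -> 06:00).
    return start <= now < end if start <= end else now >= start or now < end

def manage_extenders(now: time, start: time, end: time, nodes: list[ExtenderNode]) -> None:
    if not within_sleep_interval(now, start, end):
        return
    for node in nodes:
        if node.is_idle():
            # Hypothetical command; the claim only specifies a "sleep command"
            # instructing the node to power off its wireless network interface.
            print(f"sleep command -> {node.name}: power off wireless interface")

manage_extenders(time(1, 30), time(23, 0), time(6, 0),
                 [ExtenderNode("ext-1"), ExtenderNode("ext-2", [Client(sleeping=True)])])
```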
  • Patent number: 11887231
    Abstract: Avatar animation systems disclosed herein provide high quality, real-time avatar animation that is based on the varying countenance of a human face. In some example embodiments, the real-time provision of high quality avatar animation is enabled, at least in part, by a multi-frame regressor that is configured to map information descriptive of facial expressions depicted in two or more images to information descriptive of a single avatar blend shape. The two or more images may be temporally sequential images. This multi-frame regressor implements a machine learning component that generates the high quality avatar animation from information descriptive of a subject's face and/or information descriptive of avatar animation frames previously generated by the multi-frame regressor.
    Type: Grant
    Filed: October 17, 2019
    Date of Patent: January 30, 2024
    Assignee: Tahoe Research, Ltd.
    Inventors: Minje Park, Tae-Hoon Kim, Myung-Ho Ju, Jihyeon Yi, Xiaolu Shen, Lidan Zhang, Qiang Li
  • Publication number: 20230410487
    Abstract: Performing online learning for a model to detect unseen actions in an action recognition system is disclosed. The method includes extracting semantic features in a semantic domain from semantic action labels, transforming the semantic features from the semantic domain into mixed features in a mixed domain, and storing the mixed features in a feature database. The method further includes extracting visual features in a visual domain from a video stream and determining whether the visual features indicate an unseen action in the video stream. If no unseen action is detected, the method applies an offline classification model to the visual features to identify seen actions, assigns identifiers to the identified seen actions, transforms the visual features from the visual domain into mixed features in the mixed domain, and stores the mixed features and seen-action identifiers in the feature database.
    Type: Application
    Filed: November 30, 2020
    Publication date: December 21, 2023
    Applicant: Intel Corporation
    Inventors: Lidan Zhang, Qi She, Ping Guo, Yimin Zhang
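A compact sketch of the dual-domain bookkeeping described above, with hypothetical encoders: semantic labels and visual features are both projected into a shared "mixed" domain and stored in one feature database, so later actions can be matched against everything accumulated so far. The transforms, dimensions, and cosine matcher are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
MIXED_DIM = 32

# Hypothetical stand-ins for the learned transforms into the mixed domain.
W_semantic = rng.standard_normal((MIXED_DIM, 300)) * 0.1   # semantic (label embedding) -> mixed
W_visual = rng.standard_normal((MIXED_DIM, 512)) * 0.1     # visual (video feature) -> mixed

feature_db: dict[str, np.ndarray] = {}   # action identifier -> mixed-domain feature

def register_semantic_label(label: str, label_embedding: np.ndarray) -> None:
    """Extract semantic features, transform them to the mixed domain, store them."""
    feature_db[label] = W_semantic @ label_embedding

def register_seen_action(action_id: str, visual_feature: np.ndarray) -> None:
    """After the offline classifier identifies a seen action, store its mixed feature."""
    feature_db[action_id] = W_visual @ visual_feature

def match_unseen(visual_feature: np.ndarray) -> str:
    """Match an unseen action against the accumulated feature database."""
    query = W_visual @ visual_feature
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    return max(feature_db, key=lambda k: cosine(feature_db[k], query))

register_semantic_label("open_door", rng.standard_normal(300))
register_seen_action("wave_hand", rng.standard_normal(512))
print(match_unseen(rng.standard_normal(512)))
```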
  • Publication number: 20230406353
    Abstract: System and techniques for vehicle operation safety model (VOSM) grade measurement are described herein. A data set of parameter measurements, defined by the VOSM, of multiple vehicles is obtained. A statistical value is then derived from a portion of the parameter measurements. A measurement from a subject vehicle is obtained that corresponds to the portion of the parameter measurements from which the statistical value was derived. The measurement is then compared to the statistical value to produce a safety grade for the subject vehicle.
    Type: Application
    Filed: November 19, 2021
    Publication date: December 21, 2023
    Inventors: Qianying Zhu, Lidan Zhang, Xiangbin Wu, Xinxin Zhang, Fei Li, Ping Guo
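A small illustrative sketch of the grading step above: derive a statistic from fleet-wide measurements of a VOSM-defined parameter, then grade a subject vehicle's own measurement against it. The choice of statistic, thresholds, and grade labels are assumptions, not taken from the publication.

```python
import statistics

def safety_grade(fleet_measurements: list[float], subject_measurement: float) -> str:
    """Grade a subject vehicle against a statistic derived from fleet data."""
    mean = statistics.fmean(fleet_measurements)
    stdev = statistics.pstdev(fleet_measurements)
    deviation = abs(subject_measurement - mean)
    if deviation <= stdev:
        return "A"          # within one standard deviation of the fleet
    if deviation <= 2 * stdev:
        return "B"
    return "C"

# Example parameter: minimum following distance (metres) observed across many vehicles.
fleet = [22.0, 25.5, 24.1, 23.8, 26.0, 21.7, 24.9]
print(safety_grade(fleet, subject_measurement=18.2))
```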
  • Publication number: 20230406331
    Abstract: System and techniques for test verification of a control system (e.g., a vehicle safety system) with a vehicle operation safety model (VOSM) such as Responsibility Sensitive Safety (RSS) are described. In an example, using test scenarios to measure performance of a VOSM includes: defining safety condition parameters of a VOSM for use in a test scenario configured to test performance of a safety system; generating a test scenario, using the safety condition parameters, as a steady state test, a dynamic test, or a stress test; executing the test scenario with a test simulator to produce test results for the safety system; measuring real-time kinematics of the safety system during execution of the test scenario, based on compliance with the safety condition parameters; and producing a parameter rating for performance of the safety system with the VOSM, based on the test results and the measured real-time kinematics.
    Type: Application
    Filed: September 6, 2023
    Publication date: December 21, 2023
    Inventors: Qianying Zhu, Lidan Zhang, Xiangbin Wu, Xinxin Zhang, Fei Li
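A schematic sketch of the pipeline described above, with hypothetical names and a simulator stub: define safety-condition parameters, generate a steady-state, dynamic, or stress scenario, run it while checking compliance, and produce a rating. The scenario generator and simulator here are placeholders.

```python
from dataclasses import dataclass
from enum import Enum

class ScenarioKind(Enum):
    STEADY_STATE = "steady_state"
    DYNAMIC = "dynamic"
    STRESS = "stress"

@dataclass
class SafetyConditions:
    min_safe_distance_m: float
    max_deceleration_mps2: float

def generate_scenario(kind: ScenarioKind, conditions: SafetyConditions) -> dict:
    # Stand-in scenario description; a real generator would emit simulator input.
    margin = {"steady_state": 2.0, "dynamic": 1.0, "stress": 0.5}[kind.value]
    return {"kind": kind, "initial_gap_m": conditions.min_safe_distance_m * margin}

def run_in_simulator(scenario: dict) -> float:
    # Hypothetical execution: return the smallest gap observed during the run.
    return scenario["initial_gap_m"] * 0.9

def parameter_rating(kind: ScenarioKind, conditions: SafetyConditions) -> str:
    scenario = generate_scenario(kind, conditions)
    observed_min_gap = run_in_simulator(scenario)
    compliant = observed_min_gap >= conditions.min_safe_distance_m
    return f"{kind.value}: {'pass' if compliant else 'fail'} (min gap {observed_min_gap:.1f} m)"

conditions = SafetyConditions(min_safe_distance_m=10.0, max_deceleration_mps2=6.0)
for kind in ScenarioKind:
    print(parameter_rating(kind, conditions))
```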
  • Publication number: 20230401911
    Abstract: Various aspects of methods, systems, and use cases for critical scenario identification and extraction from vehicle operations are described. In an example, an approach for lightweight analysis and detection includes capturing data from sensors associated with (e.g., located within, or integrated into) a vehicle, detecting the occurrence of a critical scenario, extracting data from the sensors in response to detecting the occurrence of the critical scenario, and outputting the extracted data. The critical scenario may be specifically detected based on a comparison of the operation of the vehicle to at least one requirement specified by a vehicle operation safety model. Reconstruction and further data processing may be performed on the extracted data, such as with the creation of a simulation from extracted data that is communicated to a remote service.
    Type: Application
    Filed: November 19, 2021
    Publication date: December 14, 2023
    Inventors: Qianying Zhu, Lidan Zhang, Xiangbin Wu, Xinxin Zhang, Fei Li
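A bare-bones sketch of the lightweight loop outlined above, with an assumed safety check and buffer size: monitor sensor data, flag a critical scenario when the vehicle violates a safety-model requirement, and extract the surrounding window of sensor data for later reconstruction.

```python
from collections import deque

RING_SECONDS = 30          # assumed size of the rolling sensor buffer
MIN_SAFE_GAP_M = 10.0      # assumed safety-model requirement (minimum gap to lead vehicle)

sensor_buffer: deque[dict] = deque(maxlen=RING_SECONDS)

def violates_safety_model(sample: dict) -> bool:
    # The publication compares vehicle operation against a vehicle operation
    # safety model; one illustrative requirement stands in for that comparison.
    return sample["gap_to_lead_m"] < MIN_SAFE_GAP_M

def on_sensor_sample(sample: dict) -> list[dict] | None:
    """Buffer every sample; on a critical scenario, extract and return the window."""
    sensor_buffer.append(sample)
    if violates_safety_model(sample):
        extracted = list(sensor_buffer)       # data surrounding the critical scenario
        sensor_buffer.clear()
        return extracted                      # would be output / sent for reconstruction
    return None

for t in range(40):
    gap = 25.0 - 0.5 * t                      # following gap shrinking over time
    result = on_sensor_sample({"t": t, "gap_to_lead_m": gap})
    if result:
        print(f"critical scenario at t={t}, extracted {len(result)} samples")
        break
```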
  • Patent number: 11841935
    Abstract: Example gesture matching mechanisms are disclosed herein. An example machine readable storage device or disc includes instructions that, when executed, cause programmable circuitry to at least: prompt a user to perform gestures to register the user, randomly select at least one of the gestures for authentication of the user, prompt the user to perform the at least one selected gesture, translate the gesture into an animated avatar for display at a display device, the animated avatar including a face, analyze performance of the gesture by the user, and authenticate the user based on the performance of the gesture.
    Type: Grant
    Filed: September 19, 2022
    Date of Patent: December 12, 2023
    Assignee: Intel Corporation
    Inventors: Wenlong Li, Xiaolu Shen, Lidan Zhang, Jose E. Lorenzo, Qiang Li, Steven Holmes, Xiaofeng Tong, Yangzhou Du, Mary Smiley, Alok Mishra
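A simplified sketch of the authentication flow in the claim above, using a hypothetical gesture representation and matcher: register a set of gestures, randomly choose one at authentication time, prompt the user via the avatar, and accept only if the performed gesture matches the selected one.

```python
import random

# A gesture is represented here as a short sequence of named poses; the real
# system would compare captured landmarks, not strings.
registered_gestures: dict[str, list[str]] = {}

def register_user(user: str, gestures: dict[str, list[str]]) -> None:
    """Prompt the user to perform gestures and store them for later authentication."""
    registered_gestures.update({f"{user}:{name}": poses for name, poses in gestures.items()})

def authenticate(user: str, capture_gesture) -> bool:
    """Randomly select one registered gesture, prompt for it, and compare the performance."""
    names = [k for k in registered_gestures if k.startswith(f"{user}:")]
    chosen = random.choice(names)                        # random selection per the claim
    gesture_name = chosen.split(":", 1)[1]
    print(f"avatar demonstrates: {gesture_name}")        # stand-in for animated-avatar playback
    performed = capture_gesture(gesture_name)            # stand-in for camera capture + analysis
    return performed == registered_gestures[chosen]

register_user("alice", {"nod": ["head_down", "head_up"],
                        "wave": ["hand_up", "hand_left", "hand_right"]})
# A genuine user reproduces whichever gesture was requested, so this prints True.
print(authenticate("alice", capture_gesture=lambda name: registered_gestures[f"alice:{name}"]))
```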
  • Publication number: 20230391360
    Abstract: Various aspects of methods, systems, and use cases for safety logging in a vehicle are described. In an example, an approach for data logging in a vehicle includes use of logging triggers, public and private data buckets, and defined data formats, for data provided during autonomous vehicle operation. Data logging operations may be triggered in response to safety conditions, such as detecting a dangerous situation from a failure of the vehicle to comply with safety criteria of a vehicle operational safety model. Data logging operations may include logging data in response to detection of the dangerous situation, including storage of a first portion of data in a public data store, and storage of a second portion of privacy-sensitive data in a private data store, where the data stored in the private data store is encrypted, and where access to the private data store is controlled.
    Type: Application
    Filed: November 19, 2021
    Publication date: December 7, 2023
    Inventors: Qianying Zhu, Lidan Zhang, Xiangbin Wu, Xinxin Zhang, Fei Li
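A condensed sketch of the logging split described above. The trigger condition is an assumed safety criterion, and the `encrypt` function is a placeholder only, included to show the public/private routing rather than real cryptography.

```python
import json
from dataclasses import dataclass, field

@dataclass
class DataStores:
    public: list[dict] = field(default_factory=list)
    private: list[bytes] = field(default_factory=list)   # encrypted records only

def encrypt(record: dict) -> bytes:
    # Placeholder only: a real implementation would use an authenticated cipher
    # and access control; the abstract specifies that the private store is encrypted.
    return json.dumps(record).encode("utf-8")[::-1]

def dangerous_situation(sample: dict, min_safe_gap_m: float = 10.0) -> bool:
    # Logging trigger: failure to comply with a safety criterion of the vehicle
    # operational safety model (illustrative criterion assumed here).
    return sample["gap_to_lead_m"] < min_safe_gap_m

def log_sample(sample: dict, stores: DataStores) -> None:
    if not dangerous_situation(sample):
        return
    # Public portion: non-identifying kinematics in a defined data format.
    stores.public.append({"t": sample["t"], "speed_mps": sample["speed_mps"]})
    # Privacy-sensitive portion: e.g. location, stored encrypted in the private store.
    stores.private.append(encrypt({"t": sample["t"], "gps": sample["gps"]}))

stores = DataStores()
log_sample({"t": 12.5, "speed_mps": 17.0, "gap_to_lead_m": 6.2, "gps": (59.33, 18.06)}, stores)
print(len(stores.public), len(stores.private))
```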
  • Patent number: 11772666
    Abstract: System and techniques for test scenario verification, for a simulation of an autonomous vehicle safety action, are described. In an example, measuring performance of a test scenario used in testing an autonomous driving safety requirement includes: defining a test environment for a test scenario that tests compliance with a safety requirement including a minimum safe distance requirement; identifying test procedures to use in the test scenario that define actions for testing the minimum safe distance requirement; identifying test parameters to use with the identified test procedures, such as velocity, amount of braking, timing of braking, and rate of acceleration or deceleration; and creating the test scenario for use in an autonomous driving test simulator. Use of the test scenario includes applying the identified test procedures and the identified test parameters to identify a response of a test vehicle to the minimum safe distance requirement.
    Type: Grant
    Filed: December 17, 2021
    Date of Patent: October 3, 2023
    Assignee: Mobileye Vision Technologies Ltd.
    Inventors: Qianying Zhu, Lidan Zhang, Xiangbin Wu, Xinxin Zhang, Fei Li
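A brief sketch of the kind of minimum-safe-distance check such a test scenario exercises, using the commonly published RSS-style longitudinal formula; the response time, acceleration, and braking values are illustrative test parameters, not the patent's.

```python
def rss_min_safe_longitudinal_distance(v_rear: float, v_front: float,
                                       rho: float = 0.5,
                                       a_accel_max: float = 3.0,
                                       b_brake_min: float = 4.0,
                                       b_brake_max: float = 8.0) -> float:
    """RSS-style minimum safe following distance in metres (illustrative parameters)."""
    v_after_rho = v_rear + rho * a_accel_max
    d = (v_rear * rho
         + 0.5 * a_accel_max * rho ** 2
         + v_after_rho ** 2 / (2 * b_brake_min)
         - v_front ** 2 / (2 * b_brake_max))
    return max(0.0, d)

def run_min_distance_test(initial_gap_m: float, v_rear: float, v_front: float) -> str:
    """One test-scenario check: does the test vehicle keep at least the minimum safe distance?"""
    required = rss_min_safe_longitudinal_distance(v_rear, v_front)
    verdict = "pass" if initial_gap_m >= required else "fail"
    return f"required >= {required:.1f} m, actual {initial_gap_m:.1f} m -> {verdict}"

# Example test parameters: both vehicles at 20 m/s, with gaps of 30 m and 60 m.
print(run_min_distance_test(30.0, v_rear=20.0, v_front=20.0))
print(run_min_distance_test(60.0, v_rear=20.0, v_front=20.0))
```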
  • Publication number: 20230206612
    Abstract: Disclosed herein are systems, methods, and devices for using adaptive learning to identify objects. An object-identifying device performs a first object identification based on one or more features of a first modality of an object retrieved from an image frame including the object and a first database including first modality identification features. A second object identification is performed based on one or more features of a second modality of the object retrieved from the image frame and a second database including second modality identification features. The second database is updated by adaptively learning a new second modality identification feature according to a first identification result of the first object identification. The second object identification is trained with the updated second database and determines a final identification result by integrating a first identification result of the first object identification and a second identification result of the second object identification.
    Type: Application
    Filed: June 24, 2020
    Publication date: June 29, 2023
    Inventors: Sangeeta Ghangam MANEPALLI, Siew Wen CHIN, Ping GUO, Qi SHE, Yingzhe SHEN, Lidan ZHANG, Yimin ZHANG
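A toy sketch of the adaptive-learning loop above, with assumed modalities (say, face and body appearance) and nearest-neighbour matchers: when the first modality identifies the object confidently, its features in the second modality are added to the second database, and the final result integrates both identifications.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two identification databases, one per modality (e.g. face features, body features).
db_first: dict[str, np.ndarray] = {"person_a": rng.standard_normal(128)}
db_second: dict[str, np.ndarray] = {}      # starts empty; grows by adaptive learning

def identify(db: dict[str, np.ndarray], feature: np.ndarray, threshold: float = 0.5):
    best, score = None, -1.0
    for label, ref in db.items():
        s = float(feature @ ref / (np.linalg.norm(feature) * np.linalg.norm(ref) + 1e-8))
        if s > score:
            best, score = label, s
    return (best, score) if score >= threshold else (None, score)

def identify_frame(first_feature: np.ndarray, second_feature: np.ndarray):
    label1, score1 = identify(db_first, first_feature)
    if label1 is not None:
        # Adaptive learning: register the second-modality feature under the
        # identity established by the first modality.
        db_second[label1] = second_feature
    label2, score2 = identify(db_second, second_feature)
    # Integrate both results; here, simply keep the stronger of the two identifications.
    return label1 if (label2 is None or score1 >= score2) else label2

face = db_first["person_a"] + 0.05 * rng.standard_normal(128)   # slightly noisy face feature
body = rng.standard_normal(256)
print(identify_frame(face, body))   # -> person_a; the body feature is now learned for person_a
```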
  • Publication number: 20230043905
    Abstract: System and techniques for test scenario verification, for a simulation of an autonomous vehicle safety action, are described. In an example, measuring performance of a test scenario used in testing an autonomous driving safety requirement includes: defining a test environment for a test scenario that tests compliance with a safety requirement including a minimum safe distance requirement; identifying test procedures to use in the test scenario that define actions for testing the minimum safe distance requirement; identifying test parameters to use with the identified test procedures, such as velocity, amount of braking, timing of braking, and rate of acceleration or deceleration; and creating the test scenario for use in an autonomous driving test simulator. Use of the test scenario includes applying the identified test procedures and the identified test parameters to identify a response of a test vehicle to the minimum safe distance requirement.
    Type: Application
    Filed: December 17, 2021
    Publication date: February 9, 2023
    Inventors: Qianying Zhu, Lidan Zhang, Xiangbin Wu, Xinxin Zhang, Fei Li
  • Publication number: 20230019957
    Abstract: Example gesture matching mechanisms are disclosed herein. An example machine readable storage device or disc includes instructions that, when executed, cause programmable circuitry to at least: prompt a user to perform gestures to register the user, randomly select at least one of the gestures for authentication of the user, prompt the user to perform the at least one selected gesture, translate the gesture into an animated avatar for display at a display device, the animated avatar including a face, analyze performance of the gesture by the user, and authenticate the user based on the performance of the gesture.
    Type: Application
    Filed: September 19, 2022
    Publication date: January 19, 2023
    Inventors: Wenlong LI, Xiaolu SHEN, Lidan ZHANG, Jose E. LORENZO, Qiang LI, Steven HOLMES, Xiaofeng TONG, Yangzhou DU, Mary SMILEY, Alok MISHRA
  • Patent number: 11526704
    Abstract: A system, article, and method of neural network object recognition for image processing includes customizing a training database and adapting an instance segmentation neural network used to perform the customization.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: December 13, 2022
    Assignee: Intel Corporation
    Inventors: Ping Guo, Lidan Zhang, Haibing Ren, Yimin Zhang
  • Publication number: 20220383549
    Abstract: A multi-mode three-dimensional scanning method includes: obtaining intrinsic parameters and extrinsic parameters of a calibrated camera in different scanning modes and, upon switching between the different scanning modes, triggering a change of the camera's parameters to the intrinsic parameters and extrinsic parameters of the corresponding scanning mode; and allowing a user to select a laser-based scanning mode, a speckle-based scanning mode, or a transition scanning mode according to a scanning requirement. Through continual fusion and conversion during the whole scanning process, the speckle reconstruction and the laser-line reconstruction are unified into the same coordinate system, and the surface point cloud of the object being scanned is output. The present disclosure also provides a multi-mode three-dimensional scanning system.
    Type: Application
    Filed: December 17, 2020
    Publication date: December 1, 2022
    Applicant: SCANTECH (HANGZHOU) CO., LTD.
    Inventors: Jun Zheng, Shangjian Chen, Jiangfeng Wang, Lidan Zhang
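A structural sketch of the mode handling described above, with hypothetical calibration values: each scanning mode carries its own intrinsic/extrinsic camera parameters, switching modes swaps the active parameters, and reconstructed points from either mode end up in one common coordinate system.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CameraParams:
    intrinsics: np.ndarray      # 3x3 camera matrix for this scanning mode
    extrinsics: np.ndarray      # 4x4 camera-to-world transform for this scanning mode

# Hypothetical calibration results, one parameter set per scanning mode.
calibration = {
    "laser":   CameraParams(np.diag([800.0, 800.0, 1.0]), np.eye(4)),
    "speckle": CameraParams(np.diag([780.0, 780.0, 1.0]), np.eye(4)),
}

class MultiModeScanner:
    def __init__(self):
        self.mode = "laser"
        self.point_cloud: list[np.ndarray] = []   # unified world-coordinate surface points

    def switch_mode(self, mode: str) -> None:
        # Switching modes triggers a change to that mode's calibrated parameters.
        assert mode in calibration
        self.mode = mode

    def add_points(self, points_cam: np.ndarray) -> None:
        # Points from laser-line or speckle reconstruction are transformed with the
        # active extrinsics into the shared coordinate system.
        T = calibration[self.mode].extrinsics
        homogeneous = np.hstack([points_cam, np.ones((len(points_cam), 1))])
        self.point_cloud.extend((homogeneous @ T.T)[:, :3])

scanner = MultiModeScanner()
scanner.add_points(np.array([[0.1, 0.2, 1.0]]))       # laser-based scan
scanner.switch_mode("speckle")
scanner.add_points(np.array([[0.3, -0.1, 1.2]]))      # speckle-based scan, same coordinate system
print(len(scanner.point_cloud))
```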
  • Patent number: 11449592
    Abstract: An example apparatus is disclosed herein that includes a memory and at least one processor. The at least one processor is to execute instructions to: select a gesture from a database, the gesture including a sequence of poses; translate the selected gesture into an animated avatar performing the selected gesture for display at a display device; display a prompt for the user to perform the selected gesture performed by the animated avatar; capture an image of the user performing the selected gesture; and perform a comparison between a gesture performed by the user in the captured image and the selected gesture to determine whether there is a match between the gesture performed by the user and the selected gesture.
    Type: Grant
    Filed: October 8, 2020
    Date of Patent: September 20, 2022
    Assignee: Intel Corporation
    Inventors: Wenlong Li, Xiaolu Shen, Lidan Zhang, Jose E. Lorenzo, Qiang Li, Steven Holmes, Xiaofeng Tong, Yangzhou Du, Mary Smiley, Alok Mishra
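A small sketch of the comparison step above, with an assumed pose representation and distance threshold: a gesture is a sequence of poses, the captured performance is aligned pose by pose with the selected gesture, and a match is declared when the average pose distance is small enough.

```python
import math
import random

# A pose is a few named joint angles (degrees); a gesture is a sequence of poses.
gesture_db = {
    "raise_hand": [{"shoulder": 10.0, "elbow": 20.0}, {"shoulder": 90.0, "elbow": 10.0}],
    "nod":        [{"neck": 0.0}, {"neck": 25.0}, {"neck": 0.0}],
}

def pose_distance(a: dict, b: dict) -> float:
    keys = set(a) | set(b)
    return math.sqrt(sum((a.get(k, 0.0) - b.get(k, 0.0)) ** 2 for k in keys))

def gestures_match(selected: list[dict], performed: list[dict], tol_deg: float = 15.0) -> bool:
    """Compare the captured performance against the selected gesture, pose by pose."""
    if len(selected) != len(performed):
        return False
    avg = sum(pose_distance(s, p) for s, p in zip(selected, performed)) / len(selected)
    return avg <= tol_deg

selected_name = random.choice(list(gesture_db))       # gesture selected from the database
print(f"avatar performs: {selected_name}")            # stand-in for animated-avatar display
captured = [{k: v + 5.0 for k, v in pose.items()} for pose in gesture_db[selected_name]]
print(gestures_match(gesture_db[selected_name], captured))    # small error -> match
```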
  • Publication number: 20220292867
    Abstract: Systems, methods, apparatuses, and computer program products to provide stochastic trajectory prediction using social graph networks. An operation may comprise determining a first feature vector describing destination features of a first person depicted in an image, generating a directed graph for the image based on all people depicted in the image, determining, for the first person, a second feature vector based on the directed graph and the destination features, sampling a value of a latent variable from a learned prior distribution, the latent variable corresponding to a first time interval, and generating, based on the sampled value and the feature vectors by a hierarchical long short-term memory (LSTM) executing on a processor, an output vector comprising a direction of movement and a speed of movement in that direction for the first person at a second time interval, subsequent to the first time interval.
    Type: Application
    Filed: September 16, 2019
    Publication date: September 15, 2022
    Applicant: INTEL CORPORATION
    Inventors: Lidan Zhang, Qi She, Ping Guo
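A heavily simplified numpy sketch of the data flow in the abstract above: per-person destination features, an aggregation over the directed social graph, and a latent sample are combined to emit a direction and speed for the next time interval. A plain recurrent cell stands in for the hierarchical LSTM, the learned prior is assumed Gaussian, and all weights are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
FEAT, LATENT, HIDDEN = 16, 4, 32

# Randomly initialised stand-ins for learned parameters.
W_in = rng.standard_normal((HIDDEN, 2 * FEAT + LATENT)) * 0.1
W_h = rng.standard_normal((HIDDEN, HIDDEN)) * 0.1
W_out = rng.standard_normal((3, HIDDEN)) * 0.1       # -> (direction_x, direction_y, speed)

def social_aggregate(person: int, features: np.ndarray, edges: list[tuple[int, int]]) -> np.ndarray:
    """Aggregate destination features of people pointing at `person` in the directed graph."""
    neighbours = [src for src, dst in edges if dst == person]
    if not neighbours:
        return np.zeros(FEAT)
    return features[neighbours].mean(axis=0)

def predict_step(person: int, features: np.ndarray, edges, hidden: np.ndarray):
    z = rng.standard_normal(LATENT)                   # latent sample for this time interval
    x = np.concatenate([features[person], social_aggregate(person, features, edges), z])
    hidden = np.tanh(W_in @ x + W_h @ hidden)         # stand-in for the hierarchical LSTM update
    direction_x, direction_y, speed = W_out @ hidden
    heading = np.arctan2(direction_y, direction_x)
    return heading, abs(speed), hidden

features = rng.standard_normal((3, FEAT))             # destination features for 3 people
edges = [(1, 0), (2, 0), (0, 1)]                      # directed social graph
heading, speed, h = predict_step(0, features, edges, np.zeros(HIDDEN))
print(f"person 0: heading {heading:.2f} rad, speed {speed:.2f}")
```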
  • Patent number: 11383144
    Abstract: System and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.
    Type: Grant
    Filed: November 9, 2020
    Date of Patent: July 12, 2022
    Assignee: Intel Corporation
    Inventors: Qiang Eric Li, Wenlong Li, Shaohui Jiao, Yikai Fang, Xiaolu Shen, Lidan Zhang, Xiaofeng Tong, Fucen Zeng
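A pared-down sketch of the synchronization idea above, with an assumed sensor schema and stage detector: a key stage is found in the sensor stream, and its timestamp is used to pick the matching key frame out of the captured video for downstream skeletal analysis.

```python
from bisect import bisect_left

def detect_key_stage(sensor_samples: list[dict]) -> float:
    """Return the timestamp of the key stage; peak acceleration stands in for
    whatever stage the activity model defines (e.g. the moment of impact in a swing)."""
    peak = max(sensor_samples, key=lambda s: abs(s["accel"]))
    return peak["t"]

def select_key_frame(frame_timestamps: list[float], key_time: float) -> int:
    """Pick the video frame whose timestamp is closest to the sensor key-stage timestamp."""
    i = bisect_left(frame_timestamps, key_time)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(frame_timestamps)]
    return min(candidates, key=lambda j: abs(frame_timestamps[j] - key_time))

sensor = [{"t": 0.1 * k, "accel": 144 - (k - 12) ** 2} for k in range(30)]
frames = [k / 30.0 for k in range(90)]                # 30 fps video timestamps
key_time = detect_key_stage(sensor)
frame_index = select_key_frame(frames, key_time)
print(f"key stage at t={key_time:.2f}s -> video frame {frame_index}")
# A skeletal map would then be extracted from this key frame and used to select instructional data.
```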
  • Publication number: 20220138555
    Abstract: Example methods, apparatus, and articles of manufacture corresponding to a spectral nonlocal block have been disclosed. An example apparatus includes a first convolution filter to perform a first convolution using input features and first weighted kernels to generate first weighted input features, the input features corresponding to data of a neural network; an affinity matrix generator to: perform a second convolution using the input features and second weighted kernels to generate second weighted input features; perform a third convolution using the input features and third weighted kernels to generate third weighted input features; and generate an affinity matrix based on the second and third weighted input features; a second convolution filter to perform a fourth convolution using the first weighted input features and fourth weighted kernels to generate fourth weighted input features; and an accumulator to transmit output features corresponding to a spectral nonlocal operator.
    Type: Application
    Filed: November 3, 2020
    Publication date: May 5, 2022
    Inventors: Lidan Zhang, Lei Zhu, Qi She, Ping Guo
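A numpy sketch of the block structure enumerated above, treating each 1x1 convolution as a per-pixel channel projection with random placeholder weights. It follows the standard nonlocal-block pattern: two projections build a softmax affinity matrix over all spatial positions, the affinity reweights a third projection, a fourth projection maps back to the input channels, and the accumulator adds the result to the input. The exact spectral formulation in the publication is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, W = 8, 4, 4          # channels and spatial size of the input feature map
C_INNER = 4                # reduced channel count inside the block

# 1x1 convolutions over a feature map are per-pixel channel projections,
# so each "convolution" here is a weight matrix applied at every location.
W1, W2, W3 = (rng.standard_normal((C_INNER, C)) * 0.1 for _ in range(3))
W4 = rng.standard_normal((C, C_INNER)) * 0.1

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def nonlocal_block(features: np.ndarray) -> np.ndarray:
    """features: (C, H, W) -> output features of the same shape."""
    flat = features.reshape(C, H * W)                 # treat every pixel as a token
    f1 = W1 @ flat                                    # first convolution
    f2 = W2 @ flat                                    # second convolution
    f3 = W3 @ flat                                    # third convolution
    affinity = softmax(f2.T @ f3, axis=-1)            # (HW, HW) pairwise affinity matrix
    mixed = f1 @ affinity.T                           # reweight first-branch features via affinity
    out = W4 @ mixed                                  # fourth convolution back to C channels
    return features + out.reshape(C, H, W)            # accumulator: output features

x = rng.standard_normal((C, H, W))
print(nonlocal_block(x).shape)                        # (8, 4, 4)
```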
  • Publication number: 20210312642
    Abstract: A long-term object tracker employs a continuous learning framework to overcome drift in the tracking position of a tracked object. The continuous learning framework consists of a continuous learning module that accumulates samples of the tracked object to improve the accuracy of object tracking over extended periods of time. The continuous learning module can include a sample pre-processor to refine a location of a candidate object found during object tracking, and a cropper to crop a portion of a frame containing a tracked object as a sample and to insert the sample into a continuous learning database to support future tracking.
    Type: Application
    Filed: January 3, 2019
    Publication date: October 7, 2021
    Inventors: Lidan ZHANG, Ping GUO, Haibing REN, Yimin ZHANG
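A skeletal sketch of the continuous-learning loop described above, with hypothetical refinement and cropping steps: each frame's candidate location is refined, cropped, and inserted into a sample database that future tracking can draw on to counteract drift.

```python
from dataclasses import dataclass, field

@dataclass
class Box:
    x: int
    y: int
    w: int
    h: int

@dataclass
class ContinuousLearningDB:
    samples: list[list[list[int]]] = field(default_factory=list)

    def insert(self, crop_pixels: list[list[int]]) -> None:
        # Accumulated samples of the tracked object support future tracking.
        self.samples.append(crop_pixels)

def refine_location(frame: list[list[int]], candidate: Box) -> Box:
    # Sample pre-processor stand-in: nudge the candidate box back inside the frame.
    x = min(max(candidate.x, 0), len(frame[0]) - candidate.w)
    y = min(max(candidate.y, 0), len(frame) - candidate.h)
    return Box(x, y, candidate.w, candidate.h)

def crop(frame: list[list[int]], box: Box) -> list[list[int]]:
    return [row[box.x:box.x + box.w] for row in frame[box.y:box.y + box.h]]

def track_frame(frame: list[list[int]], candidate: Box, db: ContinuousLearningDB) -> Box:
    refined = refine_location(frame, candidate)
    db.insert(crop(frame, refined))       # cropper feeds the continuous-learning database
    return refined

frame = [[r * 10 + c for c in range(10)] for r in range(10)]   # toy 10x10 grayscale frame
db = ContinuousLearningDB()
print(track_frame(frame, Box(x=9, y=2, w=3, h=3), db), len(db.samples))
```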