Patents by Inventor Lidan ZHANG
Lidan ZHANG has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230043905
Abstract: System and techniques for test scenario verification, for a simulation of an autonomous vehicle safety action, are described. In an example, measuring performance of a test scenario used in testing an autonomous driving safety requirement includes: defining a test environment for a test scenario that tests compliance with a safety requirement including a minimum safe distance requirement; identifying test procedures to use in the test scenario that define actions for testing the minimum safe distance requirement; identifying test parameters to use with the identified test procedures, such as velocity, amount of braking, timing of braking, and rate of acceleration or deceleration; and creating the test scenario for use in an autonomous driving test simulator. Use of the test scenario includes applying the identified test procedures and the identified test parameters to identify a response of a test vehicle to the minimum safe distance requirement.
Type: Application
Filed: December 17, 2021
Publication date: February 9, 2023
Inventors: Qianying Zhu, Lidan Zhang, Xiangbin Wu, Xinxin Zhang, Fei Li
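A minimum safe distance check of the kind this abstract describes can be sketched as a pass/fail verdict per scenario sample. The bound below is a generic reaction-time-plus-braking formula in the spirit of RSS-style models, not the patent's actual procedure; every parameter name and default value is an assumption.

```python
def min_safe_distance(v_rear, v_front, rho, a_accel, b_min, b_max):
    # Worst case: during the reaction time rho the rear vehicle keeps
    # accelerating at a_accel; it then brakes gently (b_min) while the
    # front vehicle brakes as hard as possible (b_max). Units: m, s.
    v_rho = v_rear + rho * a_accel
    d = (v_rear * rho + 0.5 * a_accel * rho ** 2
         + v_rho ** 2 / (2 * b_min)
         - v_front ** 2 / (2 * b_max))
    return max(d, 0.0)


def scenario_passes(gap, v_rear, v_front, rho=1.0, a_accel=2.0,
                    b_min=4.0, b_max=8.0):
    # One verdict for a simulated test-scenario sample: is the actual
    # gap at least the computed minimum safe distance?
    return gap >= min_safe_distance(v_rear, v_front, rho, a_accel,
                                    b_min, b_max)
```

With these illustrative defaults, two vehicles at 20 m/s with a 200 m gap pass the check, while a 1 m gap behind a stopped lead vehicle fails it.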
-
Publication number: 20230019957
Abstract: Example gesture matching mechanisms are disclosed herein. An example machine readable storage device or disc includes instructions that, when executed, cause programmable circuitry to at least: prompt a user to perform gestures to register the user, randomly select at least one of the gestures for authentication of the user, prompt the user to perform the at least one selected gesture, translate the gesture into an animated avatar for display at a display device, the animated avatar including a face, analyze performance of the gesture by the user, and authenticate the user based on the performance of the gesture.
Type: Application
Filed: September 19, 2022
Publication date: January 19, 2023
Inventors: Wenlong LI, Xiaolu SHEN, Lidan ZHANG, Jose E. LORENZO, Qiang LI, Steven HOLMES, Xiaofeng TONG, Yangzhou DU, Mary SMILEY, Alok MISHRA
-
Patent number: 11526704
Abstract: A system, article, and method of neural network object recognition for image processing includes customizing a training database and adapting an instance segmentation neural network used to perform the customization.
Type: Grant
Filed: October 26, 2018
Date of Patent: December 13, 2022
Assignee: Intel Corporation
Inventors: Ping Guo, Lidan Zhang, Haibing Ren, Yimin Zhang
-
Publication number: 20220383549
Abstract: A multi-mode three-dimensional scanning method includes: obtaining intrinsic and extrinsic parameters of a calibrated camera in different scanning modes and, upon switching between modes, changing the camera's parameters to the intrinsic and extrinsic parameters of the corresponding mode; and allowing a user to select a laser-based scanning mode, a speckle-based scanning mode, or a transition scanning mode according to the scanning requirement. Through continual fusion and conversion during the scanning process, the speckle reconstruction and the laser-line reconstruction are unified into the same coordinate system, and the surface point cloud of the scanned object is output. The present disclosure also provides a multi-mode three-dimensional scanning system.
Type: Application
Filed: December 17, 2020
Publication date: December 1, 2022
Applicant: SCANTECH (HANGZHOU) CO., LTD.
Inventors: Jun Zheng, Shangjian Chen, Jiangfeng Wang, Lidan Zhang
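Unifying laser-line and speckle reconstructions into one coordinate system amounts to applying each mode's extrinsic parameters to its points. A minimal sketch, assuming row-major 3x3 rotation matrices and 3-vector translations (the abstract does not describe the actual calibration pipeline):

```python
def to_world(points, rotation, translation):
    # Map camera-frame points into the shared world frame: p' = R @ p + t.
    # `rotation` is a 3x3 row-major matrix, `translation` a 3-vector.
    return [tuple(sum(rotation[i][j] * p[j] for j in range(3)) + translation[i]
                  for i in range(3))
            for p in points]


def merge_modes(laser_points, speckle_points,
                laser_extrinsics, speckle_extrinsics):
    # Surface point cloud = both reconstructions expressed in world
    # coordinates, so mode switches can be fused continually.
    return (to_world(laser_points, *laser_extrinsics)
            + to_world(speckle_points, *speckle_extrinsics))
```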
-
Patent number: 11449592
Abstract: An example apparatus is disclosed herein that includes a memory and at least one processor. The at least one processor is to execute instructions to: select a gesture from a database, the gesture including a sequence of poses; translate the selected gesture into an animated avatar performing the selected gesture for display at a display device; display a prompt for the user to perform the selected gesture performed by the animated avatar; capture an image of the user performing the selected gesture; and perform a comparison between a gesture performed by the user in the captured image and the selected gesture to determine whether there is a match between the gesture performed by the user and the selected gesture.
Type: Grant
Filed: October 8, 2020
Date of Patent: September 20, 2022
Assignee: Intel Corporation
Inventors: Wenlong Li, Xiaolu Shen, Lidan Zhang, Jose E. Lorenzo, Qiang Li, Steven Holmes, Xiaofeng Tong, Yangzhou Du, Mary Smiley, Alok Mishra
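The select/translate/prompt/compare loop can be sketched with a toy pose-sequence representation. The avatar rendering and image capture are stubbed out; the gesture names, pose labels, and tolerance parameter below are illustrative, not from the patent.

```python
import random

# Hypothetical gesture database: name -> sequence of pose labels.
GESTURE_DB = {
    "wave":   ["arm_up", "arm_left", "arm_right", "arm_left"],
    "salute": ["arm_up", "hand_to_brow", "arm_down"],
}


def select_challenge(db, rng=random):
    # Pick a gesture at random; in the patent's flow this gesture would
    # then be rendered as an animated avatar and shown as a prompt.
    name = rng.choice(sorted(db))
    return name, db[name]


def gestures_match(observed, expected, tolerance=0):
    # Compare the pose sequence extracted from the captured image(s)
    # against the selected gesture, allowing `tolerance` mismatched poses.
    if len(observed) != len(expected):
        return False
    misses = sum(a != b for a, b in zip(observed, expected))
    return misses <= tolerance
```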
-
Publication number: 20220292867
Abstract: Systems, methods, apparatuses, and computer program products to provide stochastic trajectory prediction using social graph networks. An operation may comprise determining a first feature vector describing destination features of a first person depicted in an image, generating a directed graph for the image based on all people depicted in the image, determining, for the first person, a second feature vector based on the directed graph and the destination features, sampling a value of a latent variable from a learned prior distribution, the latent variable to correspond to a first time interval, and generating, based on the sampled value and the feature vectors by a hierarchical long short-term memory (LSTM) executing on a processor, an output vector comprising a direction of movement and a speed of the direction of movement of the first person at a second time interval, subsequent to the first time interval.
Type: Application
Filed: September 16, 2019
Publication date: September 15, 2022
Applicant: INTEL CORPORATION
Inventors: Lidan ZHANG, Qi She, Ping Guo
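The per-interval output (a direction of movement plus a speed) lends itself to a generic stochastic rollout. In this sketch the learned hierarchical LSTM and the learned prior are stood in for by a caller-supplied `step_fn` and a unit Gaussian; nothing below reproduces the application's actual network.

```python
import math
import random


def rollout(start, step_fn, latent_rng, horizon):
    # At each time interval: sample a latent value, ask the model for
    # (direction_radians, speed), and integrate the person's position.
    x, y = start
    path = [(x, y)]
    for t in range(horizon):
        z = latent_rng.gauss(0.0, 1.0)       # stand-in for the learned prior
        direction, speed = step_fn(t, z, (x, y))
        x += speed * math.cos(direction)
        y += speed * math.sin(direction)
        path.append((x, y))
    return path
```

Sampling several rollouts with different seeds yields the multiple plausible futures that make the prediction stochastic.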
-
Patent number: 11383144
Abstract: System and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.
Type: Grant
Filed: November 9, 2020
Date of Patent: July 12, 2022
Assignee: Intel Corporation
Inventors: Qiang Eric Li, Wenlong Li, Shaohui Jiao, Yikai Fang, Xiaolu Shen, Lidan Zhang, Xiaofeng Tong, Fucen Zeng
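The key-stage/key-frame pairing reduces to timestamp alignment between the sensor stream and the video stream. A minimal sketch, taking the key stage to be the peak sensor magnitude (one plausible reading; the patent does not fix a specific criterion here):

```python
def detect_key_stage(samples):
    # samples: list of (timestamp_ms, magnitude). Treat the peak sensor
    # magnitude (e.g., peak acceleration at ball impact) as the key stage.
    return max(samples, key=lambda s: s[1])[0]


def select_key_frame(frame_timestamps, key_stage_ts):
    # Index of the video frame whose timestamp is closest to the sensor
    # timestamp that identified the key stage.
    return min(range(len(frame_timestamps)),
               key=lambda i: abs(frame_timestamps[i] - key_stage_ts))
```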
-
Publication number: 20220138555
Abstract: Example methods, apparatus, and articles of manufacture corresponding to a spectral nonlocal block are disclosed herein. An example apparatus includes a first convolution filter to perform a first convolution using input features and first weighted kernels to generate first weighted input features, the input features corresponding to data of a neural network; an affinity matrix generator to: perform a second convolution using the input features and second weighted kernels to generate second weighted input features; perform a third convolution using the input features and third weighted kernels to generate third weighted input features; and generate an affinity matrix based on the second and third weighted input features; a second convolution filter to perform a fourth convolution using the first weighted input features and fourth weighted kernels to generate fourth weighted input features; and an accumulator to transmit output features corresponding to a spectral nonlocal operator.
Type: Application
Filed: November 3, 2020
Publication date: May 5, 2022
Inventors: Lidan Zhang, Lei Zhu, Qi She, Ping Guo
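On a flattened (N, C) feature map, a 1x1 convolution is just a matrix product, which makes the four-convolution structure easy to sketch. This is a plain nonlocal operator: the softmax affinity and the residual accumulation are assumptions, and the claimed spectral formulation differs from this simplification.

```python
import numpy as np


def conv1x1(x, w):
    # A 1x1 convolution over flattened (N, C) features is a matrix product.
    return x @ w


def nonlocal_block(x, w1, w2, w3, w4):
    # First conv -> "first weighted input features"; second and third
    # convs -> the two operands of the affinity matrix; fourth conv
    # projects the aggregated features back to C channels.
    f1 = conv1x1(x, w1)                           # (N, Ci)
    logits = conv1x1(x, w2) @ conv1x1(x, w3).T    # (N, N) affinity logits
    aff = np.exp(logits - logits.max(axis=1, keepdims=True))
    aff /= aff.sum(axis=1, keepdims=True)         # row-softmax affinity
    return x + conv1x1(aff @ f1, w4)              # aggregate, project, add
```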
-
Publication number: 20210312642
Abstract: A long-term object tracker employs a continuous learning framework to overcome drift in the tracking position of a tracked object. The continuous learning framework consists of a continuous learning module that accumulates samples of the tracked object to improve the accuracy of object tracking over extended periods of time. The continuous learning module can include a sample pre-processor to refine a location of a candidate object found during object tracking, and a cropper to crop a portion of a frame containing a tracked object as a sample and to insert the sample into a continuous learning database to support future tracking.
Type: Application
Filed: January 3, 2019
Publication date: October 7, 2021
Inventors: Lidan ZHANG, Ping GUO, Haibing REN, Yimin ZHANG
-
Publication number: 20210248427
Abstract: A system, article, and method of neural network object recognition for image processing includes customizing a training database and adapting an instance segmentation neural network used to perform the customization.
Type: Application
Filed: October 26, 2018
Publication date: August 12, 2021
Applicant: Intel Corporation
Inventors: Ping GUO, Lidan ZHANG, Haibing REN, Yimin ZHANG
-
Publication number: 20210069571
Abstract: System and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.
Type: Application
Filed: November 9, 2020
Publication date: March 11, 2021
Inventors: Qiang Eric Li, Wenlong Li, Shaohui Jiao, Yikai Fang, Xiaolu Shen, Lidan Zhang, Xiaofeng Tong, Fucen Zeng
-
Publication number: 20210026941
Abstract: An example apparatus is disclosed herein that includes a memory and at least one processor. The at least one processor is to execute instructions to: select a gesture from a database, the gesture including a sequence of poses; translate the selected gesture into an animated avatar performing the selected gesture for display at a display device; display a prompt for the user to perform the selected gesture performed by the animated avatar; capture an image of the user performing the selected gesture; and perform a comparison between a gesture performed by the user in the captured image and the selected gesture to determine whether there is a match between the gesture performed by the user and the selected gesture.
Type: Application
Filed: October 8, 2020
Publication date: January 28, 2021
Inventors: Wenlong LI, Xiaolu SHEN, Lidan ZHANG, Jose E. LORENZO, Qiang LI, Steven HOLMES, Xiaofeng TONG, Yangzhou DU, Mary SMILEY, Alok MISHRA
-
Patent number: 10828549
Abstract: System and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.
Type: Grant
Filed: December 30, 2016
Date of Patent: November 10, 2020
Assignee: Intel Corporation
Inventors: Qiang Eric Li, Wenlong Li, Shaohui Jiao, Yikai Fang, Xiaolu Shen, Lidan Zhang, Xiaofeng Tong, Fucen Zeng
-
Patent number: 10803157
Abstract: A mechanism is described to facilitate gesture matching according to one embodiment. A method of embodiments, as described herein, includes selecting a gesture from a database during an authentication phase, translating the selected gesture into an animated avatar, displaying the avatar, prompting a user to perform the selected gesture, capturing a real-time image of the user and comparing the gesture performed by the user in the captured image to the selected gesture to determine whether there is a match.
Type: Grant
Filed: March 28, 2015
Date of Patent: October 13, 2020
Assignee: Intel Corporation
Inventors: Wenlong Li, Xiaolu Shen, Lidan Zhang, Jose E. Lorenzo, Qiang Li, Steven Holmes, Xiaofeng Tong, Yangzhou Du, Mary Smiley, Alok Mishra
-
Patent number: 10776980
Abstract: Apparatuses, methods and storage medium associated with emotion augmented animation of avatars are disclosed herein. In embodiments, an apparatus may comprise an animation augmentation engine to receive facial data of a user, analyze the facial data to determine an emotion state of the user, and drive additional animation that supplements animation of the avatar based at least in part on a result of the determination of the emotion state of the user. Other embodiments may be described and/or claimed.
Type: Grant
Filed: January 7, 2019
Date of Patent: September 15, 2020
Assignee: Intel Corporation
Inventors: Shaohui Jiao, Xiaolu Shen, Lidan Zhang, Qiang Li, Wenlong Li
-
Publication number: 20200193864
Abstract: Systems and techniques for sensor-derived swing hit and direction detection are described herein. A set of sensor values may be compressed into a first lower dimension (2105). Features may be extracted from the compressed set of sensor values (2110). The features may be clustered into a set of clusters (2115). A swing action may be detected based on a distance between members of the set of clusters (2120).
Type: Application
Filed: September 8, 2017
Publication date: June 18, 2020
Inventors: Yikai Fang, Xiaofeng Tong, Lidan Zhang, Qiang Eric Li, Wenlong Li
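The compress/extract/cluster/detect pipeline can be sketched end to end in a few lines. Here "compression" is a per-window summary (mean magnitude and range) rather than the application's actual projection, clustering is a tiny two-means in that 2-D space, and the threshold is illustrative.

```python
def compress(window):
    # Compress one window of sensor magnitudes to a lower dimension:
    # (mean, range) as stand-in features.
    return (sum(window) / len(window), max(window) - min(window))


def two_means(points, iters=20):
    # Minimal 2-means clustering; centroids start at the extreme points.
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    c = [min(points), max(points)]
    for _ in range(iters):
        groups = [[], []]
        for p in points:
            groups[0 if dist(p, c[0]) <= dist(p, c[1]) else 1].append(p)
        c = [tuple(sum(x) / len(g) for x in zip(*g)) if g else c[k]
             for k, g in enumerate(groups)]
    return c


def swing_detected(windows, threshold):
    # A swing is reported when the rest-like and swing-like clusters of
    # window features are farther apart than `threshold`.
    c0, c1 = two_means([compress(w) for w in windows])
    return sum((u - v) ** 2 for u, v in zip(c0, c1)) ** 0.5 > threshold
```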
-
Publication number: 20200051306
Abstract: Avatar animation systems disclosed herein provide high quality, real-time avatar animation that is based on the varying countenance of a human face. In some example embodiments, the real-time provision of high quality avatar animation is enabled, at least in part, by a multi-frame regressor that is configured to map information descriptive of facial expressions depicted in two or more images to information descriptive of a single avatar blend shape. The two or more images may be temporally sequential images. This multi-frame regressor implements a machine learning component that generates the high quality avatar animation from information descriptive of a subject's face and/or information descriptive of avatar animation frames previously generated by the multi-frame regressor.
Type: Application
Filed: October 17, 2019
Publication date: February 13, 2020
Applicant: INTEL CORPORATION
Inventors: Minje Park, Tae-Hoon Kim, Myung-Ho Ju, Jihyeon Yi, Xiaolu Shen, Lidan Zhang, Qiang Li
-
Patent number: 10475225
Abstract: Avatar animation systems disclosed herein provide high quality, real-time avatar animation that is based on the varying countenance of a human face. In some example embodiments, the real-time provision of high quality avatar animation is enabled, at least in part, by a multi-frame regressor that is configured to map information descriptive of facial expressions depicted in two or more images to information descriptive of a single avatar blend shape. The two or more images may be temporally sequential images. This multi-frame regressor implements a machine learning component that generates the high quality avatar animation from information descriptive of a subject's face and/or information descriptive of avatar animation frames previously generated by the multi-frame regressor.
Type: Grant
Filed: December 18, 2015
Date of Patent: November 12, 2019
Assignee: Intel Corporation
Inventors: Minje Park, Tae-Hoon Kim, Myung-Ho Ju, Jihyeon Yi, Xiaolu Shen, Lidan Zhang, Qiang Li
-
Publication number: 20190213774
Abstract: Apparatuses, methods and storage medium associated with emotion augmented animation of avatars are disclosed herein. In embodiments, an apparatus may comprise an animation augmentation engine to receive facial data of a user, analyze the facial data to determine an emotion state of the user, and drive additional animation that supplements animation of the avatar based at least in part on a result of the determination of the emotion state of the user. Other embodiments may be described and/or claimed.
Type: Application
Filed: January 7, 2019
Publication date: July 11, 2019
Inventors: Shaohui JIAO, Xiaolu SHEN, Lidan ZHANG, Qiang LI, Wenlong LI
-
Patent number: 10176619
Abstract: Apparatuses, methods and storage medium associated with emotion augmented animation of avatars are disclosed herein. In embodiments, an apparatus may comprise an animation augmentation engine to receive facial data of a user, analyze the facial data to determine an emotion state of the user, and drive additional animation that supplements animation of the avatar based at least in part on a result of the determination of the emotion state of the user. Other embodiments may be described and/or claimed.
Type: Grant
Filed: July 30, 2015
Date of Patent: January 8, 2019
Assignee: Intel Corporation
Inventors: Shaohui Jiao, Xiaolu Shen, Lidan Zhang, Qiang Li, Wenlong Li