Patents by Inventor Yikai Fang
Yikai Fang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11967086
Abstract: A method for trajectory generation based on player tracking is described herein. The method includes determining a temporal association for a first player in a captured field of view and determining a spatial association for the first player. The method also includes deriving a global player identification based on the temporal association and the spatial association and generating a trajectory based on the global player identification.
Type: Grant
Filed: July 31, 2019
Date of Patent: April 23, 2024
Assignee: Intel Corporation
Inventors: Yikai Fang, Qiang Li, Wenlong Li, Chenning Liu, Chen Ling, Hongzhi Tao, Yumeng Wang, Hang Zheng
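The abstract's pipeline (temporal association per camera, spatial association across cameras, a global player ID, then a trajectory) can be illustrated with a toy sketch. All function names, the detection tuple layout, and the distance threshold below are illustrative assumptions, not taken from the patent.

```python
def derive_global_ids(detections, dist_thresh=1.0):
    """Assign a global player id by linking per-camera local tracks
    (temporal association) whose ground-plane positions coincide
    (spatial association). detections: (frame, camera, local_id, x, y)."""
    global_ids = {}   # (camera, local_id) -> global id
    anchors = {}      # global id -> last known ground-plane position
    next_gid = 0
    for frame, cam, lid, x, y in detections:
        key = (cam, lid)
        if key in global_ids:
            gid = global_ids[key]
        else:
            # spatial association: reuse a global id whose anchor is close
            gid = None
            for g, (ax, ay) in anchors.items():
                if abs(ax - x) + abs(ay - y) <= dist_thresh:
                    gid = g
                    break
            if gid is None:
                gid = next_gid
                next_gid += 1
            global_ids[key] = gid
        anchors[gid] = (x, y)
    return global_ids

def build_trajectory(detections, global_ids, gid):
    """Order one player's positions by frame to form a trajectory."""
    pts = [(f, x, y) for f, cam, lid, x, y in detections
           if global_ids[(cam, lid)] == gid]
    return [(x, y) for f, x, y in sorted(pts)]
```

In this sketch, two cameras that see the same player at nearly the same ground-plane spot end up sharing one global ID, and the trajectory is simply that ID's positions in frame order.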
-
Publication number: 20240104744
Abstract: A mechanism is described for facilitating real-time multi-view detection of objects in multi-camera environments, according to one embodiment. A method of embodiments, as described herein, includes mapping first lines associated with objects to a ground plane; and forming clusters of second lines corresponding to the first lines such that an intersection point in a cluster represents a position of an object on the ground plane.
Type: Application
Filed: October 23, 2023
Publication date: March 28, 2024
Applicant: Intel Corporation
Inventors: Qiang Li, Xiaofeng Tong, Yikai Fang, Chen Ling, Wenlong Li
-
Publication number: 20240005701
Abstract: An example apparatus includes processor circuitry to extract features from image data obtained from a plurality of cameras, the extraction of features performed using a plurality of sequential neural network layers; in response to each of the plurality of sequential neural network layers extracting the features, identify the extracted features in a torso region of the image data via a plurality of attention modules; estimate body landmarks from the image data to localize an area; generate an upper heatmap mask based on a geometric center of the image data; calculate a loss function for the image data based on a cross-entropy loss, a pixel-wise loss, and a triplet loss determined from the extracted features and the generated heatmap mask; select lowest correlated classes based on calculated correlations between pairs of a plurality of classes; and calculate voting scores for groups associated with the lowest correlated classes.
Type: Application
Filed: September 23, 2021
Publication date: January 4, 2024
Inventors: Chenning Liu, Qiang Li, Wenlong Li, Yikai Fang, Hang Zheng, Jiansheng Chen
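The abstract names three loss terms that are combined into one objective. A minimal sketch of that combination is below; the equal weighting, the triplet margin value, and all function names are assumptions for illustration, not details from the publication.

```python
import math

def cross_entropy(probs, label):
    # negative log-likelihood of the true class
    return -math.log(probs[label])

def pixel_wise_loss(pred_mask, target_mask):
    # mean squared difference between predicted and target heatmap masks
    n = len(pred_mask)
    return sum((p - t) ** 2 for p, t in zip(pred_mask, target_mask)) / n

def triplet_loss(anchor, positive, negative, margin=0.3):
    # pull the anchor toward the positive and away from the negative
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return max(0.0, dist(anchor, positive) - dist(anchor, negative) + margin)

def total_loss(probs, label, pred_mask, target_mask,
               anchor, positive, negative, weights=(1.0, 1.0, 1.0)):
    """Weighted sum of the three terms named in the abstract."""
    return (weights[0] * cross_entropy(probs, label)
            + weights[1] * pixel_wise_loss(pred_mask, target_mask)
            + weights[2] * triplet_loss(anchor, positive, negative))
```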
-
Patent number: 11861907
Abstract: Methods, systems and apparatuses may provide for technology that selects a player from a plurality of players based on an automated analysis of two-dimensional (2D) video data associated with a plurality of cameras, wherein the selected player is nearest to a projectile depicted in the 2D video data. The technology may also track a location of the selected player over a subsequent plurality of frames in the 2D video data and estimate a location of the projectile based on the location of the selected player over the subsequent plurality of frames.
Type: Grant
Filed: August 13, 2019
Date of Patent: January 2, 2024
Assignee: Intel Corporation
Inventors: Yikai Fang, Qiang Li, Wenlong Li, Haihua Lin, Chen Ling, Ming Lu, Hongzhi Tao, Xiaofeng Tong, Yumeng Wang
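The two steps the abstract describes (pick the player nearest the projectile, then use that player's track to estimate the projectile's location) can be sketched in a few lines. The data layout and the choice to use the player's most recent position as the estimate are illustrative assumptions.

```python
def nearest_player(players, ball):
    """players: {player_id: (x, y)}; ball: (x, y).
    Returns the id of the player closest to the projectile."""
    def d2(pos):
        return (pos[0] - ball[0]) ** 2 + (pos[1] - ball[1]) ** 2
    return min(players, key=lambda pid: d2(players[pid]))

def estimate_projectile(track):
    """track: the selected player's (x, y) positions over subsequent
    frames. Estimate the projectile near the most recent position."""
    return track[-1]
```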
-
Patent number: 11842496
Abstract: A mechanism is described for facilitating real-time multi-view detection of objects in multi-camera environments, according to one embodiment. A method of embodiments, as described herein, includes mapping first lines associated with objects to a ground plane; and forming clusters of second lines corresponding to the first lines such that an intersection point in a cluster represents a position of an object on the ground plane.
Type: Grant
Filed: September 26, 2018
Date of Patent: December 12, 2023
Assignee: Intel Corporation
Inventors: Qiang Li, Xiaofeng Tong, Yikai Fang, Chen Ling, Wenlong Li
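The core geometric idea in this abstract (lines mapped to the ground plane, with clustered intersection points giving object positions) can be sketched as follows. The geometry is heavily simplified, and the clustering radius and all names are illustrative assumptions.

```python
def intersect(l1, l2):
    """Each ground-plane line is ((px, py), (dx, dy)): a point plus a
    direction. Returns the intersection point, or None if parallel."""
    (p1, d1), (p2, d2) = l1, l2
    det = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(det) < 1e-9:
        return None
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

def cluster_positions(lines, radius=0.5):
    """Cluster pairwise intersection points; each cluster centroid is
    taken as one object's position on the ground plane."""
    points = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            p = intersect(lines[i], lines[j])
            if p is not None:
                points.append(p)
    clusters = []   # each cluster: [sum_x, sum_y, count]
    for x, y in points:
        for c in clusters:
            cx, cy = c[0] / c[2], c[1] / c[2]
            if (cx - x) ** 2 + (cy - y) ** 2 <= radius ** 2:
                c[0] += x; c[1] += y; c[2] += 1
                break
        else:
            clusters.append([x, y, 1])
    return [(c[0] / c[2], c[1] / c[2]) for c in clusters]
```

Two cameras viewing the same object contribute two ground-plane lines whose intersection falls into one cluster, recovering a single object position.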
-
Patent number: 11763467
Abstract: A multi-camera architecture for detecting and tracking a ball in real-time. The multi-camera architecture includes network interface circuitry to receive a plurality of real-time videos taken from a plurality of high-resolution cameras. Each of the high-resolution cameras simultaneously captures a sports event, wherein each of the plurality of high-resolution cameras includes a viewpoint that covers an entire playing field where the sports event is played. The multi-camera architecture further includes one or more processors coupled to the network interface circuitry and one or more memory devices coupled to the one or more processors.
Type: Grant
Filed: September 28, 2018
Date of Patent: September 19, 2023
Assignee: Intel Corporation
Inventors: Xiaofeng Tong, Chen Ling, Ming Lu, Qiang Li, Wenlong Li, Yikai Fang, Yumeng Wang
-
Publication number: 20230237801
Abstract: Techniques related to performing object or person association or correspondence in multi-view video are discussed. Such techniques include determining correspondences at a particular time instance based on separately optimizing correspondence sub-matrices for distance sub-matrices based on two-way minimum distance pairs between frame pairs, generating and fusing tracklets across time instances, and adjusting correspondence, after such tracklet processing, via elimination of outlier object positions and rearrangement of object correspondence.
Type: Application
Filed: July 30, 2020
Publication date: July 27, 2023
Applicant: Intel Corporation
Inventors: Longwei Fang, Qiang Li, Wenlong Li, Yikai Fang, Hang Zheng
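The "two-way minimum distance pairs" notion in this abstract is essentially mutual-nearest-neighbour matching between the objects of two views. A minimal sketch, with an invented distance-matrix layout:

```python
def two_way_min_pairs(dist):
    """dist[i][j]: distance between object i of view A and object j of
    view B. A pair (i, j) is kept only when each member is the other's
    nearest neighbour, i.e. a two-way minimum."""
    pairs = []
    for i, row in enumerate(dist):
        j = min(range(len(row)), key=lambda k: row[k])
        # is i also the nearest A-object for B-object j?
        i_back = min(range(len(dist)), key=lambda k: dist[k][j])
        if i_back == i:
            pairs.append((i, j))
    return pairs
```

One-way nearest-neighbour matching can pair two A-objects with the same B-object; requiring the minimum in both directions removes such conflicts before the tracklet fusion the abstract goes on to describe.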
-
Patent number: 11587279
Abstract: Examples of systems and methods for augmented facial animation are generally described herein. A method for mapping facial expressions to an alternative avatar expression may include capturing a series of images of a face, and detecting a sequence of facial expressions of the face from the series of images. The method may include determining an alternative avatar expression mapped to the sequence of facial expressions, and animating an avatar using the alternative avatar expression.
Type: Grant
Filed: February 28, 2022
Date of Patent: February 21, 2023
Assignee: Intel Corporation
Inventors: Yikai Fang, Yangzhou Du, Qiang Eric Li, Xiaofeng Tong, Wenlong Li, Minje Park, Myung-Ho Ju, Jihyeon Kate Yi, Tae-Hoon Pete Kim
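The mapping step this abstract describes (a detected sequence of facial expressions looked up against alternative avatar expressions) can be sketched as a table scan. The table entries, window size, and expression labels below are invented for illustration only.

```python
# hypothetical mapping table: expression sub-sequence -> avatar expression
EXPRESSION_MAP = {
    ("blink", "blink"): "sparkling_eyes",
    ("smile", "open_mouth"): "laughing_burst",
}

def alternative_expression(sequence, window=2):
    """Scan the detected expression sequence for a mapped sub-sequence
    and return the alternative avatar expression, or None."""
    for start in range(len(sequence) - window + 1):
        key = tuple(sequence[start:start + window])
        if key in EXPRESSION_MAP:
            return EXPRESSION_MAP[key]
    return None
```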
-
Patent number: 11514704
Abstract: A video capture and processing system includes a memory configured to store a pose database. The pose database includes poses that indicate a start or stoppage in an event. The system also includes a processor operatively coupled to the memory. The processor is configured to generate a pose of an individual in a video frame of captured video of the event. The pose can be a three-dimensional pose or a two-dimensional pose. The processor is also configured to determine, based on the pose database, whether the pose of the individual indicates a start or a stoppage in the event. The processor is further configured to control an upload of video of the event based on the determination of whether the pose indicates the start or the stoppage in the event.
Type: Grant
Filed: October 2, 2018
Date of Patent: November 29, 2022
Assignee: Intel Corporation
Inventors: Chen Ling, Ming Lu, Qiang Li, Wenlong Li, Xiaofeng Tong, Yikai Fang, Yumeng Wang
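The database lookup this abstract describes can be sketched as nearest-neighbour matching of a pose vector against labelled reference poses, with the match controlling the upload. The flat-vector pose representation, the threshold, and all names are illustrative assumptions.

```python
def pose_distance(a, b):
    # Euclidean distance between two flat joint-coordinate vectors
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify_pose(pose, pose_db, threshold=1.0):
    """pose_db: list of (reference_pose, label), label 'start' or
    'stop'. Returns the closest match's label, or None if too far."""
    best_pose, best_label = min(
        pose_db, key=lambda entry: pose_distance(pose, entry[0]))
    if pose_distance(pose, best_pose) <= threshold:
        return best_label
    return None

def should_upload(pose, pose_db):
    """Control the upload: begin when a 'start' pose is recognised."""
    return classify_pose(pose, pose_db) == "start"
```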
-
Publication number: 20220292825
Abstract: Methods, systems and apparatuses may provide for technology that selects a player from a plurality of players based on an automated analysis of two-dimensional (2D) video data associated with a plurality of cameras, wherein the selected player is nearest to a projectile depicted in the 2D video data. The technology may also track a location of the selected player over a subsequent plurality of frames in the 2D video data and estimate a location of the projectile based on the location of the selected player over the subsequent plurality of frames.
Type: Application
Filed: August 13, 2019
Publication date: September 15, 2022
Applicant: Intel Corporation
Inventors: Yikai Fang, Qiang Li, Wenlong Li, Haihua Lin, Chen Ling, Ming Lu, Hongzhi Tao, Xiaofeng Tong, Yumeng Wang
-
Publication number: 20220262142
Abstract: Methods, systems and apparatuses may provide for technology that obtains multi-camera video data including a first 2D image corresponding to a first camera and a second 2D image corresponding to a second camera. The technology may also identify an association between a first instance of a 3D object in the first 2D image and a second instance of the 3D object in the second 2D image, and automatically generate a 3D bounding box around the 3D object based on the association between the first instance and the second instance.
Type: Application
Filed: August 14, 2019
Publication date: August 18, 2022
Applicant: Intel Corporation
Inventors: Qiang Li, Yikai Fang, Wenlong Li, Chen Ling, Ofer Shkedi, Xiaofeng Tong
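Only the final step of this abstract is easy to sketch without the camera geometry: once the associated instances have been back-projected into 3D points (assumed done upstream here), an axis-aligned 3D bounding box is just the per-axis minimum and maximum. The function name and data layout are illustrative.

```python
def bounding_box_3d(points):
    """points: (x, y, z) samples belonging to one associated object.
    Returns the axis-aligned box as (min_corner, max_corner)."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))
```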
-
Publication number: 20220237845
Abstract: Examples of systems and methods for augmented facial animation are generally described herein. A method for mapping facial expressions to an alternative avatar expression may include capturing a series of images of a face, and detecting a sequence of facial expressions of the face from the series of images. The method may include determining an alternative avatar expression mapped to the sequence of facial expressions, and animating an avatar using the alternative avatar expression.
Type: Application
Filed: February 28, 2022
Publication date: July 28, 2022
Inventors: Yikai Fang, Yangzhou Du, Qiang Eric Li, Xiaofeng Tong, Wenlong Li, Minje Park, Myung-Ho Ju, Jihyeon Kate Yi, Tae-Hoon Pete Kim
-
Patent number: 11383144
Abstract: System and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.
Type: Grant
Filed: November 9, 2020
Date of Patent: July 12, 2022
Assignee: Intel Corporation
Inventors: Qiang Eric Li, Wenlong Li, Shaohui Jiao, Yikai Fang, Xiaolu Shen, Lidan Zhang, Xiaofeng Tong, Fucen Zeng
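The key-frame selection step (pick the video frame whose timestamp best matches the sensor timestamp that marked the key stage) is simple to sketch. The frame-record layout below is an assumption for illustration.

```python
def select_key_frame(frames, key_timestamp):
    """frames: list of (timestamp, frame_id). Returns the frame_id
    closest in time to the sensor timestamp of the key stage."""
    ts, fid = min(frames, key=lambda f: abs(f[0] - key_timestamp))
    return fid
```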
-
Publication number: 20220198684
Abstract: A method for trajectory generation based on player tracking is described herein. The method includes determining a temporal association for a first player in a captured field of view and determining a spatial association for the first player. The method also includes deriving a global player identification based on the temporal association and the spatial association and generating a trajectory based on the global player identification.
Type: Application
Filed: July 31, 2019
Publication date: June 23, 2022
Inventors: Yikai Fang, Qiang Li, Wenlong Li, Chenning Liu, Chen Ling, Hongzhi Tao, Yumeng Wang, Hang Zheng
-
Patent number: 11295502
Abstract: Examples of systems and methods for augmented facial animation are generally described herein. A method for mapping facial expressions to an alternative avatar expression may include capturing a series of images of a face, and detecting a sequence of facial expressions of the face from the series of images. The method may include determining an alternative avatar expression mapped to the sequence of facial expressions, and animating an avatar using the alternative avatar expression.
Type: Grant
Filed: August 7, 2020
Date of Patent: April 5, 2022
Assignee: Intel Corporation
Inventors: Yikai Fang, Yangzhou Du, Qiang Eric Li, Xiaofeng Tong, Wenlong Li, Minje Park, Myung-Ho Ju, Jihyeon Kate Yi, Tae-Hoon Pete Kim
-
Patent number: 11257282
Abstract: Methods and apparatus to detect collision of a virtual camera with objects in a three-dimensional volumetric model are disclosed herein. An example virtual camera system disclosed herein includes cameras to obtain images of a scene in an environment. The example virtual camera system also includes a virtual camera generator to create a 3D volumetric model of the scene based on the images, identify a 3D location of a virtual camera to be disposed in the 3D volumetric model, and detect whether a collision occurs between the virtual camera and one or more objects in the 3D volumetric model.
Type: Grant
Filed: December 24, 2018
Date of Patent: February 22, 2022
Assignee: Intel Corporation
Inventors: Xiaofeng Tong, Qiang Li, Wenlong Li, Yikai Fang, Ofer Shkedi
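A standard way to test whether a virtual camera collides with a scene object, sketched here with the camera as a sphere and the object as an axis-aligned box. These shape choices are a simplifying assumption, not the patent's exact representation.

```python
def sphere_aabb_collides(center, radius, box_min, box_max):
    """Clamp the sphere centre onto the box per axis, then compare the
    squared distance to the squared radius."""
    d2 = 0.0
    for c, lo, hi in zip(center, box_min, box_max):
        nearest = max(lo, min(c, hi))
        d2 += (c - nearest) ** 2
    return d2 <= radius ** 2
```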
-
Publication number: 20210279896
Abstract: A multi-camera architecture for detecting and tracking a ball in real-time. The multi-camera architecture includes network interface circuitry to receive a plurality of real-time videos taken from a plurality of high-resolution cameras. Each of the high-resolution cameras simultaneously captures a sports event, wherein each of the plurality of high-resolution cameras includes a viewpoint that covers an entire playing field where the sports event is played. The multi-camera architecture further includes one or more processors coupled to the network interface circuitry and one or more memory devices coupled to the one or more processors.
Type: Application
Filed: September 28, 2018
Publication date: September 9, 2021
Applicant: Intel Corporation
Inventors: Xiaofeng Tong, Chen Ling, Ming Lu, Qiang Li, Wenlong Li, Yikai Fang, Yumeng Wang
-
Publication number: 20210281887
Abstract: Methods, systems and apparatuses may provide for technology that identifies a player captured in a multi-camera video feed of a game that involves the identified player and estimates a first field of view from the perspective of the identified player for a selected frame of the multi-camera video feed. Additionally, the technology automatically generates, based on the first field of view, a camera path for a replay of the selected frame from the perspective of the identified player. In one example, the technology also determines a trajectory of a projectile captured in the multi-camera video feed, estimates, based on the trajectory, a second field of view from the perspective of the projectile, and automatically generates, based on the second field of view, a replay of one or more selected frames of the multi-camera video feed from the perspective of the projectile.
Type: Application
Filed: November 12, 2018
Publication date: September 9, 2021
Inventors: Xiaofeng Tong, Eli Turiel, Qiang Li, Wenlong Li, Yikai Fang, Doron Houminer, Asaf Shiloni
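Generating a camera path toward a player's estimated viewpoint can be sketched, in its simplest form, as interpolating waypoints from the current camera position to the player-eye position. The linear path and the function signature are illustrative assumptions; the publication does not specify this scheme.

```python
def camera_path(start, target, steps):
    """start, target: (x, y, z) camera positions. Returns steps + 1
    waypoints linearly interpolated from start to target."""
    path = []
    for i in range(steps + 1):
        t = i / steps
        path.append(tuple(s + t * (g - s) for s, g in zip(start, target)))
    return path
```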
-
Publication number: 20210256245
Abstract: A mechanism is described for facilitating real-time multi-view detection of objects in multi-camera environments, according to one embodiment. A method of embodiments, as described herein, includes mapping first lines associated with objects to a ground plane; and forming clusters of second lines corresponding to the first lines such that an intersection point in a cluster represents a position of an object on the ground plane.
Type: Application
Filed: September 26, 2018
Publication date: August 19, 2021
Applicant: Intel Corporation
Inventors: Qiang Li, Xiaofeng Tong, Yikai Fang, Chen Ling, Wenlong Li
-
Publication number: 20210241518
Abstract: Methods and apparatus to detect collision of a virtual camera with objects in a three-dimensional volumetric model are disclosed herein. An example virtual camera system disclosed herein includes cameras to obtain images of a scene in an environment. The example virtual camera system also includes a virtual camera generator to create a 3D volumetric model of the scene based on the images, identify a 3D location of a virtual camera to be disposed in the 3D volumetric model, and detect whether a collision occurs between the virtual camera and one or more objects in the 3D volumetric model.
Type: Application
Filed: December 24, 2018
Publication date: August 5, 2021
Inventors: Xiaofeng Tong, Qiang Li, Wenlong Li, Yikai Fang, Ofer Shkedi