Patents by Inventor Yimin Zhang

Yimin Zhang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20180200889
    Abstract: Movement of a user in a user space is mapped to a first portion of a command action stream sent to a telerobot in a telerobot space. An immersive feedback stream is provided by the telerobot to the user. Upon movement of the user into or proximate to a margin of the user space, the first portion of the command action stream may be suspended. The user may re-orient in the user space and may then continue to move, with movement mapping re-engaged and transmission resumed for a second portion of the command action stream. In this way, the user may control the telerobot via movement mapping, even though the user space and the telerobot space may not be the same size.
    Type: Application
    Filed: May 11, 2016
    Publication date: July 19, 2018
    Inventors: Sirui Yang, Yimin Zhang
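The margin-based suspend/resume behavior described in the abstract above can be sketched as follows; the function name, the one-dimensional spaces, and the numeric defaults are illustrative assumptions, not the patented implementation:

```python
def map_movement(user_pos, user_space=10.0, robot_space=50.0, margin=1.0):
    """Map a 1-D user-space position to a telerobot-space command.

    Returns None while the user is inside the margin near either edge of
    the user space, modeling suspension of the command action stream so
    the user can re-orient before movement mapping re-engages.
    """
    if user_pos < margin or user_pos > user_space - margin:
        return None  # suspend the command stream
    scale = robot_space / user_space  # spaces need not be the same size
    return user_pos * scale
```

For example, a user at position 5.0 in a 10-unit space maps to 25.0 in a 50-unit telerobot space, while a user at 0.5 (inside the 1-unit margin) produces no command.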
  • Patent number: 9965673
    Abstract: Techniques are disclosed that involve face detection. For instance, face detection tasks may be decomposed into sets of one or more sub-tasks. In turn, the sub-tasks of the sets may be allocated across multiple image frames. This allocation may be based on a multiple-layer, quad-tree approach. In addition, face tracking tasks may be performed.
    Type: Grant
    Filed: April 11, 2011
    Date of Patent: May 8, 2018
    Assignee: Intel Corporation
    Inventors: Yangzhou Du, Jianguo Li, Ang Liu, Tao Wang, Yimin Zhang
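One way to read the quad-tree allocation in the abstract above is that each layer subdivides the frame more finely, and the resulting cells become detection sub-tasks spread over successive frames. This is a minimal sketch under that assumption; the round-robin schedule and function names are illustrative, not taken from the patent:

```python
def quadtree_cells(width, height, layer):
    """Sub-regions (x, y, w, h) of a frame at a quad-tree layer:
    layer 0 is the whole frame, layer 1 splits it 2x2, layer 2 4x4, ..."""
    n = 2 ** layer
    cw, ch = width // n, height // n
    return [(x * cw, y * ch, cw, ch) for y in range(n) for x in range(n)]

def allocate_subtasks(width, height, layers, num_frames):
    """Spread detection sub-tasks (one per cell, over all layers) across
    frames round-robin, so each frame carries only part of the workload."""
    cells = [c for layer in range(layers)
             for c in quadtree_cells(width, height, layer)]
    schedule = [[] for _ in range(num_frames)]
    for i, cell in enumerate(cells):
        schedule[i % num_frames].append(cell)
    return schedule
```

With two layers there are 1 + 4 = 5 cells; allocated over two frames, one frame scans three regions and the other two, rather than every frame scanning all of them.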
  • Patent number: 9936165
    Abstract: A video communication system that replaces actual live images of the participating users with animated avatars. A method may include initiating communication between a first user device and a remote user device; receiving selection of a new avatar to represent a user of the first user device; identifying a new avatar file for the new avatar in an avatar database associated with the first user device; determining that the new avatar file is not present in a remote avatar database associated with the remote user device; and transmitting the new avatar file to the remote avatar database in response to determining that the new avatar file is not present in the remote avatar database.
    Type: Grant
    Filed: September 6, 2012
    Date of Patent: April 3, 2018
    Assignee: Intel Corporation
    Inventors: Wenlong Li, Xiaofeng Tong, Yangzhou Du, Wei Hu, Yimin Zhang
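The check-then-transmit flow in the abstract above (determine whether the remote avatar database already holds the new avatar file, and send it only if absent) can be sketched with plain dictionaries standing in for the local and remote databases; the function name and return convention are assumptions for illustration:

```python
def sync_avatar(avatar_id, local_db, remote_db):
    """Transmit the avatar file to the remote database only when the
    remote database does not already contain it.

    Returns True when a transfer occurred, False when the remote side
    already had the file and only the id needs to be referenced.
    """
    if avatar_id not in local_db:
        raise KeyError(f"avatar {avatar_id!r} not in local database")
    if avatar_id in remote_db:
        return False  # remote already has it
    remote_db[avatar_id] = local_db[avatar_id]  # stand-in for transmission
    return True
```

Skipping the transfer when the file already exists remotely keeps later calls with the same avatar cheap: only the selection id travels over the link, not the file.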
  • Patent number: 9886622
    Abstract: Technologies for generating an avatar with a facial expression corresponding to a facial expression of a user include capturing a reference user image of the user on a computing device when the user is expressing a reference facial expression for registration. The computing device generates reference facial measurement data based on the captured reference user image and compares the reference facial measurement data with facial measurement data of a corresponding reference expression of the avatar to generate facial comparison data. After a user has been registered, the computing device captures a real-time facial expression of the user and generates real-time facial measurement data based on the captured real-time image. The computing device applies the facial comparison data to the real-time facial measurement data to generate modified expression data, which is used to generate an avatar with a facial expression corresponding with the facial expression of the user.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: February 6, 2018
    Assignee: Intel Corporation
    Inventors: Yangzhou Du, Wenlong Li, Wei Hu, Xiaofeng Tong, Yimin Zhang
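The registration-then-runtime pipeline in the abstract above can be sketched as a per-feature calibration: comparison data computed once from the reference expressions, then applied to live measurements. Treating the comparison data as simple per-feature ratios is an assumption for illustration, as are the feature names:

```python
def calibrate(user_reference, avatar_reference):
    """Facial comparison data computed at registration: for each facial
    measurement, the ratio of the avatar's reference value to the user's."""
    return {k: avatar_reference[k] / user_reference[k] for k in user_reference}

def apply_calibration(realtime_measurements, comparison_data):
    """Scale real-time facial measurements into the avatar's expression
    space, yielding modified expression data for driving the avatar."""
    return {k: realtime_measurements[k] * comparison_data[k]
            for k in realtime_measurements}
```

For instance, if the user's reference mouth opening is 2.0 units while the avatar's is 4.0, a live mouth opening of 1.5 maps to 3.0 on the avatar, preserving the relative expression.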
  • Publication number: 20170328169
    Abstract: This invention discloses a downhole occluder for oil wells. A sealing socket and a positioning flange with sealing surfaces are set up inside an oil pipe. An oil sucker rod pump is installed inside the tubular column, which mounts on top of the sealing socket. A sealing surface of a sealing flange on the rod pump mates with a sealing surface of the sealing socket on the tubular column. Outer oil inlet borings are placed in the wall of the oil tubular column underneath the positioning flange. An inner pipe is installed inside this section of the tubular column. Inner oil inlet borings are placed on the inner pipe wall corresponding to the outer oil inlet borings on the tubular column. Sealing grooves are placed underneath the inner oil inlet borings on the inner pipe wall. Sealing rings are placed in the sealing grooves to seal against the inner wall of the tubular column. The top of the inner pipe is connected to the oil sucker rod pump through a connecting coupler. The bottom end of the tubular column is occluded.
    Type: Application
    Filed: May 30, 2017
    Publication date: November 16, 2017
    Applicant: KARAMAY SHENGLI PLATEAU MACHINERY LIMITED COMPANY
    Inventors: Xiangsheng Lv, Gangyao Li, Zhongxiang Zhao, Haijun Luan, Yimin Zhang, Junfeng Lv, Tingde Feng
  • Publication number: 20170310934
    Abstract: A video communication system that replaces actual live images of the participating users with animated avatars. A method may include selecting an avatar, initiating communication, capturing an image, detecting a face in the image, determining facial characteristics from the face, including eye movement and eyelid movement of a user indicative of direction of user gaze and blinking, respectively, converting the facial features to avatar parameters, and transmitting at least one of the avatar selection or avatar parameters.
    Type: Application
    Filed: July 7, 2017
    Publication date: October 26, 2017
    Applicant: Intel Corporation
    Inventors: Yangzhou Du, Wenlong Li, Xiaofeng Tong, Wei Hu, Yimin Zhang
  • Publication number: 20170193684
    Abstract: Avatars are animated using predetermined avatar images that are selected based on facial features of a user extracted from video of the user. A user's facial features are tracked in a live video, facial feature parameters are determined from the tracked features, and avatar images are selected based on the facial feature parameters. The selected images are then displayed or sent to another device for display. Selecting and displaying different avatar images as a user's facial movements change animates the avatar. An avatar image can be selected from a series of avatar images representing a particular facial movement, such as blinking. An avatar image can also be generated from multiple avatar feature images selected from multiple avatar feature image series associated with different regions of a user's face (eyes, mouth, nose, eyebrows), which allows different regions of the avatar to be animated independently.
    Type: Application
    Filed: October 11, 2016
    Publication date: July 6, 2017
    Inventors: Yangzhou Du, Wenlong Li, Xiaofeng Tong, Wei Hu, Yimin Zhang
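The per-region image selection in the abstract above can be sketched as picking one frame from each region's image series based on a normalized feature parameter, so regions animate independently. The normalization to [0, 1] and the region/series names are assumptions for illustration:

```python
def select_frame(series, parameter):
    """Pick one image from a series (e.g. eye-blink frames ordered from
    closed to open) using a normalized feature parameter in [0, 1]."""
    index = round(parameter * (len(series) - 1))
    return series[index]

def compose_avatar(feature_params, feature_series):
    """Choose one image per facial region; each region is driven by its
    own parameter, so eyes, mouth, etc. animate independently."""
    return {region: select_frame(feature_series[region], feature_params[region])
            for region in feature_params}
```

A blink series of ["closed", "half", "open"] driven by an eye-openness parameter of 0.5 selects the "half" frame, while the mouth region is selected from its own series at the same time.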
  • Publication number: 20170132529
    Abstract: According to one embodiment of the invention, a method includes generating a person-name Information Gain (IG) tree and a relation IG tree from annotated data. The method also includes tagging and partially parsing an input document. Names of persons within the input document are extracted using the person-name IG tree. Additionally, names of organizations within the input document are extracted. The method also includes extracting entity names that are not names of persons or organizations within the input document. Further, the relations between the identified entity names are extracted using the relation IG tree.
    Type: Application
    Filed: August 23, 2016
    Publication date: May 11, 2017
    Inventors: Yimin Zhang, Joe F. Zhou
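The IG trees in the abstract above are built around information gain, the entropy reduction a feature (e.g. a surrounding word) provides about a label (person name or not). This sketch shows only that underlying calculation, not the tree construction itself; the toy features and labels are illustrative:

```python
from math import log2

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    total = len(labels)
    counts = {}
    for label in labels:
        counts[label] = counts.get(label, 0) + 1
    return -sum((c / total) * log2(c / total) for c in counts.values())

def information_gain(labels, feature_values):
    """Entropy drop after splitting the labeled tokens by a feature's
    value: H(labels) - H(labels | feature)."""
    total = len(labels)
    groups = {}
    for label, feature in zip(labels, feature_values):
        groups.setdefault(feature, []).append(label)
    conditional = sum(len(g) / total * entropy(g) for g in groups.values())
    return entropy(labels) - conditional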
  • Publication number: 20170111615
    Abstract: Generally this disclosure describes a video communication system that replaces actual live images of the participating users with animated avatars. A method may include selecting an avatar, initiating communication, capturing an image, detecting a face in the image, extracting features from the face, converting the facial features to avatar parameters, and transmitting at least one of the avatar selection or avatar parameters.
    Type: Application
    Filed: December 30, 2016
    Publication date: April 20, 2017
    Applicant: Intel Corporation
    Inventors: Wenlong Li, Xiaofeng Tong, Yangzhou Du, Qiang Eric Li, Yimin Zhang, Wei Hu, John G. Tennant, Hui A. Li
  • Publication number: 20170111614
    Abstract: Generally this disclosure describes a video communication system that replaces actual live images of the participating users with animated avatars. A method may include selecting an avatar; initiating communication; detecting a user input; identifying the user input; identifying an animation command based on the user input; generating avatar parameters; and transmitting at least one of the animation command and the avatar parameters.
    Type: Application
    Filed: December 30, 2016
    Publication date: April 20, 2017
    Applicant: Intel Corporation
    Inventors: Xiaofeng Tong, Wenlong Li, Yangzhou Du, Wei Hu, Yimin Zhang
  • Publication number: 20170111616
    Abstract: Generally this disclosure describes a video communication system that replaces actual live images of the participating users with animated avatars. A method may include selecting an avatar, initiating communication, capturing an image, detecting a face in the image, extracting features from the face, converting the facial features to avatar parameters, and transmitting at least one of the avatar selection or avatar parameters.
    Type: Application
    Filed: December 30, 2016
    Publication date: April 20, 2017
    Applicant: Intel Corporation
    Inventors: Wenlong Li, Xiaofeng Tong, Yangzhou Du, Qiang Eric Li, Yimin Zhang, Wei Hu, John G. Tennant, Hui A. Li
  • Patent number: 9580342
    Abstract: The present disclosure relates to methods for cooperative control of Microcystis aeruginosa using chub, bighead, catfish and daphnia, and belongs to the technical field of water treatment. The Microcystis aeruginosa in a water body is controlled using a food chain relationship: the chub, bighead, catfish and daphnia can directly consume the Microcystis aeruginosa by filter feeding, while the chub, bighead and catfish can also ingest the daphnia, thereby indirectly consuming the Microcystis aeruginosa in the water body; the fishes, the daphnia and the Microcystis aeruginosa thus form the food chain relationship. In addition, the Microcystis aeruginosa at the upper, middle and lower layers and the Microcystis aeruginosa hypopus at the bottom layer can be controlled by fully utilizing the differences in vertical spatial distribution of the three fishes in the water body.
    Type: Grant
    Filed: June 24, 2015
    Date of Patent: February 28, 2017
    Assignee: Nanjing Institute of Environmental Sciences, Ministry of Environmental Protection
    Inventors: Yimin Zhang, Han Wu, Yuexiang Gao, Fei Yang, Longmian Wang, Chuang Zhou
  • Publication number: 20170054945
    Abstract: Generally this disclosure describes a video communication system that replaces actual live images of the participating users with animated avatars. A method may include selecting an avatar, initiating communication, capturing an image, detecting a face in the image, extracting features from the face, converting the facial features to avatar parameters, and transmitting at least one of the avatar selection or avatar parameters.
    Type: Application
    Filed: June 16, 2016
    Publication date: February 23, 2017
    Applicant: Intel Corporation
    Inventors: Wenlong Li, Xiaofeng Tong, Yangzhou Du, Qiang Eric Li, Yimin Zhang, Wei Hu, John G. Tennant, Hui A. Li
  • Publication number: 20170039751
    Abstract: Generally this disclosure describes a video communication system that replaces actual live images of the participating users with animated avatars. A method may include selecting an avatar; initiating communication; detecting a user input; identifying the user input; identifying an animation command based on the user input; generating avatar parameters; and transmitting at least one of the animation command and the avatar parameters.
    Type: Application
    Filed: June 16, 2016
    Publication date: February 9, 2017
    Applicant: Intel Corporation
    Inventors: Xiaofeng Tong, Wenlong Li, Yangzhou Du, Wei Hu, Yimin Zhang
  • Publication number: 20160376179
    Abstract: The present disclosure relates to methods for cooperative control of Microcystis aeruginosa using chub, bighead, catfish and daphnia, and belongs to the technical field of water treatment. The Microcystis aeruginosa in a water body is controlled using a food chain relationship: the chub, bighead, catfish and daphnia can directly consume the Microcystis aeruginosa by filter feeding, while the chub, bighead and catfish can also ingest the daphnia, thereby indirectly consuming the Microcystis aeruginosa in the water body; the fishes, the daphnia and the Microcystis aeruginosa thus form the food chain relationship. In addition, the Microcystis aeruginosa at the upper, middle and lower layers and the Microcystis aeruginosa hypopus at the bottom layer can be controlled by fully utilizing the differences in vertical spatial distribution of the three fishes in the water body.
    Type: Application
    Filed: June 24, 2015
    Publication date: December 29, 2016
    Inventors: Yimin Zhang, Han Wu, Yuexiang Gao, Fei Yang, Longmian Wang, Chuang Zhou
  • Patent number: 9489567
    Abstract: Methods, apparatuses, and articles associated with facial tracking and recognition are disclosed. In embodiments, facial images may be detected in video or still images and tracked. After normalization of the facial images, feature data may be extracted from selected regions of the faces to compare to associated feature data in known faces. The selected regions may be determined using a boosting machine learning processes over a set of known images. After extraction, individual two-class comparisons may be performed between corresponding feature data from regions on the tested facial images and from the known facial image. The individual two-class classifications may then be combined to determine a similarity score for the tested face and the known face. If the similarity score exceeds a threshold, an identification of the known face may be output or otherwise used. Additionally, tracking with voting may be performed on faces detected in video.
    Type: Grant
    Filed: April 11, 2011
    Date of Patent: November 8, 2016
    Assignee: Intel Corporation
    Inventors: Tao Wang, Jianguo Li, Yangzhou Du, Qiang Li, Yimin Zhang
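The final stage of the abstract above, combining per-region two-class classifications into a similarity score and thresholding it, can be sketched as a weighted average of region scores. The averaging scheme, weights, and threshold value are assumptions for illustration, not the patented combination rule:

```python
def similarity(region_scores, weights=None):
    """Combine per-region two-class scores (each the confidence that the
    tested and known faces match in that region) into one similarity
    score by weighted averaging."""
    if weights is None:
        weights = [1.0] * len(region_scores)
    return sum(s * w for s, w in zip(region_scores, weights)) / sum(weights)

def identify(region_scores, threshold=0.5):
    """Output an identification only when the combined similarity score
    exceeds the threshold, as the abstract describes."""
    return similarity(region_scores) > threshold
```

Scores of 0.8, 0.6 and 1.0 across three regions average to 0.8, clearing a 0.5 threshold; uniformly low region scores fall below it and no identification is output.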
  • Publication number: 20160321307
    Abstract: Embodiments of a graphical mapping interface and method are provided herein for creating and displaying a schema map, which may be used by a data transformation system to perform a data transformation between at least one source schema and at least one target schema. According to one embodiment, the graphical mapping interface may generally comprise a main map window and a mini-map window. The main map window comprises a source schema region, which is adapted for displaying a graphical representation of a primary source schema defining a structure of a primary data source. The mini-map window is adapted for creating a mapping between one or more nodes of the primary source schema and one or more nodes of an intermediate target schema, and displaying a graphical representation of the mapping within the mini-map window.
    Type: Application
    Filed: July 7, 2016
    Publication date: November 3, 2016
    Inventors: Paul C. Dingman, William G. Bunton, Kathryn E. Van Dyken, Laurence T. Yogman, Yimin Zhang
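At its core, the schema map the interface above creates pairs source-schema nodes with target-schema nodes so a transformation can move data between them. This is a deliberately minimal, hypothetical representation (flat field names, dictionaries as records); the real system handles nested schemas and intermediate targets:

```python
def add_mapping(schema_map, source_node, target_node):
    """Record a mapping from a source-schema node to a target-schema
    node; one source may feed several targets."""
    schema_map.setdefault(source_node, []).append(target_node)
    return schema_map

def transform(record, schema_map):
    """Apply a flat schema map to one record: copy each mapped source
    field into its target field(s), dropping unmapped fields."""
    out = {}
    for source, targets in schema_map.items():
        if source in record:
            for target in targets:
                out[target] = record[source]
    return out
```

Mapping "name" to "FullName" and transforming {"name": "Ada", "age": 36} yields {"FullName": "Ada"}: only mapped nodes flow through to the target schema.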
  • Patent number: 9471829
    Abstract: Detecting facial landmarks in a face detected in an image may be performed by first cropping a face rectangle region of the detected face in the image and generating an integral image based at least in part on the face rectangle region. Next, a cascade classifier may be executed for each facial landmark of the face rectangle region to produce one response image for each facial landmark based at least in part on the integral image. A plurality of Active Shape Model (ASM) initializations may be set up. ASM searching may be performed for each of the ASM initializations based at least in part on the response images, each ASM search resulting in a search result having a cost. Finally, a search result of the ASM searches having a lowest cost function may be selected, the selected search result indicating locations of the facial landmarks in the image.
    Type: Grant
    Filed: March 31, 2011
    Date of Patent: October 18, 2016
    Assignee: Intel Corporation
    Inventors: Ang Liu, Yangzhou Du, Tao Wang, Jianguo Li, Qiang Li, Yimin Zhang
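The last step of the abstract above, running an ASM search from several initializations and keeping the result with the lowest cost, is easy to sketch independently of the search itself. Here the search is passed in as a function returning a result with a cost; the result shape is an assumption for illustration:

```python
def best_asm_result(initializations, search):
    """Run a search (e.g. an ASM search over the response images) from
    each initialization and return the result with the lowest cost,
    which indicates the best facial landmark locations."""
    results = [search(init) for init in initializations]
    return min(results, key=lambda result: result["cost"])
```

With a toy search whose cost is the distance of the initialization from an optimum at 3.0, the initialization 3.0 wins with cost 0.0; in the patented method the multiple initializations guard against a single bad starting shape trapping the search in a poor local minimum.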
  • Patent number: 9466142
    Abstract: Avatars are animated using predetermined avatar images that are selected based on facial features of a user extracted from video of the user. A user's facial features are tracked in a live video, facial feature parameters are determined from the tracked features, and avatar images are selected based on the facial feature parameters. The selected images are then displayed or sent to another device for display. Selecting and displaying different avatar images as a user's facial movements change animates the avatar. An avatar image can be selected from a series of avatar images representing a particular facial movement, such as blinking. An avatar image can also be generated from multiple avatar feature images selected from multiple avatar feature image series associated with different regions of a user's face (eyes, mouth, nose, eyebrows), which allows different regions of the avatar to be animated independently.
    Type: Grant
    Filed: December 17, 2012
    Date of Patent: October 11, 2016
    Assignee: Intel Corporation
    Inventors: Yangzhou Du, Wenlong Li, Xiaofeng Tong, Wei Hu, Yimin Zhang
  • Patent number: 9430742
    Abstract: According to one embodiment of the invention, a method includes generating a person-name Information Gain (IG) tree and a relation IG tree from annotated data. The method also includes tagging and partially parsing an input document. Names of persons within the input document are extracted using the person-name IG tree. Additionally, names of organizations within the input document are extracted. The method also includes extracting entity names that are not names of persons or organizations within the input document. Further, the relations between the identified entity names are extracted using the relation IG tree.
    Type: Grant
    Filed: June 2, 2014
    Date of Patent: August 30, 2016
    Assignee: Intel Corporation
    Inventors: Yimin Zhang, Joe F. Zhou