Patents by Inventor Huifang Sun

Huifang Sun has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240148850
    Abstract: The present disclosure provides an N protein epitope mutation marker for preparing an epitope deletion-marked vaccine strain of type II porcine reproductive and respiratory syndrome virus (PRRSV) and use thereof, belonging to the technical field of biological products. In the mutation marker, one or more amino acids are mutated based on an epitope sequence at positions 92 to 103 of the C-terminus of the N protein of the type II PRRSV; and the epitope mutation marker has an amino acid sequence shown in SEQ ID NO: 1, where X1 is selected from the group consisting of T, P, and A; and X2 is selected from the group consisting of V and A.
    Type: Application
    Filed: March 17, 2022
    Publication date: May 9, 2024
    Applicant: Lanzhou Veterinary Research Institute, Chinese Academy of Agricultural Sciences
    Inventors: Jing ZHANG, Zengjun LU, Kun LI, Pu SUN, Jian WANG, Yimei CAO, Huifang BAO, Zhixun ZHAO, Pinghua LI, Yuanfang FU, Xueqing MA, Hong YUAN, Xingwen BAI, Qiang ZHANG, Dong LI, Zaixin LIU
  • Publication number: 20240052029
    Abstract: Provided are an ROR1-targeting antibody, and a multispecific antibody, a chimeric antigen receptor, an antibody conjugate, a pharmaceutical composition and a kit which comprise same, and the use thereof in the diagnosis/treatment/prevention of diseases associated with ROR1 expression.
    Type: Application
    Filed: January 12, 2022
    Publication date: February 15, 2024
    Applicant: BIOHENG THERAPEUTICS LIMITED
    Inventors: Yali ZHOU, Gong CHEN, Tingting GUO, Xiaoyan JIANG, Jiangtao REN, Xiaohong HE, Yanbin WANG, Lu HAN, Guokun LI, Jing ZHANG, Huifang SUN
  • Publication number: 20230073755
    Abstract: Use of a combination of IGF1 and IGFEc24 in the preparation of a drug for promoting tissue repair and regeneration is provided. The amino acid sequence of the IGF1 is shown in SEQ ID NO: 1; the amino acid sequence of the IGFEc24 is shown in SEQ ID NO: 2; and the amino acid sequence of the IGF1-24 is shown in SEQ ID NO: 4.
    Type: Application
    Filed: October 21, 2020
    Publication date: March 9, 2023
    Applicant: Chongqing University
    Inventors: Liling TANG, Xing WANG, Huifang SUN, Piaoyang LIU, Xichao XU, Ying CHEN, Lu ZHANG, Yuanyuan LIANG
  • Patent number: 10154281
    Abstract: A method processes keypoint trajectories in a video, wherein the keypoint trajectories describe motion of a plurality of keypoints across pictures of the video over time, by first acquiring the video of a scene using a camera. Keypoints and associated feature descriptors are detected in each picture. The keypoints and associated feature descriptors are matched between neighboring pictures to generate keypoint trajectories. Then, the keypoint trajectories are coded predictively into a bitstream, which is output. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: January 22, 2016
    Date of Patent: December 11, 2018
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Dong Tian, Huifang Sun, Anthony Vetro
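The abstract above outlines a pipeline of keypoint detection, descriptor matching between neighboring pictures, and predictive coding of the resulting trajectories. Below is a minimal Python sketch of that pipeline using OpenCV's ORB detector; the helper names (build_trajectories, code_trajectory_predictively), the brute-force matcher settings, and the simple delta coder are illustrative assumptions, not the patented encoder.

```python
# A minimal sketch, assuming OpenCV (cv2) and NumPy are available.
# build_trajectories and code_trajectory_predictively are hypothetical helpers.
import cv2
import numpy as np

def build_trajectories(frames):
    """Detect ORB keypoints per picture and match descriptors between
    neighboring pictures to chain matched keypoints into trajectories."""
    orb = cv2.ORB_create(nfeatures=500)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    prev_kp, prev_des, tracks = None, None, []
    for t, frame in enumerate(frames):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        kp, des = orb.detectAndCompute(gray, None)
        if prev_des is not None and des is not None:
            for m in matcher.match(prev_des, des):
                pt_prev, pt_cur = prev_kp[m.queryIdx].pt, kp[m.trainIdx].pt
                for tr in tracks:
                    if tr['end_t'] == t - 1 and tr['pts'][-1] == pt_prev:
                        tr['pts'].append(pt_cur)   # extend an existing trajectory
                        tr['end_t'] = t
                        break
                else:
                    tracks.append({'pts': [pt_prev, pt_cur], 'end_t': t})
        prev_kp, prev_des = kp, des
    return tracks

def code_trajectory_predictively(track):
    """Delta-code a trajectory: first point absolute, remaining points as
    residuals against the previous point (these would be entropy coded)."""
    pts = np.array(track['pts'])
    return pts[0], np.diff(pts, axis=0)
```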
  • Publication number: 20170214936
    Abstract: A method processes keypoint trajectories in a video, wherein the keypoint trajectories describe motion of a plurality of keypoints across pictures of the video over time, by first acquiring the video of a scene using a camera. Keypoints and associated feature descriptors are detected in each picture. The keypoints and associated feature descriptors are matched between neighboring pictures to generate keypoint trajectories. Then, the keypoint trajectories are coded predictively into a bitstream, which is output.
    Type: Application
    Filed: January 22, 2016
    Publication date: July 27, 2017
    Applicant: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Dong Tian, Huifang Sun, Anthony Vetro
  • Patent number: 8750634
    Abstract: A bitstream includes coded pictures and split-flags for generating a transform tree. The bitstream also includes a partitioning of coding units (CUs) into Prediction Units (PUs). The transform tree is generated according to the split-flags. Nodes in the transform tree represent transform units (TUs) associated with the CUs. The generation splits each TU only if the corresponding split-flag is set. For each PU that includes multiple TUs, the multiple TUs are merged into a larger TU, and the transform tree is modified according to the splitting and merging. Then, the data contained in each PU can be decoded using the TUs associated with the PU according to the transform tree. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: June 18, 2013
    Date of Patent: June 10, 2014
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Robert A. Cohen, Anthony Vetro, Huifang Sun
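The abstract above describes generating a transform tree from split-flags and merging multiple transform units (TUs) that fall inside one prediction unit (PU). The sketch below illustrates that split/merge idea; the TU node class, the quadrant split rule, and merge_tus_in_pu are hypothetical simplifications and do not reproduce the exact bitstream syntax.

```python
# A minimal sketch, assuming a quadtree-style split and a rectangular PU.
class TU:
    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size
        self.children = []

def split_tu(tu, read_split_flag):
    """Split a TU into four quadrants only if its split-flag is set."""
    if tu.size > 4 and read_split_flag():
        half = tu.size // 2
        tu.children = [TU(tu.x + dx, tu.y + dy, half)
                       for dy in (0, half) for dx in (0, half)]
        for child in tu.children:
            split_tu(child, read_split_flag)
    return tu

def leaves(tu):
    return [tu] if not tu.children else [l for c in tu.children for l in leaves(c)]

def merge_tus_in_pu(root, pu_rect):
    """If a PU covers several leaf TUs, replace them with one larger TU."""
    px, py, psize = pu_rect
    covered = [l for l in leaves(root)
               if px <= l.x < px + psize and py <= l.y < py + psize]
    if len(covered) > 1:
        return TU(px, py, psize)   # the merged, larger TU used to decode the PU's data
    return covered[0] if covered else None

# Example: split-flags drawn from a bitstream reader (here a canned list).
flags = iter([1, 0, 0, 0, 0])
root = split_tu(TU(0, 0, 16), lambda: next(flags, 0))
merged = merge_tus_in_pu(root, (0, 0, 16))
```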
  • Patent number: 8692840
    Abstract: A quality of a virtual image for a synthetic viewpoint in a 3D scene is determined. The 3D scene is acquired by texture images, and each texture image is associated with a depth image acquired by a camera arranged at a real viewpoint. A texture noise power is based on the acquired texture images and reconstructed texture images corresponding to a virtual texture image. A depth noise power is based on the depth images and reconstructed depth images corresponding to a virtual depth image. The quality of the virtual image is based on a combination of the texture noise power and the depth noise power, and the virtual image is rendered from the reconstructed texture images and the reconstructed depth images. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: February 5, 2012
    Date of Patent: April 8, 2014
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Ngai-Man Cheung, Dong Tian, Anthony Vetro, Huifang Sun
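The abstract above combines a texture noise power and a depth noise power into a quality estimate for a rendered virtual view. A minimal sketch follows; noise_power, virtual_view_quality, the depth_weight parameter, and the PSNR-style mapping are illustrative assumptions rather than the patented quality model.

```python
# A minimal sketch, assuming MSE-based noise powers and 8-bit images.
import numpy as np

def noise_power(original, reconstructed):
    """Mean squared error between an acquired image and its reconstruction."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return np.mean(diff ** 2)

def virtual_view_quality(textures, rec_textures, depths, rec_depths, depth_weight=0.5):
    """Combine texture and depth noise powers into a PSNR-like quality score."""
    p_texture = np.mean([noise_power(o, r) for o, r in zip(textures, rec_textures)])
    p_depth = np.mean([noise_power(o, r) for o, r in zip(depths, rec_depths)])
    combined = p_texture + depth_weight * p_depth     # assumed combination rule
    return 10.0 * np.log10(255.0 ** 2 / combined) if combined > 0 else float('inf')
```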
  • Publication number: 20130279820
    Abstract: A bitstream includes coded pictures and split-flags for generating a transform tree. The bitstream also includes a partitioning of coding units (CUs) into Prediction Units (PUs). The transform tree is generated according to the split-flags. Nodes in the transform tree represent transform units (TUs) associated with the CUs. The generation splits each TU only if the corresponding split-flag is set. For each PU that includes multiple TUs, the multiple TUs are merged into a larger TU, and the transform tree is modified according to the splitting and merging. Then, the data contained in each PU can be decoded using the TUs associated with the PU according to the transform tree.
    Type: Application
    Filed: June 18, 2013
    Publication date: October 24, 2013
    Inventors: Robert A. Cohen, Anthony Vetro, Huifang Sun
  • Publication number: 20130201177
    Abstract: A quality of a virtual image for a synthetic viewpoint in a 3D scene is determined. The 3D scene is acquired by texture images, and each texture image is associated with a depth image acquired by a camera arranged at a real viewpoint. A texture noise power is based on the acquired texture images and reconstructed texture images corresponding to a virtual texture image. A depth noise power is based on the depth images and reconstructed depth images corresponding to a virtual depth image. The quality of the virtual image is based on a combination of the texture noise power and the depth noise power, and the virtual image is rendered from the reconstructed texture images and the reconstructed depth images.
    Type: Application
    Filed: February 5, 2012
    Publication date: August 8, 2013
    Inventors: Ngai-Man Cheung, Dong Tian, Anthony Vetro, Huifang Sun
  • Patent number: 8494290
    Abstract: A bitstream includes coded pictures and split-flags for generating a transform tree. The bitstream also includes a partitioning of coding units (CUs) into Prediction Units (PUs). The transform tree is generated according to the split-flags. Nodes in the transform tree represent transform units (TUs) associated with the CUs. The generation splits each TU only if the corresponding split-flag is set. For each PU that includes multiple TUs, the multiple TUs are merged into a larger TU, and the transform tree is modified according to the splitting and merging. Then, the data contained in each PU can be decoded using the TUs associated with the PU according to the transform tree.
    Type: Grant
    Filed: June 27, 2011
    Date of Patent: July 23, 2013
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Robert A. Cohen, Anthony Vetro, Huifang Sun
  • Patent number: 8451895
    Abstract: Multiview videos are acquired of a scene with corresponding cameras arranged at poses, such that there is view overlap between any pair of cameras. V-frames are generated from the multiview videos. The V-frames are encoded using only spatial prediction. Then, the V-frames are inserted periodically in an encoded bit stream to provide random temporal access to the multiview videos. Additional view dependency information enables the decoding of a reduced number of frames prior to accessing randomly a target frame for a specified view and time, and decoding the target frame. The method also decodes multiview videos by maintaining a reference picture list for a current frame of a plurality of multiview videos, and predicting each current frame of the plurality of multiview videos according to reference pictures indexed by the associated reference picture list. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: August 30, 2010
    Date of Patent: May 28, 2013
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Anthony Vetro, Huifang Sun, Jun Xin, Emin Martinian, Alexander Behrens
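The abstract above uses periodically inserted, spatially predicted V-frames plus view-dependency information so that only a reduced set of frames must be decoded before a randomly accessed target frame. The sketch below illustrates that dependency walk; frames_to_decode, the view_deps mapping, and the fixed v_frame_period are hypothetical simplifications of the actual syntax and dependency signaling.

```python
# A minimal sketch, assuming one V-frame at the start of each period per view.
def frames_to_decode(target_view, target_time, v_frame_period, view_deps):
    """Return (view, time) pairs that must be decoded before the target frame.

    view_deps maps a view to the views it predicts from; decoding can start at
    the most recent V-frame instead of at the beginning of the sequence.
    """
    anchor_time = (target_time // v_frame_period) * v_frame_period
    needed, stack = set(), [target_view]
    while stack:                        # gather the views the target depends on
        v = stack.pop()
        if v not in {vw for vw, _ in needed}:
            for t in range(anchor_time, target_time + 1):
                needed.add((v, t))
            stack.extend(view_deps.get(v, []))
    return sorted(needed, key=lambda vt: (vt[1], vt[0]))

# Example: view 2 depends on view 1, which depends on view 0; V-frames every 8 frames.
deps = {2: [1], 1: [0], 0: []}
print(frames_to_decode(target_view=2, target_time=11, v_frame_period=8, view_deps=deps))
```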
  • Publication number: 20120281928
    Abstract: A bitstream includes coded pictures and split-flags for generating a transform tree. The bitstream also includes a partitioning of coding units (CUs) into Prediction Units (PUs). The transform tree is generated according to the split-flags. Nodes in the transform tree represent transform units (TUs) associated with the CUs. The generation splits each TU only if the corresponding split-flag is set. For each PU that includes multiple TUs, the multiple TUs are merged into a larger TU, and the transform tree is modified according to the splitting and merging. Then, the data contained in each PU can be decoded using the TUs associated with the PU according to the transform tree.
    Type: Application
    Filed: June 27, 2011
    Publication date: November 8, 2012
    Inventors: Robert A. Cohen, Anthony Vetro, Huifang Sun
  • Publication number: 20120230396
    Abstract: A method decodes a picture in the form of a bitstream. The picture is encoded and represented by vectors of coefficients. Each coefficient is in a quantized form. A specific coefficient is selected in each vector based on a scan order of the vector. Then, a set of modes is inferred based on characteristics of the specific coefficient. Subsequently, the bitstream is decoded according to the set of modes. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: September 30, 2011
    Publication date: September 13, 2012
    Applicant: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Robert A. Cohen, Shantanu Rane, Anthony Vetro, Huifang Sun
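The abstract above infers a set of decoding modes from a specific coefficient selected by scan order. Below is a minimal sketch of that idea; select_specific_coefficient, infer_modes, and the parity/magnitude rules are assumed for illustration and are not the published decoding rules.

```python
# A minimal sketch, assuming the "specific coefficient" is the last nonzero
# coefficient along the scan order and that its parity/magnitude select modes.
def select_specific_coefficient(quantized_coeffs, scan_order):
    """Pick the last nonzero coefficient along the given scan order."""
    last = None
    for idx in scan_order:
        if quantized_coeffs[idx] != 0:
            last = quantized_coeffs[idx]
    return last

def infer_modes(coeff):
    """Map coefficient characteristics to a set of decoding modes (assumed rule)."""
    if coeff is None:
        return {"transform_skip": False, "sign_hiding": False}
    return {"transform_skip": coeff % 2 == 1,     # parity selects one mode
            "sign_hiding": abs(coeff) > 3}        # magnitude selects another

coeffs = [5, 0, -2, 0, 1, 0, 0, 0]                # a toy quantized vector
modes = infer_modes(select_specific_coefficient(coeffs, scan_order=range(len(coeffs))))
```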
  • Publication number: 20110090952
    Abstract: A bitstream includes a sequence of frames. Each frame is partitioned into encoded blocks. For each block, a set of paths is determined at a transform angle determined from a transform index in the bitstream. Transform coefficients are obtained from the bitstream. The transform coefficients include one DC coefficient for each path. An inverse transform is applied to the transform coefficients to produce a decoded video. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: October 21, 2009
    Publication date: April 21, 2011
    Inventors: Robert A. Cohen, Anthony Vetro, Huifang Sun
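The abstract above decodes blocks by grouping pixels into paths at an angle given by a transform index, with one DC coefficient per path, and inverse transforming. The sketch below assumes SciPy's 1-D inverse DCT along each path; paths_for_angle, inverse_path_transform, and the two example angles are illustrative assumptions.

```python
# A minimal sketch, assuming a 1-D inverse DCT per path and square blocks.
import numpy as np
from scipy.fft import idct

def paths_for_angle(block_size, transform_index):
    """Return pixel-coordinate paths; index 0 = horizontal rows, 1 = diagonals."""
    n = block_size
    if transform_index == 0:
        return [[(r, c) for c in range(n)] for r in range(n)]
    return [[(r, c) for r in range(n) for c in range(n) if r + c == d]
            for d in range(2 * n - 1)]

def inverse_path_transform(coeff_lists, block_size, transform_index):
    """Apply a 1-D inverse DCT along each path (first coefficient is the path's DC)."""
    block = np.zeros((block_size, block_size))
    for path, coeffs in zip(paths_for_angle(block_size, transform_index), coeff_lists):
        samples = idct(np.asarray(coeffs, dtype=float), norm='ortho')
        for (r, c), v in zip(path, samples):
            block[r, c] = v
    return block

# Example: a 4x4 block decoded with horizontal paths (transform_index 0).
coeffs = [[10.0, 0.0, 0.0, 0.0]] * 4            # one DC per row, AC coefficients zero
block = inverse_path_transform(coeffs, 4, 0)
```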
  • Publication number: 20110090954
    Abstract: An encoded video in the form of a bitstream includes a sequence of frames, and each frame is partitioned into encoded blocks. A context for decoding is selected for each encoded block. The bitstream is entropy decoded based on the context to obtain a transform indicator difference. A transform index, which indicates a transform type and a transform direction, is based on the transform indicator difference and a predicted transform indicator. Transform coefficients are obtained from the bitstream, and inverse transformed according to the transform index to produce a decoded video. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: October 21, 2009
    Publication date: April 21, 2011
    Inventors: Robert A. Cohen, Sven Klomp, Huifang Sun, Anthony Vetro
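The abstract above reconstructs a transform index from an entropy-decoded transform indicator difference and a predicted transform indicator. A minimal sketch follows; predict_transform_indicator, decode_transform_index, the neighbor-minimum predictor, and the (type, direction) split are assumptions, not the actual prediction or context rules.

```python
# A minimal sketch, assuming the indicator is predicted from causal neighbors
# and then split into a transform type and a transform direction.
def predict_transform_indicator(left_indicator, above_indicator):
    """Predict from causal neighbors; fall back to 0 when unavailable (assumed rule)."""
    candidates = [i for i in (left_indicator, above_indicator) if i is not None]
    return min(candidates) if candidates else 0

def decode_transform_index(decoded_difference, left_indicator, above_indicator,
                           num_directions=4):
    """transform index = prediction + difference, then split into type and direction."""
    indicator = predict_transform_indicator(left_indicator, above_indicator) + decoded_difference
    transform_type = indicator // num_directions
    direction = indicator % num_directions
    return transform_type, direction

# Example: neighbors used indicators 3 and 5; entropy decoding produced a difference of +2.
print(decode_transform_index(2, left_indicator=3, above_indicator=5))
```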
  • Patent number: 7903737
    Abstract: A method randomly accesses multiview videos. Multiview videos are acquired of a scene with corresponding cameras arranged at poses, such that there is view overlap between any pair of cameras. V-frames are generated from the multiview videos. The V-frames are encoded using only spatial prediction. Then, the V-frames are inserted periodically in an encoded bit stream to provide random temporal access to the multiview videos. Additional view dependency information enables the decoding of a reduced number of frames prior to accessing randomly a target frame for a specified view and time, and decoding the target frame.
    Type: Grant
    Filed: March 21, 2006
    Date of Patent: March 8, 2011
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Emin Martinian, Anthony Vetro, Jun Xin, Sehoon Yea, Huifang Sun
  • Publication number: 20100322311
    Abstract: Multiview videos are acquired of a scene with corresponding cameras arranged at poses, such that there is view overlap between any pair of cameras. V-frames are generated from the multiview videos. The V-frames are encoded using only spatial prediction. Then, the V-frames are inserted periodically in an encoded bit stream to provide random temporal access to the multiview videos. Additional view dependency information enables the decoding of a reduced number of frames prior to accessing randomly a target frame for a specified view and time, and decoding the target frame. The method also decodes multiview videos by maintaining a reference picture list for a current frame of a plurality of multiview videos, and predicting each current frame of the plurality of multiview videos according to reference pictures indexed by the associated reference picture list.
    Type: Application
    Filed: August 30, 2010
    Publication date: December 23, 2010
    Inventors: Anthony Vetro, Huifang Sun, Jun Xin, Emin Martinian, Alexander Behrens
  • Patent number: 7728877
    Abstract: A system and method synthesizes multiview videos. Multiview videos are acquired of a scene with corresponding cameras arranged at poses such that there is view overlap between any pair of cameras. A synthesized multiview video is generated from the acquired multiview videos for a virtual camera. A reference picture list is maintained for each current frame of each of the multiview videos and the synthesized video. The reference picture list indexes temporal reference pictures and spatial reference pictures of the acquired multiview videos and the synthesized reference pictures of the synthesized multiview video. Then, each current frame of the multiview videos is predicted according to reference pictures indexed by the associated reference picture list during encoding and decoding. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: November 30, 2005
    Date of Patent: June 1, 2010
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Jun Xin, Emin Martinian, Alexander Behrens, Anthony Vetro, Huifang Sun
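The abstract above maintains a reference picture list that indexes temporal, spatial (inter-view), and synthesized reference pictures for predicting each current frame. The sketch below is a simplified illustration; the ReferencePictureList class, its reference kinds, and the SAD-based reference picker are assumptions rather than the codec's actual reference-list management.

```python
# A minimal sketch, assuming whole-frame references and a SAD selection rule.
import numpy as np

class ReferencePictureList:
    def __init__(self):
        self.entries = []                      # (kind, picture) in prediction order

    def add(self, kind, picture):
        assert kind in ("temporal", "spatial", "synthesized")
        self.entries.append((kind, picture))

    def best_reference(self, current):
        """Pick the reference minimizing the sum of absolute differences to the current frame."""
        sad = [np.abs(current.astype(int) - pic.astype(int)).sum() for _, pic in self.entries]
        return self.entries[int(np.argmin(sad))]

# Example: predict a current frame from whichever reference kind matches best.
rng = np.random.default_rng(0)
cur = rng.integers(0, 256, (16, 16), dtype=np.uint8)
rpl = ReferencePictureList()
rpl.add("temporal", rng.integers(0, 256, (16, 16), dtype=np.uint8))
rpl.add("spatial", rng.integers(0, 256, (16, 16), dtype=np.uint8))
rpl.add("synthesized", cur.copy())             # a synthesized view rendered for a virtual camera
kind, _ = rpl.best_reference(cur)
```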
  • Patent number: 7710462
    Abstract: A method randomly accesses multiview videos. Multiview videos are acquired of a scene with corresponding cameras arranged at poses, such that there is view overlap between any pair of cameras. V-frames are generated from the multiview videos. The V-frames are encoded using only spatial prediction. Then, the V-frames are inserted periodically in an encoded bitstream to provide random temporal access to the multiview videos.
    Type: Grant
    Filed: November 30, 2005
    Date of Patent: May 4, 2010
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Jun Xin, Emin Martinian, Alexander Behrens, Anthony Vetro, Huifang Sun
  • Patent number: 7489726
    Abstract: A method acquires compressed videos. Intra- or inter-frames of each compressed video are acquired at a fixed sampling rate. Joint analysis is applied concurrently and in parallel to the compressed videos to determine a variable and non-uniform temporal sampling rate for each compressed video so that a combined distortion is minimized and a combined frame rate constraint is satisfied. Each compressed video is then sampled at the associated variable and non-uniform temporal sampling rate to produce output compressed videos having variable temporal resolutions. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: September 11, 2003
    Date of Patent: February 10, 2009
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Anthony Vetro, Huifang Sun
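The abstract above jointly analyzes several compressed videos to assign each one a variable, non-uniform temporal sampling rate that minimizes combined distortion under a combined frame-rate constraint. Below is a toy sketch of such an allocation; allocate_frame_rates, the greedy marginal-gain rule, and the 1/rate distortion model are illustrative assumptions, not the patented joint analysis.

```python
# A minimal sketch, assuming a per-video distortion(i, rate) function and a
# shared frame-rate budget distributed one frame-rate unit at a time.
def allocate_frame_rates(distortion, max_rates, total_budget):
    """Greedily give the next unit of frame rate to the video whose distortion drops most."""
    rates = [1] * len(max_rates)                     # every video keeps at least 1 fps
    budget = total_budget - sum(rates)
    while budget > 0:
        gains = [distortion(i, r) - distortion(i, r + 1) if r < max_rates[i] else -1.0
                 for i, r in enumerate(rates)]
        best = max(range(len(rates)), key=lambda i: gains[i])
        if gains[best] <= 0:
            break
        rates[best] += 1
        budget -= 1
    return rates                                     # per-video, possibly non-uniform, sampling rates

# Example: video 0's distortion falls faster with frame rate than video 1's.
toy = lambda i, r: (100.0 if i == 0 else 50.0) / r
print(allocate_frame_rates(toy, max_rates=[30, 30], total_budget=24))
```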