Patents by Inventor Zhenyu Wang

Zhenyu Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11126179
    Abstract: Techniques for determining and/or predicting a trajectory of an object by using the appearance of the object, as captured in an image, are discussed herein. Image data, sensor data, and/or a predicted trajectory of the object (e.g., a pedestrian, animal, and the like) may be used to train a machine learning model that can subsequently be provided to, and used by, an autonomous vehicle for operation and navigation. In some implementations, predicted trajectories may be compared to actual trajectories, and such comparisons may be used as training data for machine learning. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: February 21, 2019
    Date of Patent: September 21, 2021
    Assignee: Zoox, Inc.
    Inventors: Vasiliy Karasev, Tencia Lee, James William Vaisey Philbin, Sarah Tariq, Kai Zhenyu Wang
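The training signal described above, the gap between a predicted trajectory and the trajectory actually observed later, can be illustrated with a minimal sketch. This is not Zoox's implementation; the model, the feature and trajectory shapes, and the MSE loss are assumptions chosen only to make the idea concrete.

```python
# Hypothetical sketch: supervising a trajectory predictor with the difference
# between predicted and actual trajectories, as the abstract describes at a high level.
import torch
import torch.nn as nn

class TrajectoryPredictor(nn.Module):
    """Maps an appearance feature vector (e.g. pooled from a pedestrian crop)
    to a fixed-length future trajectory of 2D waypoints."""
    def __init__(self, feature_dim: int = 128, horizon: int = 8):
        super().__init__()
        self.horizon = horizon
        self.net = nn.Sequential(
            nn.Linear(feature_dim, 256), nn.ReLU(),
            nn.Linear(256, horizon * 2),          # (x, y) per future step
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.net(features).view(-1, self.horizon, 2)

# Toy training step: the loss compares predicted waypoints against the
# trajectory that was actually observed (logged ground truth).
model = TrajectoryPredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

appearance_features = torch.randn(32, 128)     # stand-in for image-derived features
actual_trajectories = torch.randn(32, 8, 2)    # stand-in for observed future paths

predicted = model(appearance_features)
loss = nn.functional.mse_loss(predicted, actual_trajectories)  # predicted-vs-actual comparison
optimizer.zero_grad()
loss.backward()
optimizer.step()
```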
  • Patent number: 11119177
    Abstract: The invention relates to a method for the hyperpolarization of a material sample (4), which has a number of first spin moments (10) of a first spin moment type, wherein the number of first spin moments (10) is brought into interaction with a second spin moment (16) of a second spin moment type, wherein the first spin moments (10) are nuclear spin moments and the second spin moment (16) is an electron spin moment, wherein the first and second spin moments (10, 16) are exposed to a homogeneous magnetic field (B), wherein the second spin moment (16) is polarized along the magnetic field (B), wherein the second spin moment (16) is coherently manipulated by means of a, preferably repeated, sequence (S) having a number of successive high-frequency pulses (Pki, Pk?i) temporally offset to each other by durations (Tki, Tk?i, T), in such a way that a polarization transfer from the second spin moment (16) to the first spin moments (10) occurs, and wherein the durations (Tki, Tk?i, T) are inversely proportional to a Larmor frequency (…
    Type: Grant
    Filed: December 21, 2017
    Date of Patent: September 14, 2021
    Assignee: NVision Imaging Technologies GmbH
    Inventors: Ilai Schwartz, Martin Plenio, Qiong Chen, Zhenyu Wang
  • Patent number: 11110478
    Abstract: A saddle seal assembly for a high-pressure airless spray nozzle having a spray tip includes a metal sealing sleeve and a cylindrical elastic seal. The metal sealing sleeve may include a first saddle-shaped semi-cylinder surface closely matching with an outer surface of the spray tip to form an outer hard sealing structure. The cylindrical elastic seal may include a second saddle-shaped semi-cylinder surface closely matching with the outer surface of the spray tip to form an inner flexible sealing structure. A first end portion of the cylindrical elastic seal is configured to be inserted into the metal sealing sleeve, and the first saddle-shaped semi-cylinder surface and the second saddle-shaped semi-cylinder surface are configured to be spliced to form a continuous saddle-shaped semi-cylinder surface, to thereby seal a stepped inlet hole of the high-pressure airless spray nozzle.
    Type: Grant
    Filed: February 19, 2019
    Date of Patent: September 7, 2021
    Inventors: Zhenyu Wang, Qinghua Li
  • Publication number: 20210271901
    Abstract: Techniques for determining predictions on a top-down representation of an environment based on vehicle action(s) are discussed herein. Sensors of a first vehicle (such as an autonomous vehicle) can capture sensor data of an environment, which may include object(s) separate from the first vehicle (e.g., a vehicle or a pedestrian). A multi-channel image representing a top-down view of the object(s) and the environment can be generated based on the sensor data, map data, and/or action data. Environmental data (object extents, velocities, lane positions, crosswalks, etc.) can be encoded in the image. Action data can represent a target lane, trajectory, etc. of the first vehicle. Multiple images can be generated representing the environment over time and input into a prediction system configured to output prediction probabilities associated with possible locations of the object(s) in the future, which may be based on the actions of the autonomous vehicle. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: May 20, 2021
    Publication date: September 2, 2021
    Inventors: Gowtham Garimella, Marin Kobilarov, Andres Guillermo Morales Morales, Kai Zhenyu Wang
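A minimal sketch of what a multi-channel top-down raster of this kind might look like is given below. The channel layout, grid size, resolution, and data shapes are assumptions for illustration, not the format used in the application.

```python
# Hypothetical sketch of a multi-channel top-down image built from object,
# map, and action data; the channel assignment and resolution are assumptions.
import numpy as np

GRID = 256          # 256 x 256 cells
RESOLUTION = 0.5    # metres per cell

def world_to_cell(x: float, y: float) -> tuple[int, int]:
    """Map ego-centred world coordinates to raster indices."""
    col = int(GRID / 2 + x / RESOLUTION)
    row = int(GRID / 2 - y / RESOLUTION)
    return row, col

def render_top_down(objects, lanes, target_lane) -> np.ndarray:
    """Channels: 0 = object occupancy, 1 = object speed, 2 = lane map,
    3 = action data (the ego vehicle's target lane)."""
    image = np.zeros((4, GRID, GRID), dtype=np.float32)
    for obj in objects:                       # e.g. {"x": ..., "y": ..., "speed": ...}
        r, c = world_to_cell(obj["x"], obj["y"])
        if 0 <= r < GRID and 0 <= c < GRID:
            image[0, r, c] = 1.0
            image[1, r, c] = obj["speed"]
    for x, y in lanes:                        # lane centre-line samples from map data
        r, c = world_to_cell(x, y)
        if 0 <= r < GRID and 0 <= c < GRID:
            image[2, r, c] = 1.0
    for x, y in target_lane:                  # the ego action being considered
        r, c = world_to_cell(x, y)
        if 0 <= r < GRID and 0 <= c < GRID:
            image[3, r, c] = 1.0
    return image

# A stack of such images over time would then be fed to the prediction model.
frame = render_top_down(
    objects=[{"x": 4.0, "y": 2.0, "speed": 1.4}],
    lanes=[(x * 0.5, 0.0) for x in range(-60, 60)],
    target_lane=[(x * 0.5, 3.5) for x in range(-60, 60)],
)
print(frame.shape)   # (4, 256, 256)
```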
  • Patent number: 11106951
    Abstract: A bidirectional image-text retrieval method based on a multi-view joint embedding space performs retrieval with reference to semantic association relationships at both a global level and a local level. These relationships are obtained in a frame-sentence view and a region-phrase view: semantic association information in a global-level subspace of frames and sentences in the frame-sentence view, and in a local-level subspace of regions and phrases in the region-phrase view. In each view, the data is processed by a dual-branch neural network to obtain isomorphic features embedded in a common space, and a constraint condition preserves the original semantic relationships of the data during training. The two semantic association relationships are then merged using multi-view merging and sorting to obtain a more accurate semantic similarity between data. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: January 29, 2018
    Date of Patent: August 31, 2021
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Wenmin Wang, Lu Ran, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Hui Zhao, Wen Gao
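The merging step can be illustrated with a small sketch: similarities computed in the global (frame-sentence) view and the local (region-phrase) view are fused into a single score used to rank retrieval results. The cosine similarity and the weighted-sum fusion rule are assumptions, not the authors' exact merging-and-sorting scheme.

```python
# Hypothetical sketch of multi-view merging: per-view similarity matrices are
# fused into one score for bidirectional image-text retrieval.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T                      # shape: (num_images, num_texts)

def fused_ranking(img_global, txt_global, img_local, txt_local, alpha=0.5):
    """Merge the two views' similarity matrices and rank texts per image."""
    sim_global = cosine_similarity(img_global, txt_global)   # frame-sentence view
    sim_local = cosine_similarity(img_local, txt_local)      # region-phrase view
    sim = alpha * sim_global + (1.0 - alpha) * sim_local     # assumed fusion rule
    return np.argsort(-sim, axis=1), sim

# Toy embeddings, as if already projected into each view's common space.
rng = np.random.default_rng(0)
ranking, sim = fused_ranking(
    rng.normal(size=(3, 64)), rng.normal(size=(5, 64)),
    rng.normal(size=(3, 32)), rng.normal(size=(5, 32)),
)
print(ranking[0])    # text indices ordered by fused similarity for image 0
```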
  • Publication number: 20210263755
    Abstract: Apparatus and method for implementing a virtual display. For example, one embodiment of a graphics processing apparatus comprises at least one configuration register to store framebuffer descriptor information for a first guest running on a first virtual machine (VM) in a virtualized execution environment of a host processor, the framebuffer descriptor information to indicate one or more display pipes assigned to the first guest; and execution circuitry to execute a first driver assigned to the first guest, the first guest to use the first driver to display a framebuffer in a plane associated with one of the display pipes in accordance with the framebuffer descriptor information.
    Type: Application
    Filed: November 30, 2018
    Publication date: August 26, 2021
    Inventors: Kun TIAN, Ankur SHAH, David COWPERTHWAITE, Zhi WANG, Zhenyu WANG, Kalyan KONDAPALLY, Jonathan BLOOMFIELD, Wei ZHANG
  • Patent number: 11100095
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for blockchain-based file querying are provided. One of the methods includes: receiving a query request for a target file, the query request comprising identification information of a user and the target file; obtaining the target file based on the identification information of the user and the target file; providing a query page of the target file, the query page comprising interactive elements for selecting whether to upload the target file to a blockchain; receiving a user selection to upload the target file to the blockchain; hashing the target file to generate a digital digest; signing the digital digest according to an asymmetric encryption algorithm using a private key associated with a cryptographic key pair to obtain a digital signature; and uploading the target file, the digital signature, and a public key associated with the cryptographic key pair. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: February 16, 2021
    Date of Patent: August 24, 2021
    Assignee: ADVANCED NEW TECHNOLOGIES CO., LTD.
    Inventors: Zhenyu Wang, Songbo Yue, Nan Li, Yu Yan
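The hash-sign-upload flow can be sketched in a few lines. This is not the patented system's code: the choice of SHA-256, RSA with PKCS#1 v1.5 padding, and the payload layout are assumptions, and the blockchain client is a stub because the abstract does not specify that interface.

```python
# Hypothetical sketch of the hash -> sign -> upload flow from the abstract.
import hashlib
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa, utils

def prepare_upload(file_bytes: bytes, private_key) -> dict:
    digest = hashlib.sha256(file_bytes).digest()        # digital digest of the target file
    signature = private_key.sign(                       # sign the digest with the private key
        digest, padding.PKCS1v15(), utils.Prehashed(hashes.SHA256())
    )
    public_key_pem = private_key.public_key().public_bytes(
        serialization.Encoding.PEM,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    # The file, the signature, and the public key are what get uploaded.
    return {"file": file_bytes, "signature": signature, "public_key": public_key_pem}

def upload_to_blockchain(payload: dict) -> None:        # stand-in for a real chain client
    print("uploading", len(payload["file"]), "bytes,",
          len(payload["signature"]), "byte signature")

# Toy usage: generate a key pair, prepare the payload, hand it to the stub uploader.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
upload_to_blockchain(prepare_upload(b"example target file contents", key))
```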
  • Patent number: 11100370
    Abstract: Disclosed is a deep discriminative network for person re-identification in an image or a video. Concatenation is carried out on different input images on a color channel by constructing a deep discriminative network, and the obtained splicing result is defined as an original difference space of the different images. The original difference space is sent into a convolutional network. The network outputs the similarity between the two input images by learning difference information in the original difference space, thereby realizing person re-identification. The features of an individual image are not learnt; instead, concatenation is carried out on the input images on a color channel at the beginning, and difference information is learnt on the original space of the images by using a designed network. By introducing an Inception module and embedding it into the model, the learning ability of the network can be improved, and a better differentiation effect can be achieved. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: January 23, 2018
    Date of Patent: August 24, 2021
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Wenmin Wang, Yihao Zhang, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Hui Zhao, Wen Gao
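The core idea, concatenating two person crops on the colour channel and letting a convolutional network score their similarity directly, is sketched below. The tiny network here is an assumption for illustration and deliberately omits the Inception module the abstract mentions.

```python
# Hypothetical sketch: two 3-channel crops are concatenated into a 6-channel
# "original difference space" and a small CNN outputs a same-person score.
import torch
import torch.nn as nn

class PairSimilarityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),  # 6 = 3 + 3 channels
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)   # probability the two crops show the same person

    def forward(self, img_a: torch.Tensor, img_b: torch.Tensor) -> torch.Tensor:
        pair = torch.cat([img_a, img_b], dim=1)      # concatenation on the colour channel
        feats = self.features(pair).flatten(1)
        return torch.sigmoid(self.classifier(feats))

# Toy usage on two random person crops.
net = PairSimilarityNet()
a = torch.randn(1, 3, 128, 64)
b = torch.randn(1, 3, 128, 64)
print(net(a, b).item())    # similarity in [0, 1]
```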
  • Publication number: 20210256365
    Abstract: The present application discloses a cross-media retrieval method based on deep semantic space, which includes a feature generation stage and a semantic space learning stage. In the feature generation stage, a CNN visual feature vector and an LSTM language description vector of an image are generated by simulating a person's perception process for the image, and topic information about a text is explored by using an LDA topic model, thus extracting an LDA text topic vector. In the semantic space learning stage, the training set images are used to train a four-layer Multi-Sensory Fusion Deep Neural Network, and the training set texts are used to train a three-layer text semantic network. Finally, a test image and a test text are respectively mapped into an isomorphic semantic space by the two networks, so as to realize cross-media retrieval. The disclosed method can significantly improve the performance of cross-media retrieval.
    Type: Application
    Filed: August 16, 2017
    Publication date: August 19, 2021
    Inventors: Wenmin Wang, Mengdi Fan, Peilei Dong, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Hui Zhao, Wen Gao
  • Publication number: 20210250613
    Abstract: The present application provides a method and a device for encoding and decoding based on free viewpoint, and relates to the technical field of video encoding. The method includes: generating a planar splicing image and splice information based on multiple single-viewpoint videos at a server side; generating a planar splicing video based on the planar splicing image; generating camera side information of the planar splicing video based on camera side information existing in the multiple single-viewpoint videos; encoding the planar splicing video, the splice information, and the camera side information of the planar splicing video to generate a planar splicing video bit stream; and decoding the planar splicing video bit stream to acquire a virtual viewpoint according to viewpoint information of a viewer at the client side.
    Type: Application
    Filed: April 8, 2019
    Publication date: August 12, 2021
    Inventors: Ronggang WANG, Zhenyu WANG, Wen GAO
  • Patent number: 11087439
    Abstract: The present disclosure provides a hybrid framework-based image bit-depth expansion method and device. The invention fuses a traditional de-banding algorithm and a deep network-based learning algorithm, and can remove unnatural effects in image flat areas whilst more realistically restoring the numerical information of missing bits. The method comprises the extraction of image flat areas, local adaptive pixel value adjustment-based flat area bit-depth expansion, and convolutional neural network-based non-flat area bit-depth expansion. The present invention uses a learning-based method to train an effective deep network to solve the problem of realistically restoring missing bits, whilst using a simple and robust local adaptive pixel value adjustment method in flat areas to effectively inhibit unnatural effects in the flat area such as banding, ringing, and flat noise, improving subjective visual quality of the flat area. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: May 18, 2018
    Date of Patent: August 10, 2021
    Assignee: PEKING UNIVERSITY SHENZHEN GRADUATE SCHOOL
    Inventors: Yang Zhao, Ronggang Wang, Wen Gao, Zhenyu Wang, Wenmin Wang
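The hybrid split, cheap local adjustment in flat areas and a learned model elsewhere, can be sketched as follows. The variance-based flat-area test, the thresholds, and the smoothing choice are assumptions, and the CNN branch is stubbed out with plain zero-filling of the missing low bits.

```python
# Hypothetical sketch of the hybrid bit-depth expansion: flat regions get a
# locally adaptive pixel-value adjustment; non-flat regions would go through a
# trained CNN (stubbed here as zero-filled low bits).
import numpy as np
from scipy.ndimage import uniform_filter

def expand_bit_depth(img8: np.ndarray, flat_threshold: float = 2.0) -> np.ndarray:
    """Expand an 8-bit image to 16 bits."""
    shifted = img8.astype(np.uint16) << 8                   # naive expansion (zero-fill)

    local_mean = uniform_filter(img8.astype(np.float32), size=7)
    local_var = uniform_filter(img8.astype(np.float32) ** 2, size=7) - local_mean ** 2
    flat = local_var < flat_threshold                       # flat-area extraction

    # Flat areas: follow the smoothed local mean so banding steps are filled in
    # rather than copied into the high-bit-depth output.
    smoothed = (local_mean * 256.0).astype(np.uint16)
    out = shifted.copy()
    out[flat] = smoothed[flat]

    # Non-flat areas: the patent uses a CNN; the zero-filled values stand in here.
    return out

img = np.linspace(0, 255, 64 * 64).reshape(64, 64).astype(np.uint8)
out = expand_bit_depth(img)
print(out.dtype, int(out.max()))
```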
  • Publication number: 20210209411
    Abstract: This application provides a method for adjusting a resource of an intelligent analysis device and an apparatus. The method includes: obtaining status information of an intelligent analysis device that accesses a surveillance platform and application information deployed on the intelligent analysis device, where the status information includes resource usage and a quantity of bound cameras; after a camera accesses the surveillance platform, selecting a to-be-bound intelligent analysis device for the camera based on the status information and the application information of the intelligent analysis device that accesses the surveillance platform; and sending, to the selected intelligent analysis device, a command for binding the camera. In this way, the resource of the intelligent analysis device may be automatically allocated. This improves processing efficiency and avoids low efficiency caused by manual processing. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: March 19, 2021
    Publication date: July 8, 2021
    Inventor: Zhenyu Wang
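The selection step can be illustrated with a short sketch: given each device's status information (resource usage, bound-camera count) and deployed applications, pick a device for a newly added camera. The scoring rule, the camera limit, and all names here are assumptions, not the claimed method.

```python
# Hypothetical sketch of selecting a to-be-bound intelligent analysis device
# for a camera based on status and application information.
from dataclasses import dataclass

@dataclass
class AnalysisDevice:
    device_id: str
    resource_usage: float      # 0.0 - 1.0
    bound_cameras: int
    applications: set

def select_device(devices, required_app: str, max_cameras: int = 16):
    """Return the least-loaded device that runs the required application."""
    candidates = [
        d for d in devices
        if required_app in d.applications and d.bound_cameras < max_cameras
    ]
    if not candidates:
        return None
    return min(candidates, key=lambda d: (d.resource_usage, d.bound_cameras))

devices = [
    AnalysisDevice("dev-1", 0.85, 12, {"face", "plate"}),
    AnalysisDevice("dev-2", 0.30, 5, {"face"}),
]
chosen = select_device(devices, required_app="face")
print(chosen.device_id if chosen else "no device available")   # dev-2
# The platform would then send the binding command to the chosen device.
```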
  • Publication number: 20210211656
    Abstract: The present application provides methods, systems, devices, and computer-readable mediums for deblocking filtering. A method of the present application comprises: determining a filtering boundary, and then determining a filter pixel group based on the filtering boundary; determining a filter strength of the filter pixel group, comprising: separately parsing the pixel value difference states of pixel points on both sides of the filtering boundary in the filter pixel group to obtain two one-sided flatness values FL and FR, calculating a comprehensive flatness FS of the filter pixel group, wherein FS = FL + FR, and calculating the filter strength according to FS; and filtering the pixel points included in the filter pixel group according to the filter strength. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: March 6, 2019
    Publication date: July 8, 2021
    Inventors: Ronggang WANG, Zhenyu WANG, Xi XIE, Wen GAO
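A numeric sketch of the strength computation described above is given below: per-side flatness values FL and FR are graded from pixel-value differences across the boundary, summed into FS, and mapped to a filter strength. The grading thresholds and the FS-to-strength table are assumptions, not the codec's actual values.

```python
# Hypothetical sketch of FS = FL + FR driving the deblocking filter strength.
import numpy as np

def side_flatness(p0: int, p1: int, p2: int, threshold: int = 2) -> int:
    """Grade one side of the boundary: 2 = very flat, 1 = flat, 0 = not flat."""
    if abs(p0 - p1) <= threshold and abs(p1 - p2) <= threshold:
        return 2
    if abs(p0 - p1) <= threshold:
        return 1
    return 0

def filter_strength(left: np.ndarray, right: np.ndarray) -> int:
    """left/right hold the three pixels nearest a vertical boundary on each side."""
    FL = side_flatness(left[-1], left[-2], left[-3])     # pixels L0, L1, L2
    FR = side_flatness(right[0], right[1], right[2])     # pixels R0, R1, R2
    FS = FL + FR                                         # comprehensive flatness
    return {0: 0, 1: 0, 2: 1, 3: 2, 4: 3}[FS]            # assumed FS -> strength map

left = np.array([60, 61, 61])     # L2, L1, L0
right = np.array([75, 75, 76])    # R0, R1, R2
print(filter_strength(left, right))   # 3: both sides flat, so filter strongly
```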
  • Publication number: 20210192748
    Abstract: Techniques for determining predictions on a top-down representation of an environment based on object movement are discussed herein. Sensors of a first vehicle (such as an autonomous vehicle) may capture sensor data of an environment, which may include object(s) separate from the first vehicle (e.g., a vehicle, a pedestrian, a bicycle). A multi-channel image representing a top-down view of the object(s) and the environment may be generated based in part on the sensor data. Environmental data (object extents, velocities, lane positions, crosswalks, etc.) may also be encoded in the image. Multiple images may be generated representing the environment over time and input into a prediction system configured to output a trajectory template (e.g., general intent for future movement) and a predicted trajectory (e.g., more accurate predicted movement) associated with each object. The prediction system may include a machine learned model configured to output the trajectory template(s) and the predicted trajector(ies).
    Type: Application
    Filed: December 18, 2019
    Publication date: June 24, 2021
    Inventors: Andres Guillermo Morales Morales, Marin Kobilarov, Gowtham Garimella, Kai Zhenyu Wang
  • Publication number: 20210192678
    Abstract: Disclosed are a panoramic video asymmetrical mapping method and a corresponding inverse mapping method: by means of the mapping methods, mapping a spherical surface corresponding to a panoramic image or video A onto a two-dimensional image or video B; projecting the spherical surface onto an isosceles quadrangular pyramid with a square bottom plane, and further projecting the isosceles quadrangular pyramid onto a planar surface; using isometric projection on the main viewpoint region in the projection and using a relatively high sampling density to ensure that the video quality of the main viewpoint region is high, while using a relatively low sampling density for non-main viewpoint regions so as to reduce bit rate.
    Type: Application
    Filed: May 29, 2018
    Publication date: June 24, 2021
    Inventors: Ronggang WANG, Yueming WANG, Zhenyu WANG, Wen GAO
  • Patent number: 11032984
    Abstract: The present invention discloses genes and SNP markers significantly associated with lint percentage trait in cotton, and use thereof. The genes significantly associated with the lint percentage trait in cotton are genes Gh_D05G1124, Gh_D05G0313, and GhWAKL3. In the present invention, a CottonSNP63K gene array is used for genotyping, and genome re-sequencing data are analyzed to identify SNP markers significantly associated with the lint percentage trait in cotton. Moreover, the present invention also discloses use of the genes and SNP markers, which are significantly associated with the lint percentage trait in cotton, in cotton germplasm identification, breeding, or genetic diversity analysis.
    Type: Grant
    Filed: April 29, 2019
    Date of Patent: June 15, 2021
    Assignee: Institute of Cotton Research of the Chinese Academy of Agricultural Sciences
    Inventors: Wei Li, Daigang Yang, Xiongfeng Ma, Xiaoyu Pei, Yangai Liu, Kunlun He, Fei Zhang, Zhongying Ren, Xiaojian Zhou, Wensheng Zhang, Zhenyu Wang, Chengxiang Song, Kuan Sun
  • Patent number: 11030444
    Abstract: Disclosed is a method for detecting pedestrians in an image by using a Gaussian penalty. Initial pedestrian boundary boxes are screened using a Gaussian penalty to improve pedestrian detection performance, especially for sheltered pedestrians in an image. The method includes acquiring a training data set, a test data set, and pedestrian labels of a pedestrian detection image; using the training data set to train a detection model with a pedestrian detection method, and acquiring initial pedestrian boundary boxes and the confidence degrees and coordinates thereof; performing a Gaussian penalty on the confidence degrees of the pedestrian boundary boxes to obtain the confidence degrees of the pedestrian boundary boxes after the penalty; and obtaining final pedestrian boundary boxes by screening the pedestrian boundary boxes. Thus, repeated boundary boxes of a single pedestrian are removed while boundary boxes of sheltered pedestrians are reserved, thereby realizing the detection of the pedestrians in an image. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: November 24, 2017
    Date of Patent: June 8, 2021
    Assignee: Peking University Shenzhen Graduate School
    Inventors: Wenmin Wang, Peilei Dong, Mengdi Fan, Ronggang Wang, Ge Li, Shengfu Dong, Zhenyu Wang, Ying Li, Hui Zhao, Wen Gao
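A sketch of the penalty-then-screen step is shown below, in the spirit of Gaussian soft-NMS: overlapping boxes' confidences are decayed by exp(-IoU²/σ) rather than deleted outright, so boxes of partially sheltered pedestrians can survive while near-duplicates fade. The value of σ and the final confidence threshold are assumptions, not the authors' exact procedure.

```python
# Hypothetical sketch of a Gaussian penalty on box confidences followed by screening.
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """Boxes as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def gaussian_penalty(boxes: np.ndarray, scores: np.ndarray,
                     sigma: float = 0.5, score_threshold: float = 0.3):
    boxes, scores = boxes.copy(), scores.copy()
    keep = []
    while scores.size:
        i = int(np.argmax(scores))
        keep.append((boxes[i], float(scores[i])))
        kept_box = boxes[i]
        boxes = np.delete(boxes, i, axis=0)
        scores = np.delete(scores, i)
        for j in range(len(scores)):                       # penalise, don't delete
            scores[j] *= np.exp(-iou(kept_box, boxes[j]) ** 2 / sigma)
        mask = scores > score_threshold                    # screening after the penalty
        boxes, scores = boxes[mask], scores[mask]
    return keep

boxes = np.array([[0, 0, 10, 20], [1, 0, 11, 20], [30, 0, 40, 20]], dtype=float)
scores = np.array([0.95, 0.90, 0.80])
print(len(gaussian_penalty(boxes, scores)))   # 2: the near-duplicate is screened out
```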
  • Publication number: 20210165781
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for blockchain-based file querying are provided. One of the methods includes: receiving a query request for a target file, the query request comprising identification information of a user and the target file; obtaining the target file based on the identification information of the user and the target file; providing a query page of the target file, the query page comprising interactive elements for selecting whether to upload the target file to a blockchain; receiving a user selection to upload the target file to the blockchain; hashing the target file to generate a digital digest; signing the digital digest according to an asymmetric encryption algorithm using a private key associated with a cryptographic key pair to obtain a digital signature; and uploading the target file, the digital signature, and a public key associated with the cryptographic key pair.
    Type: Application
    Filed: February 16, 2021
    Publication date: June 3, 2021
    Inventors: Zhenyu WANG, Songbo YUE, Nan LI, Yu YAN
  • Patent number: 11023749
    Abstract: Techniques for determining predictions on a top-down representation of an environment based on vehicle action(s) are discussed herein. Sensors of a first vehicle (such as an autonomous vehicle) can capture sensor data of an environment, which may include object(s) separate from the first vehicle (e.g., a vehicle or a pedestrian). A multi-channel image representing a top-down view of the object(s) and the environment can be generated based on the sensor data, map data, and/or action data. Environmental data (object extents, velocities, lane positions, crosswalks, etc.) can be encoded in the image. Action data can represent a target lane, trajectory, etc. of the first vehicle. Multiple images can be generated representing the environment over time and input into a prediction system configured to output prediction probabilities associated with possible locations of the object(s) in the future, which may be based on the actions of the autonomous vehicle.
    Type: Grant
    Filed: July 5, 2019
    Date of Patent: June 1, 2021
    Assignee: Zoox, Inc.
    Inventors: Gowtham Garimella, Marin Kobilarov, Andres Guillermo Morales Morales, Kai Zhenyu Wang
  • Publication number: 20210156704
    Abstract: Techniques are disclosed for updating map data. The techniques may include detecting a traffic light in a first image, determining, based at least in part on the traffic light detected in the first image, a proposed three-dimensional position of the traffic light in a three-dimensional coordinate system associated with map data. The proposed three-dimensional position may then be projected into a second image to determine a two-dimensional position of the traffic light in the second image, and the second image may be annotated, as an annotated image, with a proposed traffic light location indicator associated with the traffic light. The techniques further include causing a display to display the annotated image to a user, receiving user input associated with the annotated image, and updating, as updated map data, the map data to include a position of the traffic light in the map data based at least in part on the user input. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: November 27, 2019
    Publication date: May 27, 2021
    Inventors: Christopher James Gibson, Kai Zhenyu Wang
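The projection step referenced above can be sketched with a basic pinhole camera model: the proposed 3D traffic-light position in the map frame is transformed into the second image's camera frame and projected to pixel coordinates where the location indicator would be drawn. The intrinsics, pose convention, and omission of lens distortion are assumptions for illustration.

```python
# Hypothetical sketch: project a proposed 3D traffic-light position into a
# second image to get the 2D point to annotate.
import numpy as np

def project_to_image(point_world: np.ndarray,
                     world_from_camera: np.ndarray,
                     intrinsics: np.ndarray):
    """point_world: (3,), world_from_camera: 4x4 camera pose, intrinsics: 3x3 K."""
    camera_from_world = np.linalg.inv(world_from_camera)
    p = camera_from_world @ np.append(point_world, 1.0)    # into the camera frame
    if p[2] <= 0:                                          # behind the camera
        return None
    uvw = intrinsics @ p[:3]
    return uvw[0] / uvw[2], uvw[1] / uvw[2]                # pixel coordinates (u, v)

K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
camera_pose = np.eye(4)                                    # camera at the map origin
proposed_light = np.array([2.0, 1.5, 25.0])                # proposed 3D position
print(project_to_image(proposed_light, camera_pose, K))    # where to draw the indicator
```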