Patents by Inventor Yongzhe Wang

Yongzhe Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240267342
    Abstract: Provided are a node matching method and apparatus, a device and a medium. The method includes: acquiring real-time state parameters of a plurality of allowed input ports and a plurality of allowed output ports of a target node; according to the real-time state parameters and a preset calculation rule, determining serial numbers of all the allowed input ports in an idle state, and determining serial numbers of all the allowed output ports in an idle state; matching the allowed input ports and the allowed output ports, which have corresponding serial numbers, to obtain matching relationships; and performing data transmission according to the matching relationships.
    Type: Application
    Filed: April 29, 2022
    Publication date: August 8, 2024
    Applicant: SUZHOU METABRAIN INTELLIGENT TECHNOLOGY CO., LTD.
    Inventors: Lei ZHANG, Tao YUAN, Yongzhe WEI, Lin WANG
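The matching step described in this abstract can be sketched in a few lines. This is a minimal illustration only: the port-state representation, the ascending-serial pairing rule, and the function name are assumptions, not the patented "preset calculation rule".

```python
# Illustrative sketch: match idle input ports to idle output ports by
# pairing their serial numbers in ascending order. The is-idle encoding
# and the pairing rule are assumptions, not the patented calculation rule.

def match_ports(input_states, output_states):
    """input_states/output_states: dict mapping serial number -> 'idle' or 'busy'.

    Returns (input_serial, output_serial) pairs for ports that are both
    idle, matched in ascending serial-number order.
    """
    idle_inputs = sorted(s for s, state in input_states.items() if state == "idle")
    idle_outputs = sorted(s for s, state in output_states.items() if state == "idle")
    return list(zip(idle_inputs, idle_outputs))

pairs = match_ports(
    {0: "idle", 1: "busy", 2: "idle"},
    {0: "busy", 1: "idle", 2: "idle"},
)
```

Data transmission would then proceed along each matched (input, output) pair.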
  • Publication number: 20230419082
    Abstract: Systems and methods can include or leverage a machine-learned model (e.g., a convolutional neural network) that includes one or more temporal residual connections. In particular, each temporal residual connection can respectively supply one or more sets of intermediate feature data generated by a current instantiation of the model from a current sequential input to one or more other instantiations of the machine-learned model applied to process one or more other sequential inputs. For example, the other instantiations of the machine-learned model can include subsequent instantiations of the machine-learned model applied to process one or more subsequent sequential inputs that follow the current sequential input in a sequence and/or preceding instantiations of the machine-learned model applied to process one or more preceding sequential inputs that precede the current sequential input in a sequence.
    Type: Application
    Filed: December 20, 2021
    Publication date: December 28, 2023
    Inventors: Liangzhe Yuan, Yongzhe Wang
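The temporal residual connection described above can be sketched with a toy model: the intermediate feature computed for frame t is carried forward and added into the intermediate feature of frame t+1. The two-layer linear "network" below is a stand-in assumption for the convolutional model in the abstract.

```python
import numpy as np

# Illustrative sketch of a temporal residual connection: the intermediate
# feature from the current instantiation is supplied to the next
# instantiation of the same model. The tiny linear network is an assumed
# stand-in for the convolutional network named in the abstract.

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 4))
W2 = rng.standard_normal((4, 2))

def run_step(frame, carried_feature=None):
    """Process one sequential input; return (output, intermediate feature)."""
    feature = np.tanh(frame @ W1)            # intermediate feature for this frame
    if carried_feature is not None:
        feature = feature + carried_feature  # temporal residual from prior step
    return feature @ W2, feature

frames = rng.standard_normal((3, 4))         # a sequence of 3 inputs
carried = None
outputs = []
for frame in frames:
    out, carried = run_step(frame, carried)  # hand the feature to the next step
    outputs.append(out)
```

The abstract also covers the reverse direction (supplying features to preceding instantiations); the loop above shows only the forward case.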
  • Patent number: 10482126
    Abstract: A content system identifies shots in a first video and shots in a second video. Shot durations are determined for the identified shots of each video. A histogram is generated for each video, each histogram dividing the identified shots of the corresponding video into a set of buckets divided according to a range of shot durations. The system determines confidence weights for the buckets of each histogram, with the confidence weight for a bucket based on a likelihood of a particular number of identified shots occurring within the range of shot duration for that bucket. A correlation value is computed for the two videos based on a number of identified shots in each bucket of each respective histogram and based on the confidence weights. The content system determines whether the two videos are similar based on the correlation value and a self-correlation value of each video.
    Type: Grant
    Filed: November 30, 2016
    Date of Patent: November 19, 2019
    Assignee: Google LLC
    Inventors: Yongzhe Wang, Anthony Mai
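The histogram-and-correlation pipeline in this abstract can be sketched as below. The bucket edges, the confidence weights, and the normalized-correlation form are illustrative assumptions; the patent derives weights from the likelihood of shot counts per bucket.

```python
import math

# Illustrative sketch: bucket shot durations, weight buckets, and compute a
# normalized weighted correlation. Bucket ranges and weights are assumed
# placeholder values, not the patented ones.

BUCKETS = [(0, 2), (2, 5), (5, 10), (10, 60)]  # shot-duration ranges in seconds

def histogram(shot_durations):
    counts = [0] * len(BUCKETS)
    for d in shot_durations:
        for i, (lo, hi) in enumerate(BUCKETS):
            if lo <= d < hi:
                counts[i] += 1
                break
    return counts

def weighted_correlation(h1, h2, weights):
    num = sum(w * a * b for w, a, b in zip(weights, h1, h2))
    n1 = math.sqrt(sum(w * a * a for w, a in zip(weights, h1)))
    n2 = math.sqrt(sum(w * b * b for w, b in zip(weights, h2)))
    return num / (n1 * n2) if n1 and n2 else 0.0

weights = [1.0, 1.0, 0.5, 0.25]            # placeholder confidence weights
h_a = histogram([1.2, 3.0, 3.5, 7.0])
h_b = histogram([1.0, 2.8, 3.6, 8.0])
similarity = weighted_correlation(h_a, h_b, weights)
```

A similarity decision would then compare this value against each video's self-correlation, as the abstract describes.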
  • Patent number: 9998753
Abstract: Encoding and decoding using prediction dependent transform coding are provided. Encoding and decoding using prediction dependent transform coding may include identifying a current input block from a current input frame from an input video stream, generating a prediction block for the current input block, generating a residual block based on a difference between the current input block and the prediction block, generating, by a processor in response to instructions stored on a non-transitory computer readable medium, an encoded block by encoding the residual block based on the prediction block using prediction dependent transform coding, including the encoded block in an output bitstream, and outputting or storing the output bitstream.
    Type: Grant
    Filed: January 16, 2017
    Date of Patent: June 12, 2018
    Assignee: GOOGLE LLC
    Inventors: Debargha Mukherjee, Yongzhe Wang, Jingning Han
  • Publication number: 20180150469
    Abstract: A content system identifies shots in a first video and shots in a second video. Shot durations are determined for the identified shots of each video. A histogram is generated for each video, each histogram dividing the identified shots of the corresponding video into a set of buckets divided according to a range of shot durations. The system determines confidence weights for the buckets of each histogram, with the confidence weight for a bucket based on a likelihood of a particular number of identified shots occurring within the range of shot duration for that bucket. A correlation value is computed for the two videos based on a number of identified shots in each bucket of each respective histogram and based on the confidence weights. The content system determines whether the two videos are similar based on the correlation value and a self-correlation value of each video.
    Type: Application
    Filed: November 30, 2016
    Publication date: May 31, 2018
    Inventors: Yongzhe Wang, Anthony Mai
  • Patent number: 9619755
    Abstract: A method processes a signal represented as a graph by first determining a graph spectral transform based on the graph. In a spectral domain, parameters of a graph filter are estimated using a training data set of unenhanced and corresponding enhanced signals. The graph filter is derived based on the graph spectral transform and the estimated graph filter parameters. Then, the signal is processed using the graph filter to produce an output signal. The processing can enhance signals such as images by denoising or interpolating missing samples.
    Type: Grant
    Filed: October 23, 2013
    Date of Patent: April 11, 2017
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Yongzhe Wang, Dong Tian, Hassan Mansour, Anthony Vetro, Antonio Ortega
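The graph spectral processing in this abstract can be sketched end to end: build a graph Laplacian, use its eigendecomposition as the graph spectral transform, apply a filter response in the spectral domain, and transform back. The 4-node path graph and the fixed low-pass response below are assumptions; the patent estimates the filter parameters from training pairs of unenhanced and enhanced signals.

```python
import numpy as np

# Illustrative sketch of graph spectral filtering. The path graph and the
# response h(lam) = 1 / (1 + lam) are assumed; the patented method learns
# the response from training data.

# Adjacency of a 4-node path graph: 0-1-2-3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A              # combinatorial graph Laplacian

eigvals, U = np.linalg.eigh(L)              # graph spectral transform (GFT)
h = 1.0 / (1.0 + eigvals)                   # low-pass spectral response

signal = np.array([1.0, 5.0, 1.0, 5.0])     # noisy-looking graph signal
spectrum = U.T @ signal                     # forward GFT
filtered = U @ (h * spectrum)               # filter and inverse GFT
```

Because h(0) = 1, the DC (constant) component survives unchanged while oscillatory components are attenuated, which is the denoising behavior the abstract mentions.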
  • Patent number: 9565451
Abstract: Encoding and decoding using prediction dependent transform coding are provided. Encoding and decoding using prediction dependent transform coding may include identifying a current input block from a current input frame from an input video stream, generating a prediction block for the current input block, generating a residual block based on a difference between the current input block and the prediction block, generating, by a processor in response to instructions stored on a non-transitory computer readable medium, an encoded block by encoding the residual block based on the prediction block using prediction dependent transform coding, including the encoded block in an output bitstream, and outputting or storing the output bitstream.
    Type: Grant
    Filed: October 31, 2014
    Date of Patent: February 7, 2017
    Assignee: GOOGLE INC.
    Inventors: Debargha Mukherjee, Yongzhe Wang, Jingning Han
  • Publication number: 20150112897
Abstract: A method processes a signal represented as a graph by first determining a graph spectral transform based on the graph. In a spectral domain, parameters of a graph filter are estimated using a training data set of unenhanced and corresponding enhanced signals. The graph filter is derived based on the graph spectral transform and the estimated graph filter parameters. Then, the signal is processed using the graph filter to produce an output signal. The processing can enhance signals such as images by denoising or interpolating missing samples.
    Type: Application
    Filed: October 23, 2013
    Publication date: April 23, 2015
    Applicant: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Yongzhe Wang, Dong Tian, Hassan Mansour, Anthony Vetro, Antonio Ortega
  • Patent number: 8994722
    Abstract: An image for a virtual view of a scene is generated based on a set of texture images and a corresponding set of depth images acquired of the scene. A set of candidate depths associated with each pixel of a selected image is determined. For each candidate depth, a cost that estimates a synthesis quality of the virtual image is determined. The candidate depth with a least cost is selected to produce an optimal depth for the pixel. Then, the virtual image is synthesized based on the optimal depth of each pixel and the texture images. The method also applies first and second depth enhancement before, and during view synthesis to correct errors or suppress noise due to the estimation or acquisition of the dense depth images and sparse depth features.
    Type: Grant
    Filed: February 27, 2012
    Date of Patent: March 31, 2015
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Dong Tian, Yongzhe Wang, Anthony Vetro
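The per-pixel candidate-depth search in this abstract can be sketched on a single scanline. The disparity model (a constant divided by depth) and the absolute-difference cost are illustrative simplifications of a real view-synthesis cost.

```python
# Illustrative sketch of candidate-depth selection for view synthesis:
# for each candidate depth, warp the pixel into the reference view and
# score the texture mismatch; keep the depth with the least cost. The
# disparity model and cost function are assumed simplifications.

def synthesis_cost(x, depth, virtual_row, reference_row, focal_baseline=8.0):
    """Cost of assuming `depth` at column x: texture mismatch after warping."""
    disparity = int(round(focal_baseline / depth))
    ref_x = x - disparity
    if not 0 <= ref_x < len(reference_row):
        return float("inf")                 # warped outside the reference view
    return abs(virtual_row[x] - reference_row[ref_x])

def best_depth(x, candidates, virtual_row, reference_row):
    return min(candidates,
               key=lambda d: synthesis_cost(x, d, virtual_row, reference_row))

virtual_row = [10, 20, 30, 40, 50]
reference_row = [30, 40, 50, 60, 70]        # same scene shifted by 2 pixels
depth = best_depth(4, [2.0, 4.0, 8.0], virtual_row, reference_row)
```

The full method would repeat this per pixel and then synthesize the virtual image from the selected optimal depths, with the depth-enhancement passes the abstract mentions applied before and during synthesis.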
  • Patent number: 8514932
    Abstract: Systems, methods and articles of manufacture are disclosed for performing scalable video coding. In one embodiment, non-linear functions are used to predict source video data using retargeted video data. Differences may be determined between the predicted video data and the source video data. The retargeted video data, the non-linear functions, and the differences may be jointly encoded into a scalable bitstream. The scalable bitstream may be transmitted and selectively decoded to produce output video for one of a plurality of predefined target platforms.
    Type: Grant
    Filed: February 8, 2010
    Date of Patent: August 20, 2013
    Assignee: Disney Enterprises, Inc.
    Inventors: Nikolce Stefanoski, Aljosa Smolic, Yongzhe Wang, Manuel Lang, Alexander Hornung, Markus Gross
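The prediction-plus-residual structure in this abstract can be sketched in one dimension: predict the source from the retargeted (base-layer) samples with a non-linear mapping, then keep only the residual for the enhancement layer. The square-root midpoint predictor below is an assumed example of a non-linear function, not the one in the patent.

```python
# Illustrative sketch of scalable coding with non-linear prediction:
# the base layer is a retargeted (subsampled) signal, the source is
# predicted from it non-linearly, and only the residual is kept.
# The predictor is an assumed example of a non-linear function.

def predict_source(retargeted):
    """Upsample the retargeted signal with a simple non-linear predictor."""
    predicted = []
    for i in range(len(retargeted)):
        a = retargeted[i]
        b = retargeted[min(i + 1, len(retargeted) - 1)]
        predicted.append(a)
        predicted.append(((a ** 0.5 + b ** 0.5) / 2) ** 2)  # non-linear midpoint
    return predicted

source = [1.0, 4.0, 4.0, 9.0, 9.0, 16.0]
retargeted = source[::2]                    # base layer: every other sample
prediction = predict_source(retargeted)
residual = [s - p for s, p in zip(source, prediction)]
```

In the scheme described, the retargeted video, the non-linear functions, and these residuals would be jointly encoded into one scalable bitstream, so a decoder can stop at the base layer or reconstruct the full source.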
  • Publication number: 20120206442
    Abstract: An image for a virtual view of a scene is generated based on a set of texture images and a corresponding set of depth images acquired of the scene. A set of candidate depth values associated with each pixel of a selected image is determined. For each candidate depth value, a cost that estimates a synthesis quality of the virtual image is determined. The candidate depth value with a least cost is selected to produce an optimal depth value for the pixel. Then, the virtual image is synthesized based on the optimal depth value of each pixel and the texture images.
    Type: Application
    Filed: November 30, 2011
    Publication date: August 16, 2012
    Inventors: Dong Tian, Yongzhe Wang, Anthony Vetro
  • Publication number: 20120206451
    Abstract: An image for a virtual view of a scene is generated based on a set of texture images and a corresponding set of depth images acquired of the scene. A set of candidate depths associated with each pixel of a selected image is determined. For each candidate depth, a cost that estimates a synthesis quality of the virtual image is determined. The candidate depth with a least cost is selected to produce an optimal depth for the pixel. Then, the virtual image is synthesized based on the optimal depth of each pixel and the texture images. The method also applies first and second depth enhancement before, and during view synthesis to correct errors or suppress noise due to the estimation or acquisition of the dense depth images and sparse depth features.
    Type: Application
    Filed: February 27, 2012
    Publication date: August 16, 2012
    Inventors: Dong Tian, Yongzhe Wang, Anthony Vetro
  • Publication number: 20110194024
    Abstract: Systems, methods and articles of manufacture are disclosed for performing scalable video coding. In one embodiment, non-linear functions are used to predict source video data using retargeted video data. Differences may be determined between the predicted video data and the source video data. The retargeted video data, the non-linear functions, and the differences may be jointly encoded into a scalable bitstream. The scalable bitstream may be transmitted and selectively decoded to produce output video for one of a plurality of predefined target platforms.
    Type: Application
    Filed: February 8, 2010
    Publication date: August 11, 2011
    Inventors: Nikolce STEFANOSKI, Aljosa SMOLIC, Yongzhe WANG, Manuel LANG, Alexander HORNUNG, Markus GROSS