Patents by Inventor Zibo Meng

Zibo Meng has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240327911
    Abstract: The present invention relates to a method of identifying a T-cell reactive to cells presenting a T-cell activating antigen (cancer-reactive T-cell), comprising (a) determining expression of at least one of CCL4, CCL4L2, CCL3, CCL3L1, and CXCL13 in T-cells from a sample of a subject; and (b) identifying a cancer-reactive T-cell based on the determination of step (a). The present invention also relates to a method of identifying a TCR binding to a cancer cell of a subject, said method comprising (A) identifying a cancer-reactive T-cell according to the aforesaid method; (B) providing the amino acid sequences of at least the complementarity determining regions (CDRs) of the TCR of the cancer-reactive T-cell identified in step (A); and, thereby, (C) identifying a TCR binding to a cancer cell. The present invention also relates to further methods and to cancer-reactive T-cells related thereto.
    Type: Application
    Filed: March 23, 2022
    Publication date: October 3, 2024
    Inventors: Rienk OFFRINGA, Zibo MENG, Aaron RODRIGUEZ EHRENFRIED, Laura Katharina STEFFENS, Chin Leng TAN
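    The screening step in the abstract above amounts to flagging T-cells whose expression of one or more of the listed marker genes exceeds a cutoff. Below is a minimal Python sketch of that selection over a cells-by-genes expression table; the pandas layout, the numeric threshold, and the "any marker above cutoff" rule are illustrative assumptions, not values taken from the claims.

        import pandas as pd

        # Hypothetical single-cell expression matrix: rows are T-cells, columns are genes.
        expr = pd.DataFrame(
            {"CCL4": [0.1, 5.2, 0.0], "CCL3": [0.3, 4.8, 0.2], "CXCL13": [0.0, 2.1, 0.1]},
            index=["cell_1", "cell_2", "cell_3"],
        )

        MARKERS = ["CCL4", "CCL4L2", "CCL3", "CCL3L1", "CXCL13"]
        THRESHOLD = 1.0  # assumed cutoff; the abstract does not fix a numeric value

        # Step (a): determine expression of the marker genes present in the data.
        present = [g for g in MARKERS if g in expr.columns]

        # Step (b): call a cell "cancer-reactive" if at least one marker exceeds the cutoff.
        reactive = expr[present].gt(THRESHOLD).any(axis=1)
        print(expr.index[reactive].tolist())  # -> ['cell_2']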
  • Patent number: 11991487
    Abstract: In an embodiment, a method includes receiving and processing a first color image by an encoder. The first color image includes a first portion of the first color image and a second portion of the first color image located at different locations of the first color image. The encoder is configured to output at least one first feature map including fused global information and local information such that whether a color consistency relationship between the first portion of the first color image and the second portion of the first color image exists is encoded into the fused global information and local information.
    Type: Grant
    Filed: November 29, 2021
    Date of Patent: May 21, 2024
    Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
    Inventors: Zibo Meng, Chiuman Ho
  • Patent number: 11972543
    Abstract: A method includes receiving and processing a first image by an encoder-decoder network. The first image includes a first portion and a second portion located at different locations. The encoder-decoder network includes an encoder and a decoder. The encoder is configured to output at least one feature map including global information and local information such that whether a color consistency relationship between the first portion and the second portion of the first image exists is encoded into the global information and the local information. The decoder is configured to output a second image generated from the at least one feature map, wherein a first portion of the second image corresponding to the first portion of the first image and a second portion of the second image corresponding to the second portion of the first image are restored considering whether the color consistency relationship exists.
    Type: Grant
    Filed: May 3, 2021
    Date of Patent: April 30, 2024
    Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
    Inventors: Zibo Meng, Chiuman Ho
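    As a rough illustration of the kind of encoder-decoder described in the two color-consistency entries above, the PyTorch sketch below fuses a globally pooled descriptor (image-wide context, where a color-consistency relationship between distant portions of the image could be expressed) with per-pixel local features before decoding an output image. The layer sizes and the concatenation-based fusion are assumptions made for illustration, not the patented architecture.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class GlobalLocalEncoderDecoder(nn.Module):
            """Toy encoder-decoder that mixes global (image-wide) and local features."""

            def __init__(self, ch=16):
                super().__init__()
                self.local = nn.Conv2d(3, ch, 3, padding=1)          # local features
                self.global_pool = nn.AdaptiveAvgPool2d(1)           # global descriptor
                self.fuse = nn.Conv2d(2 * ch, ch, 1)                 # fuse global + local
                self.decode = nn.Conv2d(ch, 3, 3, padding=1)         # back to an image

            def forward(self, x):
                local = F.relu(self.local(x))
                g = self.global_pool(local)                          # (N, ch, 1, 1)
                g = g.expand_as(local)                               # broadcast to every pixel
                fused = F.relu(self.fuse(torch.cat([local, g], dim=1)))
                return self.decode(fused)

        out = GlobalLocalEncoderDecoder()(torch.rand(1, 3, 64, 64))
        print(out.shape)  # torch.Size([1, 3, 64, 64])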
  • Patent number: 11887280
    Abstract: In an embodiment, a method includes receiving a low-light digital image; generating, by at least one processor, a resulting digital image by processing the low-light digital image with an encoder-decoder neural network comprising a plurality of convolutional layers classified into a downsampling stage and an upsampling stage, and a multi-scale context aggregating block configured to aggregate multi-scale context information of the low-light digital image and employed between the downsampling stage and the upsampling stage; and outputting, by the at least one processor, the resulting digital image to an output device. In the upsampling stage, spatial resolution increases by using a bilinear interpolation operation performed before every few convolutional layers to speed up the inference time of the network.
    Type: Grant
    Filed: May 3, 2021
    Date of Patent: January 30, 2024
    Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
    Inventor: Zibo Meng
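    The core idea of the entry above, a downsampling stage, a multi-scale context aggregating block between the stages, and an upsampling stage that enlarges feature maps with bilinear interpolation before the convolutions, can be sketched in PyTorch as follows. Parallel dilated convolutions stand in for the multi-scale aggregation, which is one common realization but an assumption here.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class MultiScaleContextBlock(nn.Module):
            """Aggregates context at several scales via parallel dilated convolutions."""
            def __init__(self, ch):
                super().__init__()
                self.branches = nn.ModuleList(
                    nn.Conv2d(ch, ch, 3, padding=d, dilation=d) for d in (1, 2, 4)
                )
                self.merge = nn.Conv2d(3 * ch, ch, 1)

            def forward(self, x):
                return self.merge(torch.cat([F.relu(b(x)) for b in self.branches], dim=1))

        class LowLightNet(nn.Module):
            def __init__(self, ch=16):
                super().__init__()
                self.down = nn.Conv2d(3, ch, 3, stride=2, padding=1)   # downsampling stage
                self.context = MultiScaleContextBlock(ch)              # between the two stages
                self.up_conv = nn.Conv2d(ch, 3, 3, padding=1)          # upsampling stage

            def forward(self, x):
                f = F.relu(self.down(x))
                f = self.context(f)
                # Bilinear interpolation restores spatial resolution before the convolution,
                # keeping the upsampling stage cheap relative to transposed convolutions.
                f = F.interpolate(f, scale_factor=2, mode="bilinear", align_corners=False)
                return self.up_conv(f)

        print(LowLightNet()(torch.rand(1, 3, 64, 64)).shape)  # torch.Size([1, 3, 64, 64])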
  • Publication number: 20230327030
    Abstract: A solar cell and a preparation method therefor are provided. The solar cell includes a silicon substrate having a first or second polarity, where the substrate includes first and second sides opposite to each other; a first passivation structure on the first side of the substrate, where a portion of the first passivation structure farthest from the substrate has the first polarity and the position where the first passivation structure is located is a first electrode region; a second passivation structure on a side of the first passivation structure away from the substrate, where a portion of the second passivation structure farthest from the substrate has the second polarity, the position where the second passivation structure is located is a second electrode region, the second and first electrode regions do not overlap, and the second passivation structure has a process temperature lower than that of the first passivation structure; and a first electrode in the first electrode region on a side of the second passivation structure away from the substrate and a second electrode in the second electrode region on a side of the second passivation structure away from the substrate.
    Type: Application
    Filed: June 12, 2023
    Publication date: October 12, 2023
    Inventors: Hongwei LI, Zibo MENG, Tingting HUO, Guangtao YANG, Xueling ZHANG, Daming CHEN
  • Patent number: 11741578
    Abstract: In an embodiment, a method includes receiving a digital image; generating, by at least one processor, a resulting digital image by processing the digital image with an encoder-decoder neural network comprising a plurality of convolutional layers classified into a downsampling stage and an upsampling stage, and a multi-scale context aggregating block configured to aggregate multi-scale context information of the digital image and employed between the downsampling stage and the upsampling stage; and outputting, by the at least one processor, the resulting digital image to an output device.
    Type: Grant
    Filed: May 18, 2021
    Date of Patent: August 29, 2023
    Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
    Inventors: Zibo Meng, Chiuman Ho
  • Patent number: 11631277
    Abstract: A method for training a model, the method including: defining a primary model for identifying a class of input data based on a first characteristic of the input data; defining a secondary model for detecting a change to a second characteristic between multiple input data captured at different times; defining a forward link from an output of an intermediate layer of the secondary model to an input of an intermediate layer of the primary model; and training the primary model and the secondary model in parallel based on a training set of input data.
    Type: Grant
    Filed: November 7, 2018
    Date of Patent: April 18, 2023
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Haibo Wang, Zibo Meng, Jia Xue, Cornelis Conradus Adrianus Maria Van Zon
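    The training scheme described above couples two models through a forward link: an intermediate activation of the secondary (change-detection) model is fed into an intermediate layer of the primary (classification) model, and both are optimized together on the same training set. A minimal PyTorch sketch under assumed toy dimensions and losses:

        import torch
        import torch.nn as nn

        class SecondaryModel(nn.Module):
            """Detects a change between two inputs captured at different times."""
            def __init__(self):
                super().__init__()
                self.mid = nn.Linear(2 * 8, 4)      # intermediate layer; its output is shared
                self.head = nn.Linear(4, 1)

            def forward(self, x_t0, x_t1):
                h = torch.relu(self.mid(torch.cat([x_t0, x_t1], dim=1)))
                return self.head(h), h              # also expose the intermediate activation

        class PrimaryModel(nn.Module):
            """Classifies the current input, aided by the secondary model's features."""
            def __init__(self):
                super().__init__()
                self.feat = nn.Linear(8, 4)
                self.head = nn.Linear(4 + 4, 3)     # forward link enters here

            def forward(self, x, link):
                h = torch.relu(self.feat(x))
                return self.head(torch.cat([h, link], dim=1))

        primary, secondary = PrimaryModel(), SecondaryModel()
        opt = torch.optim.Adam(list(primary.parameters()) + list(secondary.parameters()))

        x_t0, x_t1 = torch.rand(5, 8), torch.rand(5, 8)
        cls_target, chg_target = torch.randint(0, 3, (5,)), torch.rand(5, 1)

        chg_pred, link = secondary(x_t0, x_t1)
        cls_pred = primary(x_t1, link)
        loss = nn.functional.cross_entropy(cls_pred, cls_target) \
             + nn.functional.binary_cross_entropy_with_logits(chg_pred, chg_target)
        loss.backward()                             # both models trained in parallel
        opt.step()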
  • Publication number: 20220406050
    Abstract: A method includes: aggregating information from a corresponding combination of all of the multi-scale first dimensional receptive fields and each of the multi-scale second dimensional receptive fields, so that information from multi-scale first and second dimensional receptive fields corresponding to the multi-scale second dimensional receptive fields is obtained; wherein the first dimension of the multi-scale first dimensional receptive fields is one of spatial and temporal, and the second dimension of the multi-scale second dimensional receptive fields is the other of spatial and temporal; wherein a corresponding first convolutional neural network operation set provides each of the multi-scale second dimensional receptive fields, and each operation of the first convolutional neural network operation set has a corresponding first dimensional local-to-local constraint; and aggregating the information from the multi-scale first and second dimensional receptive fields to obtain aggregated multi-scale first
    Type: Application
    Filed: August 26, 2022
    Publication date: December 22, 2022
    Inventors: Zibo Meng, Ming Chen, Chiuman Ho
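    One way to read the abstract above is as pairing every temporal receptive-field scale with every spatial scale and then aggregating the results. The PyTorch sketch below does this with factorized 3-D convolutions (temporal kernels of different lengths followed by spatial kernels of different sizes); the factorized decomposition, the scale choices, and the summation used for aggregation are assumptions made for illustration.

        import torch
        import torch.nn as nn

        class SpatioTemporalAggregation(nn.Module):
            """Pairs every temporal scale with every spatial scale and sums the results."""
            def __init__(self, ch=8, t_scales=(1, 3), s_scales=(1, 3)):
                super().__init__()
                self.pairs = nn.ModuleList()
                for t in t_scales:
                    for s in s_scales:
                        self.pairs.append(nn.Sequential(
                            # temporal receptive field of size t (local-to-local in time)
                            nn.Conv3d(ch, ch, (t, 1, 1), padding=(t // 2, 0, 0)),
                            # spatial receptive field of size s
                            nn.Conv3d(ch, ch, (1, s, s), padding=(0, s // 2, s // 2)),
                        ))

            def forward(self, x):                       # x: (N, C, T, H, W)
                return sum(p(x) for p in self.pairs)    # aggregate across all scale pairs

        print(SpatioTemporalAggregation()(torch.rand(1, 8, 4, 16, 16)).shape)
        # torch.Size([1, 8, 4, 16, 16])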
  • Publication number: 20220343468
    Abstract: An image processing method, an electronic device, and a computer-readable storage medium are provided. In the method, a first image is processed through a U-net to obtain a second image, and the second image is a noise map of the first image. The U-net includes an encoding network, a decoding network, and a bottleneck network between the encoding network and the decoding network. The bottleneck network includes a global pooling layer, a bilinear upscaling layer, and a 1×1 convolutional layer. Moreover, a third image is generated according to the first image and the second image, and the third image is a denoised map of the first image.
    Type: Application
    Filed: June 30, 2022
    Publication date: October 27, 2022
    Inventors: Zibo MENG, Chiuman HO
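    The bottleneck described in the entry above (global pooling, bilinear upscaling back to the feature-map size, then a 1x1 convolution) and the final step of combining the first image with the predicted noise map can be sketched in PyTorch as follows. The tiny encoder and decoder wrapped around the bottleneck and the subtraction used to form the denoised image are illustrative assumptions.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class Bottleneck(nn.Module):
            """Global pooling -> bilinear upscaling -> 1x1 convolution, as in the abstract."""
            def __init__(self, ch):
                super().__init__()
                self.pool = nn.AdaptiveAvgPool2d(1)
                self.conv1x1 = nn.Conv2d(ch, ch, 1)

            def forward(self, x):
                g = self.pool(x)                                            # (N, C, 1, 1)
                g = F.interpolate(g, size=x.shape[-2:], mode="bilinear",
                                  align_corners=False)                      # back to (H, W)
                return self.conv1x1(g)

        class TinyDenoiser(nn.Module):
            def __init__(self, ch=16):
                super().__init__()
                self.enc = nn.Conv2d(3, ch, 3, padding=1)      # stand-in encoding network
                self.mid = Bottleneck(ch)
                self.dec = nn.Conv2d(ch, 3, 3, padding=1)      # stand-in decoding network

            def forward(self, first_image):
                second_image = self.dec(self.mid(F.relu(self.enc(first_image))))  # noise map
                third_image = first_image - second_image       # denoised map (assumed rule)
                return third_image

        print(TinyDenoiser()(torch.rand(1, 3, 32, 32)).shape)   # torch.Size([1, 3, 32, 32])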
  • Patent number: 11398016
    Abstract: In an embodiment, a method includes receiving a low-light digital image; generating, by at least one processor, a resulting digital image by processing the low-light digital image with an encoder-decoder neural network comprising a plurality of convolutional layers classified into a downsampling stage and an upscaling stage, and a multi-scale context aggregating block configured to aggregate multi-scale context information of the low-light digital image and employed between the downsampling stage and the upscaling stage; and outputting, by the at least one processor, the resulting digital image to an output device.
    Type: Grant
    Filed: March 1, 2021
    Date of Patent: July 26, 2022
    Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
    Inventors: Zibo Meng, Chiuman Ho
  • Publication number: 20220222321
    Abstract: A tensor processing method and apparatus, and an electronic device are provided. In the method, a first matrix is determined based on a first tensor, and a first sub-matrix is extracted from the first matrix. The first matrix includes all elements of the first tensor, and the first sub-matrix includes all elements of the first subtensor, and the first subtensor is a subset of the first tensor.
    Type: Application
    Filed: March 29, 2022
    Publication date: July 14, 2022
    Inventors: Ming Chen, Chiuman Ho, Zibo Meng
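    For a simple concrete case, the relationship described above (a matrix holding all elements of the tensor, and a sub-matrix holding exactly the elements of a subtensor) can be shown by reshaping a 3-D tensor into a 2-D matrix and slicing it. The shapes and the channel-wise subtensor in this numpy sketch are assumptions chosen so that the mapping is exact.

        import numpy as np

        # First tensor: shape (C, H, W).
        tensor = np.arange(2 * 3 * 4).reshape(2, 3, 4)

        # First matrix: contains all elements of the first tensor (one row per channel).
        matrix = tensor.reshape(2, 3 * 4)

        # First subtensor: channel 0 only (a subset of the first tensor).
        subtensor = tensor[0:1]

        # First sub-matrix: the rows of the matrix that hold exactly the subtensor's elements.
        submatrix = matrix[0:1, :]

        assert np.array_equal(submatrix.reshape(subtensor.shape), subtensor)
        print(matrix.shape, submatrix.shape)   # (2, 12) (1, 12)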
  • Publication number: 20220207651
    Abstract: Provided are method and apparatus for image processing. A neural network includes an encoding network, an intermediate network, and a decoding network including multiple input layers and an output layer. In the method, at an input layer of the decoding network, first output data is received from a previous layer, and a first operation is performed on the first output data to obtain first input data of the input layer, the input layer is any one of multiple input layers. At the input layer, second output data is received from a corresponding layer of the encoding network, and a second operation is performed on the second output data to obtain second input data of the input layer. Output data of the input layer is obtained according to the first and second input data. Operations are performed in a next layer based on the output data to obtain an output image.
    Type: Application
    Filed: March 17, 2022
    Publication date: June 30, 2022
    Inventors: Zibo MENG, Runsheng XU, Chiuman HO
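    Each input layer of the decoder described above takes two streams: the previous layer's output, transformed by a first operation, and the corresponding encoder layer's output, transformed by a second operation, and combines them before computing its own output. In the PyTorch sketch below, bilinear upsampling stands in for the first operation, a 1x1 convolution for the second, and addition for the combination; all three choices are assumptions.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class DecoderInputLayer(nn.Module):
            """One decoder input layer: combine upsampled decoder features with an encoder skip."""
            def __init__(self, ch):
                super().__init__()
                self.second_op = nn.Conv2d(ch, ch, 1)   # applied to the encoder's output
                self.conv = nn.Conv2d(ch, ch, 3, padding=1)

            def forward(self, first_output, second_output):
                # First operation on data from the previous decoder layer.
                first_input = F.interpolate(first_output, scale_factor=2, mode="bilinear",
                                            align_corners=False)
                # Second operation on data from the corresponding encoder layer.
                second_input = self.second_op(second_output)
                # Combine the two inputs and compute this layer's output.
                return F.relu(self.conv(first_input + second_input))

        layer = DecoderInputLayer(ch=8)
        prev = torch.rand(1, 8, 16, 16)       # from the previous decoder layer
        skip = torch.rand(1, 8, 32, 32)       # from the corresponding encoder layer
        print(layer(prev, skip).shape)        # torch.Size([1, 8, 32, 32])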
  • Publication number: 20220207870
    Abstract: Provided are a method and apparatus for image processing and a terminal. In the method, an input image is received and processed in a neural network to obtain an output image according to global information of the input image. The terminal includes at least one processor; and a memory coupled with the at least one processor and configured to store instructions which, when executed by the at least one processor, are operable with the at least one processor to implement a neural network to receive an input image and process the input image in the neural network to obtain an output image according to global information of the input image.
    Type: Application
    Filed: March 16, 2022
    Publication date: June 30, 2022
    Inventors: Zibo Meng, Runsheng Xu, Jin Pi, Chiuman Ho
  • Publication number: 20220207109
    Abstract: A convolution method, an electronic device and a non-transitory computer-readable storage medium are provided. The method includes that: multiple resultant matrices respectively corresponding to multiple 1×1 convolution kernel elements in a filter are added to different sub-regions of a first output matrix, to obtain an accumulating feature of the first output matrix, and a second output matrix is extracted from the first output matrix with the accumulating feature. A size of the second output matrix is less than a size of the first output matrix.
    Type: Application
    Filed: March 17, 2022
    Publication date: June 30, 2022
    Inventors: Ming CHEN, Chiuman HO, Zibo MENG
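    The accumulation described in the entry above can be illustrated with a shift-and-add view of convolution: each 1x1 kernel element of a KxK filter produces a result matrix that is added into a different sub-region of an oversized first output matrix, and the smaller second output matrix is then extracted from it. The single-channel 3x3 numpy example below, cross-checked against a direct sliding-window correlation, is an illustrative assumption about how the claimed steps fit together.

        import numpy as np

        x = np.random.rand(6, 6)                   # single-channel input
        w = np.random.rand(3, 3)                   # 3x3 filter = nine 1x1 kernel elements
        K = 3
        H, W = x.shape

        # First output matrix, oversized so that every shifted sub-region fits.
        first_out = np.zeros((H + K - 1, W + K - 1))

        for i in range(K):
            for j in range(K):
                # The result matrix of the 1x1 kernel element w[i, j] is a scaled copy of
                # the input; it is accumulated into a sub-region whose offset depends on
                # which kernel element produced it.
                di, dj = K - 1 - i, K - 1 - j
                first_out[di:di + H, dj:dj + W] += w[i, j] * x

        # Second output matrix: the valid region, smaller than the first output matrix.
        second_out = first_out[K - 1:H, K - 1:W]

        # Cross-check against a direct sliding-window correlation.
        direct = np.array([[np.sum(x[r:r + K, c:c + K] * w)
                            for c in range(W - K + 1)] for r in range(H - K + 1)])
        print(np.allclose(second_out, direct))     # True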
  • Publication number: 20220157059
    Abstract: The present disclosure relates to a Temporal Information Aggregation (TIA) neural network block that extracts underlying multi-scale temporal information. By applying TIA, information at different temporal scales may be effectively extracted. The TIA block is implemented as a self-contained block and thus may be inserted into any architecture. The extracted multi-scale temporal information contributes to the final output as a residual.
    Type: Application
    Filed: November 30, 2021
    Publication date: May 19, 2022
    Inventors: Zibo MENG, Ming CHEN, Chiuman HO
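    A TIA block as outlined in the abstract above can be read as several temporal convolutions with different receptive-field lengths whose outputs are merged and added back to the input as a residual, so the block can be dropped into an existing architecture without changing its tensor shapes. The kernel lengths and the 1-D formulation in this PyTorch sketch are assumptions.

        import torch
        import torch.nn as nn

        class TIABlock(nn.Module):
            """Extracts multi-scale temporal information and adds it as a residual."""
            def __init__(self, ch, scales=(1, 3, 5)):
                super().__init__()
                self.branches = nn.ModuleList(
                    nn.Conv1d(ch, ch, k, padding=k // 2) for k in scales
                )
                self.merge = nn.Conv1d(len(scales) * ch, ch, 1)

            def forward(self, x):                       # x: (N, C, T)
                multi = torch.cat([b(x) for b in self.branches], dim=1)
                return x + self.merge(multi)            # residual contribution

        print(TIABlock(ch=8)(torch.rand(2, 8, 20)).shape)   # torch.Size([2, 8, 20])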
  • Publication number: 20220092294
    Abstract: A method includes: receiving a facial image (204); obtaining a facial shape (206) using the facial image (204); defining, using the facial image (204) and the facial shape (206), a plurality of facial component-specific local regions, wherein each of the facial component-specific local regions includes a corresponding separately considered facial component of a plurality of separately considered facial components from the facial image (204), and the corresponding separately considered facial component of the separately considered facial components corresponds to a corresponding first facial landmark set (208) of a plurality of first facial landmark sets in the facial shape (206); for each of the facial component-specific local regions, performing a cascaded regression method using each of the facial component-specific local regions and a corresponding facial landmark set (208) of the first facial landmark sets to obtain a corresponding facial landmark set (210) of a plurality of second facial landmark sets.
    Type: Application
    Filed: December 7, 2021
    Publication date: March 24, 2022
    Inventors: Runsheng XU, Zibo MENG, Chiuman HO
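    The per-component cascaded regression step above can be sketched as a loop that, for each facial component's local region, repeatedly refines its landmark set with a stage-specific regressor applied to features sampled around the current landmarks. In the Python sketch below, the feature extractor and the untrained random regressors are placeholders, not the patented models, and the component coordinates are toy values.

        import numpy as np

        rng = np.random.default_rng(0)

        def local_features(image, landmarks):
            """Placeholder feature extractor: pixel values sampled at the landmark positions."""
            idx = np.clip(landmarks.astype(int), 0, np.array(image.shape) - 1)
            return image[idx[:, 0], idx[:, 1]]           # one feature per landmark

        def cascaded_regression(image, first_landmark_set, regressors):
            """Refine one component's landmark set stage by stage (one regressor per stage)."""
            landmarks = first_landmark_set.copy()
            for R in regressors:
                feats = local_features(image, landmarks)
                landmarks = landmarks + (feats @ R).reshape(-1, 2)   # regressed update
            return landmarks

        image = rng.random((64, 64))

        # Facial component-specific local regions, each with its own first landmark set
        # (toy coordinates standing in for components derived from the facial shape).
        first_landmark_sets = {
            "left_eye":  rng.uniform(10, 30, size=(4, 2)),
            "right_eye": rng.uniform(30, 50, size=(4, 2)),
            "mouth":     rng.uniform(20, 50, size=(6, 2)),
        }

        second_landmark_sets = {}
        for name, first_set in first_landmark_sets.items():
            n = len(first_set)
            # Untrained random regressors as stand-ins for the learned cascade stages.
            regressors = [rng.normal(scale=0.05, size=(n, 2 * n)) for _ in range(3)]
            second_landmark_sets[name] = cascaded_regression(image, first_set, regressors)

        print({k: v.shape for k, v in second_landmark_sets.items()})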
  • Publication number: 20220086410
    Abstract: In an embodiment, a method includes receiving and processing a first color image by an encoder. The first color image includes a first portion of the first color image and a second portion of the first color image located at different locations of the first color image. The encoder is configured to output at least one first feature map including fused global information and local information such that whether a color consistency relationship between the first portion of the first color image and the second portion of the first color image exists is encoded into the fused global information and local information.
    Type: Application
    Filed: November 29, 2021
    Publication date: March 17, 2022
    Inventors: Zibo Meng, Chiuman Ho
  • Publication number: 20210279509
    Abstract: In an embodiment, a computer-implemented method includes receiving and processing a first image, and outputting a first feature map by an encoder. The encoder includes a plurality of first convolutional stages that receive the first image and output stage-by-stage a plurality of second feature maps corresponding to the first convolutional stages. The second feature maps have gradually decreased scales. For each second convolutional stage of the first convolutional stages, a first skip connection is added between each second convolutional stage and each of at least one remaining convolutional stage of the first convolutional stages corresponding to each second convolutional stage.
    Type: Application
    Filed: May 13, 2021
    Publication date: September 9, 2021
    Inventors: Zibo Meng, Ming Chen
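    The densely skip-connected encoder described above can be sketched as a few downsampling stages in which each stage also receives, via skip connections, the feature maps of every earlier non-adjacent stage, resized to the current scale. The bilinear resizing, the concatenation, and the stage sizes in this PyTorch sketch are assumptions made for illustration.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class DenseSkipEncoder(nn.Module):
            """Each convolutional stage also receives the outputs of earlier stages."""
            def __init__(self, ch=8, num_stages=3):
                super().__init__()
                self.stages = nn.ModuleList(
                    nn.Conv2d(3 if i == 0 else ch * i, ch, 3, stride=2, padding=1)
                    for i in range(num_stages)
                )

            def forward(self, x):
                feature_maps = []                      # gradually decreasing scales
                for stage in self.stages:
                    inp = feature_maps[-1] if feature_maps else x
                    # Skip connections from every earlier, non-adjacent stage, resized
                    # to the current scale and concatenated with the direct input.
                    skips = [F.interpolate(f, size=inp.shape[-2:], mode="bilinear",
                                           align_corners=False) for f in feature_maps[:-1]]
                    feature_maps.append(F.relu(stage(torch.cat([inp] + skips, dim=1))))
                return feature_maps

        maps = DenseSkipEncoder()(torch.rand(1, 3, 64, 64))
        print([tuple(m.shape) for m in maps])
        # [(1, 8, 32, 32), (1, 8, 16, 16), (1, 8, 8, 8)]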
  • Publication number: 20210272246
    Abstract: In an embodiment, a method includes receiving a digital image; generating, by at least one processor, a resulting digital image by processing the digital image with an encoder-decoder neural network comprising a plurality of convolutional layers classified into a downsampling stage and an upsampling stage, and a multi-scale context aggregating block configured to aggregate multi-scale context information of the digital image and employed between the downsampling stage and the upsampling stage; and outputting, by the at least one processor, the resulting digital image to an output device.
    Type: Application
    Filed: May 18, 2021
    Publication date: September 2, 2021
    Inventors: Zibo Meng, Chiuman Ho
  • Publication number: 20210256667
    Abstract: A method includes receiving and processing a first image by an encoder-decoder network. The first image includes a first portion and a second portion located at different locations. The encoder-decoder network includes an encoder and a decoder. The encoder is configured to output at least one feature map including global information and local information such that whether a color consistency relationship between the first portion and the second portion of the first image exists is encoded into the global information and the local information. The decoder is configured to output a second image generated from the at least one feature map, wherein a first portion of the second image corresponding to the first portion of the first image and a second portion of the second image corresponding to the second portion of the first image are restored considering whether the color consistency relationship exists.
    Type: Application
    Filed: May 3, 2021
    Publication date: August 19, 2021
    Inventors: Zibo MENG, Chiuman HO