Patents by Inventor Christian Besenbruch

Christian Besenbruch has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11936866
    Abstract: A method for lossy video encoding, transmission and decoding, the method comprising the steps of: receiving an input video at a first computer system; encoding an input frame of the input video to produce a latent representation; producing a quantized latent; producing a hyper-latent representation; producing a quantized hyper-latent; entropy encoding the quantized latent; transmitting the entropy encoded quantized latent and the quantized hyper-latent to a second computer system; decoding the quantized hyper-latent to produce a set of context variables, wherein the set of context variables comprises a temporal context variable; entropy decoding the entropy encoded quantized latent using the set of context variables to obtain an output quantized latent; and decoding the output quantized latent to produce an output frame, wherein the output frame is an approximation of the input frame.
    Type: Grant
    Filed: August 30, 2023
    Date of Patent: March 19, 2024
    Assignee: DEEP RENDER LTD.
    Inventors: Chris Finlay, Christian Besenbruch, Jan Xu, Bilal Abbasi, Christian Etmann, Arsalan Zafar, Sebastjan Cizel, Vira Koshkina
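
The entry above describes a learned video codec with a hyperprior: a frame is encoded to a latent, a hyper-latent is derived from that latent, and the decoded hyper-latent supplies the context variables (including a temporal one) used to entropy decode the latent. The sketch below only illustrates that data flow; the network shapes, the rounding used as quantization, the gating used to form a temporal context from the previous frame's latent, and the omission of the actual entropy coder are all assumptions made for illustration, not the patented design.

```python
# Hypothetical sketch of the encode/decode flow in the abstract above.
# All modules are stand-in convolutions; entropy coding is abstracted away.
import torch
import torch.nn as nn

class VideoCodecSketch(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # Frame -> latent representation.
        self.encoder = nn.Conv2d(3, channels, kernel_size=5, stride=2, padding=2)
        # Latent -> hyper-latent, and hyper-latent -> context variables.
        self.hyper_encoder = nn.Conv2d(channels, channels, kernel_size=5, stride=2, padding=2)
        self.hyper_decoder = nn.ConvTranspose2d(channels, 2 * channels, kernel_size=5,
                                                stride=2, padding=2, output_padding=1)
        # Quantized latent -> output frame (approximation of the input frame).
        self.decoder = nn.ConvTranspose2d(channels, 3, kernel_size=5, stride=2,
                                          padding=2, output_padding=1)

    def forward(self, frame, previous_latent):
        latent = self.encoder(frame)                        # latent representation
        quantized_latent = torch.round(latent)              # quantized latent
        hyper_latent = self.hyper_encoder(latent)           # hyper-latent representation
        quantized_hyper_latent = torch.round(hyper_latent)  # quantized hyper-latent

        # Receiver side: decode the hyper-latent into context variables, one of
        # which is combined with the previous frame's latent to act as the
        # temporal context. In a real codec these would parameterize the
        # entropy model used to decode the transmitted bitstream.
        context = self.hyper_decoder(quantized_hyper_latent)
        spatial_context, temporal_gate = context.chunk(2, dim=1)
        temporal_context = torch.sigmoid(temporal_gate) * previous_latent

        output_frame = self.decoder(quantized_latent)       # output frame
        return output_frame, quantized_latent, (spatial_context, temporal_context)

codec = VideoCodecSketch()
frame = torch.rand(1, 3, 64, 64)
previous_latent = torch.zeros(1, 64, 32, 32)                # latent of the prior frame
output_frame, latent, _ = codec(frame, previous_latent)
```
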
  • Publication number: 20240070925
    Abstract: A method of training one or more neural networks, the one or more neural networks being for use in lossy image or video encoding, transmission and decoding, the method comprising steps including: receiving an input image at a first computer system; encoding the input image using a first neural network to produce a latent representation and decoding the latent representation using a second neural network to produce an output image; wherein at least one of a plurality of layers of the first or second neural network comprises a transformation; and the method further comprises the steps of: evaluating a difference between the output image and the input image and evaluating a function based on an output of the transformation; updating the parameters of the first neural network and the second neural network based on the evaluated difference and the evaluated function; and repeating the above steps.
    Type: Application
    Filed: August 30, 2023
    Publication date: February 29, 2024
    Inventors: Chris FINLAY, Jonathan RAYNER, Jan XU, Christian BESENBRUCH, Arsalan ZAFAR, Sebastjan CIZEL, Vira KOSHKINA
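
The entry above describes a training procedure whose loss combines the difference between the input and output images with a function evaluated on the output of a transformation inside one of the network layers. The sketch below is a minimal, hypothetical rendering of that loop: the transformation is taken to be an ordinary convolution layer and the function to be a small penalty on the mean absolute value of its output, both purely illustrative choices not taken from the patent application.

```python
# Hypothetical training-loop sketch: the loss combines the image difference
# with a function evaluated on the output of a "transformation" layer.
import torch
import torch.nn as nn

class FirstNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.transform = nn.Conv2d(3, 16, 3, stride=2, padding=1)  # the "transformation"
        self.head = nn.Conv2d(16, 8, 3, stride=2, padding=1)

    def forward(self, x):
        t = self.transform(x)                     # output of the transformation
        return self.head(torch.relu(t)), t

encoder = FirstNetwork()
decoder = nn.Sequential(                          # the "second neural network"
    nn.ConvTranspose2d(8, 16, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),
)
optimizer = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4)

for step in range(10):                            # "repeating the above steps"
    image = torch.rand(1, 3, 64, 64)              # stand-in for the received input image
    latent, transform_out = encoder(image)        # encode with the first network
    output = decoder(latent)                      # decode with the second network
    difference = torch.mean((output - image) ** 2)        # evaluated difference
    function_term = 1e-3 * transform_out.abs().mean()     # illustrative function of the transformation output
    loss = difference + function_term
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                              # update parameters of both networks
```
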
  • Patent number: 11893762
    Abstract: A method for lossy image or video encoding, transmission and decoding, the method comprising the steps of: receiving an input image at a first computer system; encoding the input image using a first trained neural network to produce a latent representation; identifying one or more regions of the input image associated with high visual sensitivity; encoding the one or more regions of the input image associated with high visual sensitivity using a second trained neural network to produce one or more region latent representations; performing a quantization process on the latent representation and the one or more region latent representations; transmitting the result of the quantization process to a second computer system; and decoding the result of the quantization process to produce an output image, wherein the output image is an approximation of the input image.
    Type: Grant
    Filed: November 15, 2022
    Date of Patent: February 6, 2024
    Assignee: DEEP RENDER LTD.
    Inventors: Thomas Ryder, Alexander Lytchier, Vira Koshkina, Christian Besenbruch, Arsalan Zafar
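
The abstract above (shared by several entries in this listing) describes encoding the whole image with one trained network and additionally encoding regions of high visual sensitivity with a second trained network, then quantizing and transmitting both sets of latents. In the sketch below the sensitivity test is a simple local-variance threshold over fixed 32x32 patches and quantization is plain rounding; these are stand-ins chosen for illustration, since the listing does not say how such regions are identified or quantized.

```python
# Hypothetical sketch: a whole-image latent plus extra latents for regions
# flagged as visually sensitive. The variance test, patch size, and rounding
# quantization are illustrative stand-ins, not the patented method.
import torch
import torch.nn as nn

first_network = nn.Conv2d(3, 32, 5, stride=4, padding=2)   # whole-image encoder
second_network = nn.Conv2d(3, 32, 5, stride=2, padding=2)  # region encoder

def find_sensitive_regions(image, patch=32, threshold=0.05):
    """Return (top, left) corners of patches whose pixel variance exceeds a threshold."""
    regions = []
    _, _, height, width = image.shape
    for top in range(0, height - patch + 1, patch):
        for left in range(0, width - patch + 1, patch):
            if image[:, :, top:top + patch, left:left + patch].var() > threshold:
                regions.append((top, left))
    return regions

image = torch.rand(1, 3, 128, 128)                          # stand-in input image
latent = first_network(image)                               # latent representation
region_latents = [                                          # region latent representations
    second_network(image[:, :, t:t + 32, l:l + 32])
    for t, l in find_sensitive_regions(image)
]
# Quantization (rounding) applied to everything before "transmission".
quantized = [torch.round(latent)] + [torch.round(r) for r in region_latents]
```
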
  • Publication number: 20240007631
    Abstract: A method for lossy video encoding, transmission and decoding, the method comprising the steps of: receiving an input video at a first computer system; encoding an input frame of the input video to produce a latent representation; producing a quantized latent; producing a hyper-latent representation; producing a quantized hyper-latent; entropy encoding the quantized latent; transmitting the entropy encoded quantized latent and the quantized hyper-latent to a second computer system; decoding the quantized hyper-latent to produce a set of context variables, wherein the set of context variables comprises a temporal context variable; entropy decoding the entropy encoded quantized latent using the set of context variables to obtain an output quantized latent; and decoding the output quantized latent to produce an output frame, wherein the output frame is an approximation of the input frame.
    Type: Application
    Filed: August 30, 2023
    Publication date: January 4, 2024
    Inventors: Chris FINLAY, Christian BESENBRUCH, Jan XU, Bilal ABBASI, Christian ETMANN, Arsalan ZAFAR, Sebastjan CIZEL, Vira KOSHKINA
  • Publication number: 20230082809
    Abstract: A method for lossy image or video encoding, transmission and decoding, the method comprising the steps of: receiving an input image at a first computer system; encoding the input image using a first trained neural network to produce a latent representation; identifying one or more regions of the input image associated with high visual sensitivity; encoding the one or more regions of the input image associated with high visual sensitivity using a second trained neural network to produce one or more region latent representations; performing a quantization process on the latent representation and the one or more region latent representations; transmitting the result of the quantization process to a second computer system; and decoding the result of the quantization process to produce an output image, wherein the output image is an approximation of the input image.
    Type: Application
    Filed: November 15, 2022
    Publication date: March 16, 2023
    Inventors: Thomas RYDER, Alexander LYTCHIER, Vira KOSHKINA, Christian BESENBRUCH, Arsalan ZAFAR
  • Patent number: 11532104
    Abstract: A method for lossy image or video encoding, transmission and decoding, the method comprising the steps of: receiving an input image at a first computer system; encoding the input image using a first trained neural network to produce a latent representation; identifying one or more regions of the input image associated with high visual sensitivity; encoding the one or more regions of the input image associated with high visual sensitivity using a second trained neural network to produce one or more region latent representations; performing a quantization process on the latent representation and the one or more region latent representations; transmitting the result of the quantization process to a second computer system; and decoding the result of the quantization process to produce an output image, wherein the output image is an approximation of the input image.
    Type: Grant
    Filed: May 19, 2022
    Date of Patent: December 20, 2022
    Assignee: DEEP RENDER LTD.
    Inventors: Thomas Ryder, Alexander Lytchier, Vira Koshkina, Christian Besenbruch, Arsalan Zafar
  • Publication number: 20220277492
    Abstract: A method for lossy image or video encoding, transmission and decoding, the method comprising the steps of: receiving an input image at a first computer system; encoding the input image using a first trained neural network to produce a latent representation; identifying one or more regions of the input image associated with high visual sensitivity; encoding the one or more regions of the input image associated with high visual sensitivity using a second trained neural network to produce one or more region latent representations; performing a quantization process on the latent representation and the one or more region latent representations; transmitting the result of the quantization process to a second computer system; and decoding the result of the quantization process to produce an output image, wherein the output image is an approximation of the input image.
    Type: Application
    Filed: May 19, 2022
    Publication date: September 1, 2022
    Inventors: Thomas RYDER, Alexander LYTCHIER, Vira KOSHKINA, Christian BESENBRUCH, Arsalan ZAFAR
  • Publication number: 20220215511
    Abstract: A system and method for lossy image and video compression and transmission that utilizes a metanetwork to generate the set of hyperparameters an image encoding network needs to reconstruct a desired image from a given noise image, using a neural network as a function that maps a known noise image to the desired or target image, so that only the hyperparameters of the function are transferred instead of a compressed version of the image itself. Any system receiving the hyperparameters can recreate a high-quality approximation of the desired image, provided that it possesses the same noise image and a similar neural network. The amount of data required to transfer an image of a given quality is dramatically reduced compared with existing image compression technology.
    Type: Application
    Filed: April 29, 2020
    Publication date: July 7, 2022
    Inventors: Arsalan ZAFAR, Christian BESENBRUCH
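
The final entry describes transmitting only the hyperparameters of a function that maps a noise image, held by both sender and receiver, onto the target image. In the sketch below the metanetwork that would generate those hyperparameters is replaced by a direct fitting loop, and the fitted network's weights stand in for the transmitted hyperparameters; the architecture, image sizes, and optimization settings are all illustrative assumptions.

```python
# Hypothetical sketch: instead of sending the image, fit a small network that
# maps a shared noise image to the target image and send only its parameters.
# The metanetwork of the abstract is replaced here by a direct fitting loop.
import torch
import torch.nn as nn

def make_mapper():
    # The function from the known noise image to the target image.
    return nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 3, 3, padding=1))

torch.manual_seed(0)
noise_image = torch.rand(1, 3, 64, 64)     # shared by sender and receiver
target_image = torch.rand(1, 3, 64, 64)    # stand-in for the image to transmit

# Sender: fit the mapper so that mapper(noise_image) approximates the target.
sender = make_mapper()
optimizer = torch.optim.Adam(sender.parameters(), lr=1e-2)
for _ in range(200):
    loss = torch.mean((sender(noise_image) - target_image) ** 2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Only the parameters of the function are transmitted.
transmitted = sender.state_dict()

# Receiver: same architecture and same noise image; loading the parameters
# recreates an approximation of the target image.
receiver = make_mapper()
receiver.load_state_dict(transmitted)
approximation = receiver(noise_image)
```

As the abstract notes, the receiver can only recreate the approximation because it holds the same noise image and a compatible network; only the parameter set travels over the channel.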