Patents by Inventor Ioannis Andreopoulos

Ioannis Andreopoulos has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240070819
    Abstract: Image data of a first image in a sequence of images is processed using an artificial neural network (ANN) to generate output image data indicative of an alignment of the first image with a second image in the sequence. The ANN is trained using outputs of an alignment pipeline configured to perform alignment of images. The alignment pipeline is configured to determine flow vectors representing optical flow between images, and perform an image transformation using the flow vectors to align the images. The ANN is trained to emulate a result derivable using the alignment pipeline.
    Type: Application
    Filed: January 31, 2023
    Publication date: February 29, 2024
    Inventors: Ayan BHUNIA, Muhammad Umar Karim KHAN, Aaron CHADHA, Ioannis ANDREOPOULOS
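    A minimal sketch of the kind of flow-then-warp alignment pipeline this abstract describes, using OpenCV's Farneback optical flow; the patented ANN is trained to emulate the result of such a pipeline rather than run it. The function name and flow parameters below are illustrative assumptions, not taken from the filing.
    ```python
    # Flow-then-warp alignment sketch (illustrative only; the patent's ANN is
    # trained to emulate the *result* of a pipeline like this).
    import cv2
    import numpy as np

    def align_with_flow(first_gray: np.ndarray, second_gray: np.ndarray) -> np.ndarray:
        """Warp `second_gray` onto `first_gray` using dense optical flow.
        Inputs are 8-bit single-channel images of the same size."""
        # Dense optical flow from the first image to the second image.
        flow = cv2.calcOpticalFlowFarneback(
            first_gray, second_gray, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

        h, w = first_gray.shape
        grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
        # Sample the second image at positions displaced by the flow vectors.
        map_x = (grid_x + flow[..., 0]).astype(np.float32)
        map_y = (grid_y + flow[..., 1]).astype(np.float32)
        return cv2.remap(second_gray, map_x, map_y, interpolation=cv2.INTER_LINEAR)
    ```
    At inference, the trained ANN would produce output comparable to the warped result directly from the image pair, without the explicit flow computation.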
  • Publication number: 20240062333
    Abstract: Image data representing one or more images at a first resolution is received at a first artificial neural network (ANN). The image data is processed using the first ANN to generate upscaled image data representing the one or more images at a second, higher resolution. The first ANN is trained to perform image upscaling and is trained using first training image data representing one or more training images at the first resolution, the first training image data being at a first level of quality. The first ANN is also trained using features of a second ANN, wherein the second ANN is trained to perform image upscaling and is trained using second training image data representing one or more training images at the first resolution, the second training image data being at a second level of quality, higher than the first level of quality.
    Type: Application
    Filed: January 31, 2023
    Publication date: February 22, 2024
    Inventors: Muhammad Umar Karim KHAN, Ayan BHUNIA, Aaron CHADHA, Ioannis ANDREOPOULOS
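    A hedged PyTorch sketch of the training idea above: a student upscaler sees lower-quality inputs but is also pushed to match intermediate features of a teacher upscaler trained on higher-quality data. The architecture, layer sizes and the 0.1 feature-loss weight are assumptions for illustration only.
    ```python
    # Student upscaler trained on lower-quality inputs, with a feature-matching
    # term against a teacher trained on higher-quality data (all sizes assumed).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SmallUpscaler(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
            self.to_rgb = nn.Conv2d(32, 3, 3, padding=1)

        def forward(self, x):
            feats = self.features(x)
            # Upscale by a factor of 2 and map back to RGB.
            up = F.interpolate(feats, scale_factor=2, mode="bilinear", align_corners=False)
            return self.to_rgb(up), feats

    student, teacher = SmallUpscaler(), SmallUpscaler()
    teacher.eval()  # teacher is assumed pre-trained on higher-quality training data

    def training_step(low_quality_lr, hr_target, optimizer):
        sr, student_feats = student(low_quality_lr)
        with torch.no_grad():
            _, teacher_feats = teacher(low_quality_lr)
        # Reconstruction loss plus feature distillation from the teacher.
        loss = F.l1_loss(sr, hr_target) + 0.1 * F.mse_loss(student_feats, teacher_feats)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()
    ```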
  • Publication number: 20230254230
    Abstract: A method of processing a time-varying signal in a signal processing system. Data representative of one or more first time samples of the time-varying signal is received at an artificial neural network, ANN. The received data is processed using the ANN to generate predicted data representative of a second time sample of the time-varying signal, the second time sample being later than the one or more first time samples. The ANN is trained to predict data representative of time samples of time-varying signals based on data representative of earlier time samples of the time-varying signals. The signal processing system processes the predicted data representative of the second time sample in place of a third time sample of the time-varying signal, the third time sample being earlier than the second time sample.
    Type: Application
    Filed: July 13, 2022
    Publication date: August 10, 2023
    Inventors: Aaron CHADHA, Ioannis ANDREOPOULOS, Matthias TREDER, Jia-Jie LIM
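    A minimal sketch, assuming a fixed window of scalar samples, of a network that predicts the next time sample from earlier ones so the prediction can stand in for an older sample (for example, to mask processing latency). The window length and layer sizes are illustrative assumptions.
    ```python
    # Predict a later time sample from a window of earlier samples; the
    # prediction can then replace an older sample in the processing chain.
    import torch
    import torch.nn as nn

    class NextSamplePredictor(nn.Module):
        def __init__(self, window: int = 16):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(window, 64), nn.ReLU(),
                nn.Linear(64, 1))

        def forward(self, past_samples: torch.Tensor) -> torch.Tensor:
            # past_samples: (batch, window) of earlier time samples.
            return self.net(past_samples)

    predictor = NextSamplePredictor()
    past = torch.randn(1, 16)                   # the "one or more first time samples"
    predicted_second_sample = predictor(past)   # used in place of an earlier (third) sample
    ```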
  • Publication number: 20230145616
    Abstract: A computer-implemented method of processing image data using a model of the human visual system. The model comprises a first artificial neural network system trained to generate first output data using one or more differentiable functions configured to model the generation of signals from images by the human eye, and a second artificial neural network system trained to generate second output data using one or more differentiable functions configured to model the processing of signals from the human eye by the human visual cortex. The method comprises receiving image data representing one or more images, processing the received image data using the first artificial neural network system to generate the first output data, and processing the first output data using the second artificial neural network system to generate the second output data. Model output data is determined from the second output data and output for use in an image processing process.
    Type: Application
    Filed: January 5, 2022
    Publication date: May 11, 2023
    Inventors: Aaron CHADHA, Ioannis ANDREOPOULOS, Matthias TREDER
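    A rough sketch of the two-stage structure in this abstract: one differentiable stage standing in for signal generation by the eye, a second for processing by the visual cortex, ending in model output usable inside an image-processing loss. Both stages below are generic convolutional stand-ins, not the patented model.
    ```python
    # Two differentiable stages chained as in the abstract (stand-in layers only).
    import torch
    import torch.nn as nn

    retina_stage = nn.Sequential(          # "first ANN system": eye-like front end
        nn.Conv2d(3, 8, 5, padding=2), nn.ReLU())
    cortex_stage = nn.Sequential(          # "second ANN system": cortex-like back end
        nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))

    images = torch.randn(1, 3, 64, 64)
    first_output = retina_stage(images)
    second_output = cortex_stage(first_output)
    model_output = second_output.mean()    # model output data for an image processing process
    ```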
  • Publication number: 20230112647
    Abstract: A method of processing image data is provided. Pixel data for a first image is preprocessed to identify a subset of the pixel data corresponding to a region of interest depicting a scene element. The subset of the pixel data is processed at a first encoder to generate a first data structure representative of the region of interest, the first data structure identifying the scene element depicted in the region of interest. The subset of pixel data is also processed at a second encoder to generate a second data structure representative of the region of interest, the second data structure comprising values for visual characteristics associated with the scene element. The first and second data structures are outputted for use by a decoder to generate a second image approximating the region of interest of the first image.
    Type: Application
    Filed: December 3, 2021
    Publication date: April 13, 2023
    Inventors: Matthias TREDER, Aaron CHADHA, Ilya FADEEV, Ioannis ANDREOPOULOS
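    A hedged sketch of the two-encoder split described above: one path yields a compact identifier of the scene element in the region of interest, the other yields values for its visual characteristics, and a decoder (not shown) would consume both. The class list, crop handling and architectures are assumptions.
    ```python
    # Two encoders over a preprocessed region-of-interest crop (illustrative only).
    import torch
    import torch.nn as nn

    SCENE_ELEMENTS = ["face", "text", "logo"]            # hypothetical label set

    semantic_encoder = nn.Sequential(                    # first data structure: what it is
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, len(SCENE_ELEMENTS)))
    appearance_encoder = nn.Sequential(                  # second data structure: how it looks
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 32))

    roi = torch.randn(1, 3, 64, 64)                      # region of interest from preprocessing
    label_logits = semantic_encoder(roi)                 # identifies the scene element
    appearance_vector = appearance_encoder(roi)          # visual characteristics for the decoder
    ```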
  • Patent number: 11582481
    Abstract: Certain aspects of the present disclosure provide techniques for encoding image data for one or more images. In one embodiment, a method includes the steps of downscaling the one or more images, and encoding the one or more downscaled images using an image codec. Another embodiment concerns a computer-implemented method of decoding encoded image data, and a computer-implemented method of encoding and decoding image data.
    Type: Grant
    Filed: September 30, 2020
    Date of Patent: February 14, 2023
    Assignee: ISIZE LIMITED
    Inventors: Djordje Djokovic, Ioannis Andreopoulos, Ilya Fadeev, Srdjan Grce
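    A minimal sketch of the downscale-before-encoding idea, using JPEG via OpenCV as a stand-in image codec; the scale factor, quality setting and codec choice are illustrative assumptions.
    ```python
    # Downscale, encode with a standard codec, then decode and upscale back.
    import cv2
    import numpy as np

    def encode_downscaled(image_bgr, scale: float = 0.5, jpeg_quality: int = 80) -> bytes:
        small = cv2.resize(image_bgr, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
        ok, payload = cv2.imencode(".jpg", small, [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])
        assert ok
        return payload.tobytes()

    def decode_and_upscale(payload: bytes, target_size):
        small = cv2.imdecode(np.frombuffer(payload, dtype=np.uint8), cv2.IMREAD_COLOR)
        # Upscaling could equally be performed by a learned super-resolution model.
        return cv2.resize(small, target_size, interpolation=cv2.INTER_CUBIC)
    ```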
  • Publication number: 20220321879
    Abstract: A method of processing, prior to encoding using an external encoder, image data using an artificial neural network is provided. The external encoder is operable in a plurality of encoding modes. At the neural network, image data representing one or more images is received. The image data is processed using the neural network to generate output data indicative of an encoding mode selected from the plurality of encoding modes of the external encoder. The neural network is trained to select, using image data, an encoding mode from the plurality of encoding modes of the external encoder, using one or more differentiable functions configured to emulate an encoding process. The generated output data is outputted from the neural network to the external encoder to enable the external encoder to encode the image data using the selected encoding mode.
    Type: Application
    Filed: June 16, 2021
    Publication date: October 6, 2022
    Inventors: Aaron CHADHA, Ioannis ANDREOPOULOS
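    A sketch of the mode-selection network described above: a small classifier produces logits over the external encoder's modes and the selected mode is handed to the encoder. The mode list and architecture are hypothetical, and the differentiable encoding emulation used during training is not shown.
    ```python
    # Classifier over the external encoder's modes (illustrative assumptions only).
    import torch
    import torch.nn as nn

    ENCODER_MODES = ["intra", "inter", "skip"]           # hypothetical mode set

    class ModeSelector(nn.Module):
        def __init__(self, num_modes: int = len(ENCODER_MODES)):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(16, num_modes))

        def forward(self, images: torch.Tensor) -> torch.Tensor:
            return self.backbone(images)                 # logits over encoding modes

    selector = ModeSelector()
    logits = selector(torch.randn(1, 3, 64, 64))
    selected_mode = ENCODER_MODES[int(logits.argmax(dim=1))]   # passed to the external encoder
    ```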
  • Patent number: 11445222
    Abstract: Certain aspects of the present disclosure provide techniques for preprocessing, prior to encoding with an external encoder, image data using a preprocessing network comprising a set of inter-connected weights. At the preprocessing network, image data from one or more images is received. The image data is processed using the preprocessing network to generate an output pixel representation for encoding with the external encoder. The weights of the preprocessing network are trained to optimize a combination of at least one quality score indicative of the quality of the output pixel representation and a rate score indicative of the bits required by the external encoder to encode the output pixel representation.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: September 13, 2022
    Assignee: ISIZE LIMITED
    Inventors: Ioannis Andreopoulos, Aaron Chadha
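    A sketch of the combined objective in this abstract: a differentiable quality score on the output pixel representation plus a weighted rate score. The stand-in preprocessing network, the rate proxy and the lambda weighting are assumptions for illustration.
    ```python
    # Rate/quality trade-off for training a preprocessing network (all modules assumed).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    preprocess = nn.Sequential(                     # stand-in preprocessing network
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 3, 3, padding=1))

    rate_proxy = nn.Sequential(                     # stand-in differentiable rate model
        nn.Conv2d(3, 8, 3, stride=4), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))

    def rd_loss(images: torch.Tensor, lam: float = 0.01) -> torch.Tensor:
        out = preprocess(images)                    # output pixel representation
        distortion = F.mse_loss(out, images)        # quality score (lower is better)
        rate = rate_proxy(out).mean()               # estimated bits needed by the encoder
        return distortion + lam * rate              # combined objective from the abstract
    ```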
  • Patent number: 11394980
    Abstract: A method of preprocessing, prior to encoding with an external encoder, image data using a preprocessing network comprising a set of inter-connected learnable weights is provided. At the preprocessing network, image data from one or more images is received. The image data is processed using the preprocessing network to generate an output pixel representation for encoding with the external encoder. The preprocessing network is configured to take as an input display configuration data representing one or more display settings of a display device operable to receive encoded pixel representations from the external encoder. The weights of the preprocessing network are dependent upon the one or more display settings of the display device.
    Type: Grant
    Filed: September 30, 2020
    Date of Patent: July 19, 2022
    Assignee: iSize Limited
    Inventors: Ioannis Andreopoulos, Srdjan Grce
  • Patent number: 11252417
    Abstract: A method of configuring an image encoder emulator. Input image data is encoded at an encoding stage comprising a network of inter-connected weights, and decoded at a decoding stage to generate a first distorted version of the input image data. The first distorted version is compared with a second distorted version of the input image data generated using an external encoder to determine a distortion difference score. A rate prediction model is used to predict an encoding bitrate associated with encoding the input image data to a quality corresponding to the first distorted version. A rate difference score is determined by comparing the predicted encoding bitrate with an encoding bitrate used by the external encoder to encode the input image data to a quality corresponding to the second distorted version. The weights of the encoding stage are trained based on the distortion difference score and the rate difference score.
    Type: Grant
    Filed: September 30, 2020
    Date of Patent: February 15, 2022
    Assignee: iSize Limited
    Inventor: Ioannis Andreopoulos
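    A sketch of the emulator-training objective described above: a proxy encode/decode stage and a rate prediction model are trained so that both the distortion and the bitrate match those produced by the external encoder. The modules and the equal weighting of the two difference scores are assumptions.
    ```python
    # Train a proxy codec to reproduce an external encoder's distortion and bitrate.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    proxy_codec = nn.Sequential(                    # encoding stage + decoding stage
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 3, 3, padding=1))
    rate_predictor = nn.Sequential(
        nn.Conv2d(3, 8, 3, stride=4), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))

    def emulator_loss(images, external_distorted, external_bitrate):
        proxy_distorted = proxy_codec(images)                        # first distorted version
        distortion_diff = F.mse_loss(proxy_distorted, external_distorted)
        predicted_bitrate = rate_predictor(images).squeeze(1)
        rate_diff = F.mse_loss(predicted_bitrate, external_bitrate)  # vs. external encoder's rate
        return distortion_diff + rate_diff
    ```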
  • Patent number: 11223833
    Abstract: A method of preprocessing, prior to encoding with an external encoder, image data using a preprocessing network comprising a set of inter-connected learnable weights is provided. At the preprocessing network, image data from one or more images is received. The image data is processed using the preprocessing network to generate an output pixel representation for encoding with the external encoder. The preprocessing network is configured to take as an input encoder configuration data representing one or more configuration settings of the external encoder. The weights of the preprocessing network are dependent upon the one or more configuration settings of the external encoder.
    Type: Grant
    Filed: September 30, 2020
    Date of Patent: January 11, 2022
    Assignee: iSize Limited
    Inventors: Ioannis Andreopoulos, Aaron Chadha
  • Patent number: 11172210
    Abstract: A method of processing image data at a server is provided. Image data from one or more images is received at a preprocessing network comprising a set of inter-connected learnable weights, the weights being dependent upon one or more display settings of a display device. The image data is processed using the preprocessing network to generate a plurality of output pixel representations corresponding to different display settings of the display device. The plurality of output pixel representations are encoded to generate a plurality of encoded bitstreams. At least one selected bitstream is transmitted from the server to the display device, wherein the at least one encoded bitstream is selected on the basis of the one or more display settings of the display device.
    Type: Grant
    Filed: September 30, 2020
    Date of Patent: November 9, 2021
    Assignee: ISIZE LIMITED
    Inventors: Ioannis Andreopoulos, Srdjan Grce
  • Publication number: 20210211739
    Abstract: A method of processing image data at a server is provided. Image data from one or more images is received at a preprocessing network comprising a set of inter-connected learnable weights, the weights being dependent upon one or more display settings of a display device. The image data is processed using the preprocessing network to generate a plurality of output pixel representations corresponding to different display settings of the display device. The plurality of output pixel representations are encoded to generate a plurality of encoded bitstreams. At least one selected bitstream is transmitted from the server to the display device, wherein the at least one encoded bitstream is selected on the basis of the one or more display settings of the display device.
    Type: Application
    Filed: September 30, 2020
    Publication date: July 8, 2021
    Inventors: Ioannis ANDREOPOULOS, Srdjan GRCE
  • Publication number: 20210211682
    Abstract: A method of preprocessing, prior to encoding with an external encoder, image data using a preprocessing network comprising a set of inter-connected learnable weights is provided. At the preprocessing network, image data from one or more images is received. The image data is processed using the preprocessing network to generate an output pixel representation for encoding with the external encoder. The preprocessing network is configured to take as an input encoder configuration data representing one or more configuration settings of the external encoder. The weights of the preprocessing network are dependent upon the one or more configuration settings of the external encoder.
    Type: Application
    Filed: September 30, 2020
    Publication date: July 8, 2021
    Inventors: Ioannis ANDREOPOULOS, Aaron CHADHA
  • Publication number: 20210211684
    Abstract: A method of configuring an image encoder emulator. Input image data is encoded at an encoding stage comprising a network of inter-connected weights, and decoded at a decoding stage to generate a first distorted version of the input image data. The first distorted version is compared with a second distorted version of the input image data generated using an external encoder to determine a distortion difference score. A rate prediction model is used to predict an encoding bitrate associated with encoding the input image data to a quality corresponding to the first distorted version. A rate difference score is determined by comparing the predicted encoding bitrate with an encoding bitrate used by the external encoder to encode the input image data to a quality corresponding to the second distorted version. The weights of the encoding stage are trained based on the distortion difference score and the rate difference score.
    Type: Application
    Filed: September 30, 2020
    Publication date: July 8, 2021
    Inventor: Ioannis ANDREOPOULOS
  • Publication number: 20210211741
    Abstract: A method of preprocessing, prior to encoding with an external encoder, image data using a preprocessing network comprising a set of inter-connected learnable weights is provided. At the preprocessing network, image data from one or more images is received. The image data is processed using the preprocessing network to generate an output pixel representation for encoding with the external encoder. The preprocessing network is configured to take as an input display configuration data representing one or more display settings of a display device operable to receive encoded pixel representations from the external encoder. The weights of the preprocessing network are dependent upon the one or more display settings of the display device.
    Type: Application
    Filed: September 30, 2020
    Publication date: July 8, 2021
    Inventors: Ioannis ANDREOPOULOS, Srdjan GRCE
  • Publication number: 20210021866
    Abstract: Certain aspects of the present disclosure provide techniques for encoding image data for one or more images. In one embodiment, a method includes the steps of downscaling the one or more images, and encoding the one or more downscaled images using an image codec. Another embodiment concerns a computer-implemented method of decoding encoded image data, and a computer-implemented method of encoding and decoding image data.
    Type: Application
    Filed: September 30, 2020
    Publication date: January 21, 2021
    Inventors: Djordje DJOKOVIC, Ioannis ANDREOPOULOS, Ilya FADEEV, Srdjan GRCE
  • Publication number: 20170300372
    Abstract: A method and apparatus for detecting and mitigating faults in numerical computations of M input data streams are claimed (embodiments of FIG. 1 and FIG. 14). Such faults may occur due to circuit or processor malfunctions stemming from (but not limited to): supply voltage or current fluctuation, timing signal errors, hardware device noise, or other signalling, hardware, or software non-idealities. The invented method and apparatus for numerical entanglement linearly superimpose M input data streams to form M numerically-entangled data streams that can optionally be stored in place of the original inputs (as in the example embodiments of: Step 2 of FIG. 1 and item 1054 of FIG. 14). A series of operations, such as (but not limited to): scaling, additions/subtractions, inner or outer vector or matrix products and permutations, can then be performed directly using these entangled data streams (as in the example embodiment of Step 3 of FIG. 1, operator g of FIG. 2, FIGS. 6-11, item 1053 of FIG. 14).
    Type: Application
    Filed: September 2, 2015
    Publication date: October 19, 2017
    Inventors: Ioannis ANDREOPOULOS, Mohammad Ashraful ANAM
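    A toy illustration (M = 2) of operating directly on linearly superimposed streams: a linear operator applied to the entangled streams commutes with the superposition, so the per-stream results can be recovered afterwards. The patent's entanglement uses superpositions designed so that faults become detectable; that design is omitted here for brevity.
    ```python
    # M = 2 toy example: entangle two streams, operate on the entangled streams,
    # then disentangle the results and verify against direct computation.
    import numpy as np

    rng = np.random.default_rng(0)
    a, b = rng.standard_normal(8), rng.standard_normal(8)   # two input data streams
    G = rng.standard_normal((8, 8))                          # any linear operator g

    # Entangle: two linear superpositions stored in place of the inputs.
    c1, c2 = a + b, a - b

    # Operate directly on the entangled streams.
    y1, y2 = G @ c1, G @ c2

    # Disentangle the results of the operation.
    ga, gb = (y1 + y2) / 2, (y1 - y2) / 2

    assert np.allclose(ga, G @ a) and np.allclose(gb, G @ b)
    ```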
  • Publication number: 20160021376
    Abstract: In a method of generating a measure of video quality, a set of weightings (160) for a plurality of objective quality metrics is obtained. The objective quality metrics have themselves been calculated from a plurality of measurable objective properties (120) of video data files (100). The weightings (160) have been determined by fitting the objective quality metrics to a set comprising a ground-truth quality rating of each of the video data files coming from human scoring of quality (100). The method includes receiving a target video data file (180), the quality of which is to be measured. Values are calculated for the objective quality metrics (220) on the target video data file (180). The measure of video quality (240) is generated by combining the values for the objective quality metrics (220) on the target video data file (180) using the obtained set of weightings (160).
    Type: Application
    Filed: July 16, 2015
    Publication date: January 21, 2016
    Inventors: Ioannis Andreopoulos, Pamela D. Fisher, Nikolaos Deligiannis, Vasileios Giotsas
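    A minimal sketch of the weighting step: fit weights for a few objective metrics to ground-truth human scores by least squares, then combine a target video's metric values using those weights. The metric columns, example numbers and the choice of a linear fit are illustrative assumptions.
    ```python
    # Fit metric weightings to human scores, then score a new target video.
    import numpy as np

    # Rows: training videos; columns: objective metrics (e.g. PSNR, SSIM, blockiness).
    metrics_train = np.array([[38.0, 0.95, 0.10],
                              [30.0, 0.80, 0.40],
                              [34.0, 0.88, 0.25]])
    ground_truth_scores = np.array([4.5, 2.8, 3.7])          # human quality ratings

    # Obtain the set of weightings by fitting the metrics to the ground truth.
    weights, *_ = np.linalg.lstsq(metrics_train, ground_truth_scores, rcond=None)

    # Measure a new target video: compute its metric values and combine them.
    metrics_target = np.array([36.0, 0.92, 0.15])
    quality_estimate = float(metrics_target @ weights)
    ```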
  • Patent number: 7876820
    Abstract: A bit stream representing n-dimensional data structures may be encoded and decoded. A part of the data can be mappable within predefined similarity criteria to a part of the data of another data structure. The similarity criteria may include a spatial or temporal shift of the data. The data structures are typically sequential video frames such as those used in motion estimation and/or compensation of moving pictures, and a part of the data structure may be a block of data within a frame. The shift may be any suitable shift such as linear translation, rotation, or change of size. Digital filtering may be applied to a reference or other frame of data to generate subbands of a set of subbands of an overcomplete representation of the frame by calculations performed at a single rate. The digital filtering may be implemented in a separate filter module or in software.
    Type: Grant
    Filed: September 3, 2002
    Date of Patent: January 25, 2011
    Assignees: IMEC, Vrije Universiteit Brussel
    Inventors: Geert Van der Auwera, Ioannis Andreopoulos, Adrian Munteanu, Peter Schelkens, Jan Cornelis