Patents by Inventor Paul Brasnett
Paul Brasnett has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9973661
Abstract: An interlaced video signal can include content of different types, such as interlaced content and progressive content. The progressive content may have different cadences according to the ratio between the frame rate of the progressive content and the field rate of the interlaced video signal. Cadence analysis is performed to identify the cadence of the video signal and/or to determine field pairings when progressive content is included. As described herein, motion information (e.g. motion vectors) for blocks of fields of a video signal can be used for the cadence analysis. The use of motion information provides a robust method of performing cadence analysis.
Type: Grant
Filed: May 1, 2015
Date of Patent: May 15, 2018
Assignee: Imagination Technologies Limited
Inventor: Paul Brasnett
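The abstract above describes detecting cadence from per-field motion. A minimal sketch of that idea (the thresholds, templates, and function names are illustrative assumptions, not the patented method) might classify each inter-field motion magnitude as still or moving and match the resulting pattern against known cadence templates:

```python
# Hypothetical sketch: detecting pulldown cadence from per-field motion
# magnitudes. Thresholds and templates are illustrative assumptions.

def detect_cadence(field_motion, threshold=1.0):
    """Binarise inter-field motion values and match the pattern
    against repeating cadence templates, returning (name, phase)."""
    pattern = tuple(1 if m > threshold else 0 for m in field_motion)
    # 3:2 pulldown repeats every 5 fields; the repeated field pair
    # yields one near-zero motion value per 5-field group.
    templates = {
        "3:2": (1, 1, 1, 1, 0),
        "2:2": (1, 0),
    }
    for name, tpl in templates.items():
        n = len(tpl)
        for phase in range(n):
            if all(pattern[i] == tpl[(i + phase) % n]
                   for i in range(len(pattern))):
                return name, phase
    return "video", None  # no progressive cadence found

# Motion magnitudes with a near-zero value every 5th field (3:2 pulldown).
motion = [3.1, 2.8, 3.3, 2.9, 0.1, 3.0, 2.7, 3.2, 3.1, 0.2]
cadence, phase = detect_cadence(motion)
```

A real analyser would use per-block motion vectors and accumulate evidence over many fields before locking to a cadence; this sketch only shows the pattern-matching core.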
-
Patent number: 9760997
Abstract: A reduced noise image can be formed from a set of images. One of the images of the set can be selected to be a reference image and other images of the set are transformed such that they are better aligned with the reference image. A measure of the alignment of each image with the reference image is determined. At least some of the transformed images can then be combined using weights which depend on the alignment of the transformed image with the reference image to thereby form the reduced noise image. By weighting the images according to their alignment with the reference image, the effects of misalignment between the images in the combined image are reduced. Furthermore, motion correction may be applied to the reduced noise image.
Type: Grant
Filed: March 14, 2016
Date of Patent: September 12, 2017
Assignee: Imagination Technologies Limited
Inventors: Marc Vivet, Paul Brasnett
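The weighted combination described above can be sketched in a few lines. This is a toy version with 1-D "images", using inverse mean absolute difference as the alignment measure, which is an assumption for the example rather than the patented measure:

```python
# Illustrative sketch of alignment-weighted image averaging.
# 1-D images and the inverse-MAD alignment measure are assumptions.

def alignment_weight(image, reference):
    mad = sum(abs(a - b) for a, b in zip(image, reference)) / len(reference)
    return 1.0 / (1.0 + mad)  # well aligned -> weight near 1

def combine(images, reference):
    weights = [alignment_weight(img, reference) for img in images]
    total = sum(weights)
    # weighted per-pixel average: misaligned images contribute less
    return [
        sum(w * img[i] for w, img in zip(weights, images)) / total
        for i in range(len(reference))
    ]

reference = [10.0, 20.0, 30.0]
images = [reference, [10.5, 20.5, 30.5], [14.0, 24.0, 34.0]]  # last misaligned
denoised = combine(images, reference)
```

The last image, being poorly aligned, receives a small weight, so the combined pixels stay close to the reference values.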
-
Publication number: 20170193281
Abstract: A data processing system for performing face detection on a stream of frames of image data, the data processing system comprising: a skin patch identifier configured to identify one or more patches of skin colour in a first frame and characterise each patch in the first frame using a respective patch construct of a predefined shape; a first search tile generator configured to generate one or more first search tiles from the one or more patch constructs; and a face detector configured to detect faces in the stream by performing face detection in one or more frames of the stream within the first search tiles.
Type: Application
Filed: March 21, 2017
Publication date: July 6, 2017
Inventors: Szabolcs Cséfalvay, Paul Brasnett
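The skin-patch-to-search-tile pipeline above can be sketched with a connected-component pass. The skin test, rectangular patch construct, and margin below are illustrative assumptions, not the claimed formulation:

```python
# Hypothetical sketch: derive rectangular search tiles from connected
# skin-coloured patches so a face detector only scans those regions.
# The skin rule and margin are assumptions for the example.

def is_skin(pixel):
    r, g, b = pixel
    return r > 95 and g > 40 and b > 20 and r > g and r > b  # crude rule

def skin_tiles(image, margin=1):
    """image: 2-D list of (r, g, b) tuples. Returns bounding boxes,
    grown by a margin, around 4-connected skin patches."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    tiles = []
    for y in range(h):
        for x in range(w):
            if seen[y][x] or not is_skin(image[y][x]):
                continue
            stack, xs, ys = [(y, x)], [], []  # flood-fill one patch
            seen[y][x] = True
            while stack:
                cy, cx = stack.pop()
                xs.append(cx); ys.append(cy)
                for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                    if 0 <= ny < h and 0 <= nx < w and not seen[ny][nx] \
                            and is_skin(image[ny][nx]):
                        seen[ny][nx] = True
                        stack.append((ny, nx))
            tiles.append((max(min(xs) - margin, 0), max(min(ys) - margin, 0),
                          min(max(xs) + margin, w - 1), min(max(ys) + margin, h - 1)))
    return tiles

skin, bg = (200, 120, 90), (30, 30, 30)
image = [[bg] * 5 for _ in range(4)]
for y, x in [(1, 1), (1, 2), (2, 1), (2, 2)]:
    image[y][x] = skin
tiles = skin_tiles(image)
```

Restricting detection to the returned tiles is what makes this attractive for streams: the expensive face detector never scans the background.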
-
Patent number: 9633249
Abstract: A data processing system for performing face detection on a stream of frames of image data, the data processing system comprising: a skin patch identifier configured to identify one or more patches of skin color in a first frame and characterize each patch in the first frame using a respective patch construct of a predefined shape; a first search tile generator configured to generate one or more first search tiles from the one or more patch constructs; and a face detector configured to detect faces in the stream by performing face detection in one or more frames of the stream within the first search tiles.
Type: Grant
Filed: October 23, 2014
Date of Patent: April 25, 2017
Assignee: Imagination Technologies Limited
Inventors: Szabolcs Cséfalvay, Paul Brasnett
-
Patent number: 9525869
Abstract: An image processor comprises an image pre-processing block and an encoder processing block for processing and encoding an image. The image pre-processing block receives image data and processes it to provide an image comprising image sections which each comprise pixels. For each of the image sections, the pixels are analyzed to estimate an indication of the complexity of the image section, and metadata is determined based on the estimated complexity indications of the image sections. The metadata is passed to the encoder processing block which uses it to determine a quantization level for use in encoding the image. The encoder processing block can then encode the image using the determined quantization level. Conveniently, the image pre-processing block processes the image data to provide the image, and therefore has access to the image which it can analyze to determine the metadata without requiring a separate read operation of the image.
Type: Grant
Filed: April 28, 2014
Date of Patent: December 20, 2016
Assignee: Imagination Technologies Limited
Inventors: Jonathan Diggins, Paul Brasnett
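The per-section complexity metadata described above could be as simple as a pixel-variance estimate computed while the pre-processor already holds the pixels. Variance as the complexity measure is an assumption for this sketch:

```python
# Illustrative sketch: per-section complexity estimation in the
# pre-processing pass. Variance is a stand-in complexity measure.

def section_complexity(section):
    """Complexity of one image section, estimated as pixel variance."""
    n = len(section)
    mean = sum(section) / n
    return sum((p - mean) ** 2 for p in section) / n

def complexity_metadata(sections):
    """Metadata handed to the encoder: one complexity value per section."""
    return [section_complexity(s) for s in sections]

flat = [100] * 16        # uniform section: zero complexity
busy = [0, 255] * 8      # alternating extremes: high complexity
metadata = complexity_metadata([flat, busy])
```

The encoder would then map these values to a quantization level, avoiding a second read of the image from memory.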
-
Patent number: 9525870
Abstract: A quantization level is determined for use by an encoder in encoding an image in accordance with a target number of bits. For each section of the image, the pixels are analyzed to estimate the complexity of the image section. For each of a plurality of candidate quantization levels, a relationship and the estimated complexity of the image sections are used to estimate the number of bits that would be generated by encoding the image with the encoder using the respective candidate quantization level, and based thereon one of the candidate quantization levels is selected. The relationship is a function of the quantization level used by the encoder, and is for use in relating the complexity of an image section to the number of bits that would be generated by encoding that image section with the encoder. The encoder uses the selected quantization level in encoding the image.
Type: Grant
Filed: April 28, 2014
Date of Patent: December 20, 2016
Assignee: Imagination Technologies Limited
Inventors: Jonathan Diggins, Paul Brasnett
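The candidate-selection loop above can be sketched directly. The bits-versus-complexity relationship used here (bits proportional to complexity divided by quantization level) is an illustrative stand-in for the encoder-specific relationship the abstract refers to:

```python
# Hypothetical sketch: picking a quantization level against a bit budget.
# The bits ~ complexity / qp relationship is an illustrative assumption.

def estimate_bits(complexities, qp):
    """Estimate total bits for the image at quantization level qp."""
    return sum(c / qp for c in complexities)

def select_qp(complexities, candidates, target_bits):
    """Choose the finest (smallest) candidate whose estimate fits."""
    for qp in sorted(candidates):
        if estimate_bits(complexities, qp) <= target_bits:
            return qp
    return max(candidates)  # fall back to coarsest quantization

complexities = [400.0, 900.0, 1600.0]   # per-section complexity estimates
qp = select_qp(complexities, candidates=[10, 20, 40], target_bits=100.0)
```

Here qp=10 and qp=20 overshoot the 100-bit budget, so the coarser level 40 is selected.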
-
Publication number: 20160267660
Abstract: A reduced noise image can be formed from a set of images. One of the images of the set can be selected to be a reference image and other images of the set are transformed such that they are better aligned with the reference image. A measure of the alignment of each image with the reference image is determined. At least some of the transformed images can then be combined using weights which depend on the alignment of the transformed image with the reference image to thereby form the reduced noise image. By weighting the images according to their alignment with the reference image, the effects of misalignment between the images in the combined image are reduced. Furthermore, motion correction may be applied to the reduced noise image.
Type: Application
Filed: March 14, 2016
Publication date: September 15, 2016
Inventors: Marc Vivet, Paul Brasnett
-
Publication number: 20160267640
Abstract: A reduced noise image can be formed from a set of images. One of the images of the set can be selected to be a reference image and other images of the set are transformed such that they are better aligned with the reference image. A measure of the alignment of each image with the reference image is determined. At least some of the transformed images can then be combined using weights which depend on the alignment of the transformed image with the reference image to thereby form the reduced noise image. By weighting the images according to their alignment with the reference image, the effects of misalignment between the images in the combined image are reduced. Furthermore, motion correction may be applied to the reduced noise image.
Type: Application
Filed: March 14, 2016
Publication date: September 15, 2016
Inventors: Marc Vivet, Paul Brasnett
-
Patent number: 9430820
Abstract: A technique is described for combining several image sources into a single output image or video sequence. For a given pixel of the output image, pixel values are received from the image sources, and a matrix of distance measures between the pixel values (e.g. based on their colors) is computed. Clusters of pixel values are formed using the distance measures, and a score determined for each. One of the clusters is selected according to the scores, and used to derive an output pixel value. In an example, the clusters are formed using an iterative process where the closest pairs of pixel values or clusters are merged to form new clusters up to a size threshold. Examples are described for scoring the clusters based on model-based weighting or cluster size. Examples are also described for a rule-based cluster selection system. A composite image generator implementing the technique is also described.
Type: Grant
Filed: January 17, 2014
Date of Patent: August 30, 2016
Assignee: Imagination Technologies Limited
Inventors: Paul Brasnett, Jonathan Diggins
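The iterative merge-and-score process described above can be sketched per pixel. Scalar pixel values, distance between cluster means, and size-based scoring are simplifying assumptions for this example:

```python
# Illustrative sketch: merge the closest pixel values into clusters,
# score clusters by size, and output the mean of the winning cluster.
# Scalar values and mean-distance merging are assumptions.

def combine_pixel(values, merge_threshold=10.0):
    clusters = [[v] for v in values]
    while len(clusters) > 1:
        best = None  # find the closest pair of clusters
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                mi = sum(clusters[i]) / len(clusters[i])
                mj = sum(clusters[j]) / len(clusters[j])
                d = abs(mi - mj)
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        if d > merge_threshold:       # stop merging distant clusters
            break
        clusters[i] += clusters[j]
        del clusters[j]
    winner = max(clusters, key=len)   # score = cluster size
    return sum(winner) / len(winner)

# Three sources agree (values near 100); one outlier source reports 200.
out = combine_pixel([98.0, 100.0, 102.0, 200.0])
```

The agreeing sources form the largest cluster, so the outlier is excluded from the output value.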
-
Patent number: 9349037
Abstract: A data processing system for performing face detection on a stream of frames of image data, the data processing system comprising: a face detector configured to detect a first face candidate in a first frame by performing face detection within first search tiles defined for the first frame; a color measurement unit configured to calculate a set of color parameters including an average color of the first face candidate expressed according to a predefined color space; a transformation unit configured to: transform a second frame into the predefined color space, one of the axes of the color space being substantially oriented in the direction of maximum variation according to a predetermined distribution of skin color; and form a skin color probability map for the second frame by calculating the probability that a given color is a skin color from a measure of the color space distance of that color from the calculated average color; and a search tile generator configured to generate second search tiles based on th…
Type: Grant
Filed: October 23, 2014
Date of Patent: May 24, 2016
Assignee: Imagination Technologies Limited
Inventors: Szabolcs Cséfalvay, Paul Brasnett
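The distance-to-probability mapping described above can be sketched simply. The exponential falloff, its scale, and the colour values are illustrative assumptions, not the patented formulation:

```python
# Hypothetical sketch: per-pixel skin probability from colour-space
# distance to the detected face's average colour. The exponential
# falloff and scale are assumptions for the example.
import math

def skin_probability(pixel, avg_skin, scale=30.0):
    dist = math.dist(pixel, avg_skin)  # distance in the colour space
    return math.exp(-dist / scale)     # nearer the average -> higher prob.

def probability_map(frame, avg_skin):
    return [[skin_probability(px, avg_skin) for px in row] for row in frame]

avg = (180.0, 120.0, 100.0)            # average colour of a detected face
frame = [[(180.0, 120.0, 100.0), (30.0, 30.0, 30.0)]]
pmap = probability_map(frame, avg)
```

High-probability regions of the map would then seed the second search tiles, so later frames are searched near likely skin rather than everywhere.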
-
Publication number: 20150319406
Abstract: An interlaced video signal can include content of different types, such as interlaced content and progressive content. The progressive content may have different cadences according to the ratio between the frame rate of the progressive content and the field rate of the interlaced video signal. Cadence analysis is performed to identify the cadence of the video signal and/or to determine field pairings when progressive content is included. As described herein, motion information (e.g. motion vectors) for blocks of fields of a video signal can be used for the cadence analysis. The use of motion information provides a robust method of performing cadence analysis.
Type: Application
Filed: May 1, 2015
Publication date: November 5, 2015
Inventor: Paul Brasnett
-
Publication number: 20150110352
Abstract: A data processing system for performing face detection on a stream of frames of image data, the data processing system comprising: a face detector configured to detect a first face candidate in a first frame by performing face detection within first search tiles defined for the first frame; a colour measurement unit configured to calculate a set of colour parameters including an average colour of the first face candidate expressed according to a predefined colour space; a transformation unit configured to: transform a second frame into the predefined colour space, one of the axes of the colour space being substantially oriented in the direction of maximum variation according to a predetermined distribution of skin colour; and form a skin colour probability map for the second frame by calculating the probability that a given colour is a skin colour from a measure of the colour space distance of that colour from the calculated average colour; and a search tile generator configured to generate second search tile…
Type: Application
Filed: October 23, 2014
Publication date: April 23, 2015
Inventors: Szabolcs Cséfalvay, Paul Brasnett
-
Publication number: 20150110351
Abstract: A data processing system for performing face detection on a stream of frames of image data, the data processing system comprising: a skin patch identifier configured to identify one or more patches of skin colour in a first frame and characterise each patch in the first frame using a respective patch construct of a predefined shape; a first search tile generator configured to generate one or more first search tiles from the one or more patch constructs; and a face detector configured to detect faces in the stream by performing face detection in one or more frames of the stream within the first search tiles.
Type: Application
Filed: October 23, 2014
Publication date: April 23, 2015
Inventors: Szabolcs Cséfalvay, Paul Brasnett
-
Publication number: 20150023411
Abstract: A quantization level is determined for use by an encoder in encoding an image in accordance with a target number of bits. For each section of the image, the pixels are analysed to estimate the complexity of the image section. For each of a plurality of candidate quantization levels, a relationship and the estimated complexity of the image sections are used to estimate the number of bits that would be generated by encoding the image with the encoder using the respective candidate quantization level, and based thereon one of the candidate quantization levels is selected. The relationship is a function of the quantization level used by the encoder, and is for use in relating the complexity of an image section to the number of bits that would be generated by encoding that image section with the encoder. The encoder uses the selected quantization level in encoding the image.
Type: Application
Filed: April 28, 2014
Publication date: January 22, 2015
Applicant: Imagination Technologies Limited
Inventors: Jonathan Diggins, Paul Brasnett
-
Publication number: 20140369621
Abstract: An image processor comprises an image pre-processing block and an encoder processing block for processing and encoding an image. The image pre-processing block receives image data and processes it to provide an image comprising image sections which each comprise pixels. For each of the image sections, the pixels are analysed to estimate an indication of the complexity of the image section, and metadata is determined based on the estimated complexity indications of the image sections. The metadata is passed to the encoder processing block which uses it to determine a quantization level for use in encoding the image. The encoder processing block can then encode the image using the determined quantization level. Conveniently, the image pre-processing block processes the image data to provide the image, and therefore has access to the image which it can analyse to determine the metadata without requiring a separate read operation of the image.
Type: Application
Filed: April 28, 2014
Publication date: December 18, 2014
Applicant: Imagination Technologies Limited
Inventors: Jonathan Diggins, Paul Brasnett
-
Patent number: 8831355
Abstract: A method for deriving an image identifier comprises deriving a scale-space representation of an image, and processing the scale-space representation to detect a plurality of feature points having values that are maxima or minima. A representation is derived for a scale-dependent image region associated with one or more of the detected plurality of feature points. In an embodiment, the size of the image region is dependent on the scale associated with the corresponding feature point. An image identifier is derived using the representations derived for the scale-dependent image regions. The image identifiers may be used in a method for comparing images.
Type: Grant
Filed: April 21, 2009
Date of Patent: September 9, 2014
Assignee: Mitsubishi Electric Corporation
Inventors: Miroslaw Bober, Paul Brasnett
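The scale-space pipeline above can be illustrated in one dimension: smooth a signal at increasing scales, take local extrema as feature points, and summarise a region whose size depends on the feature point's scale. The box smoothing and the one-bit region summary are stand-ins for the example, not the patented descriptor:

```python
# Illustrative 1-D sketch of scale-space feature points feeding a
# binary identifier. Box smoothing and the region test are assumptions.

def smooth(signal, radius):
    """Box-filter the signal with the given radius (clamped at edges)."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def feature_points(signal, scales=(1, 2, 3)):
    """Positions that are local maxima or minima at each scale."""
    points = []
    for s in scales:
        sm = smooth(signal, s)
        for i in range(1, len(sm) - 1):
            if sm[i] > sm[i-1] and sm[i] > sm[i+1]:
                points.append((i, s))
            elif sm[i] < sm[i-1] and sm[i] < sm[i+1]:
                points.append((i, s))
    return points

def identifier(signal):
    """One bit per feature point, from a region sized by its scale."""
    bits = []
    for i, s in feature_points(signal):
        region = signal[max(0, i - s): i + s + 1]
        bits.append(1 if sum(region) / len(region) > signal[i] else 0)
    return tuple(bits)

sig = [0, 1, 4, 1, 0, 3, 8, 3, 0]
ident = identifier(sig)
```

Identifiers for two images can then be compared bit-wise; matching content yields matching bit patterns.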
-
Publication number: 20140212066
Abstract: A technique is described for combining several image sources into a single output image or video sequence. For a given pixel of the output image, pixel values are received from the image sources, and a matrix of distance measures between the pixel values (e.g. based on their colours) is computed. Clusters of pixel values are formed using the distance measures, and a score determined for each. One of the clusters is selected according to the scores, and used to derive an output pixel value. In an example, the clusters are formed using an iterative process where the closest pairs of pixel values or clusters are merged to form new clusters up to a size threshold. Examples are described for scoring the clusters based on model-based weighting or cluster size. Examples are also described for a rule-based cluster selection system. A composite image generator implementing the technique is also described.
Type: Application
Filed: January 17, 2014
Publication date: July 31, 2014
Applicant: Imagination Technologies Limited
Inventors: Paul Brasnett, Jonathan Diggins
-
Patent number: 8731066
Abstract: A method and apparatus for coding and decoding the fingerprint of a multimedia item such as video or audio is disclosed. A temporal interval of multimedia content, such as a video segment or audio segment, is described by a coarse fingerprint and a plurality of fine fingerprints, each fine fingerprint corresponding to a temporal sub-interval of said temporal interval, said temporal sub-interval typically being smaller than said temporal interval. One or more fine fingerprints are encoded in a non-predictive way, with no reference to the temporally neighboring signatures, and one or more fine fingerprints are encoded in a predictive way, from the temporally neighboring signatures.
Type: Grant
Filed: October 4, 2010
Date of Patent: May 20, 2014
Assignee: Mitsubishi Electric Corporation
Inventors: Nikola Sprljan, Paul Brasnett, Stavros Paschalakis
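The predictive/non-predictive split described above can be sketched with XOR prediction: store the first fine fingerprint raw and each later one as its difference from the previous fingerprint, which is cheap to entropy-code when consecutive fingerprints barely change. The bit layout here is an illustrative assumption:

```python
# Hypothetical sketch: first fine fingerprint coded raw (non-predictive),
# later ones as XOR differences from their temporal neighbour.
# Fingerprints are modelled as small integers of bit flags.

def encode(fingerprints):
    coded = [("raw", fingerprints[0])]           # non-predictive anchor
    for prev, cur in zip(fingerprints, fingerprints[1:]):
        coded.append(("xor", cur ^ prev))        # predictive: neighbour diff
    return coded

def decode(coded):
    out = [coded[0][1]]
    for kind, val in coded[1:]:
        out.append(val ^ out[-1])                # undo the prediction
    return out

fine = [0b10110100, 0b10110110, 0b10100110]      # slowly changing fingerprints
coded = encode(fine)
decoded = decode(coded)
```

The XOR residuals (here 0b10 and 0b10000) have few set bits, which is the property a predictive coder exploits.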
-
Patent number: 8699851
Abstract: A method and apparatus for processing a first sequence of images and a second sequence of images to compare the first and second sequences is disclosed. Each of a plurality of the images in the first sequence and each of a plurality of the images in the second sequence is processed by (i) processing the image data for each of a plurality of pixel neighborhoods in the image to generate at least one respective descriptor element for each of the pixel neighborhoods, each descriptor element comprising one or more bits; and (ii) forming a plurality of words from the descriptor elements of the image such that each word comprises a unique combination of descriptor element bits. The words for the second sequence are generated from the same respective combinations of descriptor element bits as the words for the first sequence.
Type: Grant
Filed: January 25, 2010
Date of Patent: April 15, 2014
Assignee: Mitsubishi Electric Corporation
Inventors: Paul Brasnett, Stavros Paschalakis, Miroslaw Bober
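The descriptor-element-to-word scheme above can be sketched with 1-D "images": each neighbourhood contributes one bit, fixed bit combinations form words, and both sequences use the same combinations so matching frames yield matching words. The neighbourhood test, word layout, and similarity score are assumptions for the example:

```python
# Illustrative sketch: per-frame descriptor bits grouped into words
# using the same fixed bit combinations for both sequences.

def descriptor_bits(image):
    """1-D image: bit is 1 where a pixel exceeds its neighbours' mean."""
    bits = []
    for i in range(1, len(image) - 1):
        bits.append(1 if image[i] > (image[i-1] + image[i+1]) / 2 else 0)
    return bits

WORD_COMBOS = [(0, 1, 2), (1, 2, 3)]  # fixed descriptor-bit combinations

def words(image):
    bits = descriptor_bits(image)
    return [tuple(bits[i] for i in combo) for combo in WORD_COMBOS]

def similarity(seq_a, seq_b):
    """Fraction of frame pairs whose word sets intersect."""
    hits = sum(1 for a, b in zip(seq_a, seq_b)
               if set(words(a)) & set(words(b)))
    return hits / len(seq_a)

seq1 = [[0, 5, 1, 6, 2, 7], [1, 6, 2, 7, 3, 8]]
seq2 = [[0, 5, 1, 6, 2, 7], [5, 5, 5, 5, 5, 5]]  # second frame differs
score = similarity(seq1, seq2)
```

Because both sequences use the same bit combinations, word matching behaves like a hash lookup: identical frames always collide, while unrelated frames rarely do.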
-
Publication number: 20140063031
Abstract: In an example method and system, image data is provided to an image processing module. Image data is read from memory into a down-scaler, which down-scales the image data to a first resolution, which is stored in a first buffer. A region of image data which the image processing module will request is predicted, and image data corresponding to at least part of the predicted region is stored in a second buffer, at a second resolution higher than the first. When a request for image data is received, it is then determined whether image data corresponding to the requested image data is in the second buffer, and if so, then image data is provided to the image processing module from the second buffer. If not, then image data from the first buffer is up-scaled, and the up-scaled image data is provided to the image processing module.
Type: Application
Filed: March 13, 2013
Publication date: March 6, 2014
Inventors: Paul Brasnett, Jonathan Diggins, Steven Fishwick, Stephen Morphet
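The two-buffer fallback described above can be sketched on a single image row. Nearest-neighbour scaling, the region keying, and the class shape are illustrative assumptions:

```python
# Hypothetical sketch: serve a request from the full-resolution second
# buffer on a prediction hit, else up-scale from the down-scaled first
# buffer. 1-D rows and nearest-neighbour scaling are assumptions.

def upscale(row, factor):
    return [v for v in row for _ in range(factor)]  # nearest neighbour

class TwoBufferCache:
    def __init__(self, full_row, factor=2):
        self.factor = factor
        self.first = full_row[::factor]   # down-scaled copy of everything
        self.second = {}                  # predicted regions at full res

    def predict(self, start, stop, full_row):
        """Store the region the image processing module is expected
        to request, at full resolution."""
        self.second[(start, stop)] = full_row[start:stop]

    def fetch(self, start, stop):
        if (start, stop) in self.second:  # hit: exact full-res data
            return self.second[(start, stop)]
        # miss: approximate by up-scaling the low-res first buffer
        lo, hi = start // self.factor, -(-stop // self.factor)
        return upscale(self.first[lo:hi], self.factor)[: stop - start]

row = [0, 1, 2, 3, 4, 5, 6, 7]
cache = TwoBufferCache(row)
cache.predict(2, 6, row)
hit = cache.fetch(2, 6)    # served exactly from the second buffer
miss = cache.fetch(4, 8)   # approximated from the up-scaled first buffer
```

A miss degrades quality rather than stalling on a memory read, which is the point of keeping the cheap down-scaled copy of the whole image.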