Patents Assigned to FotoNation Limited
  • Publication number: 20220414829
    Abstract: Systems and methods in accordance with embodiments of the invention are disclosed that use super-resolution (SR) processes to combine information from a plurality of low resolution (LR) images captured by an array camera into a synthesized higher resolution image. One embodiment includes obtaining input images using the plurality of imagers, using a microprocessor to determine an initial estimate of at least a portion of a high resolution image from a plurality of pixels of the input images, and using a microprocessor to determine, starting from that initial estimate, a high resolution image that, when mapped through the forward imaging transformation, matches the input images to within at least one predetermined criterion.
    Type: Application
    Filed: August 22, 2022
    Publication date: December 29, 2022
    Applicant: FotoNation Limited
    Inventors: Dan Lelescu, Gabriel Molina, Kartik Venkataraman
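The abstract above describes estimating a high-resolution image that, when pushed through the forward imaging transformation, reproduces the low-resolution inputs. Below is a minimal sketch of that general idea, using a toy box-average downsampler as the forward model and iterative back-projection of the residuals; it ignores the sub-pixel shifts between array imagers that the patented method exploits, and all names and parameters are illustrative.

```python
import numpy as np

def upscale_nearest(img, factor):
    """Nearest-neighbour upscale used as the initial high-resolution estimate."""
    return np.kron(img, np.ones((factor, factor)))

def downscale_mean(img, factor):
    """Toy forward imaging transformation: box-average downsampling."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def super_resolve(lr_images, factor=2, iters=50, step=0.5):
    """Iteratively refine an HR estimate so that, when mapped through the
    forward model, it matches each low-resolution input image."""
    hr = upscale_nearest(np.mean(lr_images, axis=0), factor)  # initial estimate
    for _ in range(iters):
        for lr in lr_images:
            residual = lr - downscale_mean(hr, factor)        # LR-domain error
            hr += step * upscale_nearest(residual, factor)    # back-project error
    return hr

# Example: four noisy LR observations of the same scene.
rng = np.random.default_rng(0)
scene = rng.random((16, 16))
lrs = [downscale_mean(scene, 2) + 0.01 * rng.standard_normal((8, 8)) for _ in range(4)]
print(super_resolve(np.array(lrs)).shape)  # (16, 16)
```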
  • Patent number: 11538192
    Abstract: A method for calibrating a vehicle cabin camera having: a pitch, yaw and roll angle; and a field of view capturing vehicle cabin features which are symmetric about a vehicle longitudinal axis comprises: selecting points from within an image of the vehicle cabin and projecting the points onto a 3D unit sphere in accordance with a camera projection model. For each of one or more rotations of a set of candidate yaw and roll rotations, the method comprises: rotating the projected points with the rotation; flipping the rotated points about a pitch axis; counter-rotating the projected points with an inverse of the rotation; and mapping the counter-rotated points back into an image plane to provide a set of transformed points. A candidate rotation which provides a best match between the set of transformed points and the locations of the selected points in the image plane is selected.
    Type: Grant
    Filed: August 10, 2021
    Date of Patent: December 27, 2022
    Assignee: FotoNation Limited
    Inventors: Piotr Stec, Petronel Bigioi
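A rough sketch of the candidate-rotation search described above, assuming a simple pinhole projection model and that the selected points are given as consecutive left/right symmetric pairs. The angle grid, intrinsics and the left-right mirror used to stand in for the "flip about the pitch axis" are illustrative assumptions, not the patented method.

```python
import numpy as np

def project_to_sphere(pts_2d, f, cx, cy):
    """Back-project image points onto the unit sphere (assumed pinhole model)."""
    rays = np.column_stack([(pts_2d[:, 0] - cx) / f, (pts_2d[:, 1] - cy) / f,
                            np.ones(len(pts_2d))])
    return rays / np.linalg.norm(rays, axis=1, keepdims=True)

def to_image(rays, f, cx, cy):
    """Map unit rays back into the image plane."""
    return np.column_stack([f * rays[:, 0] / rays[:, 2] + cx,
                            f * rays[:, 1] / rays[:, 2] + cy])

def rot(yaw, roll):
    """Rotation composed from candidate yaw (about y) and roll (about z) angles."""
    cy_, sy = np.cos(yaw), np.sin(yaw)
    cr, sr = np.cos(roll), np.sin(roll)
    Ry = np.array([[cy_, 0, sy], [0, 1, 0], [-sy, 0, cy_]])
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    return Rz @ Ry

def calibrate(pairs_2d, f=800.0, cx=640.0, cy=360.0):
    """Search candidate yaw/roll rotations; the best one maps each selected point
    onto its symmetric partner after the mirror standing in for the pitch-axis flip."""
    rays = project_to_sphere(pairs_2d, f, cx, cy)
    # symmetric features are assumed given as consecutive left/right pairs
    target = pairs_2d.reshape(-1, 2, 2)[:, ::-1, :].reshape(-1, 2)
    best = None
    for yaw in np.deg2rad(np.arange(-10, 10.5, 0.5)):
        for roll in np.deg2rad(np.arange(-10, 10.5, 0.5)):
            R = rot(yaw, roll)
            flipped = (rays @ R.T) * np.array([-1.0, 1.0, 1.0])  # rotate, then mirror
            back = to_image(flipped @ R, f, cx, cy)              # counter-rotate, reproject
            err = np.linalg.norm(back - target, axis=1).sum()
            if best is None or err < best[0]:
                best = (err, yaw, roll)
    return np.rad2deg(best[1]), np.rad2deg(best[2])

# Synthetic symmetric pairs around the principal point (illustrative only).
pts = np.array([[500.0, 200.0], [780.0, 200.0], [540.0, 420.0], [740.0, 420.0]])
print(calibrate(pts))
```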
  • Patent number: 11531211
    Abstract: A method for stabilizing a video sequence comprises: obtaining an indication of camera movement from acquisition of a previous camera frame to acquisition of a current camera frame; determining an orientation for the camera at a time of acquiring the current camera frame; and determining a candidate orientation for a crop frame for the current camera frame by adjusting an orientation of a crop frame associated with the previous camera frame according to the determined orientation. A boundary of one of the camera frame or crop frame is traversed to determine if a specific point on the boundary of the crop frame exceeds a boundary of the camera frame. If so, a rotation of the specific point location which would bring the specific point location onto the boundary of the camera frame is determined and the candidate crop frame orientation is updated accordingly before the crop frame is displayed.
    Type: Grant
    Filed: April 19, 2021
    Date of Patent: December 20, 2022
    Assignee: FotoNation Limited
    Inventors: Brian O'Sullivan, Piotr Stec
  • Patent number: 11532148
    Abstract: An image processing system comprises a template matching engine (TME). The TME reads an image from the memory; and as each pixel of the image is being read, calculates a respective feature value of a plurality of feature maps as a function of the pixel value. A pre-filter is responsive to a current pixel location comprising a node within a limited detector cascade to be applied to a window within the image to: compare a feature value from a selected one of the plurality of feature maps corresponding to the pixel location to a threshold value; and responsive to pixels for all nodes within a limited detector cascade to be applied to the window having been read, determine a score for the window. A classifier, responsive to the pre-filter indicating that a score for a window is below a window threshold, does not apply a longer detector cascade to the window before indicating that the window does not comprise an object to be detected.
    Type: Grant
    Filed: August 17, 2020
    Date of Patent: December 20, 2022
    Assignee: FotoNation Limited
    Inventors: Nicolae Nicoara, Cristina Raceala, Corneliu Zaharia, Szabolcs Fulop, Oana Iovita
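A toy illustration of the pre-filter idea from the entry above: score each window with a short cascade of threshold tests on precomputed feature maps, and only pass windows whose score meets the window threshold on to the longer (more expensive) detector cascade. The feature maps, node definitions and scoring scheme are invented for illustration.

```python
import numpy as np

def prefilter_windows(feature_maps, windows, nodes, window_threshold):
    """Score each window with a short cascade of threshold tests; return only
    the windows worth handing to the full detector cascade.

    feature_maps: dict name -> 2D array of per-pixel feature values
    windows:      list of (x, y) window origins
    nodes:        list of (map_name, dx, dy, threshold, weight) cascade nodes
    """
    candidates = []
    for (x, y) in windows:
        score = 0.0
        for map_name, dx, dy, thr, weight in nodes:
            if feature_maps[map_name][y + dy, x + dx] > thr:
                score += weight
        if score >= window_threshold:          # otherwise: window rejected early
            candidates.append((x, y))
    return candidates

# Hypothetical feature maps and cascade definition for illustration only.
rng = np.random.default_rng(1)
maps = {"intensity": rng.random((64, 64)), "gradient": rng.random((64, 64))}
nodes = [("intensity", 2, 2, 0.5, 1.0), ("gradient", 5, 3, 0.7, 1.5)]
wins = [(x, y) for y in range(0, 48, 8) for x in range(0, 48, 8)]
print(len(prefilter_windows(maps, wins, nodes, window_threshold=1.0)))
```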
  • Publication number: 20220385848
    Abstract: Systems and methods for implementing array cameras configured to perform super-resolution processing to generate higher resolution super-resolved images using a plurality of captured images and lens stack arrays that can be utilized in array cameras are disclosed. An imaging device in accordance with one embodiment of the invention includes at least one imager array, and each imager in the array comprises a plurality of light sensing elements and a lens stack including at least one lens surface, where the lens stack is configured to form an image on the light sensing elements, control circuitry configured to capture images formed on the light sensing elements of each of the imagers, and a super-resolution processing module configured to generate at least one higher resolution super-resolved image using a plurality of the captured images.
    Type: Application
    Filed: August 5, 2022
    Publication date: December 1, 2022
    Applicant: FotoNation Limited
    Inventors: Kartik Venkataraman, Amandeep S. Jabbi, Robert H. Mullis, Jacques Duparre, Shane Ching-Feng Hu
  • Publication number: 20220309709
    Abstract: A method comprises displaying a first image acquired from a camera having an input camera projection model including a first focal length and an optical axis parameter value. A portion of the first image is selected as a second image associated with an output camera projection model in which the focal length and/or the optical axis parameter value differs from the parameters of the input camera projection model. The method involves iteratively: adjusting the focal length and/or the optical axis parameter value for the camera lens so that it approaches the corresponding value of the output camera projection model; acquiring a subsequent image using the adjusted focal length or optical axis parameter value; mapping pixel coordinates in the second image, through a normalized 3D coordinate system, to respective locations in the subsequent image to determine respective values for the pixel coordinates; and displaying the second image.
    Type: Application
    Filed: March 26, 2021
    Publication date: September 29, 2022
    Applicant: FotoNation Limited
    Inventor: Piotr Stec
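A small sketch of the remapping step described above: output-image pixels are pushed through a normalized 3D coordinate system (here plain pinhole rays at z = 1) into the most recently acquired image, while the lens focal length is iteratively stepped toward the output model's value. The intrinsics and step size are illustrative assumptions.

```python
import numpy as np

def map_output_to_input(out_shape, f_out, c_out, f_in, c_in):
    """Map each pixel of the displayed (output) image through normalised 3D
    coordinates into the most recently acquired input image."""
    ys, xs = np.mgrid[0:out_shape[0], 0:out_shape[1]].astype(float)
    xn = (xs - c_out[0]) / f_out          # output pixel -> normalised ray (z = 1)
    yn = (ys - c_out[1]) / f_out
    u = f_in * xn + c_in[0]               # ray -> input-image coordinates
    v = f_in * yn + c_in[1]
    return u, v

# Iteratively step the lens focal length towards the output model's value,
# remapping the selected crop after each (hypothetical) acquisition.
f_lens, f_target = 1000.0, 1600.0
for _ in range(5):
    f_lens += 0.3 * (f_target - f_lens)   # lens approaches the target focal length
    u, v = map_output_to_input((180, 320), f_out=f_target, c_out=(160, 90),
                               f_in=f_lens, c_in=(160, 90))
print(round(f_lens, 1))
```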
  • Publication number: 20220301102
    Abstract: A method for correcting an image divides an output image into a grid with vertical sections of width smaller than the image width but wide enough to allow efficient bursts when writing distortion corrected line sections into memory. A distortion correction engine includes a relatively small amount of memory for an input image buffer but without requiring unduly complex control. The input image buffer accommodates enough lines of an input image to cover the distortion of a single most vertically distorted line section of the input image. The memory required for the input image buffer can be significantly less than would be required to store all the lines of a distorted input image spanning a maximal distortion of a complete line within the input image.
    Type: Application
    Filed: June 6, 2022
    Publication date: September 22, 2022
    Applicant: FotoNation Limited
    Inventors: Piotr Stec, Vlad Georgescu
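The buffer-sizing argument above can be illustrated numerically: given the vertical component of a distortion map, the input buffer only needs to cover the worst-case line span within a single vertical section, which is typically far smaller than the span of a full-width line. A toy example with an invented sinusoidal distortion follows.

```python
import numpy as np

def required_buffer_lines(src_y, section_width):
    """For each output line, within each vertical section, find the span of
    input lines it touches; the input buffer must hold the largest such span.

    src_y: 2D array (H_out x W_out) of input-line coordinates sampled by
           each output pixel (the vertical component of the distortion map).
    """
    h, w = src_y.shape
    worst = 0
    for x0 in range(0, w, section_width):
        sec = src_y[:, x0:x0 + section_width]
        span = np.ceil(sec.max(axis=1)) - np.floor(sec.min(axis=1)) + 1
        worst = max(worst, int(span.max()))
    return worst

# Toy barrel-like distortion: lines bow vertically by up to ~6 input lines.
h, w = 240, 320
ys, xs = np.mgrid[0:h, 0:w]
src_y = ys + 6.0 * np.sin(np.pi * xs / w)
print(required_buffer_lines(src_y, section_width=32))   # buffer for narrow sections
print(required_buffer_lines(src_y, section_width=w))    # buffer for full-width lines
```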
  • Publication number: 20220277172
    Abstract: A method for training a neural network for detecting a plurality of classes of object within a sample comprises providing a training data set comprising a plurality of samples, each annotated according to whether the samples include labelled objects of interest. In a first type of samples, all objects of interest are labelled according to their class and comprise a foreground of said samples, the remainder of the samples comprising background. In a second type of samples, some objects of interest are labelled in a foreground and their background may comprise unlabelled objects. A third type of samples comprise only background comprising no objects of interest. Negative mining is only performed on the results of processing the first and third types of samples.
    Type: Application
    Filed: March 1, 2021
    Publication date: September 1, 2022
    Applicant: FotoNation Limited
    Inventors: Eoin O'Connell, Joseph Lemley
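A minimal sketch of how the three sample types described above might drive the loss computation, with hard-negative mining applied only to fully labelled (type 1) and background-only (type 3) samples. The loss form, mining ratio and labels are illustrative assumptions, not the network or loss actually used.

```python
import numpy as np

def detection_loss(scores, labels, sample_type, neg_per_pos=3):
    """Toy per-sample loss for anchor scores (1 = object, 0 = background).

    sample_type 1: fully labelled      -> positives + hard-negative mining
    sample_type 2: partially labelled  -> positives only, background ignored
    sample_type 3: pure background     -> hard-negative mining only
    """
    pos = labels == 1
    pos_loss = -np.log(scores[pos] + 1e-9).sum()

    if sample_type == 2:
        return pos_loss                      # unlabelled background: no negative mining

    neg_scores = scores[~pos]
    k = max(1, neg_per_pos * max(1, pos.sum()))
    hard = np.sort(neg_scores)[::-1][:k]     # highest-scoring (hardest) negatives
    neg_loss = -np.log(1.0 - hard + 1e-9).sum()
    return pos_loss + neg_loss

rng = np.random.default_rng(2)
scores = rng.random(100)
labels = (rng.random(100) > 0.95).astype(int)
print(detection_loss(scores, labels, sample_type=1))
print(detection_loss(scores, labels, sample_type=2))
```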
  • Patent number: 11429771
    Abstract: A hardware acceleration module may generate a channel-wise argmax map using a predefined set of hardware-implemented operations. In some examples, a hardware acceleration module may receive a set of feature maps for different image channels. The hardware acceleration module may execute a sequence of hardware operations, using one or more hardware portions for executing convolution, rectified linear unit (ReLU) activation, and/or layer concatenation, to determine a maximum channel feature value and/or argument maxima (argmax) value for a set of associated locations within the feature maps. An argmax map may be generated based at least in part on the argument maxima for the set of associated locations.
    Type: Grant
    Filed: January 31, 2020
    Date of Patent: August 30, 2022
    Assignee: FotoNation Limited
    Inventor: Tudor Mihail Pop
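A channel-wise argmax can be built from element-wise additions and ReLU alone, which are the kinds of primitives such an accelerator exposes: max(a, b) = a + relu(b - a), with a mask tracking where the running maximum changes. The numpy sketch below demonstrates that identity; it is not the patented hardware sequence.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def channelwise_argmax(feature_maps):
    """Running channel-wise max/argmax using only additions and ReLU:
    max(a, b) = a + relu(b - a)."""
    best_val = feature_maps[0].copy()
    best_idx = np.zeros_like(best_val)
    for c, fmap in enumerate(feature_maps[1:], start=1):
        gain = relu(fmap - best_val)              # positive where channel c wins
        best_val = best_val + gain
        mask = (gain > 0).astype(best_val.dtype)  # 1 where the argmax changes
        best_idx = best_idx * (1.0 - mask) + c * mask
    return best_val, best_idx.astype(int)

rng = np.random.default_rng(3)
fmaps = rng.random((5, 4, 4))                     # 5 channels of 4x4 features
val, idx = channelwise_argmax(fmaps)
assert np.array_equal(idx, np.argmax(fmaps, axis=0))
print(idx)
```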
  • Patent number: 11423567
    Abstract: A method for determining an absolute depth map to monitor the location and pose of a head (100) being imaged by a camera comprises: acquiring (20) an image from the camera (110) including a head with a facial region; determining (23) at least one distance from the camera (110) to a facial feature of the facial region using a distance measuring sub-system (120); determining (24) a relative depth map of facial features within the facial region; and combining (25) the relative depth map with the at least one distance to form an absolute depth map for the facial region.
    Type: Grant
    Filed: June 17, 2020
    Date of Patent: August 23, 2022
    Assignee: FotoNation Limited
    Inventors: Joe Lemley, Peter Corcoran
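One plausible way to combine a relative depth map with a single measured distance, assuming the relative map is correct up to a scale factor, is to rescale it so the depth at the measured facial feature equals the ranging sensor's reading. The values and the scaling assumption below are illustrative, not the patented combination.

```python
import numpy as np

def absolute_depth(relative_depth, feature_xy, measured_distance):
    """Scale a relative (up-to-scale) facial depth map so that the depth at a
    measured facial feature equals the distance reported by the ranging sensor."""
    x, y = feature_xy
    scale = measured_distance / relative_depth[y, x]
    return relative_depth * scale

rel = np.ones((4, 4)) * 0.5           # relative depth, arbitrary units
rel[1, 2] = 0.4                       # e.g. nose tip appears closer
print(absolute_depth(rel, feature_xy=(2, 1), measured_distance=0.6))  # metres
```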
  • Publication number: 20220254171
    Abstract: A method for producing a textural image from event information generated by an event camera comprises: accumulating event information from a plurality of events occurring during successive event cycles across a field of view of the event camera, each event indicating an x,y location within the field of view, a polarity for a change of detected light intensity incident at the x,y location and an event cycle at which the event occurred; in response to selected event cycles, analysing event information for one or more preceding event cycles to identify one or more regions of interest bounding a respective object to be tracked; and responsive to a threshold event criterion for a region of interest being met, generating a textural image for the region of interest from event information accumulated from within the region of interest.
    Type: Application
    Filed: February 24, 2022
    Publication date: August 11, 2022
    Applicant: FotoNation Limited
    Inventors: Amr Elrasad, Cian Ryan, Richard Blythman, Joe Lemley, Brian O'Sullivan
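A compact sketch of the accumulation step described above: signed event polarities are accumulated over event cycles, and once a region of interest has received enough events a normalised textural image is cut out for it. The event format, threshold and normalisation are illustrative assumptions.

```python
import numpy as np

def textural_image(events, roi, event_threshold, shape):
    """Accumulate signed event polarities and emit a textural image for a
    region of interest once it has received enough events.

    events: iterable of (x, y, polarity, cycle) tuples
    roi:    (x0, y0, x1, y1) bounding box of the tracked object
    """
    x0, y0, x1, y1 = roi
    acc = np.zeros(shape, dtype=float)
    count = 0
    for x, y, pol, cycle in events:
        acc[y, x] += 1.0 if pol > 0 else -1.0
        if x0 <= x < x1 and y0 <= y < y1:
            count += 1
    if count < event_threshold:
        return None                        # not enough activity in the ROI yet
    patch = acc[y0:y1, x0:x1]
    # normalise accumulated polarity into a displayable texture
    return (patch - patch.min()) / (np.ptp(patch) + 1e-9)

rng = np.random.default_rng(4)
evts = [(rng.integers(0, 32), rng.integers(0, 32), rng.choice([-1, 1]), t)
        for t in range(500)]
print(textural_image(evts, roi=(8, 8, 24, 24), event_threshold=50, shape=(32, 32)))
```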
  • Publication number: 20220254105
    Abstract: In an embodiment, a 3D facial modeling system includes a plurality of cameras configured to capture images from different viewpoints, a processor, and a memory containing a 3D facial modeling application and parameters defining a face detector, wherein the 3D facial modeling application directs the processor to obtain a plurality of images of a face captured from different viewpoints using the plurality of cameras, locate a face within each of the plurality of images using the face detector, wherein the face detector labels key feature points on the located face within each of the plurality of images, determine disparity between corresponding key feature points of located faces within the plurality of images, and generate a 3D model of the face using the depth of the key feature points.
    Type: Application
    Filed: February 22, 2022
    Publication date: August 11, 2022
    Applicant: FotoNation Limited
    Inventor: Kartik Venkataraman
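For rectified views, the disparity-to-depth relation behind the key-point step above is Z = f * B / d. A tiny sketch with invented key-point coordinates, focal length and baseline:

```python
import numpy as np

def keypoint_depths(kp_left, kp_right, focal_px, baseline_m):
    """Depth of matched facial key points from their horizontal disparity
    between two calibrated, rectified views: Z = f * B / d."""
    disparity = kp_left[:, 0] - kp_right[:, 0]          # pixels
    return focal_px * baseline_m / np.maximum(disparity, 1e-6)

left = np.array([[410.0, 300.0], [450.0, 305.0]])       # e.g. eye corners
right = np.array([[290.0, 300.0], [310.0, 305.0]])
print(keypoint_depths(left, right, focal_px=1200.0, baseline_m=0.06))  # metres
```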
  • Patent number: 11409854
    Abstract: A biometrics-enabled portable storage device may store and secure data via biometrics related to a user's iris. The biometrics-enabled portable storage device may include a camera that captures image data related to a user's iris and stores the image data to enroll the user for use of the biometrics-enabled portable storage device. To unlock the data, a user aligns the camera with their iris using a hot mirror and the camera captures iris data for comparison with the iris image data stored during enrollment. If the two sets of image data match, the biometrics-enabled portable storage device may be unlocked and the user may access data stored on the biometrics-enabled portable storage device. If the two sets of image data do not match, then the biometrics-enabled portable storage device remains locked.
    Type: Grant
    Filed: February 25, 2020
    Date of Patent: August 9, 2022
    Assignee: FotoNation Limited
    Inventors: Istvan Andorko, Petronel Bigioi, Darragh Ballesty
  • Patent number: 11405580
    Abstract: A method of producing an image frame from event packets received from an event camera comprises: forming a tile buffer sized to accumulate event information for a subset of image tiles, the tile buffer having an associated tile table that determines a mapping between each tile of the image frame for which event information is accumulated in the tile buffer and the image frame. For each event packet: an image tile corresponding to the pixel location of the event packet is identified; responsive to the tile buffer storing information for one other event corresponding to the image tile, event information is added to the tile buffer; and responsive to the tile buffer not storing information for another event corresponding to the image tile and responsive to the tile buffer being capable of accumulating event information for at least one more tile, the image tile is added to the tile buffer.
    Type: Grant
    Filed: September 9, 2020
    Date of Patent: August 2, 2022
    Assignee: FotoNation Limited
    Inventors: Lorant Bartha, Corneliu Zaharia, Vlad Georgescu, Joe Lemley
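A toy version of the tile buffer described above: events are grouped by the image tile they fall in, but storage is only allocated for a bounded number of tiles at a time. How the real engine handles events for untracked tiles when the buffer is full, and the packet format, are assumptions here; this sketch simply sets them aside.

```python
def accumulate_events(event_packets, tile_size, max_tiles):
    """Group event packets by image tile in a bounded tile buffer.

    Returns (tile_buffer, overflow) where overflow holds events that arrived
    while the buffer was full and their tile was not yet tracked."""
    tile_buffer = {}          # tile (tx, ty) -> list of events (acts as the tile table)
    overflow = []
    for (x, y, polarity, cycle) in event_packets:
        tile = (x // tile_size, y // tile_size)
        if tile in tile_buffer:                     # tile already tracked
            tile_buffer[tile].append((x, y, polarity, cycle))
        elif len(tile_buffer) < max_tiles:          # room to track one more tile
            tile_buffer[tile] = [(x, y, polarity, cycle)]
        else:
            overflow.append((x, y, polarity, cycle))
    return tile_buffer, overflow

packets = [(3, 4, 1, 0), (35, 4, -1, 0), (5, 6, 1, 1), (70, 70, 1, 1)]
buf, spill = accumulate_events(packets, tile_size=32, max_tiles=2)
print(sorted(buf), len(spill))   # two tracked tiles, one spilled event
```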
  • Publication number: 20220239890
    Abstract: Systems and methods for calibrating an array camera are disclosed. Systems and methods for calibrating an array camera in accordance with embodiments of this invention include the capturing of an image of a test pattern with the array camera such that each imaging component in the array camera captures an image of the test pattern. The image of the test pattern captured by a reference imaging component is then used to derive calibration information for the reference component. A corrected image of the test pattern for the reference component is then generated from the calibration information and the image of the test pattern captured by the reference imaging component. The corrected image is then used with the images captured by each of the associate imaging components associated with the reference component to generate calibration information for the associate imaging components.
    Type: Application
    Filed: March 2, 2022
    Publication date: July 28, 2022
    Applicant: FotoNation Limited
    Inventor: Robert Mullis
  • Publication number: 20220222496
    Abstract: Disclosed is a multi-modal convolutional neural network (CNN) for fusing image information from a frame-based camera, such as a near infra-red (NIR) camera, and an event camera for analysing facial characteristics in order to produce classifications such as head pose or eye gaze. The neural network processes image frames acquired from each camera through a plurality of convolutional layers to provide a respective set of one or more intermediate images. The network fuses at least one corresponding pair of intermediate images generated from each of the image frames through an array of fusing cells. Each fusing cell is connected to at least a respective element of each intermediate image and is trained to weight each element from each intermediate image to provide the fused output. The neural network further comprises at least one task network configured to generate one or more task outputs for the region of interest.
    Type: Application
    Filed: January 13, 2021
    Publication date: July 14, 2022
    Applicant: FotoNation Limited
    Inventors: Cian Ryan, Richard Blythman, Joseph Lemley, Paul Kielty
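Per element, the fusing-cell idea above reduces to a learned blend of the two intermediate feature maps. A minimal numpy sketch, in which a fixed gate array stands in for the trained per-element weights:

```python
import numpy as np

def fuse_intermediate(frame_feat, event_feat, gate):
    """One fusing cell per element: a weight in [0, 1] blends the corresponding
    elements of the frame-camera and event-camera intermediate feature maps."""
    return gate * frame_feat + (1.0 - gate) * event_feat

rng = np.random.default_rng(5)
nir_feat = rng.random((8, 8))
evt_feat = rng.random((8, 8))
gate = rng.random((8, 8))            # stands in for trained per-element weights
print(fuse_intermediate(nir_feat, evt_feat, gate).shape)
```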
  • Patent number: 11379719
    Abstract: A method of tracking an object across a stream of images comprises determining a region of interest (ROI) bounding the object in an initial frame of an image stream. A HOG map is provided for the ROI by: dividing the ROI into an array of M×N cells, each cell comprising a plurality of image pixels; and determining a HOG for each of the cells. The HOG map is stored as indicative of the features of the object. Subsequent frames are acquired from the stream of images. The frames are scanned ROI by ROI to identify a candidate ROI having a HOG map best matching the stored HOG map features. If the match meets a threshold, the stored HOG map indicative of the features of the object is updated according to the HOG map for the best matching candidate ROI.
    Type: Grant
    Filed: January 17, 2020
    Date of Patent: July 5, 2022
    Assignee: FotoNation Limited
    Inventors: Dragos Dinu, Mihai Constantin Munteanu, Alexandru Caliman
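A rough sketch of the HOG-map matching loop described above: build an M x N grid of cell histograms for the stored ROI, then scan candidate ROIs in a new frame and keep the one whose HOG map is closest. The cell size, bin count, distance metric and scan stride are illustrative choices, not the patented tracker.

```python
import numpy as np

def hog_cell(patch, bins=8):
    """Orientation histogram for one cell (unsigned gradients, simple binning)."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist

def hog_map(roi, cell=8, bins=8):
    """M x N grid of cell histograms describing the object inside the ROI."""
    h, w = roi.shape
    return np.array([[hog_cell(roi[y:y + cell, x:x + cell], bins)
                      for x in range(0, w - cell + 1, cell)]
                     for y in range(0, h - cell + 1, cell)])

def best_match(frame, template_hog, roi_size, stride=4):
    """Scan the frame ROI by ROI and return the candidate whose HOG map is
    closest (smallest L2 distance) to the stored template."""
    best, best_d = None, np.inf
    H, W = frame.shape
    for y in range(0, H - roi_size + 1, stride):
        for x in range(0, W - roi_size + 1, stride):
            d = np.linalg.norm(hog_map(frame[y:y + roi_size, x:x + roi_size]) - template_hog)
            if d < best_d:
                best, best_d = (x, y), d
    return best, best_d

rng = np.random.default_rng(6)
frame0 = rng.random((64, 64))
template = hog_map(frame0[16:48, 16:48])           # ROI from the initial frame
frame1 = np.roll(frame0, (2, -3), axis=(0, 1))     # object shifts slightly
print(best_match(frame1, template, roi_size=32))
```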
  • Patent number: 11375133
    Abstract: A method for automatically determining exposure settings for an image acquisition system comprises maintaining a plurality of look-up tables, each look-up table being associated with a corresponding light condition and storing image exposure settings associated with corresponding distance values between a subject and the image acquisition system. An image of a subject is acquired from a camera module; and a light condition occurring during the acquisition is determined based on the acquired image. A distance between the subject and the camera module during the acquisition is calculated.
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: June 28, 2022
    Assignee: FotoNation Limited
    Inventor: Istvan Andorko
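A minimal sketch of the look-up-table structure described above: one table per light condition, keyed by subject distance and returning exposure settings. All table values, names and the nearest-distance lookup are invented for illustration.

```python
# Hypothetical look-up tables: one per light condition, keyed by subject
# distance (metres) -> (exposure time in ms, analogue gain).
EXPOSURE_LUTS = {
    "bright": {0.3: (4.0, 1.0), 0.6: (6.0, 1.0), 1.0: (8.0, 1.5)},
    "indoor": {0.3: (12.0, 2.0), 0.6: (16.0, 2.5), 1.0: (20.0, 4.0)},
    "dark":   {0.3: (25.0, 4.0), 0.6: (30.0, 6.0), 1.0: (33.0, 8.0)},
}

def exposure_settings(light_condition, subject_distance_m):
    """Pick the stored settings for the nearest tabulated distance under the
    detected light condition."""
    lut = EXPOSURE_LUTS[light_condition]
    nearest = min(lut, key=lambda d: abs(d - subject_distance_m))
    return lut[nearest]

print(exposure_settings("indoor", 0.5))   # -> (16.0, 2.5)
```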
  • Patent number: 11361525
    Abstract: This disclosure describes, in part, devices and techniques for performing biometric identification for an electronic device. For instance, the electronic device may include one or more near-infrared illuminators that output near-infrared light. The one or more near-infrared illuminators may be located in or on a bezel and/or a display of the electronic device. The electronic device may also include an imaging device that generates first image data representing the near-infrared light and visible light. After generating the image data, the electronic device may process the first image data using one or more image processing techniques to generate second image data representing a near-infrared image and third image data representing a visible image. The electronic device may then analyze the second image data and/or the third image data using one or more biometric identification techniques. Based on the analysis, the electronic device may identify a person possessing the electronic device.
    Type: Grant
    Filed: May 26, 2020
    Date of Patent: June 14, 2022
    Assignee: FotoNation Limited
    Inventors: Istvan Andorko, Petronel Bigioi
  • Patent number: 11354773
    Abstract: A method for correcting an image divides an output image into a grid with vertical sections of width smaller than the image width but wide enough to allow efficient bursts when writing distortion corrected line sections into memory. A distortion correction engine includes a relatively small amount of memory for an input image buffer but without requiring unduly complex control. The input image buffer accommodates enough lines of an input image to cover the distortion of a single most vertically distorted line section of the input image. The memory required for the input image buffer can be significantly less than would be required to store all the lines of a distorted input image spanning a maximal distortion of a complete line within the input image.
    Type: Grant
    Filed: July 24, 2020
    Date of Patent: June 7, 2022
    Assignee: FotoNation Limited
    Inventors: Piotr Stec, Vlad Georgescu