Patents Assigned to Fotonation Limited
  • Patent number: 11301702
    Abstract: A method for producing a textural image from event information generated by an event camera comprises: accumulating event information from a plurality of events occurring during successive event cycles across a field of view of the event camera, each event indicating an x,y location within the field of view, a polarity for a change of detected light intensity incident at the x,y location and an event cycle at which the event occurred; in response to selected event cycles, analysing event information for one or more preceding event cycles to identify one or more regions of interest bounding a respective object to be tracked; and responsive to a threshold event criterion for a region of interest being met, generating a textural image for the region of interest from event information accumulated from within the region of interest.
    Type: Grant
    Filed: July 29, 2020
    Date of Patent: April 12, 2022
    Assignee: FotoNation Limited
    Inventors: Cian Ryan, Richard Blythman, Joe Lemley, Amr Elrasad, Brian O'Sullivan
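The accumulate-then-threshold flow in this abstract can be sketched as follows. This is a minimal illustration, not the patented method: the helper names (`accumulate_events`, `textural_image`) and the normalisation into an 8-bit image are assumptions for the example.

```python
import numpy as np

def accumulate_events(events, shape):
    """Accumulate signed event polarities into a per-pixel surface.

    Each event is (x, y, polarity, event_cycle), as in the abstract."""
    surface = np.zeros(shape, dtype=np.int32)
    for x, y, polarity, cycle in events:
        surface[y, x] += 1 if polarity > 0 else -1
    return surface

def textural_image(surface, roi, event_threshold):
    """Generate a textural image for a region of interest once the
    threshold event criterion is met; otherwise return None."""
    x0, y0, x1, y1 = roi
    patch = surface[y0:y1, x0:x1]
    if np.count_nonzero(patch) < event_threshold:
        return None  # threshold event criterion not yet met
    # Normalise accumulated polarities into an 8-bit textural image.
    lo, hi = patch.min(), patch.max()
    scale = 255.0 / (hi - lo) if hi > lo else 0.0
    return ((patch - lo) * scale).astype(np.uint8)
```

In practice the region of interest would come from an object tracker run over preceding event cycles; here it is supplied directly.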
  • Patent number: 11303811
    Abstract: A camera comprises a lens assembly coupled to an event-sensor, the lens assembly being configured to focus a light field onto a surface of the event-sensor, the event-sensor surface comprising a plurality of light sensitive-pixels, each of which cause an event to be generated when there is a change in light intensity greater than a threshold amount incident on the pixel. The camera further includes an actuator which can be triggered to cause a change in the light field incident on the surface of the event-sensor and to generate a set of events from a sub-set of pixels distributed across the surface of the event-sensor.
    Type: Grant
    Filed: November 5, 2019
    Date of Patent: April 12, 2022
    Assignee: FotoNation Limited
    Inventor: Piotr Stec
  • Patent number: 11300784
    Abstract: A device, such as a head-mounted device (HMD), may include a frame and a plurality of mirrors coupled to an interior portion of the frame. An imaging device may be coupled to the frame at a position to capture images of an eye of the wearer reflected from the mirrors. The HMD may also include a mirror angle adjustment device to adjust an angle of one or more of the mirrors relative to the imaging device so that the mirror(s) reflect the eye of the wearer to the imaging device.
    Type: Grant
    Filed: February 21, 2020
    Date of Patent: April 12, 2022
    Assignee: FotoNation Limited
    Inventors: Cosmin Nicolae Rotariu, Istvan Andorko
  • Patent number: 11302009
Abstract: A method of generating landmark locations for an image crop comprises: processing the crop through an encoder-decoder to provide a plurality of N output maps of comparable spatial resolution to the crop, each output map corresponding to a respective landmark of an object appearing in the image crop; processing an output map from the encoder through a plurality of feed forward layers to provide a feature vector comprising N elements, each element including an (x,y) location for a respective landmark. Any landmark locations from the feature vector having an x or a y location outside a range for a respective row or column of the crop are selected for a final set of landmark locations; with remaining landmark locations tending to be selected from the N (x,y) landmark locations from the plurality of N output maps.
    Type: Grant
    Filed: August 19, 2019
    Date of Patent: April 12, 2022
    Assignee: FotoNation Limited
    Inventors: Ruxandra Vranceanu, Tudor Mihail Pop, Oana Parvan-Cernatescu, Sathish Mangapuram
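The per-landmark selection rule in this abstract can be illustrated as below. The function name and the simple in-range test are assumptions for the sketch; the underlying idea is that heat maps can only localise landmarks inside the crop, while the regressed feature vector can place occluded landmarks outside it.

```python
def select_landmarks(vector_pts, heatmap_pts, width, height):
    """Choose, per landmark, between the regressed (x, y) from the
    feature vector and the (x, y) from the corresponding output map.

    Regressed points falling outside the crop are kept, since heat
    maps cannot localise off-crop landmarks; in-crop landmarks tend
    to be taken from the heat maps."""
    final = []
    for (vx, vy), (hx, hy) in zip(vector_pts, heatmap_pts):
        if 0 <= vx < width and 0 <= vy < height:
            final.append((hx, hy))  # in range: use the output-map peak
        else:
            final.append((vx, vy))  # off-crop: only regression can place it
    return final
```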
  • Publication number: 20220101497
Abstract: A video super resolution method comprises successively executing instances of a first plurality of layers (SISR) of a neural network for generating a first image (St) at a higher resolution than an input image frame (Xt); successively executing a second plurality of layers (VSR) of the neural network for generating a second image (Vt) at the higher resolution, at least one of the second plurality of layers generating intermediate output information (Ht), the second plurality of layers taking into account an output image (Yt-1) at the higher resolution generated by a previous instance of the network from a previous input image frame (Xt-1) and intermediate output information (Ht-1) generated by the second plurality of layers of the previous instance, and executing a third plurality of layers for combining the first (St) and second (Vt) images to produce an output image (Yt) for the instance of the network.
    Type: Application
    Filed: December 13, 2021
    Publication date: March 31, 2022
    Applicant: FotoNation Limited
    Inventors: Cian Ryan, Richard Blythman
  • Publication number: 20220103749
Abstract: A method and system for detecting facial expressions in digital images and applications therefor are disclosed. Analysis of a digital image determines whether or not a smile and/or blink is present on a person's face. Face recognition, and/or a pose or illumination condition determination, permits application of a specific, relatively small classifier cascade.
    Type: Application
    Filed: December 7, 2021
    Publication date: March 31, 2022
    Applicant: FotoNation Limited
    Inventors: Catalina Neghina, Mihnea Gangea, Stefan Petrescu, Emilian David, Petronel Bigioi, Eric Zarakov, Eran Steinberg
  • Patent number: 11288504
    Abstract: An approach for an iris liveness detection is provided. A plurality of image pairs is acquired using one or more image sensors of a mobile device. A particular image pair is selected from the plurality of image pairs, and a hyperspectral image is generated for the particular image pair. Based on, at least in part, the hyperspectral image, a particular feature vector for the eye-iris region depicted in the particular image pair is generated, and one or more trained model feature vectors generated for facial features of a particular user of the device are retrieved. Based on, at least in part, the particular feature vector and the one or more trained model feature vectors, a distance metric is determined and compared with a threshold. If the distance metric exceeds the threshold, then a first message indicating that the plurality of image pairs fails to depict the particular user is generated.
    Type: Grant
    Filed: October 16, 2020
    Date of Patent: March 29, 2022
    Assignee: FotoNation Limited
    Inventor: Shejin Thavalengal
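The final comparison step of this liveness check can be sketched as follows. The abstract does not specify the distance metric, so Euclidean distance against the nearest enrolled vector is an assumption, as are the function and variable names.

```python
import math

def liveness_decision(feature, enrolled_vectors, threshold):
    """Compare a probe feature vector against trained model feature
    vectors; a distance metric exceeding the threshold means the
    image pairs fail to depict the enrolled user."""
    def euclidean(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # Distance to the closest enrolled model vector (assumed policy).
    distance = min(euclidean(feature, m) for m in enrolled_vectors)
    return ("reject" if distance > threshold else "accept", distance)
```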
  • Publication number: 20220092361
    Abstract: The technology relates to tuning a data translation block (DTB) including a generator model and a discriminator model. One or more processors may be configured to receive training data including an image in a second domain. The image in the second domain may be transformed into a first domain with a generator model. The transformed image may be processed to determine one or more outputs with one or more deep neural networks (DNNs) trained to process data in the first domain. An original objective function for the DTB may be updated based on the one or more outputs. The generator and discriminator models may be trained to satisfy the updated objective function.
    Type: Application
    Filed: December 3, 2021
    Publication date: March 24, 2022
    Applicant: FotoNation Limited
    Inventors: Alexandru Malaescu, Adrian Dorin Capata, Mihai Ciuc, Alina Sultana, Dan Filip, Liviu-Cristian Dutu
  • Publication number: 20220078369
    Abstract: A method of producing an image frame from event packets received from an event camera comprises: forming a tile buffer sized to accumulate event information for a subset of image tiles, the tile buffer having an associated tile table that determines a mapping between each tile of the image frame for which event information is accumulated in the tile buffer and the image frame. For each event packet: an image tile corresponding to the pixel location of the event packet is identified; responsive to the tile buffer storing information for one other event corresponding to the image tile, event information is added to the tile buffer; and responsive to the tile buffer not storing information for another event corresponding to the image tile and responsive to the tile buffer being capable of accumulating event information for at least one more tile, the image tile is added to the tile buffer.
    Type: Application
    Filed: September 9, 2020
    Publication date: March 10, 2022
    Applicant: FotoNation Limited
Inventors: Lorant Bartha, Corneliu Zaharia, Vlad Georgescu, Joe Lemley
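The tile-buffer admission logic in this abstract can be sketched as below. The class and method names are hypothetical, and a dictionary stands in for the tile table that maps tiles to buffer slots.

```python
class TileBuffer:
    """Accumulates event information for a bounded subset of image
    tiles; the tile table maps a tile's coordinates to its slot."""

    def __init__(self, tile_size, max_tiles):
        self.tile_size = tile_size
        self.max_tiles = max_tiles
        self.tile_table = {}  # (tile_x, tile_y) -> list of events

    def add_event(self, x, y, event):
        """Returns True if the event was accumulated, False if the
        buffer cannot accept a new tile."""
        tile = (x // self.tile_size, y // self.tile_size)
        if tile in self.tile_table:
            # Buffer already stores information for this tile.
            self.tile_table[tile].append(event)
            return True
        if len(self.tile_table) < self.max_tiles:
            # Buffer can still accumulate information for one more tile.
            self.tile_table[tile] = [event]
            return True
        return False  # buffer full: tile cannot be added
```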
  • Patent number: 11272161
    Abstract: Systems and methods for calibrating an array camera are disclosed. Systems and methods for calibrating an array camera in accordance with embodiments of this invention include the capturing of an image of a test pattern with the array camera such that each imaging component in the array camera captures an image of the test pattern. The image of the test pattern captured by a reference imaging component is then used to derive calibration information for the reference component. A corrected image of the test pattern for the reference component is then generated from the calibration information and the image of the test pattern captured by the reference imaging component. The corrected image is then used with the images captured by each of the associate imaging components associated with the reference component to generate calibration information for the associate imaging components.
    Type: Grant
    Filed: March 22, 2021
    Date of Patent: March 8, 2022
    Assignee: FotoNation Limited
    Inventor: Robert Mullis
  • Patent number: 11270137
    Abstract: A method for producing a textural image from event information generated by an event camera comprises: accumulating event information from a plurality of events occurring during successive event cycles across a field of view of the event camera, each event indicating an x,y location within the field of view, a polarity for a change of detected light intensity incident at the x,y location and an event cycle at which the event occurred; in response to selected event cycles, analysing event information for one or more preceding event cycles to identify one or more regions of interest bounding a respective object to be tracked; and responsive to a threshold event criterion for a region of interest being met, generating a textural image for the region of interest from event information accumulated from within the region of interest.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: March 8, 2022
    Assignee: FotoNation Limited
    Inventors: Amr Elrasad, Cian Ryan, Richard Blythman, Joe Lemley, Brian O'Sullivan
  • Patent number: 11257192
    Abstract: A method of correcting an image obtained by an image acquisition device includes obtaining successive measurements, Gn, of device movement during exposure of each row of an image. An integration range, idx, is selected in proportion to an exposure time, te, for each row of the image. Accumulated measurements, Cn, of device movement for each row of an image are averaged across the integration range to provide successive filtered measurements, G, of device movement during exposure of each row of an image. The image is corrected for device movement using the filtered measurements G.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: February 22, 2022
    Assignee: FotoNation Limited
    Inventor: Piotr Stec
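The per-row averaging described above can be illustrated with a short sketch. The trailing-window placement is an assumption; the abstract only states that accumulated measurements Cn are averaged across an integration range idx proportional to the row exposure time.

```python
def filtered_measurements(C, idx):
    """Average accumulated movement measurements C over a window of
    idx samples per row to give the filtered measurements G used to
    correct the image for device movement."""
    G = []
    for n in range(len(C)):
        lo = max(0, n - idx + 1)       # clamp the window at the first row
        window = C[lo:n + 1]
        G.append(sum(window) / len(window))
    return G
```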
  • Patent number: 11257289
    Abstract: In an embodiment, a 3D facial modeling system includes a plurality of cameras configured to capture images from different viewpoints, a processor, and a memory containing a 3D facial modeling application and parameters defining a face detector, wherein the 3D facial modeling application directs the processor to obtain a plurality of images of a face captured from different viewpoints using the plurality of cameras, locate a face within each of the plurality of images using the face detector, wherein the face detector labels key feature points on the located face within each of the plurality of images, determine disparity between corresponding key feature points of located faces within the plurality of images, and generate a 3D model of the face using the depth of the key feature points.
    Type: Grant
    Filed: May 4, 2020
    Date of Patent: February 22, 2022
    Assignee: FotoNation Limited
    Inventor: Kartik Venkataraman
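The depth-from-disparity step that feeds the 3D model can be sketched with the classic stereo relation Z = f·B/d. This is standard stereo geometry rather than the patent's specific pipeline; the parameter names are illustrative.

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Recover depth (metres) for a key feature point from the
    disparity between corresponding points in two viewpoints:
    Z = focal_length * baseline / disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px
```

Larger disparities correspond to nearer points; with the depths of all key feature points recovered this way, a 3D face model can be fitted.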
  • Patent number: 11244429
Abstract: A method of providing a sharpness measure for an image comprises detecting an object region within an image; obtaining meta-data for the image; and scaling the detected object region to a fixed size. A gradient map is calculated for the scaled object region and compared against a threshold determined for the image to provide a filtered gradient map of values exceeding the threshold. The threshold for the image is a function of at least: a contrast level for the detected object region, a distance to the subject and an ISO/gain used for image acquisition. A sharpness measure for the object region is determined as a function of the filtered gradient map values, the sharpness measure being proportional to the filtered gradient map values.
    Type: Grant
    Filed: May 15, 2020
    Date of Patent: February 8, 2022
    Assignee: FotoNation Limited
    Inventors: Florin Nanu, Adrian Bobei, Alexandru Malaescu, Cosmin Clapon
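The gradient-map pipeline above can be sketched as follows. The specific threshold formula below is an invented stand-in (the patent only says the threshold is a function of contrast, subject distance and ISO/gain), and taking the mean of the filtered gradients is one possible "proportional" measure.

```python
import numpy as np

def sharpness_measure(region, contrast, distance_m, iso_gain,
                      base_threshold=10.0):
    """Sharpness of a scaled object region: compute a gradient map,
    keep values above an image-dependent threshold, and return a
    measure proportional to the surviving gradient values."""
    gy, gx = np.gradient(region.astype(float))
    grad = np.hypot(gx, gy)  # gradient magnitude map
    # Assumed threshold model: rises with ISO gain and subject
    # distance, falls with region contrast.
    threshold = (base_threshold * (1 + iso_gain / 100.0)
                 * (1 + distance_m / 10.0) / max(contrast, 1e-6))
    filtered = grad[grad > threshold]  # filtered gradient map
    return filtered.mean() if filtered.size else 0.0
```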
  • Publication number: 20220019776
    Abstract: A method to determine activity in a sequence of successively acquired images of a scene, comprises: acquiring the sequence of images; for each image in the sequence of images, forming a feature block of features extracted from the image and determining image specific information including a weighting for the image; normalizing the determined weightings to form a normalized weighting for each image in the sequence of images; for each image in the sequence of images, combining the associated normalized weighting and associated feature block to form a weighted feature block; passing a combination of the weighted feature blocks through a predictive module to determine an activity in the sequence of images; and outputting a result comprising the determined activity in the sequence of images.
    Type: Application
    Filed: July 14, 2020
    Publication date: January 20, 2022
    Applicant: FotoNation Limited
Inventors: Alexandru Malaescu, Dan Filip, Mihai Ciuc, Liviu-Cristian Dutu, Madalin Dumitru-Guzu
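The weighting-and-combination steps above can be sketched as below. Softmax normalisation and a weighted sum are assumptions; the abstract only requires normalised per-image weightings and a combination of the weighted feature blocks fed to a predictive module.

```python
import numpy as np

def weighted_activity_features(feature_blocks, weights):
    """Normalise per-image weightings (softmax here, an assumption)
    and combine the per-image feature blocks into one block for the
    predictive module."""
    w = np.exp(weights - np.max(weights))  # stable softmax
    w = w / w.sum()
    blocks = np.stack(feature_blocks)      # sequence axis first: (T, ...)
    return np.tensordot(w, blocks, axes=1) # weighted sum over the sequence
```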
  • Patent number: 11223764
    Abstract: A method for determining bias in an inertial measurement unit of an image acquisition device comprises mapping at least one reference point within an image frame into a 3D spherical space based on a lens projection model for the image acquisition device to provide a respective anchor point in 3D space for each reference point.
    Type: Grant
    Filed: August 24, 2020
    Date of Patent: January 11, 2022
    Assignee: FotoNation Limited
    Inventor: Piotr Stec
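The mapping of an image reference point into 3D spherical space can be illustrated with a pinhole back-projection. A real lens projection model (e.g. for a wide-angle lens) would differ; the pinhole form and parameter names here are assumptions.

```python
import math

def pixel_to_sphere(x, y, cx, cy, focal_px):
    """Back-project a reference point (x, y) through an assumed
    pinhole projection model onto the unit sphere, giving an anchor
    point in 3D space."""
    vx, vy, vz = x - cx, y - cy, focal_px  # ray through the pixel
    norm = math.sqrt(vx * vx + vy * vy + vz * vz)
    return (vx / norm, vy / norm, vz / norm)
```

Comparing such anchor points against where gyro-integrated rotations predict they should lie is one way the IMU bias can then be estimated.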
  • Patent number: 11209633
    Abstract: An iris image acquisition system for a mobile device, comprises a lens assembly arranged along an optical axis and configured for forming an image comprising at least one iris of a subject disposed frontally to the lens assembly; and an image sensor configured to acquire the formed image. The lens assembly comprises a first lens refractive element and at least one second lens element for converging incident radiation to the first refractive element. The first refractive element has a variable thickness configured to counteract a shift of the formed image along the optical axis induced by change in iris-lens assembly distance, such that different areas of the image sensor on which irises at different respective iris-lens assembly distances are formed are in focus within a range of respective iris-lens assembly distances at which iris detail is provided at sufficient contrast to be recognised.
    Type: Grant
    Filed: May 7, 2018
    Date of Patent: December 28, 2021
    Assignee: FotoNation Limited
    Inventors: Niamh Fitzgerald, Christopher Dainty, Alexander Goncharov
  • Publication number: 20210397861
    Abstract: A method for producing a textural image from event information generated by an event camera comprises: accumulating event information from a plurality of events occurring during successive event cycles across a field of view of the event camera, each event indicating an x,y location within the field of view, a polarity for a change of detected light intensity incident at the x,y location and an event cycle at which the event occurred; in response to selected event cycles, analysing event information for one or more preceding event cycles to identify one or more regions of interest bounding a respective object to be tracked; and responsive to a threshold event criterion for a region of interest being met, generating a textural image for the region of interest from event information accumulated from within the region of interest.
    Type: Application
    Filed: September 29, 2020
    Publication date: December 23, 2021
    Applicant: FotoNation Limited
Inventors: Amr Elrasad, Cian Ryan, Richard Blythman, Joe Lemley, Brian O'Sullivan
  • Publication number: 20210397860
    Abstract: A method for producing a textural image from event information generated by an event camera comprises: accumulating event information from a plurality of events occurring during successive event cycles across a field of view of the event camera, each event indicating an x,y location within the field of view, a polarity for a change of detected light intensity incident at the x,y location and an event cycle at which the event occurred; in response to selected event cycles, analysing event information for one or more preceding event cycles to identify one or more regions of interest bounding a respective object to be tracked; and responsive to a threshold event criterion for a region of interest being met, generating a textural image for the region of interest from event information accumulated from within the region of interest.
    Type: Application
    Filed: July 29, 2020
    Publication date: December 23, 2021
    Applicant: FotoNation Limited
Inventors: Cian Ryan, Richard Blythman, Joe Lemley, Amr Elrasad, Brian O'Sullivan
  • Publication number: 20210398313
    Abstract: A method for determining an absolute depth map to monitor the location and pose of a head (100) being imaged by a camera comprises: acquiring (20) an image from the camera (110) including a head with a facial region; determining (23) at least one distance from the camera (110) to a facial feature of the facial region using a distance measuring sub-system (120); determining (24) a relative depth map of facial features within the facial region; and combining (25) the relative depth map with the at least one distance to form an absolute depth map for the facial region.
    Type: Application
    Filed: June 17, 2020
    Publication date: December 23, 2021
    Applicant: FotoNation Limited
Inventors: Joe Lemley, Peter Corcoran
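The combining step (25) above can be sketched as anchoring a relative depth map with one measured distance. Treating the relative map as offset-ambiguous is an assumption; a scale-ambiguous map would be multiplied by a ratio instead of shifted.

```python
import numpy as np

def absolute_depth_map(relative_depth, feature_rc, measured_distance_m):
    """Combine a relative depth map of facial features with a single
    measured camera-to-feature distance to form an absolute depth map
    for the facial region."""
    r, c = feature_rc  # pixel of the feature whose distance was measured
    offset = measured_distance_m - relative_depth[r, c]
    return relative_depth + offset
```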