Patents by Inventor Joe Lemley

Joe Lemley has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11818495
    Abstract: A method of producing an image frame from event packets received from an event camera comprises: forming a tile buffer sized to accumulate event information for a subset of image tiles, the tile buffer having an associated tile table that determines a mapping between each tile of the image frame for which event information is accumulated in the tile buffer and the image frame. For each event packet: an image tile corresponding to the pixel location of the event packet is identified; responsive to the tile buffer storing information for one other event corresponding to the image tile, event information is added to the tile buffer; and responsive to the tile buffer not storing information for another event corresponding to the image tile and responsive to the tile buffer being capable of accumulating event information for at least one more tile, the image tile is added to the tile buffer.
    Type: Grant
    Filed: June 28, 2022
    Date of Patent: November 14, 2023
    Inventors: Lorant Bartha, Corneliu Zaharia, Vlad Georgescu, Joe Lemley
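The tile-buffer logic in the abstract of patent 11818495 can be sketched in a few lines. Everything concrete below is an assumption for illustration, not the patented implementation: the tile size, the buffer capacity, the `(x, y, polarity)` event-packet layout, and all names are made up; the claims specify only the accumulate-or-register-or-overflow branching.

```python
def accumulate_events(event_packets, tile_size=32, max_tiles=8):
    """Accumulate event packets into a bounded tile buffer.

    Each event packet is assumed to be an (x, y, polarity) tuple.
    Returns the tile table, mapping tile coordinates to that tile's
    event list, plus any events that could not be accumulated
    because the buffer was already tracking its maximum of tiles.
    """
    tile_table = {}  # tile (tx, ty) -> list of events for that tile
    dropped = []
    for (x, y, polarity) in event_packets:
        # Identify the image tile containing the event's pixel location.
        tile = (x // tile_size, y // tile_size)
        if tile in tile_table:
            # Tile already tracked: just add the event information.
            tile_table[tile].append((x, y, polarity))
        elif len(tile_table) < max_tiles:
            # Room for one more tile: register it and store the event.
            tile_table[tile] = [(x, y, polarity)]
        else:
            # Buffer full and tile untracked: event cannot be accumulated.
            dropped.append((x, y, polarity))
    return tile_table, dropped
```

Bounding the buffer to a subset of tiles is what lets the frame be assembled with far less memory than a full-resolution accumulation surface.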
  • Patent number: 11749004
    Abstract: A method for producing a textural image from event information generated by an event camera comprises: accumulating event information from a plurality of events occurring during successive event cycles across a field of view of the event camera, each event indicating an x,y location within the field of view, a polarity for a change of detected light intensity incident at the x,y location and an event cycle at which the event occurred; in response to selected event cycles, analysing event information for one or more preceding event cycles to identify one or more regions of interest bounding a respective object to be tracked; and responsive to a threshold event criterion for a region of interest being met, generating a textural image for the region of interest from event information accumulated from within the region of interest.
    Type: Grant
    Filed: February 24, 2022
    Date of Patent: September 5, 2023
    Inventors: Amr Elrasad, Cian Ryan, Richard Blythman, Joe Lemley, Brian O'Sullivan
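The region-of-interest stage described in the abstract of patent 11749004 can be sketched as follows. The event tuple layout, the threshold value, and the signed-polarity accumulation used to form the texture are all assumptions chosen for illustration; the abstract requires only that a textural image be generated once a threshold event criterion for the region is met.

```python
import numpy as np

def textural_image(events, roi, event_threshold=50):
    """Build a textural image for a region of interest from accumulated events.

    `events` is assumed to be a list of (x, y, polarity, cycle) tuples and
    `roi` an (x0, y0, x1, y1) box. Returns None while the ROI has not yet
    met the event-count criterion.
    """
    x0, y0, x1, y1 = roi
    in_roi = [(x, y, p) for (x, y, p, _) in events
              if x0 <= x < x1 and y0 <= y < y1]
    if len(in_roi) < event_threshold:
        return None  # threshold event criterion not met
    img = np.zeros((y1 - y0, x1 - x0), dtype=np.float32)
    for x, y, p in in_roi:
        # Accumulate signed polarity into a texture-like intensity map.
        img[y - y0, x - x0] += 1.0 if p > 0 else -1.0
    return img
```

Gating the image generation on event density means quiet regions of the field of view never pay the cost of texture reconstruction.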
  • Publication number: 20220329750
    Abstract: A method of producing an image frame from event packets received from an event camera comprises: forming a tile buffer sized to accumulate event information for a subset of image tiles, the tile buffer having an associated tile table that determines a mapping between each tile of the image frame for which event information is accumulated in the tile buffer and the image frame. For each event packet: an image tile corresponding to the pixel location of the event packet is identified; responsive to the tile buffer storing information for one other event corresponding to the image tile, event information is added to the tile buffer; and responsive to the tile buffer not storing information for another event corresponding to the image tile and responsive to the tile buffer being capable of accumulating event information for at least one more tile, the image tile is added to the tile buffer.
    Type: Application
    Filed: June 28, 2022
    Publication date: October 13, 2022
    Inventors: Lorant Bartha, Corneliu Zaharia, Vlad Georgescu, Joe Lemley
  • Patent number: 11423567
    Abstract: A method for determining an absolute depth map to monitor the location and pose of a head (100) being imaged by a camera comprises: acquiring (20) an image from the camera (110) including a head with a facial region; determining (23) at least one distance from the camera (110) to a facial feature of the facial region using a distance measuring sub-system (120); determining (24) a relative depth map of facial features within the facial region; and combining (25) the relative depth map with the at least one distance to form an absolute depth map for the facial region.
    Type: Grant
    Filed: June 17, 2020
    Date of Patent: August 23, 2022
    Assignee: FotoNation Limited
    Inventors: Joe Lemley, Peter Corcoran
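The combining step in the abstract of patent 11423567 admits a simple sketch. The particular combination below, scaling the relative map so its value at the measured facial feature equals the measured absolute distance, is one assumed realization; the claim says only that the relative depth map and the measured distance are combined into an absolute depth map.

```python
import numpy as np

def absolute_depth_map(relative_depth, feature_xy, measured_distance):
    """Anchor a relative depth map to one absolute distance measurement.

    `relative_depth` is a 2-D array of unitless relative depths for the
    facial region; `feature_xy` is the (x, y) pixel of the facial feature
    whose camera distance was measured by the distance-measuring sub-system.
    """
    fx, fy = feature_xy
    rel_at_feature = relative_depth[fy, fx]
    if rel_at_feature == 0:
        raise ValueError("relative depth at the feature must be non-zero")
    # Scale so the map's value at the feature equals the measured distance.
    scale = measured_distance / rel_at_feature
    return relative_depth * scale
```

A single range measurement is enough because the relative map already encodes the shape of the face; the measurement only fixes the overall scale.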
  • Publication number: 20220254171
    Abstract: A method for producing a textural image from event information generated by an event camera comprises: accumulating event information from a plurality of events occurring during successive event cycles across a field of view of the event camera, each event indicating an x,y location within the field of view, a polarity for a change of detected light intensity incident at the x,y location and an event cycle at which the event occurred; in response to selected event cycles, analysing event information for one or more preceding event cycles to identify one or more regions of interest bounding a respective object to be tracked; and responsive to a threshold event criterion for a region of interest being met, generating a textural image for the region of interest from event information accumulated from within the region of interest.
    Type: Application
    Filed: February 24, 2022
    Publication date: August 11, 2022
    Applicant: FotoNation Limited
    Inventors: Amr Elrasad, Cian Ryan, Richard Blythman, Joe Lemley, Brian O'Sullivan
  • Patent number: 11405580
    Abstract: A method of producing an image frame from event packets received from an event camera comprises: forming a tile buffer sized to accumulate event information for a subset of image tiles, the tile buffer having an associated tile table that determines a mapping between each tile of the image frame for which event information is accumulated in the tile buffer and the image frame. For each event packet: an image tile corresponding to the pixel location of the event packet is identified; responsive to the tile buffer storing information for one other event corresponding to the image tile, event information is added to the tile buffer; and responsive to the tile buffer not storing information for another event corresponding to the image tile and responsive to the tile buffer being capable of accumulating event information for at least one more tile, the image tile is added to the tile buffer.
    Type: Grant
    Filed: September 9, 2020
    Date of Patent: August 2, 2022
    Assignee: FotoNation Limited
    Inventors: Lorant Bartha, Corneliu Zaharia, Vlad Georgescu, Joe Lemley
  • Patent number: 11301702
    Abstract: A method for producing a textural image from event information generated by an event camera comprises: accumulating event information from a plurality of events occurring during successive event cycles across a field of view of the event camera, each event indicating an x,y location within the field of view, a polarity for a change of detected light intensity incident at the x,y location and an event cycle at which the event occurred; in response to selected event cycles, analysing event information for one or more preceding event cycles to identify one or more regions of interest bounding a respective object to be tracked; and responsive to a threshold event criterion for a region of interest being met, generating a textural image for the region of interest from event information accumulated from within the region of interest.
    Type: Grant
    Filed: July 29, 2020
    Date of Patent: April 12, 2022
    Assignee: FotoNation Limited
    Inventors: Cian Ryan, Richard Blythman, Joe Lemley, Amr Elrasad, Brian O'Sullivan
  • Publication number: 20220078369
    Abstract: A method of producing an image frame from event packets received from an event camera comprises: forming a tile buffer sized to accumulate event information for a subset of image tiles, the tile buffer having an associated tile table that determines a mapping between each tile of the image frame for which event information is accumulated in the tile buffer and the image frame. For each event packet: an image tile corresponding to the pixel location of the event packet is identified; responsive to the tile buffer storing information for one other event corresponding to the image tile, event information is added to the tile buffer; and responsive to the tile buffer not storing information for another event corresponding to the image tile and responsive to the tile buffer being capable of accumulating event information for at least one more tile, the image tile is added to the tile buffer.
    Type: Application
    Filed: September 9, 2020
    Publication date: March 10, 2022
    Applicant: FotoNation Limited
    Inventors: Lorant Bartha, Corneliu Zaharia, Vlad Georgescu, Joe Lemley
  • Patent number: 11270137
    Abstract: A method for producing a textural image from event information generated by an event camera comprises: accumulating event information from a plurality of events occurring during successive event cycles across a field of view of the event camera, each event indicating an x,y location within the field of view, a polarity for a change of detected light intensity incident at the x,y location and an event cycle at which the event occurred; in response to selected event cycles, analysing event information for one or more preceding event cycles to identify one or more regions of interest bounding a respective object to be tracked; and responsive to a threshold event criterion for a region of interest being met, generating a textural image for the region of interest from event information accumulated from within the region of interest.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: March 8, 2022
    Assignee: FotoNation Limited
    Inventors: Amr Elrasad, Cian Ryan, Richard Blythman, Joe Lemley, Brian O'Sullivan
  • Publication number: 20210397861
    Abstract: A method for producing a textural image from event information generated by an event camera comprises: accumulating event information from a plurality of events occurring during successive event cycles across a field of view of the event camera, each event indicating an x,y location within the field of view, a polarity for a change of detected light intensity incident at the x,y location and an event cycle at which the event occurred; in response to selected event cycles, analysing event information for one or more preceding event cycles to identify one or more regions of interest bounding a respective object to be tracked; and responsive to a threshold event criterion for a region of interest being met, generating a textural image for the region of interest from event information accumulated from within the region of interest.
    Type: Application
    Filed: September 29, 2020
    Publication date: December 23, 2021
    Applicant: FotoNation Limited
    Inventors: Amr Elrasad, Cian Ryan, Richard Blythman, Joe Lemley, Brian O'Sullivan
  • Publication number: 20210397860
    Abstract: A method for producing a textural image from event information generated by an event camera comprises: accumulating event information from a plurality of events occurring during successive event cycles across a field of view of the event camera, each event indicating an x,y location within the field of view, a polarity for a change of detected light intensity incident at the x,y location and an event cycle at which the event occurred; in response to selected event cycles, analysing event information for one or more preceding event cycles to identify one or more regions of interest bounding a respective object to be tracked; and responsive to a threshold event criterion for a region of interest being met, generating a textural image for the region of interest from event information accumulated from within the region of interest.
    Type: Application
    Filed: July 29, 2020
    Publication date: December 23, 2021
    Applicant: FotoNation Limited
    Inventors: Cian Ryan, Richard Blythman, Joe Lemley, Amr Elrasad, Brian O'Sullivan
  • Publication number: 20210398313
    Abstract: A method for determining an absolute depth map to monitor the location and pose of a head (100) being imaged by a camera comprises: acquiring (20) an image from the camera (110) including a head with a facial region; determining (23) at least one distance from the camera (110) to a facial feature of the facial region using a distance measuring sub-system (120); determining (24) a relative depth map of facial features within the facial region; and combining (25) the relative depth map with the at least one distance to form an absolute depth map for the facial region.
    Type: Application
    Filed: June 17, 2020
    Publication date: December 23, 2021
    Applicant: FotoNation Limited
    Inventors: Joe Lemley, Peter Corcoran
  • Patent number: 11164019
    Abstract: A method for producing a textural image from event information generated by an event camera comprises: accumulating event information from a plurality of events occurring during successive event cycles across a field of view of the event camera, each event indicating an x,y location within the field of view, a polarity for a change of detected light intensity incident at the x,y location and an event cycle at which the event occurred; in response to selected event cycles, analysing event information for one or more preceding event cycles to identify one or more regions of interest bounding a respective object to be tracked; and responsive to a threshold event criterion for a region of interest being met, generating a textural image for the region of interest from event information accumulated from within the region of interest.
    Type: Grant
    Filed: June 17, 2020
    Date of Patent: November 2, 2021
    Assignee: FotoNation Limited
    Inventors: Cian Ryan, Richard Blythman, Joe Lemley, Amr Elrasad
  • Patent number: 10915817
    Abstract: Training a target neural network comprises providing a first batch of samples of a given class to respective instances of a generative neural network, each instance providing a variant of the sample in accordance with the parameters of the generative network. Each variant produced by the generative network is compared with another sample of the class to provide a first loss function for the generative network. A second batch of samples is provided to the target neural network, at least some of the samples comprising variants produced by the generative network. A second loss function is determined for the target neural network by comparing outputs of instances of the target neural network to one or more targets for the neural network. The parameters for the target neural network are updated using the second loss function and the parameters for the generative network are updated using the first and second loss functions.
    Type: Grant
    Filed: January 23, 2017
    Date of Patent: February 9, 2021
    Assignee: FotoNation Limited
    Inventors: Shabab Bazrafkan, Joe Lemley
  • Patent number: 10546231
    Abstract: Synthesizing a neural network from a plurality of component neural networks is disclosed. The method comprises mapping each component network to a respective graph node where each node is first labelled in accordance with the structure of a corresponding layer of the component network and a distance of the node from one of a given input or output. The graphs for each component network are merged into a single merged graph by merging nodes from component network graphs having the same first structural label. Each node of the merged graph is second labelled in accordance with the structure of the corresponding layer of the component network and a distance of the node from the other of a given input or output. The merged graph is contracted by merging nodes of the merged graph having the same second structural label. The contracted-merged graph is mapped to a synthesized neural network.
    Type: Grant
    Filed: January 23, 2017
    Date of Patent: January 28, 2020
    Assignee: FotoNation Limited
    Inventors: Shabab Bazrafkan, Joe Lemley
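The two-pass merge-and-contract procedure in the abstract of patent 10546231 can be sketched on a simplified representation. The representation is an assumption: each component network is a plain list of layer descriptors ordered input to output, so "distance from input/output" is just a list index, and tie-breaking on merge keeps the first representative seen.

```python
def synthesize(component_graphs):
    """Two-pass structural merge of component-network graphs.

    Pass 1 labels each node (layer structure, distance from input) and
    merges nodes sharing that first label; pass 2 relabels the merged
    nodes (layer structure, distance from output) and contracts nodes
    sharing the second label. Returns the contracted labels ordered
    from input (largest distance from output) to output.
    """
    # Pass 1: merge on (structure, distance from input), remembering a
    # representative distance from output for each merged node.
    first = {}
    for layers in component_graphs:
        n = len(layers)
        for i, layer in enumerate(layers):
            first.setdefault((layer, i), n - 1 - i)
    # Pass 2: contract on (structure, distance from output).
    contracted = {}
    for (layer, _), dist_out in first.items():
        contracted.setdefault((layer, dist_out), layer)
    return sorted(contracted.keys(), key=lambda t: -t[1])
```

On `[["conv", "relu", "fc"], ["conv", "fc"]]` the two `conv` input layers merge in the first pass, and the two `fc` layers, which sit at the same distance from the output, contract in the second, yielding a single three-layer synthesized structure.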
  • Publication number: 20180211164
    Abstract: Training a target neural network comprises providing a first batch of samples of a given class to respective instances of a generative neural network, each instance providing a variant of the sample in accordance with the parameters of the generative network. Each variant produced by the generative network is compared with another sample of the class to provide a first loss function for the generative network. A second batch of samples is provided to the target neural network, at least some of the samples comprising variants produced by the generative network. A second loss function is determined for the target neural network by comparing outputs of instances of the target neural network to one or more targets for the neural network. The parameters for the target neural network are updated using the second loss function and the parameters for the generative network are updated using the first and second loss functions.
    Type: Application
    Filed: January 23, 2017
    Publication date: July 26, 2018
    Inventors: Shabab Bazrafkan, Joe Lemley
  • Publication number: 20180211155
    Abstract: Synthesizing a neural network from a plurality of component neural networks is disclosed. The method comprises mapping each component network to a respective graph node where each node is first labelled in accordance with the structure of a corresponding layer of the component network and a distance of the node from one of a given input or output. The graphs for each component network are merged into a single merged graph by merging nodes from component network graphs having the same first structural label. Each node of the merged graph is second labelled in accordance with the structure of the corresponding layer of the component network and a distance of the node from the other of a given input or output. The merged graph is contracted by merging nodes of the merged graph having the same second structural label. The contracted-merged graph is mapped to a synthesized neural network.
    Type: Application
    Filed: January 23, 2017
    Publication date: July 26, 2018
    Inventors: Shabab Bazrafkan, Joe Lemley