Patents by Inventor Richard Blythman
Richard Blythman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230394120
Abstract: Disclosed is a multi-modal convolutional neural network (CNN) for fusing image information from a frame-based camera, such as a near infra-red (NIR) camera, and an event camera for analysing facial characteristics in order to produce classifications such as head pose or eye gaze. The neural network processes image frames acquired from each camera through a plurality of convolutional layers to provide a respective set of one or more intermediate images. The network fuses at least one corresponding pair of intermediate images generated from each of the image frames through an array of fusing cells. Each fusing cell is connected to at least a respective element of each intermediate image and is trained to weight each element from each intermediate image to provide the fused output. The neural network further comprises at least one task network configured to generate one or more task outputs for the region of interest.
Type: Application
Filed: August 17, 2023
Publication date: December 7, 2023
Inventors: Cian Ryan, Richard Blythman, Joseph Lemley, Paul Kielty
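The fusing-cell idea in this abstract can be pictured as a learned per-element gate blending corresponding feature maps from the two camera branches. The sketch below is purely illustrative: the class name, the sigmoid gating, and the random initialisation are assumptions for demonstration, not the patented architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class FusingCellArray:
    """Array of fusing cells: one learnable weight per spatial element.

    The sigmoid of each weight decides how much of the frame-camera
    feature vs. the event-camera feature passes into the fused output.
    (Hypothetical sketch, not the claimed implementation.)
    """
    def __init__(self, shape, rng=None):
        rng = rng or np.random.default_rng(0)
        self.w = rng.normal(size=shape)  # trained in practice; random here

    def fuse(self, frame_feat, event_feat):
        g = sigmoid(self.w)              # per-element gate in (0, 1)
        return g * frame_feat + (1.0 - g) * event_feat
```

Because the gate is elementwise, the network can learn to trust the NIR branch in some image regions and the event branch in others within the same pair of intermediate images.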
-
Patent number: 11768919
Abstract: Disclosed is a multi-modal convolutional neural network (CNN) for fusing image information from a frame-based camera, such as a near infra-red (NIR) camera, and an event camera for analysing facial characteristics in order to produce classifications such as head pose or eye gaze. The neural network processes image frames acquired from each camera through a plurality of convolutional layers to provide a respective set of one or more intermediate images. The network fuses at least one corresponding pair of intermediate images generated from each of the image frames through an array of fusing cells. Each fusing cell is connected to at least a respective element of each intermediate image and is trained to weight each element from each intermediate image to provide the fused output. The neural network further comprises at least one task network configured to generate one or more task outputs for the region of interest.
Type: Grant
Filed: January 13, 2021
Date of Patent: September 26, 2023
Inventors: Cian Ryan, Richard Blythman, Joseph Lemley, Paul Kielty
-
Patent number: 11749004
Abstract: A method for producing a textural image from event information generated by an event camera comprises: accumulating event information from a plurality of events occurring during successive event cycles across a field of view of the event camera, each event indicating an x,y location within the field of view, a polarity for a change of detected light intensity incident at the x,y location and an event cycle at which the event occurred; in response to selected event cycles, analysing event information for one or more preceding event cycles to identify one or more regions of interest bounding a respective object to be tracked; and responsive to a threshold event criterion for a region of interest being met, generating a textural image for the region of interest from event information accumulated from within the region of interest.
Type: Grant
Filed: February 24, 2022
Date of Patent: September 5, 2023
Inventors: Amr Elrasad, Cian Ryan, Richard Blythman, Joe Lemley, Brian O'Sullivan
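The steps in this abstract (accumulate per-pixel event polarities over cycles, then emit a textural image for a region of interest once an event threshold is met) can be sketched as follows. Function names, the signed-polarity accumulation, and the min-max normalisation are illustrative assumptions, not the claimed method.

```python
import numpy as np

def accumulate_events(events, shape):
    """Sum signed polarities per pixel over successive event cycles.

    `events` is an iterable of (x, y, polarity, cycle) tuples, with
    polarity +1/-1 for an increase/decrease in detected light intensity.
    (Illustrative accumulation, not the patented method.)
    """
    surface = np.zeros(shape)
    for x, y, p, _cycle in events:
        surface[y, x] += p
    return surface

def roi_textural_image(surface, roi, event_count, threshold):
    """Return a normalised textural image for the ROI once the threshold
    event criterion is met; otherwise return None."""
    if event_count < threshold:
        return None
    y0, x0, y1, x1 = roi
    patch = surface[y0:y1, x0:x1]
    span = patch.max() - patch.min()
    return (patch - patch.min()) / span if span else patch
```

Gating the image generation on an event-count threshold means the (relatively expensive) textural image is only produced when enough scene activity has accumulated inside the tracked region.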
-
Patent number: 11727541
Abstract: A video super resolution method comprises successively executing instances of a first plurality of layers (SISR) of a neural network for generating a first image (St) at a higher resolution than an input image frame (Xt); successively executing a second plurality of layers (VSR) of the neural network for generating a second image (Vt) at the higher resolution, at least one of the second plurality of layers generating intermediate output information (Ht), the second plurality of layers taking into account an output image (Yt-1) at the higher resolution generated by a previous instance of the network from a previous input image frame (Xt-1) and intermediate output information (Ht-1) generated by the second plurality of layers of the previous instance, and executing a third plurality of layers for combining the first (St) and second (Vt) images to produce an output image (Yt) for the instance of the network.
Type: Grant
Filed: December 13, 2021
Date of Patent: August 15, 2023
Inventors: Cian Ryan, Richard Blythman
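The recurrent structure this abstract describes (a single-image branch St, a video branch Vt that sees the previous output Yt-1 and hidden state Ht-1, and a combining stage producing Yt) can be sketched with toy placeholder operations. The nearest-neighbour upsample and the mixing coefficients below stand in for the trained layers and are assumptions for illustration only.

```python
import numpy as np

def upsample(img, scale=2):
    # Nearest-neighbour upsample standing in for learned layers.
    return img.repeat(scale, axis=0).repeat(scale, axis=1)

def super_resolve_sequence(frames, scale=2):
    """Per frame Xt: a SISR branch yields St, a recurrent VSR branch
    yields Vt from Xt, the previous output Yt-1 and hidden state Ht-1,
    and a combining stage merges St and Vt into Yt.
    (Toy placeholders, not the trained network.)"""
    h, w = frames[0].shape
    y_prev = np.zeros((h * scale, w * scale))  # Yt-1, zero for first frame
    h_prev = np.zeros((h * scale, w * scale))  # Ht-1, zero for first frame
    outputs = []
    for x_t in frames:
        s_t = upsample(x_t, scale)                                  # SISR branch: St
        v_t = 0.5 * upsample(x_t, scale) + 0.25 * y_prev + 0.25 * h_prev  # VSR branch: Vt
        h_prev = v_t                                                # intermediate info Ht
        y_t = 0.5 * (s_t + v_t)                                     # combining layers: Yt
        y_prev = y_t
        outputs.append(y_t)
    return outputs
```

The point of the two-branch design is that the SISR branch gives a temporally stable per-frame estimate while the recurrent VSR branch propagates detail between frames; the combining layers arbitrate between them.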
-
Publication number: 20220254171
Abstract: A method for producing a textural image from event information generated by an event camera comprises: accumulating event information from a plurality of events occurring during successive event cycles across a field of view of the event camera, each event indicating an x,y location within the field of view, a polarity for a change of detected light intensity incident at the x,y location and an event cycle at which the event occurred; in response to selected event cycles, analysing event information for one or more preceding event cycles to identify one or more regions of interest bounding a respective object to be tracked; and responsive to a threshold event criterion for a region of interest being met, generating a textural image for the region of interest from event information accumulated from within the region of interest.
Type: Application
Filed: February 24, 2022
Publication date: August 11, 2022
Applicant: FotoNation Limited
Inventors: Amr Elrasad, Cian Ryan, Richard Blythman, Joe Lemley, Brian O'Sullivan
-
Publication number: 20220222496
Abstract: Disclosed is a multi-modal convolutional neural network (CNN) for fusing image information from a frame-based camera, such as a near infra-red (NIR) camera, and an event camera for analysing facial characteristics in order to produce classifications such as head pose or eye gaze. The neural network processes image frames acquired from each camera through a plurality of convolutional layers to provide a respective set of one or more intermediate images. The network fuses at least one corresponding pair of intermediate images generated from each of the image frames through an array of fusing cells. Each fusing cell is connected to at least a respective element of each intermediate image and is trained to weight each element from each intermediate image to provide the fused output. The neural network further comprises at least one task network configured to generate one or more task outputs for the region of interest.
Type: Application
Filed: January 13, 2021
Publication date: July 14, 2022
Applicant: FotoNation Limited
Inventors: Cian Ryan, Richard Blythman, Joseph Lemley, Paul Kielty
-
Patent number: 11301702
Abstract: A method for producing a textural image from event information generated by an event camera comprises: accumulating event information from a plurality of events occurring during successive event cycles across a field of view of the event camera, each event indicating an x,y location within the field of view, a polarity for a change of detected light intensity incident at the x,y location and an event cycle at which the event occurred; in response to selected event cycles, analysing event information for one or more preceding event cycles to identify one or more regions of interest bounding a respective object to be tracked; and responsive to a threshold event criterion for a region of interest being met, generating a textural image for the region of interest from event information accumulated from within the region of interest.
Type: Grant
Filed: July 29, 2020
Date of Patent: April 12, 2022
Assignee: FotoNation Limited
Inventors: Cian Ryan, Richard Blythman, Joe Lemley, Amr Elrasad, Brian O'Sullivan
-
Publication number: 20220101497
Abstract: A video super resolution method comprises successively executing instances of a first plurality of layers (SISR) of a neural network for generating a first image (St) at a higher resolution than an input image frame (Xt); successively executing a second plurality of layers (VSR) of the neural network for generating a second image (Vt) at the higher resolution, at least one of the second plurality of layers generating intermediate output information (Ht), the second plurality of layers taking into account an output image (Yt-1) at the higher resolution generated by a previous instance of the network from a previous input image frame (Xt-1) and intermediate output information (Ht-1) generated by the second plurality of layers of the previous instance, and executing a third plurality of layers for combining the first (St) and second (Vt) images to produce an output image (Yt) for the instance of the network.
Type: Application
Filed: December 13, 2021
Publication date: March 31, 2022
Applicant: FotoNation Limited
Inventors: Cian Ryan, Richard Blythman
-
Patent number: 11270137
Abstract: A method for producing a textural image from event information generated by an event camera comprises: accumulating event information from a plurality of events occurring during successive event cycles across a field of view of the event camera, each event indicating an x,y location within the field of view, a polarity for a change of detected light intensity incident at the x,y location and an event cycle at which the event occurred; in response to selected event cycles, analysing event information for one or more preceding event cycles to identify one or more regions of interest bounding a respective object to be tracked; and responsive to a threshold event criterion for a region of interest being met, generating a textural image for the region of interest from event information accumulated from within the region of interest.
Type: Grant
Filed: September 29, 2020
Date of Patent: March 8, 2022
Assignee: FotoNation Limited
Inventors: Amr Elrasad, Cian Ryan, Richard Blythman, Joe Lemley, Brian O'Sullivan
-
Publication number: 20210397861
Abstract: A method for producing a textural image from event information generated by an event camera comprises: accumulating event information from a plurality of events occurring during successive event cycles across a field of view of the event camera, each event indicating an x,y location within the field of view, a polarity for a change of detected light intensity incident at the x,y location and an event cycle at which the event occurred; in response to selected event cycles, analysing event information for one or more preceding event cycles to identify one or more regions of interest bounding a respective object to be tracked; and responsive to a threshold event criterion for a region of interest being met, generating a textural image for the region of interest from event information accumulated from within the region of interest.
Type: Application
Filed: September 29, 2020
Publication date: December 23, 2021
Applicant: FotoNation Limited
Inventors: Amr Elrasad, Cian Ryan, Richard Blythman, Joe Lemley, Brian O'Sullivan
-
Publication number: 20210397860
Abstract: A method for producing a textural image from event information generated by an event camera comprises: accumulating event information from a plurality of events occurring during successive event cycles across a field of view of the event camera, each event indicating an x,y location within the field of view, a polarity for a change of detected light intensity incident at the x,y location and an event cycle at which the event occurred; in response to selected event cycles, analysing event information for one or more preceding event cycles to identify one or more regions of interest bounding a respective object to be tracked; and responsive to a threshold event criterion for a region of interest being met, generating a textural image for the region of interest from event information accumulated from within the region of interest.
Type: Application
Filed: July 29, 2020
Publication date: December 23, 2021
Applicant: FotoNation Limited
Inventors: Cian Ryan, Richard Blythman, Joe Lemley, Amr Elrasad, Brian O'Sullivan
-
Patent number: 11200644
Abstract: A video super resolution method comprises successively executing instances of a first plurality of layers (SISR) of a neural network for generating a first image (St) at a higher resolution than an input image frame (Xt); successively executing a second plurality of layers (VSR) of the neural network for generating a second image (Vt) at the higher resolution, at least one of the second plurality of layers generating intermediate output information (Ht), the second plurality of layers taking into account an output image (Yt-1) at the higher resolution generated by a previous instance of the network from a previous input image frame (Xt-1) and intermediate output information (Ht-1) generated by the second plurality of layers of the previous instance, and executing a third plurality of layers for combining the first (St) and second (Vt) images to produce an output image (Yt) for the instance of the network.
Type: Grant
Filed: February 27, 2020
Date of Patent: December 14, 2021
Inventors: Cian Ryan, Richard Blythman
-
Patent number: 11164019
Abstract: A method for producing a textural image from event information generated by an event camera comprises: accumulating event information from a plurality of events occurring during successive event cycles across a field of view of the event camera, each event indicating an x,y location within the field of view, a polarity for a change of detected light intensity incident at the x,y location and an event cycle at which the event occurred; in response to selected event cycles, analysing event information for one or more preceding event cycles to identify one or more regions of interest bounding a respective object to be tracked; and responsive to a threshold event criterion for a region of interest being met, generating a textural image for the region of interest from event information accumulated from within the region of interest.
Type: Grant
Filed: June 17, 2020
Date of Patent: November 2, 2021
Assignee: FotoNation Limited
Inventors: Cian Ryan, Richard Blythman, Joe Lemley, Amr Elrasad
-
Publication number: 20210272247
Abstract: A video super resolution method comprises successively executing instances of a first plurality of layers (SISR) of a neural network for generating a first image (St) at a higher resolution than an input image frame (Xt); successively executing a second plurality of layers (VSR) of the neural network for generating a second image (Vt) at the higher resolution, at least one of the second plurality of layers generating intermediate output information (Ht), the second plurality of layers taking into account an output image (Yt-1) at the higher resolution generated by a previous instance of the network from a previous input image frame (Xt-1) and intermediate output information (Ht-1) generated by the second plurality of layers of the previous instance, and executing a third plurality of layers for combining the first (St) and second (Vt) images to produce an output image (Yt) for the instance of the network.
Type: Application
Filed: February 27, 2020
Publication date: September 2, 2021
Applicant: FotoNation Limited
Inventors: Cian Ryan, Richard Blythman