Patents by Inventor Cian Ryan
Cian Ryan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11983327
Abstract: A method for identifying a gesture from one of a plurality of dynamic gestures, each dynamic gesture comprising a distinct movement made by a user over a period of time within a field of view of an image acquisition device comprises iteratively: acquiring a current image from said image acquisition device at a given time; and passing at least a portion of the current image through a bidirectionally recurrent multi-layer classifier. A final layer of the multi-layer classifier comprises an output indicating a probability that a gesture from the plurality of dynamic gestures is being made by a user during the time of acquiring the image.
Type: Grant
Filed: October 6, 2021
Date of Patent: May 14, 2024
Assignee: FotoNation Limited
Inventors: Tudor Topoleanu, Szabolcs Fulop, Petronel Bigioi, Cian Ryan, Joseph Lemley
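The per-frame classification loop described in this abstract can be sketched in NumPy. Everything concrete below is an illustrative assumption rather than the patented architecture: the feature dimension, the plain tanh recurrences standing in for the bidirectional recurrent layers, and the single linear output layer producing one probability per gesture class.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class BiRecurrentGestureClassifier:
    """Toy sketch: a bidirectional recurrence over a window of per-frame
    feature vectors, then a final linear layer whose softmax output gives
    the probability that each dynamic gesture is being made."""
    def __init__(self, feat_dim, hidden, n_gestures, seed=0):
        rng = np.random.default_rng(seed)
        self.Wf = rng.standard_normal((hidden, feat_dim)) * 0.1   # forward input weights
        self.Uf = rng.standard_normal((hidden, hidden)) * 0.1     # forward recurrent weights
        self.Wb = rng.standard_normal((hidden, feat_dim)) * 0.1   # backward input weights
        self.Ub = rng.standard_normal((hidden, hidden)) * 0.1     # backward recurrent weights
        self.Wo = rng.standard_normal((n_gestures, 2 * hidden)) * 0.1  # final output layer

    def __call__(self, frames):
        """frames: (T, feat_dim) window of features, one row per acquired image."""
        T, hidden = len(frames), self.Uf.shape[0]
        hf, hb = np.zeros(hidden), np.zeros(hidden)
        fwd, bwd = [], [None] * T
        for t in range(T):                      # pass over time, forwards
            hf = np.tanh(self.Wf @ frames[t] + self.Uf @ hf)
            fwd.append(hf)
        for t in reversed(range(T)):            # pass over time, backwards
            hb = np.tanh(self.Wb @ frames[t] + self.Ub @ hb)
            bwd[t] = hb
        # final layer: one probability per gesture for the current window
        return softmax(self.Wo @ np.concatenate([fwd[-1], bwd[0]]))

clf = BiRecurrentGestureClassifier(feat_dim=16, hidden=8, n_gestures=5)
probs = clf(np.random.default_rng(1).standard_normal((10, 16)))
print(probs.shape)  # (5,)
```

In a deployed loop the window would slide as each new image is acquired, so the probability vector updates at every frame rather than once per clip.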
-
Publication number: 20230394120
Abstract: Disclosed is a multi-modal convolutional neural network (CNN) for fusing image information from a frame-based camera, such as a near infra-red (NIR) camera, and an event camera, for analysing facial characteristics in order to produce classifications such as head pose or eye gaze. The neural network processes image frames acquired from each camera through a plurality of convolutional layers to provide a respective set of one or more intermediate images. The network fuses at least one corresponding pair of intermediate images generated from each of the image frames through an array of fusing cells. Each fusing cell is connected to at least a respective element of each intermediate image and is trained to weight each element from each intermediate image to provide the fused output. The neural network further comprises at least one task network configured to generate one or more task outputs for the region of interest.
Type: Application
Filed: August 17, 2023
Publication date: December 7, 2023
Inventors: Cian Ryan, Richard Blythman, Joseph Lemley, Paul Kielty
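The array of fusing cells described here can be sketched as a per-element learned gate over the two branches' intermediate images. The sigmoid gating scheme, the complementary weighting, and the feature shapes are all assumptions made for illustration; the patent's trained cells need not take this form.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class FusingCellArray:
    """Toy sketch: one fusing cell per spatial element of a pair of
    intermediate images. Each cell holds a learnable logit; its sigmoid
    gate weights the frame-camera (NIR) element, and the complementary
    weight goes to the event-camera element."""
    def __init__(self, shape, seed=0):
        self.logits = np.random.default_rng(seed).standard_normal(shape)

    def __call__(self, nir_feat, event_feat):
        g = sigmoid(self.logits)             # per-element weight in (0, 1)
        return g * nir_feat + (1.0 - g) * event_feat

fuse = FusingCellArray((4, 4))
nir, evt = np.ones((4, 4)), np.zeros((4, 4))
fused = fuse(nir, evt)
print(fused.shape)  # (4, 4)
```

Because each cell's gate is learned independently, training can let reliable regions of one modality dominate where the other modality is noisy, which is the practical point of element-wise fusion.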
-
Patent number: 11768919
Abstract: Disclosed is a multi-modal convolutional neural network (CNN) for fusing image information from a frame-based camera, such as a near infra-red (NIR) camera, and an event camera, for analysing facial characteristics in order to produce classifications such as head pose or eye gaze. The neural network processes image frames acquired from each camera through a plurality of convolutional layers to provide a respective set of one or more intermediate images. The network fuses at least one corresponding pair of intermediate images generated from each of the image frames through an array of fusing cells. Each fusing cell is connected to at least a respective element of each intermediate image and is trained to weight each element from each intermediate image to provide the fused output. The neural network further comprises at least one task network configured to generate one or more task outputs for the region of interest.
Type: Grant
Filed: January 13, 2021
Date of Patent: September 26, 2023
Inventors: Cian Ryan, Richard Blythman, Joseph Lemley, Paul Kielty
-
Patent number: 11749004
Abstract: A method for producing a textural image from event information generated by an event camera comprises: accumulating event information from a plurality of events occurring during successive event cycles across a field of view of the event camera, each event indicating an x,y location within the field of view, a polarity for a change of detected light intensity incident at the x,y location and an event cycle at which the event occurred; in response to selected event cycles, analysing event information for one or more preceding event cycles to identify one or more regions of interest bounding a respective object to be tracked; and responsive to a threshold event criterion for a region of interest being met, generating a textural image for the region of interest from event information accumulated from within the region of interest.
Type: Grant
Filed: February 24, 2022
Date of Patent: September 5, 2023
Inventors: Amr Elrasad, Cian Ryan, Richard Blythman, Joe Lemley, Brian O'Sullivan
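The final step of this method, generating a textural image for a region of interest once its threshold event criterion is met, can be sketched as follows. The event tuple layout, the fixed ROI, and the simple signed accumulation of polarities are illustrative assumptions; the patent covers the broader accumulate-analyse-generate pipeline, not this specific code.

```python
import numpy as np

def textural_image(events, roi, threshold):
    """Toy sketch of textural-image generation for one region of interest.
    events: iterable of (x, y, polarity, cycle) tuples, polarity in {-1, +1}.
    roi: (x0, y0, x1, y1) bounding box, half-open on the upper edges.
    Returns a signed-accumulation image once at least `threshold` events
    have fallen inside the ROI, else None (criterion not met)."""
    x0, y0, x1, y1 = roi
    inside = [(x, y, p) for x, y, p, _ in events if x0 <= x < x1 and y0 <= y < y1]
    if len(inside) < threshold:          # threshold event criterion not met
        return None
    img = np.zeros((y1 - y0, x1 - x0), dtype=np.float32)
    for x, y, p in inside:               # accumulate signed polarities per pixel
        img[y - y0, x - x0] += p
    return img

# 16 positive-polarity events inside a 4x4 ROI
events = [(x, y, 1, 0) for x in range(2, 6) for y in range(2, 6)]
img = textural_image(events, roi=(2, 2, 6, 6), threshold=10)
print(img.shape)  # (4, 4)
```

Waiting for a per-ROI event count before rendering is what lets the method produce a usable texture only when the tracked object has generated enough signal.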
-
Patent number: 11727541
Abstract: A video super resolution method comprises successively executing instances of a first plurality of layers (SISR) of a neural network for generating a first image (St) at a higher resolution than an input image frame (Xt); successively executing a second plurality of layers (VSR) of the neural network for generating a second image (Vt) at the higher resolution, at least one of the second plurality of layers generating intermediate output information (Ht), the second plurality of layers taking into account an output image (Yt-1) at the higher resolution generated by a previous instance of the network from a previous input image frame (Xt-1) and intermediate output information (Ht-1) generated by the second plurality of layers of the previous instance, and executing a third plurality of layers for combining the first (St) and second (Vt) images to produce an output image (Yt) for the instance of the network.
Type: Grant
Filed: December 13, 2021
Date of Patent: August 15, 2023
Inventors: Cian Ryan, Richard Blythman
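The dataflow of one network instance, a single-image branch St, a recurrent video branch Vt fed by the previous output Yt-1 and hidden state Ht-1, and a combining stage producing Yt, can be sketched with NumPy stand-ins. The nearest-neighbour upscaling and the fixed 0.5 mixing weights below replace the learned layers purely for illustration.

```python
import numpy as np

def upscale(x, s=2):
    """Nearest-neighbour stand-in for the network's learned upsampling layers."""
    return np.kron(x, np.ones((s, s)))

def vsr_step(x_t, y_prev, h_prev, scale=2):
    """One instance of the network (toy sketch):
    the SISR branch sees only the current frame x_t; the VSR branch also
    consumes the previous output image y_prev and hidden state h_prev;
    a final stage combines the two high-resolution estimates."""
    s_t = upscale(x_t, scale)                        # SISR branch: S_t
    h_t = 0.5 * h_prev + 0.5 * upscale(x_t, scale)   # intermediate state: H_t
    v_t = 0.5 * h_t + 0.5 * y_prev                   # VSR branch: V_t
    y_t = 0.5 * (s_t + v_t)                          # combined output: Y_t
    return y_t, h_t

# run two successive instances over a 2-frame sequence at 2x scale
x0, x1 = np.ones((4, 4)), np.zeros((4, 4))
y, h = np.zeros((8, 8)), np.zeros((8, 8))
y, h = vsr_step(x0, y, h)   # instance for frame t
y, h = vsr_step(x1, y, h)   # instance for frame t+1 reuses Y_t and H_t
print(y.shape)  # (8, 8)
```

The point of the split is that the SISR branch guarantees per-frame detail while the recurrent VSR branch carries temporal consistency forward through Yt-1 and Ht-1.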
-
Publication number: 20230107097
Abstract: A method for identifying a gesture from one of a plurality of dynamic gestures, each dynamic gesture comprising a distinct movement made by a user over a period of time within a field of view of an image acquisition device comprises iteratively: acquiring a current image from said image acquisition device at a given time; and passing at least a portion of the current image through a bidirectionally recurrent multi-layer classifier. A final layer of the multi-layer classifier comprises an output indicating a probability that a gesture from the plurality of dynamic gestures is being made by a user during the time of acquiring the image.
Type: Application
Filed: October 6, 2021
Publication date: April 6, 2023
Applicant: FotoNation Limited
Inventors: Tudor Topoleanu, Szabolcs Fulop, Petronel Bigioi, Cian Ryan, Joseph Lemley
-
Publication number: 20220254171
Abstract: A method for producing a textural image from event information generated by an event camera comprises: accumulating event information from a plurality of events occurring during successive event cycles across a field of view of the event camera, each event indicating an x,y location within the field of view, a polarity for a change of detected light intensity incident at the x,y location and an event cycle at which the event occurred; in response to selected event cycles, analysing event information for one or more preceding event cycles to identify one or more regions of interest bounding a respective object to be tracked; and responsive to a threshold event criterion for a region of interest being met, generating a textural image for the region of interest from event information accumulated from within the region of interest.
Type: Application
Filed: February 24, 2022
Publication date: August 11, 2022
Applicant: FotoNation Limited
Inventors: Amr Elrasad, Cian Ryan, Richard Blythman, Joe Lemley, Brian O'Sullivan
-
Publication number: 20220222496
Abstract: Disclosed is a multi-modal convolutional neural network (CNN) for fusing image information from a frame-based camera, such as a near infra-red (NIR) camera, and an event camera, for analysing facial characteristics in order to produce classifications such as head pose or eye gaze. The neural network processes image frames acquired from each camera through a plurality of convolutional layers to provide a respective set of one or more intermediate images. The network fuses at least one corresponding pair of intermediate images generated from each of the image frames through an array of fusing cells. Each fusing cell is connected to at least a respective element of each intermediate image and is trained to weight each element from each intermediate image to provide the fused output. The neural network further comprises at least one task network configured to generate one or more task outputs for the region of interest.
Type: Application
Filed: January 13, 2021
Publication date: July 14, 2022
Applicant: FotoNation Limited
Inventors: Cian Ryan, Richard Blythman, Joseph Lemley, Paul Kielty
-
Patent number: 11301702
Abstract: A method for producing a textural image from event information generated by an event camera comprises: accumulating event information from a plurality of events occurring during successive event cycles across a field of view of the event camera, each event indicating an x,y location within the field of view, a polarity for a change of detected light intensity incident at the x,y location and an event cycle at which the event occurred; in response to selected event cycles, analysing event information for one or more preceding event cycles to identify one or more regions of interest bounding a respective object to be tracked; and responsive to a threshold event criterion for a region of interest being met, generating a textural image for the region of interest from event information accumulated from within the region of interest.
Type: Grant
Filed: July 29, 2020
Date of Patent: April 12, 2022
Assignee: FotoNation Limited
Inventors: Cian Ryan, Richard Blythman, Joe Lemley, Amr Elrasad, Brian O'Sullivan
-
Publication number: 20220101497
Abstract: A video super resolution method comprises successively executing instances of a first plurality of layers (SISR) of a neural network for generating a first image (St) at a higher resolution than an input image frame (Xt); successively executing a second plurality of layers (VSR) of the neural network for generating a second image (Vt) at the higher resolution, at least one of the second plurality of layers generating intermediate output information (Ht), the second plurality of layers taking into account an output image (Yt-1) at the higher resolution generated by a previous instance of the network from a previous input image frame (Xt-1) and intermediate output information (Ht-1) generated by the second plurality of layers of the previous instance, and executing a third plurality of layers for combining the first (St) and second (Vt) images to produce an output image (Yt) for the instance of the network.
Type: Application
Filed: December 13, 2021
Publication date: March 31, 2022
Applicant: FotoNation Limited
Inventors: Cian Ryan, Richard Blythman
-
Patent number: 11270137
Abstract: A method for producing a textural image from event information generated by an event camera comprises: accumulating event information from a plurality of events occurring during successive event cycles across a field of view of the event camera, each event indicating an x,y location within the field of view, a polarity for a change of detected light intensity incident at the x,y location and an event cycle at which the event occurred; in response to selected event cycles, analysing event information for one or more preceding event cycles to identify one or more regions of interest bounding a respective object to be tracked; and responsive to a threshold event criterion for a region of interest being met, generating a textural image for the region of interest from event information accumulated from within the region of interest.
Type: Grant
Filed: September 29, 2020
Date of Patent: March 8, 2022
Assignee: FotoNation Limited
Inventors: Amr Elrasad, Cian Ryan, Richard Blythman, Joe Lemley, Brian O'Sullivan
-
Publication number: 20210397860
Abstract: A method for producing a textural image from event information generated by an event camera comprises: accumulating event information from a plurality of events occurring during successive event cycles across a field of view of the event camera, each event indicating an x,y location within the field of view, a polarity for a change of detected light intensity incident at the x,y location and an event cycle at which the event occurred; in response to selected event cycles, analysing event information for one or more preceding event cycles to identify one or more regions of interest bounding a respective object to be tracked; and responsive to a threshold event criterion for a region of interest being met, generating a textural image for the region of interest from event information accumulated from within the region of interest.
Type: Application
Filed: July 29, 2020
Publication date: December 23, 2021
Applicant: FotoNation Limited
Inventors: Cian Ryan, Richard Blythman, Joe Lemley, Amr Elrasad, Brian O'Sullivan
-
Publication number: 20210397861
Abstract: A method for producing a textural image from event information generated by an event camera comprises: accumulating event information from a plurality of events occurring during successive event cycles across a field of view of the event camera, each event indicating an x,y location within the field of view, a polarity for a change of detected light intensity incident at the x,y location and an event cycle at which the event occurred; in response to selected event cycles, analysing event information for one or more preceding event cycles to identify one or more regions of interest bounding a respective object to be tracked; and responsive to a threshold event criterion for a region of interest being met, generating a textural image for the region of interest from event information accumulated from within the region of interest.
Type: Application
Filed: September 29, 2020
Publication date: December 23, 2021
Applicant: FotoNation Limited
Inventors: Amr Elrasad, Cian Ryan, Richard Blythman, Joe Lemley, Brian O'Sullivan
-
Patent number: 11200644
Abstract: A video super resolution method comprises successively executing instances of a first plurality of layers (SISR) of a neural network for generating a first image (St) at a higher resolution than an input image frame (Xt); successively executing a second plurality of layers (VSR) of the neural network for generating a second image (Vt) at the higher resolution, at least one of the second plurality of layers generating intermediate output information (Ht), the second plurality of layers taking into account an output image (Yt-1) at the higher resolution generated by a previous instance of the network from a previous input image frame (Xt-1) and intermediate output information (Ht-1) generated by the second plurality of layers of the previous instance, and executing a third plurality of layers for combining the first (St) and second (Vt) images to produce an output image (Yt) for the instance of the network.
Type: Grant
Filed: February 27, 2020
Date of Patent: December 14, 2021
Inventors: Cian Ryan, Richard Blythman
-
Patent number: 11164019
Abstract: A method for producing a textural image from event information generated by an event camera comprises: accumulating event information from a plurality of events occurring during successive event cycles across a field of view of the event camera, each event indicating an x,y location within the field of view, a polarity for a change of detected light intensity incident at the x,y location and an event cycle at which the event occurred; in response to selected event cycles, analysing event information for one or more preceding event cycles to identify one or more regions of interest bounding a respective object to be tracked; and responsive to a threshold event criterion for a region of interest being met, generating a textural image for the region of interest from event information accumulated from within the region of interest.
Type: Grant
Filed: June 17, 2020
Date of Patent: November 2, 2021
Assignee: FotoNation Limited
Inventors: Cian Ryan, Richard Blythman, Joe Lemley, Amr Elrasad
-
Publication number: 20210272247
Abstract: A video super resolution method comprises successively executing instances of a first plurality of layers (SISR) of a neural network for generating a first image (St) at a higher resolution than an input image frame (Xt); successively executing a second plurality of layers (VSR) of the neural network for generating a second image (Vt) at the higher resolution, at least one of the second plurality of layers generating intermediate output information (Ht), the second plurality of layers taking into account an output image (Yt-1) at the higher resolution generated by a previous instance of the network from a previous input image frame (Xt-1) and intermediate output information (Ht-1) generated by the second plurality of layers of the previous instance, and executing a third plurality of layers for combining the first (St) and second (Vt) images to produce an output image (Yt) for the instance of the network.
Type: Application
Filed: February 27, 2020
Publication date: September 2, 2021
Applicant: FotoNation Limited
Inventors: Cian Ryan, Richard Blythman
-
Patent number: 10751204
Abstract: Methods and apparatus are disclosed for customizing an elution rate of a stent. The stent includes a hollow strut that forms the stent, the hollow strut defining a lumenal space, a drug formulation disposed within the lumenal space of the hollow strut, and at least one side port for eluting the drug formulation in vivo. When the stent is in the radially expanded configuration the hollow strut is deformable from a first configuration that has a first elution rate for the drug formulation to a second configuration that has a second elution rate for the drug formulation. The second elution rate is faster than the first elution rate. The hollow strut deforms from the first configuration to the second configuration upon application of an applied pressure above a predetermined threshold.
Type: Grant
Filed: June 8, 2017
Date of Patent: August 25, 2020
Assignee: Medtronic, Inc.
Inventors: Cian Ryan, David Hobbins, Shane Nolan, Michael Sayers, Eamon Keane, Brian Dowling, Jonathan Cope, Conor O'Donovan
-
Publication number: 20170354521
Abstract: Methods and apparatus are disclosed for customizing an elution rate of a stent. The stent includes a hollow strut that forms the stent, the hollow strut defining a lumenal space, a drug formulation disposed within the lumenal space of the hollow strut, and at least one side port for eluting the drug formulation in vivo. When the stent is in the radially expanded configuration the hollow strut is deformable from a first configuration that has a first elution rate for the drug formulation to a second configuration that has a second elution rate for the drug formulation. The second elution rate is faster than the first elution rate. The hollow strut deforms from the first configuration to the second configuration upon application of an applied pressure above a predetermined threshold.
Type: Application
Filed: June 8, 2017
Publication date: December 14, 2017
Inventors: Cian Ryan, David Hobbins, Shane Nolan, Michael Sayers, Eamon Keane, Brian Dowling, Jonathan Cope, Conor O'Donovan