Patents by Inventor Cian Ryan

Cian Ryan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11983327
    Abstract: A method for identifying a gesture from one of a plurality of dynamic gestures, each dynamic gesture comprising a distinct movement made by a user over a period of time within a field of view of an image acquisition device, comprises iteratively: acquiring a current image from said image acquisition device at a given time; and passing at least a portion of the current image through a bidirectionally recurrent multi-layer classifier. A final layer of the multi-layer classifier comprises an output indicating a probability that a gesture from the plurality of dynamic gestures is being made by a user during the time of acquiring the image.
    Type: Grant
    Filed: October 6, 2021
    Date of Patent: May 14, 2024
    Assignee: FotoNation Limited
    Inventors: Tudor Topoleanu, Szabolcs Fulop, Petronel Bigioi, Cian Ryan, Joseph Lemley
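    The gesture-recognition abstract above can be illustrated with a minimal numpy sketch: per-frame features are run through forward and backward recurrent passes, and a final layer emits a probability per dynamic gesture. The tanh cells, layer sizes, and softmax head here are illustrative assumptions, not the patented network.

    ```python
    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def bidirectional_gesture_probs(frames, Wf, Wb, Wo):
        """Toy bidirectional recurrent pass over per-frame feature vectors,
        returning one probability per dynamic gesture class.
        frames: (T, D) array of per-frame features (e.g. CNN embeddings)."""
        T, D = frames.shape
        H = Wf.shape[0]
        hf = np.zeros(H)                      # forward hidden state
        hb = np.zeros(H)                      # backward hidden state
        fwd, bwd = [], []
        for t in range(T):                    # forward-in-time pass
            hf = np.tanh(Wf @ np.concatenate([frames[t], hf]))
            fwd.append(hf)
        for t in reversed(range(T)):          # backward-in-time pass
            hb = np.tanh(Wb @ np.concatenate([frames[t], hb]))
            bwd.append(hb)
        bwd.reverse()
        # final layer: concatenate both directions at the current frame and
        # emit a probability that each dynamic gesture is being made
        logits = Wo @ np.concatenate([fwd[-1], bwd[-1]])
        return softmax(logits)

    rng = np.random.default_rng(0)
    T, D, H, G = 8, 16, 32, 5                 # frames, feature dim, hidden, gestures
    probs = bidirectional_gesture_probs(
        rng.standard_normal((T, D)),
        Wf=rng.standard_normal((H, D + H)) * 0.1,
        Wb=rng.standard_normal((H, D + H)) * 0.1,
        Wo=rng.standard_normal((G, 2 * H)) * 0.1,
    )
    ```

    In practice the per-frame features would come from convolutional layers applied to the acquired image, and the network would be run iteratively as each new frame arrives.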
  • Publication number: 20230394120
    Abstract: Disclosed is a multi-modal convolutional neural network (CNN) for fusing image information from a frame-based camera, such as a near infra-red (NIR) camera, and an event camera for analysing facial characteristics in order to produce classifications such as head pose or eye gaze. The neural network processes image frames acquired from each camera through a plurality of convolutional layers to provide a respective set of one or more intermediate images. The network fuses at least one corresponding pair of intermediate images generated from each of the image frames through an array of fusing cells. Each fusing cell is connected to at least a respective element of each intermediate image and is trained to weight each element from each intermediate image to provide the fused output. The neural network further comprises at least one task network configured to generate one or more task outputs for the region of interest.
    Type: Application
    Filed: August 17, 2023
    Publication date: December 7, 2023
    Inventors: Cian Ryan, Richard Blythman, Joseph Lemley, Paul Kielty
  • Patent number: 11768919
    Abstract: Disclosed is a multi-modal convolutional neural network (CNN) for fusing image information from a frame-based camera, such as a near infra-red (NIR) camera, and an event camera for analysing facial characteristics in order to produce classifications such as head pose or eye gaze. The neural network processes image frames acquired from each camera through a plurality of convolutional layers to provide a respective set of one or more intermediate images. The network fuses at least one corresponding pair of intermediate images generated from each of the image frames through an array of fusing cells. Each fusing cell is connected to at least a respective element of each intermediate image and is trained to weight each element from each intermediate image to provide the fused output. The neural network further comprises at least one task network configured to generate one or more task outputs for the region of interest.
    Type: Grant
    Filed: January 13, 2021
    Date of Patent: September 26, 2023
    Inventors: Cian Ryan, Richard Blythman, Joseph Lemley, Paul Kielty
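    The "array of fusing cells" in the abstract above can be sketched as a per-element learned blend of the two modalities' intermediate images. The sigmoid gating below is an illustrative assumption about how a trained per-element weight could mix the NIR-derived and event-derived features; it is not taken from the patent itself.

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def fuse_intermediate_images(nir_feat, event_feat, cell_weights):
        """Array of fusing cells: each cell is tied to one spatial element of
        the two intermediate images and weights how much to trust each modality.
        All three arrays share the same (H, W) shape; cell_weights would be
        learned during training, here they are simply given."""
        assert nir_feat.shape == event_feat.shape == cell_weights.shape
        alpha = sigmoid(cell_weights)      # per-element mixing factor in (0, 1)
        return alpha * nir_feat + (1.0 - alpha) * event_feat

    rng = np.random.default_rng(1)
    nir = rng.standard_normal((4, 4))      # intermediate image, NIR branch
    evt = rng.standard_normal((4, 4))      # intermediate image, event branch
    # with zero weights, sigmoid(0) = 0.5, so every cell mixes 50/50
    fused = fuse_intermediate_images(nir, evt, cell_weights=np.zeros((4, 4)))
    ```

    The fused output would then feed one or more task networks (e.g. head pose or eye gaze) as described in the abstract.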
  • Patent number: 11749004
    Abstract: A method for producing a textural image from event information generated by an event camera comprises: accumulating event information from a plurality of events occurring during successive event cycles across a field of view of the event camera, each event indicating an x,y location within the field of view, a polarity for a change of detected light intensity incident at the x,y location and an event cycle at which the event occurred; in response to selected event cycles, analysing event information for one or more preceding event cycles to identify one or more regions of interest bounding a respective object to be tracked; and responsive to a threshold event criterion for a region of interest being met, generating a textural image for the region of interest from event information accumulated from within the region of interest.
    Type: Grant
    Filed: February 24, 2022
    Date of Patent: September 5, 2023
    Inventors: Amr Elrasad, Cian Ryan, Richard Blythman, Joe Lemley, Brian O'Sullivan
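    The event-accumulation scheme in the abstract above can be sketched as follows: events are collected, filtered to a region of interest, and only once a threshold count is met is a textural image rendered by signed accumulation of polarities. The event tuple layout and the polarity-summing render are illustrative assumptions, not the patented method.

    ```python
    import numpy as np

    def textural_image_from_events(events, roi, threshold):
        """Accumulate event-camera events and, once enough fall inside a
        region of interest, render a textural image for that region.
        events: iterable of (x, y, polarity, cycle), polarity in {-1, +1}
        roi: (x0, y0, x1, y1) bounding box around the tracked object
        threshold: minimum in-ROI event count before an image is produced
        Returns the textural image, or None if the criterion is not met."""
        x0, y0, x1, y1 = roi
        inside = [(x, y, p) for (x, y, p, _cycle) in events
                  if x0 <= x < x1 and y0 <= y < y1]
        if len(inside) < threshold:
            return None                     # not enough texture accumulated yet
        img = np.zeros((y1 - y0, x1 - x0))
        for x, y, p in inside:              # signed accumulation of polarities
            img[y - y0, x - x0] += p
        return img

    events = [(2, 2, +1, 0), (2, 2, +1, 1), (3, 2, -1, 1), (9, 9, +1, 2)]
    img = textural_image_from_events(events, roi=(0, 0, 5, 5), threshold=3)
    ```

    In the patented pipeline the region of interest itself is found by analysing event information from preceding event cycles; here the ROI is simply supplied.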
  • Patent number: 11727541
    Abstract: A video super resolution method comprises successively executing instances of a first plurality of layers (SISR) of a neural network for generating a first image (St) at a higher resolution than an input image frame (Xt); successively executing a second plurality of layers (VSR) of the neural network for generating a second image (Vt) at the higher resolution, at least one of the second plurality of layers generating intermediate output information (Ht), the second plurality of layers taking into account an output image (Yt-1) at the higher resolution generated by a previous instance of the network from a previous input image frame (Xt-1) and intermediate output information (Ht-1) generated by the second plurality of layers of the previous instance, and executing a third plurality of layers for combining the first (St) and second (Vt) images to produce an output image (Yt) for the instance of the network.
    Type: Grant
    Filed: December 13, 2021
    Date of Patent: August 15, 2023
    Inventors: Cian Ryan, Richard Blythman
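    The recurrent structure in the abstract above can be sketched per frame: a single-image branch produces St, a recurrent video branch produces Vt and hidden information Ht using the previous output Yt-1 and hidden state Ht-1, and a combining step yields Yt. Nearest-neighbour upsampling and the fixed mixing coefficients below stand in for the learned layers; they are illustrative assumptions only.

    ```python
    import numpy as np

    def upsample(x, s=2):
        """Nearest-neighbour upsample standing in for learned SR layers."""
        return np.kron(x, np.ones((s, s)))

    def super_resolve_sequence(frames, scale=2):
        """Toy version of the recurrent structure: SISR branch S_t, recurrent
        VSR branch V_t consuming the previous output Y_{t-1} and hidden
        state H_{t-1}, and a combining step producing Y_t."""
        h, w = frames[0].shape
        Y_prev = np.zeros((h * scale, w * scale))  # Y_{t-1}
        H_prev = np.zeros((h * scale, w * scale))  # H_{t-1}
        outputs = []
        for X in frames:
            S = upsample(X, scale)                       # SISR branch
            H = 0.5 * H_prev + 0.5 * upsample(X, scale)  # new hidden info H_t
            V = (upsample(X, scale) + Y_prev + H) / 3.0  # VSR branch
            Y = 0.5 * S + 0.5 * V                        # combining layers
            outputs.append(Y)
            Y_prev, H_prev = Y, H
        return outputs

    frames = [np.full((4, 4), float(t)) for t in range(3)]
    outs = super_resolve_sequence(frames)
    ```

    The key structural point the sketch preserves is that each instance of the network reuses both the previous output image and the previous intermediate information, rather than processing frames independently.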
  • Publication number: 20230107097
    Abstract: A method for identifying a gesture from one of a plurality of dynamic gestures, each dynamic gesture comprising a distinct movement made by a user over a period of time within a field of view of an image acquisition device, comprises iteratively: acquiring a current image from said image acquisition device at a given time; and passing at least a portion of the current image through a bidirectionally recurrent multi-layer classifier. A final layer of the multi-layer classifier comprises an output indicating a probability that a gesture from the plurality of dynamic gestures is being made by a user during the time of acquiring the image.
    Type: Application
    Filed: October 6, 2021
    Publication date: April 6, 2023
    Applicant: FotoNation Limited
    Inventors: Tudor TOPOLEANU, Szabolcs FULOP, Petronel BIGIOI, Cian RYAN, Joseph LEMLEY
  • Publication number: 20220254171
    Abstract: A method for producing a textural image from event information generated by an event camera comprises: accumulating event information from a plurality of events occurring during successive event cycles across a field of view of the event camera, each event indicating an x,y location within the field of view, a polarity for a change of detected light intensity incident at the x,y location and an event cycle at which the event occurred; in response to selected event cycles, analysing event information for one or more preceding event cycles to identify one or more regions of interest bounding a respective object to be tracked; and responsive to a threshold event criterion for a region of interest being met, generating a textural image for the region of interest from event information accumulated from within the region of interest.
    Type: Application
    Filed: February 24, 2022
    Publication date: August 11, 2022
    Applicant: FotoNation Limited
    Inventors: Amr Elrasad, Cian Ryan, Richard Blythman, Joe Lemley, Brian O'Sullivan
  • Publication number: 20220222496
    Abstract: Disclosed is a multi-modal convolutional neural network (CNN) for fusing image information from a frame-based camera, such as a near infra-red (NIR) camera, and an event camera for analysing facial characteristics in order to produce classifications such as head pose or eye gaze. The neural network processes image frames acquired from each camera through a plurality of convolutional layers to provide a respective set of one or more intermediate images. The network fuses at least one corresponding pair of intermediate images generated from each of the image frames through an array of fusing cells. Each fusing cell is connected to at least a respective element of each intermediate image and is trained to weight each element from each intermediate image to provide the fused output. The neural network further comprises at least one task network configured to generate one or more task outputs for the region of interest.
    Type: Application
    Filed: January 13, 2021
    Publication date: July 14, 2022
    Applicant: FotoNation Limited
    Inventors: Cian RYAN, Richard BLYTHMAN, Joseph LEMLEY, Paul KIELTY
  • Patent number: 11301702
    Abstract: A method for producing a textural image from event information generated by an event camera comprises: accumulating event information from a plurality of events occurring during successive event cycles across a field of view of the event camera, each event indicating an x,y location within the field of view, a polarity for a change of detected light intensity incident at the x,y location and an event cycle at which the event occurred; in response to selected event cycles, analysing event information for one or more preceding event cycles to identify one or more regions of interest bounding a respective object to be tracked; and responsive to a threshold event criterion for a region of interest being met, generating a textural image for the region of interest from event information accumulated from within the region of interest.
    Type: Grant
    Filed: July 29, 2020
    Date of Patent: April 12, 2022
    Assignee: FotoNation Limited
    Inventors: Cian Ryan, Richard Blythman, Joe Lemley, Amr Elrasad, Brian O'Sullivan
  • Publication number: 20220101497
    Abstract: A video super resolution method comprises successively executing instances of a first plurality of layers (SISR) of a neural network for generating a first image (St) at a higher resolution than an input image frame (Xt); successively executing a second plurality of layers (VSR) of the neural network for generating a second image (Vt) at the higher resolution, at least one of the second plurality of layers generating intermediate output information (Ht), the second plurality of layers taking into account an output image (Yt-1) at the higher resolution generated by a previous instance of the network from a previous input image frame (Xt-1) and intermediate output information (Ht-1) generated by the second plurality of layers of the previous instance, and executing a third plurality of layers for combining the first (St) and second (Vt) images to produce an output image (Yt) for the instance of the network.
    Type: Application
    Filed: December 13, 2021
    Publication date: March 31, 2022
    Applicant: FotoNation Limited
    Inventors: Cian Ryan, Richard Blythman
  • Patent number: 11270137
    Abstract: A method for producing a textural image from event information generated by an event camera comprises: accumulating event information from a plurality of events occurring during successive event cycles across a field of view of the event camera, each event indicating an x,y location within the field of view, a polarity for a change of detected light intensity incident at the x,y location and an event cycle at which the event occurred; in response to selected event cycles, analysing event information for one or more preceding event cycles to identify one or more regions of interest bounding a respective object to be tracked; and responsive to a threshold event criterion for a region of interest being met, generating a textural image for the region of interest from event information accumulated from within the region of interest.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: March 8, 2022
    Assignee: FotoNation Limited
    Inventors: Amr Elrasad, Cian Ryan, Richard Blythman, Joe Lemley, Brian O'Sullivan
  • Publication number: 20210397860
    Abstract: A method for producing a textural image from event information generated by an event camera comprises: accumulating event information from a plurality of events occurring during successive event cycles across a field of view of the event camera, each event indicating an x,y location within the field of view, a polarity for a change of detected light intensity incident at the x,y location and an event cycle at which the event occurred; in response to selected event cycles, analysing event information for one or more preceding event cycles to identify one or more regions of interest bounding a respective object to be tracked; and responsive to a threshold event criterion for a region of interest being met, generating a textural image for the region of interest from event information accumulated from within the region of interest.
    Type: Application
    Filed: July 29, 2020
    Publication date: December 23, 2021
    Applicant: FotoNation Limited
    Inventors: Cian RYAN, Richard BLYTHMAN, Joe LEMLEY, Amr ELRASAD, Brian O'SULLIVAN
  • Publication number: 20210397861
    Abstract: A method for producing a textural image from event information generated by an event camera comprises: accumulating event information from a plurality of events occurring during successive event cycles across a field of view of the event camera, each event indicating an x,y location within the field of view, a polarity for a change of detected light intensity incident at the x,y location and an event cycle at which the event occurred; in response to selected event cycles, analysing event information for one or more preceding event cycles to identify one or more regions of interest bounding a respective object to be tracked; and responsive to a threshold event criterion for a region of interest being met, generating a textural image for the region of interest from event information accumulated from within the region of interest.
    Type: Application
    Filed: September 29, 2020
    Publication date: December 23, 2021
    Applicant: FotoNation Limited
    Inventors: Amr ELRASAD, Cian RYAN, Richard BLYTHMAN, Joe LEMLEY, Brian O'SULLIVAN
  • Patent number: 11200644
    Abstract: A video super resolution method comprises successively executing instances of a first plurality of layers (SISR) of a neural network for generating a first image (St) at a higher resolution than an input image frame (Xt); successively executing a second plurality of layers (VSR) of the neural network for generating a second image (Vt) at the higher resolution, at least one of the second plurality of layers generating intermediate output information (Ht), the second plurality of layers taking into account an output image (Yt-1) at the higher resolution generated by a previous instance of the network from a previous input image frame (Xt-1) and intermediate output information (Ht-1) generated by the second plurality of layers of the previous instance, and executing a third plurality of layers for combining the first (St) and second (Vt) images to produce an output image (Yt) for the instance of the network.
    Type: Grant
    Filed: February 27, 2020
    Date of Patent: December 14, 2021
    Inventors: Cian Ryan, Richard Blythman
  • Patent number: 11164019
    Abstract: A method for producing a textural image from event information generated by an event camera comprises: accumulating event information from a plurality of events occurring during successive event cycles across a field of view of the event camera, each event indicating an x,y location within the field of view, a polarity for a change of detected light intensity incident at the x,y location and an event cycle at which the event occurred; in response to selected event cycles, analysing event information for one or more preceding event cycles to identify one or more regions of interest bounding a respective object to be tracked; and responsive to a threshold event criterion for a region of interest being met, generating a textural image for the region of interest from event information accumulated from within the region of interest.
    Type: Grant
    Filed: June 17, 2020
    Date of Patent: November 2, 2021
    Assignee: FotoNation Limited
    Inventors: Cian Ryan, Richard Blythman, Joe Lemley, Amr Elrasad
  • Publication number: 20210272247
    Abstract: A video super resolution method comprises successively executing instances of a first plurality of layers (SISR) of a neural network for generating a first image (St) at a higher resolution than an input image frame (Xt); successively executing a second plurality of layers (VSR) of the neural network for generating a second image (Vt) at the higher resolution, at least one of the second plurality of layers generating intermediate output information (Ht), the second plurality of layers taking into account an output image (Yt-1) at the higher resolution generated by a previous instance of the network from a previous input image frame (Xt-1) and intermediate output information (Ht-1) generated by the second plurality of layers of the previous instance, and executing a third plurality of layers for combining the first (St) and second (Vt) images to produce an output image (Yt) for the instance of the network.
    Type: Application
    Filed: February 27, 2020
    Publication date: September 2, 2021
    Applicant: FotoNation Limited
    Inventors: Cian RYAN, Richard BLYTHMAN
  • Patent number: 10751204
    Abstract: Methods and apparatus are disclosed for customizing an elution rate of a stent. The stent includes a hollow strut that forms the stent, the hollow strut defining a lumenal space, a drug formulation disposed within the lumenal space of the hollow strut, and at least one side port for eluting the drug formulation in vivo. When the stent is in the radially expanded configuration the hollow strut is deformable from a first configuration that has a first elution rate for the drug formulation to a second configuration that has a second elution rate for the drug formulation. The second elution rate is faster than the first elution rate. The hollow strut deforms from the first configuration to the second configuration upon application of an applied pressure above a predetermined threshold.
    Type: Grant
    Filed: June 8, 2017
    Date of Patent: August 25, 2020
    Assignee: Medtronic, Inc.
    Inventors: Cian Ryan, David Hobbins, Shane Nolan, Michael Sayers, Eamon Keane, Brian Dowling, Jonathan Cope, Conor O'Donovan
  • Publication number: 20170354521
    Abstract: Methods and apparatus are disclosed for customizing an elution rate of a stent. The stent includes a hollow strut that forms the stent, the hollow strut defining a lumenal space, a drug formulation disposed within the lumenal space of the hollow strut, and at least one side port for eluting the drug formulation in vivo. When the stent is in the radially expanded configuration the hollow strut is deformable from a first configuration that has a first elution rate for the drug formulation to a second configuration that has a second elution rate for the drug formulation. The second elution rate is faster than the first elution rate. The hollow strut deforms from the first configuration to the second configuration upon application of an applied pressure above a predetermined threshold.
    Type: Application
    Filed: June 8, 2017
    Publication date: December 14, 2017
    Inventors: Cian Ryan, David Hobbins, Shane Nolan, Michael Sayers, Eamon Keane, Brian Dowling, Jonathan Cope, Conor O'Donovan