Patents by Inventor Suren Jayasuriya

Suren Jayasuriya has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240153103
    Abstract: Tracking-based motion deblurring via coded exposure is provided. Fast object tracking is useful for a variety of applications in surveillance, autonomous vehicles, and remote sensing. In particular, there is a need to have these algorithms embedded on specialized hardware, such as field-programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs), to ensure energy-efficient operation while saving on latency, bandwidth, and memory access/storage. In an exemplary aspect, an object tracker is used to track motion of one or more objects in a scene captured by an image sensor. The object tracker is coupled with coded exposure of the image sensor, which modulates photodiodes in the image sensor with a known exposure function (e.g., based on the object tracking). This allows for motion blur to be encoded in a characteristic manner in image data captured by the image sensor. Then, in post-processing, deblurring is performed using a computational algorithm.
    Type: Application
    Filed: November 21, 2023
    Publication date: May 9, 2024
    Applicant: Arizona Board of Regents on Behalf of Arizona State University
    Inventors: Suren Jayasuriya, Odrika Iqbal, Andreas Spanias
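
As a rough illustration of the coded-exposure idea in the abstract above, the sketch below applies a known binary exposure code as a 1D blur kernel and then inverts the blur with a regularized (Wiener-style) filter. The code pattern, chip length, and function names are illustrative assumptions, not the patented method, which couples the exposure code to the object tracker.

```python
import numpy as np

def coded_psf(code, chip_len=2):
    """1D point spread function for motion blur under a known binary exposure
    code; each code chip covers `chip_len` pixels of motion (illustrative)."""
    psf = np.repeat(np.asarray(code, dtype=float), chip_len)
    return psf / psf.sum()

def blur_rows(image, psf):
    """Apply the coded blur along image rows (circular convolution for simplicity)."""
    H = np.fft.fft(psf, image.shape[1])
    return np.real(np.fft.ifft(np.fft.fft(image, axis=1) * H, axis=1))

def wiener_deblur_rows(image, psf, snr=1e2):
    """Invert the known coded blur row-by-row with a regularized (Wiener) filter."""
    H = np.fft.fft(psf, image.shape[1])
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft(np.fft.fft(image, axis=1) * G, axis=1))

# Toy usage: simulate coded horizontal motion blur on a random image, then invert it.
rng = np.random.default_rng(0)
sharp = rng.random((64, 128))
psf = coded_psf([1, 0, 1, 1, 0, 1, 0, 1])
recovered = wiener_deblur_rows(blur_rows(sharp, psf), psf)
print("mean abs error after deblurring:", np.abs(recovered - sharp).mean())
```
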
  • Patent number: 11978466
    Abstract: Systems, methods, and apparatuses to restore degraded speech via a modified diffusion model are described. An exemplary system is specially configured to train a diffusion-based vocoder containing an upsampler, based on pairing original speech x and degraded speech mel-spectrum mT samples; train a deep convolutional neural network (CNN) upsampler based on a mean absolute error loss to match the estimated original speech x̂ outputted by the diffusion-based vocoder by extracting the upsampler, generating a reference conditioner, and generating a weighted altered conditioner ĉTn. The system further optimizes speech quality to invert the non-linear transformation and estimate lost data by feeding the degraded mel-spectrum mT through the CNN upsampler and feeding the degraded mel-spectrum mT through the diffusion-based vocoder. The system then generates estimated original speech x̂ based on the corresponding degraded speech mel-spectrum mT. Other related embodiments are described.
    Type: Grant
    Filed: May 27, 2022
    Date of Patent: May 7, 2024
    Assignee: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Jianwei Zhang, Suren Jayasuriya, Visar Berisha
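
The abstract above pairs a diffusion-based vocoder with a CNN upsampler trained under a mean absolute error (L1) loss. The PyTorch sketch below shows only that training idea under assumed tensor shapes; the network layout, mel-bin count, and conditioner tensors are placeholders, not the patented architecture.

```python
import torch
import torch.nn as nn

class CNNUpsampler(nn.Module):
    """Small convolutional upsampler mapping a degraded mel-spectrum to a
    conditioner; layer sizes are illustrative only."""
    def __init__(self, mel_bins=80):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(mel_bins, 256, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(256, mel_bins, kernel_size=3, padding=1),
        )

    def forward(self, mel):          # mel: (batch, mel_bins, frames)
        return self.net(mel)

def train_step(upsampler, optimizer, degraded_mel, reference_conditioner):
    """One optimization step: match a reference conditioner under an L1 (mean
    absolute error) loss, as the abstract outlines."""
    optimizer.zero_grad()
    predicted = upsampler(degraded_mel)
    loss = nn.functional.l1_loss(predicted, reference_conditioner)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random tensors standing in for mel-spectra and conditioners.
upsampler = CNNUpsampler()
optimizer = torch.optim.Adam(upsampler.parameters(), lr=1e-4)
degraded_mel = torch.randn(4, 80, 100)
reference_conditioner = torch.randn(4, 80, 100)
print(train_step(upsampler, optimizer, degraded_mel, reference_conditioner))
```
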
  • Patent number: 11880984
    Abstract: Tracking-based motion deblurring via coded exposure is provided. Fast object tracking is useful for a variety of applications in surveillance, autonomous vehicles, and remote sensing. In particular, there is a need to have these algorithms embedded on specialized hardware, such as field-programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs), to ensure energy-efficient operation while saving on latency, bandwidth, and memory access/storage. In an exemplary aspect, an object tracker is used to track motion of one or more objects in a scene captured by an image sensor. The object tracker is coupled with coded exposure of the image sensor, which modulates photodiodes in the image sensor with a known exposure function (e.g., based on the object tracking). This allows for motion blur to be encoded in a characteristic manner in image data captured by the image sensor. Then, in post-processing, deblurring is performed using a computational algorithm.
    Type: Grant
    Filed: June 15, 2021
    Date of Patent: January 23, 2024
    Assignee: Arizona Board of Regents on Behalf of Arizona State University
    Inventors: Suren Jayasuriya, Odrika Iqbal, Andreas Spanias
  • Patent number: 11741643
    Abstract: A system for generating a 4D representation of a scene in motion given a sinogram collected from the scene while in motion. The system generates, based on scene parameters, an initial 3D representation of the scene indicating linear attenuation coefficients (LACs) of voxels of the scene. The system generates, based on motion parameters, a 4D motion field indicating motion of the scene. The system generates, based on the initial 3D representation and the 4D motion field, a 4D representation of the scene that is a sequence of 3D representations having LACs. The system generates a synthesized sinogram of the scene from the generated 4D representation. The system adjusts the scene parameters and the motion parameters based on differences between the collected sinogram and the synthesized sinogram. The processing is repeated until the differences satisfy a termination criterion.
    Type: Grant
    Filed: March 22, 2021
    Date of Patent: August 29, 2023
    Assignees: Lawrence Livermore National Security, LLC, Arizona Board of Regents on Behalf of Arizona State University
    Inventors: Hyojin Kim, Rushil Anirudh, Kyle Champley, Kadri Aditya Mohan, Albert William Reed, Suren Jayasuriya
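
The reconstruction described above is an analysis-by-synthesis loop: render a 4D scene from the current scene and motion parameters, forward-project it to a synthesized sinogram, compare against the measured sinogram, and adjust the parameters until a termination criterion is met. The sketch below shows that loop with a crude finite-difference update; `render_scene`, `synthesize_sinogram`, and the optimizer are placeholders, not the patented method.

```python
import numpy as np

def reconstruct_4d(measured_sinogram, scene_params, motion_params,
                   render_scene, synthesize_sinogram,
                   step=1e-3, tol=1e-4, max_iters=500):
    """Schematic analysis-by-synthesis loop: adjust scene and motion parameters
    until the synthesized sinogram matches the measured one.
    `render_scene` maps (scene_params, motion_params) -> 4D representation;
    `synthesize_sinogram` forward-projects that representation."""
    params = np.concatenate([scene_params, motion_params])
    split = len(scene_params)

    def residual(p):
        volume_4d = render_scene(p[:split], p[split:])
        return np.mean((synthesize_sinogram(volume_4d) - measured_sinogram) ** 2)

    for _ in range(max_iters):
        loss = residual(params)
        if loss < tol:                      # termination criterion
            break
        # Crude finite-difference gradient step (stand-in for the real optimizer).
        grad = np.zeros_like(params)
        eps = 1e-5
        for i in range(len(params)):
            bumped = params.copy()
            bumped[i] += eps
            grad[i] = (residual(bumped) - loss) / eps
        params -= step * grad
    return params[:split], params[split:]

# Toy usage: identity "renderer" and "projector" on a tiny parameter vector.
target = np.array([1.0, 2.0, 3.0])
render = lambda s, m: np.concatenate([s, m])
project = lambda v: v
print(reconstruct_4d(target, np.zeros(2), np.zeros(1), render, project, step=1e-1))
```
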
  • Publication number: 20230113589
    Abstract: The disclosure concerns biometeorological sensing devices including a processor communicatively coupled to a memory, and a plurality of sensors communicatively coupled to the processor. The plurality of sensors includes a humidity sensor, a UV sensor, an anemometer, an atmospheric thermometer, a globe thermometer, and a camera. The device also includes a network interface communicatively coupled to the processor. The processor is configured to estimate a mean radiant temperature (MRT) using data received from the plurality of sensors, identify a person in an image received from the camera, determine a bounding box that encloses the person in the image, generate a shadow map from the image, calculate an intersection over union (IOU) of the bounding box with the shadow map to determine if the person is in the shade, and transmit observed space usage and estimated MRT to a server communicatively coupled to the network interface.
    Type: Application
    Filed: October 7, 2022
    Publication date: April 13, 2023
    Applicant: ARIZONA BOARD OF REGENTS ON BEHALF OF ARIZONA STATE UNIVERSITY
    Inventors: Karthik Kashinath Kulkarni, Suren Jayasuriya, Ariane Middel, Tejaswi Gowda, Florian Arwed Schneider
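
The shade test in the abstract above reduces to an intersection-over-union between a person's bounding box and a binary shadow map. A minimal sketch follows; the decision threshold and array shapes are assumptions, not values from the application.

```python
import numpy as np

def box_shadow_iou(box, shadow_map):
    """Intersection-over-union of a bounding box (x0, y0, x1, y1) with a binary
    shadow map (1 = shaded pixel). Returns 0.0 when the union is empty."""
    x0, y0, x1, y1 = box
    box_mask = np.zeros_like(shadow_map, dtype=bool)
    box_mask[y0:y1, x0:x1] = True
    shadow = shadow_map.astype(bool)
    intersection = np.logical_and(box_mask, shadow).sum()
    union = np.logical_or(box_mask, shadow).sum()
    return float(intersection) / union if union else 0.0

def person_in_shade(box, shadow_map, threshold=0.5):
    """Assumed decision rule: flag the person as shaded when the IOU exceeds a threshold."""
    return box_shadow_iou(box, shadow_map) >= threshold

# Toy usage: a 10x10 scene where the left half is in shadow.
shadow_map = np.zeros((10, 10), dtype=np.uint8)
shadow_map[:, :5] = 1
print(person_in_shade((0, 0, 5, 10), shadow_map))   # box lies fully inside the shadow
```
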
  • Publication number: 20220392471
    Abstract: Systems, methods, and apparatuses to restore degraded speech via a modified diffusion model are described. An exemplary system is specially configured to train a diffusion-based vocoder containing an upsampler, based on pairing original speech x and degraded speech mel-spectrum mT samples; train a deep convolutional neural network (CNN) upsampler based on a mean absolute error loss to match the estimated original speech x̂ outputted by the diffusion-based vocoder by extracting the upsampler, generating a reference conditioner, and generating a weighted altered conditioner ĉTn. The system further optimizes speech quality to invert the non-linear transformation and estimate lost data by feeding the degraded mel-spectrum mT through the CNN upsampler and feeding the degraded mel-spectrum mT through the diffusion-based vocoder. The system then generates estimated original speech x̂ based on the corresponding degraded speech mel-spectrum mT. Other related embodiments are described.
    Type: Application
    Filed: May 27, 2022
    Publication date: December 8, 2022
    Inventors: Jianwei Zhang, Suren Jayasuriya, Visar Berisha
  • Patent number: 11481881
    Abstract: Various embodiments of systems and methods for adaptive video subsampling for energy-efficient object detection are disclosed herein.
    Type: Grant
    Filed: June 15, 2020
    Date of Patent: October 25, 2022
    Assignee: Arizona Board of Regents on Behalf of Arizona State University
    Inventors: Andreas Spanias, Pavan Turaga, Sameeksha Katoch, Suren Jayasuriya, Divya Mohan
  • Publication number: 20220301241
    Abstract: A system for generating a 4D representation of a scene in motion given a sinogram collected from the scene while in motion. The system generates, based on scene parameters, an initial 3D representation of the scene indicating linear attenuation coefficients (LACs) of voxels of the scene. The system generates, based on motion parameters, a 4D motion field indicating motion of the scene. The system generates, based on the initial 3D representation and the 4D motion field, a 4D representation of the scene that is a sequence of 3D representations having LACs. The system generates a synthesized sinogram of the scene from the generated 4D representation. The system adjusts the scene parameters and the motion parameters based on differences between the collected sinogram and the synthesized sinogram. The processing is repeated until the differences satisfy a termination criterion.
    Type: Application
    Filed: March 22, 2021
    Publication date: September 22, 2022
    Inventors: Hyojin Kim, Rushil Anirudh, Kyle Champley, Kadri Aditya Mohan, Albert William Reed, Suren Jayasuriya
  • Publication number: 20210396580
    Abstract: Various embodiments of a system for linear unmixing of spectral images using a dispersion model are disclosed herein.
    Type: Application
    Filed: June 21, 2021
    Publication date: December 23, 2021
    Applicants: Arizona Board of Regents on Behalf of Arizona State University, Arizona Board Of Regents For And On Behalf Of Northern Arizona University
    Inventors: John Janiczek, Suren Jayasuriya, Gautam Dasarathy, Christopher Edwards, Philip Christensen
  • Publication number: 20210390669
    Abstract: Tracking-based motion deblurring via coded exposure is provided. Fast object tracking is useful for a variety of applications in surveillance, autonomous vehicles, and remote sensing. In particular, there is a need to have these algorithms embedded on specialized hardware, such as field-programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs), to ensure energy-efficient operation while saving on latency, bandwidth, and memory access/storage. In an exemplary aspect, an object tracker is used to track motion of one or more objects in a scene captured by an image sensor. The object tracker is coupled with coded exposure of the image sensor, which modulates photodiodes in the image sensor with a known exposure function (e.g., based on the object tracking). This allows for motion blur to be encoded in a characteristic manner in image data captured by the image sensor. Then, in post-processing, deblurring is performed using a computational algorithm.
    Type: Application
    Filed: June 15, 2021
    Publication date: December 16, 2021
    Applicant: Arizona Board of Regents on behalf of Arizona State University
    Inventors: Suren Jayasuriya, Odrika Iqbal, Andreas Spanias
  • Patent number: 11127159
    Abstract: Various embodiments for system and method of adaptive lighting for data-driven non-line-of-sight imaging are disclosed.
    Type: Grant
    Filed: January 8, 2020
    Date of Patent: September 21, 2021
    Assignee: Arizona Board of Regents on Behalf of Arizona State University
    Inventors: Sreenithy Chandran, Suren Jayasuriya
  • Patent number: 10983216
    Abstract: A depth of field imaging apparatus includes a light field imager and a time of flight imager combined in a single on-chip architecture. This hybrid device enables simultaneous capture of a light field image and a time of flight image of an object scene. Algorithms are described that enable the simultaneous acquisition of light field images and time of flight images. Associated hybrid pixel structures, device arrays (hybrid imaging systems), and device applications are disclosed.
    Type: Grant
    Filed: February 21, 2020
    Date of Patent: April 20, 2021
    Assignee: CORNELL UNIVERSITY
    Inventors: Alyosha Molnar, Suren Jayasuriya, Sriram Sivaramakrishnan
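
The abstract above describes a hybrid sensor that captures a light field and a time-of-flight measurement in one exposure. The sketch below only models the captured data and applies the standard continuous-wave ToF depth relation d = c·φ / (4πf); the field names and shapes are assumptions, and nothing here reflects the patented pixel design.

```python
from dataclasses import dataclass
import numpy as np

C = 299_792_458.0  # speed of light, m/s

@dataclass
class HybridCapture:
    """Single exposure from an assumed hybrid sensor: angular light-field views
    plus per-pixel time-of-flight phase (field names are illustrative)."""
    light_field: np.ndarray   # (views_y, views_x, H, W) sub-aperture images
    tof_phase: np.ndarray     # (H, W) phase of the modulated return signal, radians
    modulation_hz: float      # ToF modulation frequency

def tof_depth(capture: HybridCapture) -> np.ndarray:
    """Standard continuous-wave ToF depth: d = c * phase / (4 * pi * f_mod)."""
    return C * capture.tof_phase / (4.0 * np.pi * capture.modulation_hz)

# Toy usage: a 2x2-view light field with a flat phase map at 20 MHz modulation.
capture = HybridCapture(
    light_field=np.zeros((2, 2, 4, 4)),
    tof_phase=np.full((4, 4), np.pi / 2),
    modulation_hz=20e6,
)
print(tof_depth(capture)[0, 0])  # ~1.87 m for a quarter-cycle phase shift
```
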
  • Publication number: 20210012472
    Abstract: Various embodiments of systems and methods for adaptive video subsampling for energy-efficient object detection are disclosed herein.
    Type: Application
    Filed: June 15, 2020
    Publication date: January 14, 2021
    Applicant: Arizona Board of Regents on Behalf of Arizona State University
    Inventors: Andreas Spanias, Pavan Turaga, Sameeksha Katoch, Suren Jayasuriya, Divya Mohan
  • Patent number: 10735675
    Abstract: A configurable image processing system can process image data for multiple applications by including an image sensor capable of operating in a machine vision mode and a photography mode in response to an operating system command. When operating in machine vision mode, the image sensor may send image data to a first processor for machine vision processing. When operating in photography mode, the image sensor may send image data to an image coprocessor for photography processing.
    Type: Grant
    Filed: April 13, 2018
    Date of Patent: August 4, 2020
    Assignee: Cornell University
    Inventors: Mark Buckler, Adrian Sampson, Suren Jayasuriya
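
The abstract above describes routing sensor data to different processors depending on a mode set by the operating system. The sketch below shows that routing pattern with assumed class and handler names; it is not the patented system.

```python
from enum import Enum, auto

class SensorMode(Enum):
    MACHINE_VISION = auto()
    PHOTOGRAPHY = auto()

class ConfigurableImagePipeline:
    """Routes frames from one image sensor to different processors depending on
    the mode set by an operating-system command (names are illustrative)."""
    def __init__(self, vision_processor, photo_coprocessor):
        self.vision_processor = vision_processor
        self.photo_coprocessor = photo_coprocessor
        self.mode = SensorMode.PHOTOGRAPHY

    def set_mode(self, mode: SensorMode):
        # Stand-in for the operating-system command that reconfigures the sensor.
        self.mode = mode

    def on_frame(self, frame):
        if self.mode is SensorMode.MACHINE_VISION:
            return self.vision_processor(frame)     # e.g. detection / tracking
        return self.photo_coprocessor(frame)        # e.g. demosaic / tone mapping

# Toy usage with trivial stand-in processors.
pipeline = ConfigurableImagePipeline(lambda f: f"vision({f})", lambda f: f"photo({f})")
pipeline.set_mode(SensorMode.MACHINE_VISION)
print(pipeline.on_frame("frame0"))
```
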
  • Publication number: 20200226783
    Abstract: Various embodiments for system and method of adaptive lighting for data-driven non-line-of-sight imaging are disclosed.
    Type: Application
    Filed: January 8, 2020
    Publication date: July 16, 2020
    Applicant: Arizona Board of Regents on Behalf of Arizona State University
    Inventors: Sreenithy Chandran, Suren Jayasuriya
  • Publication number: 20200191967
    Abstract: A depth of field imaging apparatus includes a light field imager and a time of flight imager combined in a single on-chip architecture. This hybrid device enables simultaneous capture of a light field image and a time of flight image of an object scene. Algorithms are described that enable the simultaneous acquisition of light field images and time of flight images. Associated hybrid pixel structures, device arrays (hybrid imaging systems), and device applications are disclosed.
    Type: Application
    Filed: February 21, 2020
    Publication date: June 18, 2020
    Applicant: CORNELL UNIVERSITY
    Inventors: Alyosha Molnar, Suren Jayasuriya, Sriram Sivaramakrishnan
  • Patent number: 10605916
    Abstract: A depth of field imaging apparatus includes a light field imager and a time of flight imager combined in a single on-chip architecture. This hybrid device enables simultaneous capture of a light field image and a time of flight image of an object scene. Algorithms are described that enable the simultaneous acquisition of light field images and time of flight images. Associated hybrid pixel structures, device arrays (hybrid imaging systems), and device applications are disclosed.
    Type: Grant
    Filed: March 17, 2016
    Date of Patent: March 31, 2020
    Assignee: CORNELL UNIVERSITY
    Inventors: Alyosha Molnar, Suren Jayasuriya, Sriram Sivaramakrishnan
  • Publication number: 20190320127
    Abstract: A configurable image processing system can process image data for multiple applications by including an image sensor capable of operating in a machine vision mode and a photography mode in response to an operating system command. When operating in machine vision mode, the image sensor may send image data to a first processor for machine vision processing. When operating in photography mode, the image sensor may send image data to an image coprocessor for photography processing.
    Type: Application
    Filed: April 13, 2018
    Publication date: October 17, 2019
    Inventors: Mark Buckler, Adrian Sampson, Suren Jayasuriya
  • Publication number: 20190033448
    Abstract: A depth of field imaging apparatus includes a light field imager and a time of flight imager combined in a single on-chip architecture. This hybrid device enables simultaneous capture of a light field image and a time of flight image of an object scene. Algorithms are described that enable the simultaneous acquisition of light field images and time of flight images. Associated hybrid pixel structures, device arrays (hybrid imaging systems), and device applications are disclosed.
    Type: Application
    Filed: March 17, 2016
    Publication date: January 31, 2019
    Applicant: CORNELL UNIVERSITY
    Inventors: Alyosha Molnar, Suren Jayasuriya, Sriram Sivaramakrishnan