Patents by Inventor Suren Jayasuriya
Suren Jayasuriya has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250057466
Abstract: Described are platforms, systems, media, and methods for evaluating, monitoring, and/or treating a subject for brain injury based on machine learning analysis of one or more of brain imaging features, clinical features, demographic features, or speech features.
Type: Application
Filed: August 30, 2024
Publication date: February 20, 2025
Inventors: Visar BERISHA, Jianwei ZHANG, Todd J. SCHWEDT, Catherine CHONG, Suren JAYASURIYA, Teresa Wu
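The multimodal analysis described above can be illustrated with a minimal sketch: feature groups (imaging, clinical, demographic, speech) are concatenated into one vector and classified. The feature values and the nearest-centroid classifier below are illustrative assumptions, not the patent's method.

```python
# Minimal sketch (illustrative only): combine several feature groups into one
# vector and classify with a nearest-centroid rule. All feature values and the
# classifier choice are hypothetical, not taken from the patent.
import math

def combine_features(imaging, clinical, demographic, speech):
    """Concatenate the per-modality feature lists into one feature vector."""
    return list(imaging) + list(clinical) + list(demographic) + list(speech)

def nearest_centroid(vector, centroids):
    """Return the label of the centroid closest to the feature vector."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(vector, centroids[label]))

# Hypothetical class centroids learned from training data.
centroids = {
    "injury":    combine_features([0.9, 0.8], [1.0], [0.6], [0.7, 0.9]),
    "no_injury": combine_features([0.1, 0.2], [0.0], [0.4], [0.1, 0.2]),
}

# A new subject whose features resemble the "injury" centroid.
subject = combine_features([0.85, 0.75], [0.9], [0.55], [0.65, 0.8])
prediction = nearest_centroid(subject, centroids)
```

Any real system of this kind would use a trained model rather than fixed centroids; the sketch only shows how heterogeneous feature types can feed one classifier.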
-
Publication number: 20240369703
Abstract: A system may be configured for implementing neural volumetric reconstruction for coherent synthetic aperture sonar. Exemplary systems include means for measuring underwater objects using high-resolution synthetic aperture sonar (SAS) by coherently combining data from a moving array to form high-resolution imagery. Such a system may receive a waveform from the measurements of the underwater object and optimize the waveform for deconvolution via an iterative optimization process applying an adaptable approach to waveform compression, where performance is tuned via sparsity and smoothness parameters. Such a system may deconvolve the waveform using pulse deconvolution and use the deconvolved waveforms in an analysis-by-synthesis optimization operation with an implicit neural representation to yield a higher-resolution, superior volumetric reconstruction of the underwater object.
Type: Application
Filed: April 26, 2024
Publication date: November 7, 2024
Inventors: Albert Reed, Juhyeon Kim, Thomas Blanford, Adithya Pediredla, Daniel Brown, Suren Jayasuriya
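The pulse-deconvolution step above can be sketched as a regularized inverse problem: recover a sparse reflectivity sequence from its convolution with a known pulse, with sparsity and smoothness weights as the tuning parameters. This toy 1D version (the pulse shape, weights, and ISTA-style solver are assumptions for illustration) is not the patent's implementation.

```python
# Toy 1D pulse deconvolution (illustrative sketch, not the patented method):
# minimize 0.5*||h * x - y||^2 + l1*|x|_1 + 0.5*l2*||D2 x||^2 with an
# ISTA-style iteration. Sparsity (l1) and smoothness (l2) are the tunable
# parameters mentioned in the abstract.

def conv(x, h):
    """Full linear convolution of x with pulse h."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def corr(r, h, n):
    """Adjoint of conv: correlate residual r with h, output length n."""
    return [sum(h[j] * r[i + j] for j in range(len(h))) for i in range(n)]

def second_diff_grad(x):
    """Gradient of the quadratic smoothness penalty 0.5*||D2 x||^2."""
    n = len(x)
    d = [x[i - 1] - 2 * x[i] + x[i + 1] if 0 < i < n - 1 else 0.0
         for i in range(n)]
    g = [0.0] * n
    for i in range(1, n - 1):          # apply D2 transpose
        g[i - 1] += d[i]; g[i] -= 2 * d[i]; g[i + 1] += d[i]
    return g

def deconvolve(y, h, n, l1=0.05, l2=0.01, step=0.1, iters=300):
    x = [0.0] * n
    for _ in range(iters):
        r = [a - b for a, b in zip(conv(x, h), y)]        # residual h*x - y
        g = corr(r, h, n)                                 # data-term gradient
        s = second_diff_grad(x)                           # smoothness gradient
        x = [xi - step * (gi + l2 * si) for xi, gi, si in zip(x, g, s)]
        # soft-threshold step enforces sparsity
        x = [max(abs(xi) - step * l1, 0.0) * (1 if xi > 0 else -1) for xi in x]
    return x

# Ground truth: two sparse reflectors; the pulse blurs them together.
pulse = [0.25, 0.5, 1.0, 0.5, 0.25]
truth = [0.0] * 32
truth[8], truth[20] = 1.0, 0.6
measured = conv(truth, pulse)
estimate = deconvolve(measured, pulse, len(truth))
```

The estimate's dominant peak should land at the stronger reflector's position, showing how the sparsity and smoothness weights trade resolution against noise tolerance.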
-
Publication number: 20240153103
Abstract: Tracking-based motion deblurring via coded exposure is provided. Fast object tracking is useful for a variety of applications in surveillance, autonomous vehicles, and remote sensing. In particular, there is a need to have these algorithms embedded on specialized hardware, such as field-programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs), to ensure energy-efficient operation while saving on latency, bandwidth, and memory access/storage. In an exemplary aspect, an object tracker is used to track motion of one or more objects in a scene captured by an image sensor. The object tracker is coupled with coded exposure of the image sensor, which modulates photodiodes in the image sensor with a known exposure function (e.g., based on the object tracking). This allows for motion blur to be encoded in a characteristic manner in image data captured by the image sensor. Then, in post-processing, deblurring is performed using a computational algorithm.
Type: Application
Filed: November 21, 2023
Publication date: May 9, 2024
Applicant: Arizona Board of Regents on Behalf of Arizona State University
Inventors: Suren Jayasuriya, Odrika Iqbal, Andreas Spanias
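The key idea above, that a known fluttered exposure code makes motion blur recoverable, can be seen in a small frequency-domain sketch: a plain "box" exposure has exact zeros in its spectrum (those frequencies are lost), while a broadband binary code keeps every frequency above zero. The specific code below is an arbitrary illustration, not a code from the patent.

```python
# Sketch: why coded exposure helps deblurring. A constant (box) exposure has
# exact zeros in its frequency response, so those frequencies of the moving
# scene are unrecoverable; a fluttered binary code (illustrative choice, not
# from the patent) keeps every frequency magnitude bounded away from zero.
import cmath

def dft_magnitudes(kernel, n):
    """Magnitude of the n-point DFT of a blur kernel (zero-padded)."""
    x = list(kernel) + [0.0] * (n - len(kernel))
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) for k in range(n)]

N = 32
box_kernel = [1.0] * 8                                  # shutter open 8 ticks
coded_kernel = [1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0]  # fluttered shutter

box_min = min(dft_magnitudes(box_kernel, N))
coded_min = min(dft_magnitudes(coded_kernel, N))
```

Because the coded kernel's spectrum never vanishes, deconvolution in post-processing stays well-conditioned, which is what allows the characteristic encoded blur to be undone computationally.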
-
Patent number: 11978466
Abstract: Systems, methods, and apparatuses to restore degraded speech via a modified diffusion model are described. An exemplary system is specially configured to train a diffusion-based vocoder containing an upsampler, based on pairing original speech x and degraded speech mel-spectrum mT samples, and to train a deep convolutional neural network (CNN) upsampler based on a mean absolute error loss to match the estimated original speech x̂ outputted by the diffusion-based vocoder, by extracting the upsampler, generating a reference conditioner, and generating a weighted altered conditioner ĉTn. The system further optimizes speech quality to invert the non-linear transformation and estimate lost data by feeding the degraded mel-spectrum mT through the CNN upsampler and through the diffusion-based vocoder. The system then generates estimated original speech x̂ based on the corresponding degraded speech mel-spectrum mT. Other related embodiments are described.
Type: Grant
Filed: May 27, 2022
Date of Patent: May 7, 2024
Assignee: Arizona Board of Regents on behalf of Arizona State University
Inventors: Jianwei Zhang, Suren Jayasuriya, Visar Berisha
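The mean-absolute-error training step above can be illustrated at toy scale: fit a parameter so that the processed degraded signal matches the reference under an L1 (MAE) objective via subgradient descent. The single scalar "gain" model below is a deliberate oversimplification standing in for the CNN upsampler.

```python
# Toy sketch of MAE-loss training (the patent trains a deep CNN upsampler;
# here one gain parameter stands in for the network, purely for illustration).
def mae(pred, target):
    """Mean absolute error between two equal-length sequences."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def sign(v):
    return (v > 0) - (v < 0)

def train_gain(degraded, original, lr=0.01, iters=500):
    """Subgradient descent on the MAE between g*degraded and original."""
    g = 1.0
    for _ in range(iters):
        grad = sum(sign(g * d - x) * d for d, x in zip(degraded, original))
        g -= lr * grad / len(degraded)
    return g

# Hypothetical mel-spectrum frames: degradation attenuated the original by 0.5.
original = [1.0, 2.0, 3.0, 4.0, 5.0]
degraded = [0.5, 1.0, 1.5, 2.0, 2.5]

initial_loss = mae(degraded, original)
g = train_gain(degraded, original)
trained_loss = mae([g * d for d in degraded], original)
```

The fitted gain converges near 2.0, the inverse of the attenuation, mirroring how the trained upsampler learns to invert the degradation before the vocoder resynthesizes speech.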
-
Patent number: 11880984
Abstract: Tracking-based motion deblurring via coded exposure is provided. Fast object tracking is useful for a variety of applications in surveillance, autonomous vehicles, and remote sensing. In particular, there is a need to have these algorithms embedded on specialized hardware, such as field-programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs), to ensure energy-efficient operation while saving on latency, bandwidth, and memory access/storage. In an exemplary aspect, an object tracker is used to track motion of one or more objects in a scene captured by an image sensor. The object tracker is coupled with coded exposure of the image sensor, which modulates photodiodes in the image sensor with a known exposure function (e.g., based on the object tracking). This allows for motion blur to be encoded in a characteristic manner in image data captured by the image sensor. Then, in post-processing, deblurring is performed using a computational algorithm.
Type: Grant
Filed: June 15, 2021
Date of Patent: January 23, 2024
Assignee: Arizona Board of Regents on Behalf of Arizona State University
Inventors: Suren Jayasuriya, Odrika Iqbal, Andreas Spanias
-
Patent number: 11741643
Abstract: A system for generating a 4D representation of a scene in motion given a sinogram collected from the scene while in motion. The system generates, based on scene parameters, an initial 3D representation of the scene indicating linear attenuation coefficients (LACs) of voxels of the scene. The system generates, based on motion parameters, a 4D motion field indicating motion of the scene. The system generates, based on the initial 3D representation and the 4D motion field, a 4D representation of the scene that is a sequence of 3D representations having LACs. The system generates a synthesized sinogram of the scene from the generated 4D representation. The system adjusts the scene parameters and the motion parameters based on differences between the collected sinogram and the synthesized sinogram. The processing is repeated until the differences satisfy a termination criterion.
Type: Grant
Filed: March 22, 2021
Date of Patent: August 29, 2023
Assignees: Lawrence Livermore National Security, LLC, Arizona Board of Regents on Behalf of Arizona State University
Inventors: Hyojin Kim, Rushil Anirudh, Kyle Champley, Kadri Aditya Mohan, Albert William Reed, Suren Jayasuriya
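The adjust-and-resynthesize loop above is an analysis-by-synthesis pattern: guess scene and motion parameters, synthesize the measurement, compare it with the collected one, refine, and stop at a termination criterion. The tiny forward model below (one attenuation value and one motion rate producing a decaying "sinogram") is an invented stand-in for the real CT physics.

```python
# Illustrative analysis-by-synthesis loop (not the patented CT pipeline):
# a scene parameter (attenuation a) and a motion parameter (rate v) are
# fitted by synthesizing a measurement, comparing it with the collected one,
# and refining over a shrinking search grid until the difference is small.
import math

def synthesize(a, v, n=10):
    """Invented forward model: a decaying 'sinogram' of n samples."""
    return [a * math.exp(-v * t) for t in range(n)]

def difference(m1, m2):
    """Sum-of-squares difference between two measurements."""
    return sum((x - y) ** 2 for x, y in zip(m1, m2))

def fit(collected, a0=1.5, v0=0.3, a_span=1.5, v_span=0.5, rounds=4):
    best = (a0, v0)
    for _ in range(rounds):
        a_c, v_c = best
        candidates = [(a_c + a_span * (i - 5) / 5.0,
                       v_c + v_span * (j - 5) / 5.0)
                      for i in range(11) for j in range(11)]
        best = min(candidates,
                   key=lambda p: difference(synthesize(*p), collected))
        a_span *= 0.2; v_span *= 0.2   # refine the search around the best fit
        if difference(synthesize(*best), collected) < 1e-8:
            break                       # termination criterion satisfied
    return best

collected = synthesize(2.0, 0.5)        # "measured" data from the true scene
a_fit, v_fit = fit(collected)
```

The patent's system refines far richer parameters (voxel LACs and a 4D motion field), but the loop structure, synthesize, compare, adjust, terminate, is the same.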
-
Publication number: 20230113589
Abstract: The disclosure concerns biometeorological sensing devices including a processor communicatively coupled to a memory, and a plurality of sensors communicatively coupled to the processor. The plurality of sensors includes a humidity sensor, a UV sensor, an anemometer, an atmospheric thermometer, a globe thermometer, and a camera. The device also includes a network interface communicatively coupled to the processor. The processor is configured to estimate a mean radiant temperature (MRT) using data received from the plurality of sensors, identify a person in an image received from the camera, determine a bounding box that encloses the person in the image, generate a shadow map from the image, calculate an intersection over union (IOU) of the bounding box with the shadow map to determine if the person is in the shade, and transmit observed space usage and estimated MRT to a server communicatively coupled to the network interface.
Type: Application
Filed: October 7, 2022
Publication date: April 13, 2023
Applicant: ARIZONA BOARD OF REGENTS ON BEHALF OF ARIZONA STATE UNIVERSITY
Inventors: Karthik Kashinath Kulkarni, Suren Jayasuriya, Ariane Middel, Tejaswi Gowda, Florian Arwed Schneider
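The shade test above can be sketched directly: rasterize the person's bounding box as a mask, intersect it with the binary shadow map, and compare the intersection over union against a threshold. The grid size, box coordinates, shadow layout, and threshold below are made-up illustration values.

```python
# Sketch of the bounding-box / shadow-map IOU test (all values illustrative).
def box_mask(rows, cols, top, left, bottom, right):
    """Rasterize a bounding box (inclusive corners) into a binary grid mask."""
    return [[1 if top <= r <= bottom and left <= c <= right else 0
             for c in range(cols)] for r in range(rows)]

def iou(mask_a, mask_b):
    """Intersection over union of two equal-size binary masks."""
    inter = sum(a & b for ra, rb in zip(mask_a, mask_b)
                for a, b in zip(ra, rb))
    union = sum(a | b for ra, rb in zip(mask_a, mask_b)
                for a, b in zip(ra, rb))
    return inter / union if union else 0.0

ROWS = COLS = 10
# Shadow map derived from the image: left half of the scene is shaded.
shadow = [[1 if c < 5 else 0 for c in range(COLS)] for r in range(ROWS)]
# Detected person's bounding box: rows 2-5, columns 3-6.
person = box_mask(ROWS, COLS, 2, 3, 5, 6)

score = iou(person, shadow)
in_shade = score > 0.1          # illustrative threshold
```

Here the box overlaps the shaded half by 8 of 58 union cells, so the person is flagged as partly in the shade; a deployed device would tune the threshold to its camera geometry.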
-
Publication number: 20220392471
Abstract: Systems, methods, and apparatuses to restore degraded speech via a modified diffusion model are described. An exemplary system is specially configured to train a diffusion-based vocoder containing an upsampler, based on pairing original speech x and degraded speech mel-spectrum mT samples, and to train a deep convolutional neural network (CNN) upsampler based on a mean absolute error loss to match the estimated original speech x̂ outputted by the diffusion-based vocoder, by extracting the upsampler, generating a reference conditioner, and generating a weighted altered conditioner ĉTn. The system further optimizes speech quality to invert the non-linear transformation and estimate lost data by feeding the degraded mel-spectrum mT through the CNN upsampler and through the diffusion-based vocoder. The system then generates estimated original speech x̂ based on the corresponding degraded speech mel-spectrum mT. Other related embodiments are described.
Type: Application
Filed: May 27, 2022
Publication date: December 8, 2022
Inventors: Jianwei Zhang, Suren Jayasuriya, Visar Berisha
-
Patent number: 11481881
Abstract: Various embodiments of systems and methods for adaptive video subsampling for energy-efficient object detection are disclosed herein.
Type: Grant
Filed: June 15, 2020
Date of Patent: October 25, 2022
Assignee: Arizona Board of Regents on Behalf of Arizona State University
Inventors: Andreas Spanias, Pavan Turaga, Sameeksha Katoch, Suren Jayasuriya, Divya Mohan
-
Publication number: 20220301241
Abstract: A system for generating a 4D representation of a scene in motion given a sinogram collected from the scene while in motion. The system generates, based on scene parameters, an initial 3D representation of the scene indicating linear attenuation coefficients (LACs) of voxels of the scene. The system generates, based on motion parameters, a 4D motion field indicating motion of the scene. The system generates, based on the initial 3D representation and the 4D motion field, a 4D representation of the scene that is a sequence of 3D representations having LACs. The system generates a synthesized sinogram of the scene from the generated 4D representation. The system adjusts the scene parameters and the motion parameters based on differences between the collected sinogram and the synthesized sinogram. The processing is repeated until the differences satisfy a termination criterion.
Type: Application
Filed: March 22, 2021
Publication date: September 22, 2022
Inventors: Hyojin Kim, Rushil Anirudh, Kyle Champley, Kadri Aditya Mohan, Albert William Reed, Suren Jayasuriya
-
Publication number: 20210396580
Abstract: Various embodiments of a system for linear unmixing of spectral images using a dispersion model are disclosed herein.
Type: Application
Filed: June 21, 2021
Publication date: December 23, 2021
Applicants: Arizona Board of Regents on Behalf of Arizona State University, Arizona Board Of Regents For And On Behalf Of Northern Arizona University
Inventors: John Janiczek, Suren Jayasuriya, Gautam Dasarathy, Christopher Edwards, Philip Christensen
-
Publication number: 20210390669
Abstract: Tracking-based motion deblurring via coded exposure is provided. Fast object tracking is useful for a variety of applications in surveillance, autonomous vehicles, and remote sensing. In particular, there is a need to have these algorithms embedded on specialized hardware, such as field-programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs), to ensure energy-efficient operation while saving on latency, bandwidth, and memory access/storage. In an exemplary aspect, an object tracker is used to track motion of one or more objects in a scene captured by an image sensor. The object tracker is coupled with coded exposure of the image sensor, which modulates photodiodes in the image sensor with a known exposure function (e.g., based on the object tracking). This allows for motion blur to be encoded in a characteristic manner in image data captured by the image sensor. Then, in post-processing, deblurring is performed using a computational algorithm.
Type: Application
Filed: June 15, 2021
Publication date: December 16, 2021
Applicant: Arizona Board of Regents on behalf of Arizona State University
Inventors: Suren Jayasuriya, Odrika Iqbal, Andreas Spanias
-
Patent number: 11127159
Abstract: Various embodiments of a system and method for adaptive lighting for data-driven non-line-of-sight imaging are disclosed.
Type: Grant
Filed: January 8, 2020
Date of Patent: September 21, 2021
Assignee: Arizona Board of Regents on Behalf of Arizona State University
Inventors: Sreenithy Chandran, Suren Jayasuriya
-
Patent number: 10983216
Abstract: A depth of field imaging apparatus includes a light field imager and a time of flight imager combined in a single on-chip architecture. This hybrid device enables simultaneous capture of a light field image and a time of flight image of an object scene. Algorithms are described that enable the simultaneous acquisition of light field images and time of flight images. Associated hybrid pixel structures, device arrays (hybrid imaging systems), and device applications are disclosed.
Type: Grant
Filed: February 21, 2020
Date of Patent: April 20, 2021
Assignee: CORNELL UNIVERSITY
Inventors: Alyosha Molnar, Suren Jayasuriya, Sriram Sivaramakrishnan
-
Publication number: 20210012472
Abstract: Various embodiments of systems and methods for adaptive video subsampling for energy-efficient object detection are disclosed herein.
Type: Application
Filed: June 15, 2020
Publication date: January 14, 2021
Applicant: Arizona Board of Regents on Behalf of Arizona State University
Inventors: Andreas Spanias, Pavan Turaga, Sameeksha Katoch, Suren Jayasuriya, Divya Mohan
-
Patent number: 10735675
Abstract: A configurable image processing system can process image data for multiple applications by including an image sensor capable of operating in a machine vision mode and a photography mode in response to an operating system command. When operating in machine vision mode, the image sensor may send image data to a first processor for machine vision processing. When operating in photography mode, the image sensor may send image data to an image coprocessor for photography processing.
Type: Grant
Filed: April 13, 2018
Date of Patent: August 4, 2020
Assignee: Cornell University
Inventors: Mark Buckler, Adrian Sampson, Suren Jayasuriya
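The mode-switched routing described above can be sketched as simple dispatch: an operating-system command selects the sensor mode, and each captured frame is forwarded either to the machine-vision processor or to the photography coprocessor. The component names here are placeholders, not the patent's hardware interfaces.

```python
# Minimal sketch of mode-based image routing (names are placeholders, not
# the patent's actual hardware interfaces).
class ConfigurableSensor:
    MODES = ("machine_vision", "photography")

    def __init__(self):
        self.mode = "photography"
        self.vision_frames = []      # stands in for the first processor
        self.photo_frames = []       # stands in for the image coprocessor

    def set_mode(self, mode):
        """Handle the operating-system command selecting the sensor mode."""
        if mode not in self.MODES:
            raise ValueError(f"unknown mode: {mode}")
        self.mode = mode

    def capture(self, frame):
        """Route the captured frame to the pipeline matching the mode."""
        if self.mode == "machine_vision":
            self.vision_frames.append(frame)
        else:
            self.photo_frames.append(frame)

sensor = ConfigurableSensor()
sensor.capture("frame0")                 # photography mode by default
sensor.set_mode("machine_vision")        # OS command switches the mode
sensor.capture("frame1")
```

Routing at the sensor rather than copying every frame to both pipelines is what saves the bandwidth and energy the configurable design targets.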
-
Publication number: 20200226783
Abstract: Various embodiments of a system and method for adaptive lighting for data-driven non-line-of-sight imaging are disclosed.
Type: Application
Filed: January 8, 2020
Publication date: July 16, 2020
Applicant: Arizona Board of Regents on Behalf of Arizona State University
Inventors: Sreenithy Chandran, Suren Jayasuriya
-
Publication number: 20200191967
Abstract: A depth of field imaging apparatus includes a light field imager and a time of flight imager combined in a single on-chip architecture. This hybrid device enables simultaneous capture of a light field image and a time of flight image of an object scene. Algorithms are described that enable the simultaneous acquisition of light field images and time of flight images. Associated hybrid pixel structures, device arrays (hybrid imaging systems), and device applications are disclosed.
Type: Application
Filed: February 21, 2020
Publication date: June 18, 2020
Applicant: CORNELL UNIVERSITY
Inventors: Alyosha Molnar, Suren Jayasuriya, Sriram Sivaramakrishnan
-
Patent number: 10605916
Abstract: A depth of field imaging apparatus includes a light field imager and a time of flight imager combined in a single on-chip architecture. This hybrid device enables simultaneous capture of a light field image and a time of flight image of an object scene. Algorithms are described that enable the simultaneous acquisition of light field images and time of flight images. Associated hybrid pixel structures, device arrays (hybrid imaging systems), and device applications are disclosed.
Type: Grant
Filed: March 17, 2016
Date of Patent: March 31, 2020
Assignee: CORNELL UNIVERSITY
Inventors: Alyosha Molnar, Suren Jayasuriya, Sriram Sivaramakrishnan
-
Publication number: 20190320127
Abstract: A configurable image processing system can process image data for multiple applications by including an image sensor capable of operating in a machine vision mode and a photography mode in response to an operating system command. When operating in machine vision mode, the image sensor may send image data to a first processor for machine vision processing. When operating in photography mode, the image sensor may send image data to an image coprocessor for photography processing.
Type: Application
Filed: April 13, 2018
Publication date: October 17, 2019
Inventors: Mark Buckler, Adrian Sampson, Suren Jayasuriya