Patents by Inventor Shibin Parameswaran

Shibin Parameswaran has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230394771
    Abstract: An apparatus, system, and method for augmented reality tracking of unmanned systems using multimodal input processing, comprising receiving multimodal inputs, calculating unmanned vehicle positions, providing indicators associated with the unmanned vehicles' locations, and superimposing the indicators on an augmented reality display. The method may further include providing an operator/pilot with telemetry information pertaining to the unmanned vehicles, task or assignment information, and more.
    Type: Application
    Filed: March 3, 2023
    Publication date: December 7, 2023
    Applicant: The United States of America as represented by the Secretary of the Navy
    Inventors: Mark Bilinski, Shibin Parameswaran, Martin Thomas Jaszewski, Daniel Sean Jennings
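The core of the abstract above is a projection step: each unmanned vehicle's reported position is converted into the operator's local frame and mapped to display coordinates so an indicator can be superimposed. The sketch below illustrates only that step; the flat-earth ENU conversion, pinhole camera model, and all names are illustrative assumptions, not the patented implementation.

```python
# Hypothetical sketch: project an unmanned vehicle's reported position into
# screen coordinates so an indicator can be superimposed on an AR display.
import numpy as np

def geodetic_to_enu(lat, lon, alt, ref_lat, ref_lon, ref_alt):
    """Approximate local East-North-Up offset (metres) for short ranges."""
    r_earth = 6_378_137.0
    east = np.radians(lon - ref_lon) * r_earth * np.cos(np.radians(ref_lat))
    north = np.radians(lat - ref_lat) * r_earth
    up = alt - ref_alt
    return np.array([east, north, up])

def project_to_screen(enu, enu_to_cam, fx, fy, cx, cy):
    """Pinhole projection of a point given in local ENU coordinates."""
    x, y, z = enu_to_cam @ enu   # rotate ENU offset into camera axes (x right, y down, z forward)
    if z <= 0:                   # behind the viewer: no on-screen indicator
        return None
    return (fx * x / z + cx, fy * y / z + cy)

# Example: a vehicle roughly 120 m north and 30 m above the operator, camera facing north.
enu_to_cam = np.array([[1, 0, 0],    # camera x = east
                       [0, 0, -1],   # camera y = down = -up
                       [0, 1, 0]],   # camera z = north (forward)
                      dtype=float)
enu = geodetic_to_enu(32.7011, -117.2510, 40.0, 32.7000, -117.2510, 10.0)
print(project_to_screen(enu, enu_to_cam, fx=800, fy=800, cx=640, cy=360))
```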
  • Patent number: 11461881
    Abstract: A method for processing images comprising: capturing a plurality of degraded images of a first real-world environment with a first sensor; processing each degraded image with a first, untrained convolutional neural network, via a Deep Image Prior approach, to obtain a plurality of clean images, wherein each clean image corresponds to a degraded image; pairing each clean image with its corresponding degraded image to create a plurality of degraded/clean image pairs; training, via a supervised learning approach, a machine learning model to learn a function for converting degraded images into restored images based on the plurality of degraded/clean image pairs; capturing a second plurality of degraded images of a second real-world environment; and using the trained machine learning model to convert the second plurality of degraded images into restored images based on the learned function.
    Type: Grant
    Filed: November 25, 2020
    Date of Patent: October 4, 2022
    Assignee: United States of America as represented by the Secretary of the Navy
    Inventors: Shibin Parameswaran, Martin Thomas Jaszewski
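The abstract above describes a two-stage pipeline: an untrained network is first fit to each degraded capture in the style of Deep Image Prior to produce a pseudo-clean target, and the resulting degraded/clean pairs then supervise a conventional restoration model. The sketch below is a minimal illustration of that flow; the network architecture, iteration counts, and loss are assumptions rather than the patented configuration.

```python
# Hedged sketch of the two-stage pipeline: (1) use an untrained CNN as a Deep
# Image Prior to produce a pseudo-clean estimate for each degraded capture,
# (2) train a supervised restoration model on the resulting pairs.
import torch
import torch.nn as nn

def small_cnn(channels=3):
    return nn.Sequential(
        nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, channels, 3, padding=1),
    )

def dip_clean(degraded, steps=200):
    """Fit an untrained CNN to one degraded image and stop early;
    the partially fit output serves as the pseudo-clean target."""
    net = small_cnn()
    z = torch.randn_like(degraded)            # fixed random input, as in Deep Image Prior
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(z), degraded)
        loss.backward()
        opt.step()
    return net(z).detach()

# Stage 1: build degraded/clean pairs from a batch of degraded captures.
degraded_images = [torch.rand(1, 3, 64, 64) for _ in range(4)]   # stand-in sensor data
pairs = [(d, dip_clean(d)) for d in degraded_images]

# Stage 2: supervised training of a restoration model on those pairs.
restorer = small_cnn()
opt = torch.optim.Adam(restorer.parameters(), lr=1e-3)
for epoch in range(5):
    for degraded, clean in pairs:
        opt.zero_grad()
        loss = nn.functional.mse_loss(restorer(degraded), clean)
        loss.backward()
        opt.step()

# The trained restorer can then be applied to new degraded captures.
restored = restorer(torch.rand(1, 3, 64, 64))
```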
  • Publication number: 20220164933
    Abstract: A method for processing images comprising: capturing a plurality of degraded images of a first real-world environment with a first sensor; processing each degraded image with a first, untrained convolutional neural network, via a Deep Image Prior approach, to obtain a plurality of clean images, wherein each clean image corresponds to a degraded image; pairing each clean image with its corresponding degraded image to create a plurality of degraded/clean image pairs; training, via a supervised learning approach, a machine learning model to learn a function for converting degraded images into restored images based on the plurality of degraded/clean image pairs; capturing a second plurality of degraded images of a second real-world environment; and using the trained machine learning model to convert the second plurality of degraded images into restored images based on the learned function.
    Type: Application
    Filed: November 25, 2020
    Publication date: May 26, 2022
    Inventors: Shibin Parameswaran, Martin Thomas Jaszewski
  • Patent number: 10410360
    Abstract: A method for displaying off-screen target indicators in motion video comprising the steps of receiving motion video containing a series of individual video frames, selecting a target object within a selected video frame by choosing selected target object pixel space coordinates, and determining whether the selected target object pixel space coordinates are within the selected video frame. Upon determining that the selected target object pixel space coordinates are within the selected video frame, the method updates a dynamical system model with the target object geographical coordinates, longitudinal target object speed, and latitudinal target object speed. Upon determining that the selected target object pixel space coordinates are not within the selected video frame, the method calculates estimated target object geographical coordinates at time t using the dynamical system model. The method then calculates final values in the video field of view at which to draw a target indicator.
    Type: Grant
    Filed: December 15, 2017
    Date of Patent: September 10, 2019
    Assignee: United States of America as Represented by Secretary of the Navy
    Inventors: Bryan D. Bagnall, Joshua Harguess, Shibin Parameswaran, Martin T. Jaszewski
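The abstract above hinges on propagating a motion model once the selected target leaves the frame and drawing an indicator at the frame boundary in the direction of the estimated position. The sketch below shows that idea with a simple constant-velocity model in pixel space for brevity; the patent itself propagates geographical coordinates and speeds, so treat the model and all names here as stand-ins.

```python
# Minimal sketch: update a constant-velocity model while the target is in
# frame, propagate it once the target leaves, and clamp the indicator to the
# frame border along the ray toward the estimate.
FRAME_W, FRAME_H = 1280, 720

class ConstantVelocityModel:
    def __init__(self):
        self.pos = None          # last known (x, y) in pixel space
        self.vel = (0.0, 0.0)

    def update(self, pos, dt):
        """Called while the target is inside the frame."""
        if self.pos is not None and dt > 0:
            self.vel = ((pos[0] - self.pos[0]) / dt, (pos[1] - self.pos[1]) / dt)
        self.pos = pos

    def predict(self, dt):
        """Called once the target has left the frame."""
        return (self.pos[0] + self.vel[0] * dt, self.pos[1] + self.vel[1] * dt)

def edge_indicator(est, centre=(FRAME_W / 2, FRAME_H / 2)):
    """Intersect the ray from frame centre toward the estimate with the frame border."""
    dx, dy = est[0] - centre[0], est[1] - centre[1]
    scale = min(
        (FRAME_W / 2) / abs(dx) if dx else float("inf"),
        (FRAME_H / 2) / abs(dy) if dy else float("inf"),
    )
    return (centre[0] + dx * scale, centre[1] + dy * scale)

model = ConstantVelocityModel()
model.update((1200, 400), dt=1.0)
model.update((1270, 390), dt=1.0)              # target drifting toward the right edge
print(edge_indicator(model.predict(dt=2.0)))   # indicator drawn on the right border
```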
  • Publication number: 20190188868
    Abstract: A method for displaying off-screen target indicators in motion video comprising the steps of receiving motion video containing a series of individual video frames, selecting a target object within a selected video frame by choosing selected target object pixel space coordinates, and determining whether the selected target object pixel space coordinates are within the selected video frame. Upon determining that the selected target object pixel space coordinates are within the selected video frame, the method updates a dynamical system model with the target object geographical coordinates, longitudinal target object speed, and latitudinal target object speed. Upon determining that the selected target object pixel space coordinates are not within the selected video frame, the method calculates estimated target object geographical coordinates at time t using the dynamical system model. The method then calculates final values in the video field of view at which to draw a target indicator.
    Type: Application
    Filed: December 15, 2017
    Publication date: June 20, 2019
    Inventors: Bryan D. Bagnall, Joshua Harguess, Shibin Parameswaran, Martin T. Jaszewski
  • Publication number: 20180150635
    Abstract: A method comprising: using behavior-based detection to detect and observe known malicious traffic on a virtual machine; parsing the observed malicious traffic into flow features; using a machine learning algorithm to train a classifier that separates the features into a normal class and an abnormal class, wherein the abnormal class is malware; weighting the importance of the features, wherein importance is based on each feature's contribution to overall system performance; creating models using the classified normal and abnormal features; and using these models to classify future observed traffic.
    Type: Application
    Filed: November 28, 2016
    Publication date: May 31, 2018
    Inventors: Sara E. Melvin, Logan M. Straatemeier, Eric L. Dorman, Shibin Parameswaran
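The application above outlines a standard supervised workflow: label observed flows as normal or abnormal, train a classifier on per-flow features, weigh feature importance, and reuse the model on future traffic. The sketch below mirrors that workflow with an assumed feature set and a random forest; neither is specified by the application.

```python
# Illustrative sketch: train a classifier on per-flow features labelled
# normal vs. abnormal (malware), inspect feature importances, then classify
# newly observed traffic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

feature_names = ["duration", "bytes_sent", "bytes_received", "packet_count", "dst_port"]

# Stand-in flow records: rows are flows, columns are the features above.
rng = np.random.default_rng(0)
normal = rng.normal(loc=[5, 2e4, 5e4, 40, 443], scale=[2, 5e3, 1e4, 10, 1], size=(200, 5))
abnormal = rng.normal(loc=[60, 5e5, 1e3, 900, 6667], scale=[20, 1e5, 5e2, 200, 1], size=(200, 5))
X = np.vstack([normal, abnormal])
y = np.array([0] * 200 + [1] * 200)            # 0 = normal, 1 = abnormal (malware)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Weigh each feature by its contribution to the trained model.
for name, importance in zip(feature_names, clf.feature_importances_):
    print(f"{name:>15}: {importance:.3f}")

# Classify a newly observed flow with the trained model.
new_flow = [[55, 4.5e5, 900, 850, 6667]]
print("abnormal" if clf.predict(new_flow)[0] else "normal")
```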
  • Patent number: 9305214
    Abstract: Methods for detecting a horizon in an image with a plurality of pixels can include the step of blurring the image with a noise filter, then dividing the image into an M×N matrix of sub-blocks S. For each sub-block S, horizon features can be coarse-extracted by defining an r-dimensional vector having P feature values for each sub-block S and clustering the r-dimensional vectors into two clusters using a k-means statistical analysis. The sub-blocks S corresponding to the two clusters can be masked with a binary mask. The methods can further include the step of fine-extracting the horizon features at a pixel level for each sub-block S(i, j) and sub-block S(i−1, j) when the binary mask changes value from sub-block S(i−1, j) to sub-block S(i, j), for i = 1 to M and j = 1 to N.
    Type: Grant
    Filed: October 29, 2013
    Date of Patent: April 5, 2016
    Assignee: The United States of America, as Represented by the Secretary of the Navy
    Inventors: Gracie Bay Young, Corey A. Lane, Bryan D. Bagnall, Shibin Parameswaran
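The coarse-extraction stage in the abstract above can be pictured as block-wise feature clustering: blur the image, split it into an M×N grid, describe each sub-block with a short feature vector, and k-means the vectors into two clusters to form a binary mask whose transitions localize the horizon. The sketch below illustrates that stage only; the feature vector used here (mean, standard deviation, vertical gradient) is an assumption, not the patent's r-dimensional definition.

```python
# Rough sketch of the coarse stage: blur, divide into an M x N grid of
# sub-blocks, compute a small feature vector per sub-block, and k-means the
# vectors into two clusters to obtain a binary sub-block mask.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.cluster import KMeans

def coarse_horizon_mask(image, m=8, n=8):
    blurred = gaussian_filter(image.astype(float), sigma=2)      # noise filtering
    bh, bw = blurred.shape[0] // m, blurred.shape[1] // n
    feats = []
    for i in range(m):
        for j in range(n):
            block = blurred[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            grad_y = np.diff(block, axis=0)
            feats.append([block.mean(), block.std(), np.abs(grad_y).mean()])
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(np.array(feats))
    return labels.reshape(m, n)                                  # binary mask over sub-blocks

# Synthetic image: bright "sky" above row 60, darker "sea" below, plus noise.
img = np.vstack([np.full((60, 128), 200.0), np.full((68, 128), 80.0)])
img += np.random.default_rng(0).normal(0, 5, img.shape)
mask = coarse_horizon_mask(img)
print(mask)
# Rows where the mask changes value mark the sub-blocks to refine at pixel level.
```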