Patents by Inventor Ramalingam Chellappa

Ramalingam Chellappa has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11636328
    Abstract: Various face discrimination systems may benefit from techniques for providing increased accuracy. For example, certain discriminative face verification systems can benefit from L2-constrained softmax loss. A method can include applying an image of a face as an input to a deep convolutional neural network. The method can also include applying an output of a fully connected layer of the deep convolutional neural network to an L2-normalizing layer. The method can further include determining softmax loss based on an output of the L2-normalizing layer.
    Type: Grant
    Filed: March 28, 2018
    Date of Patent: April 25, 2023
    Assignee: UNIVERSITY OF MARYLAND, COLLEGE PARK
    Inventors: Rajeev Ranjan, Carlos Castillo, Ramalingam Chellappa
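    Illustrative sketch: the abstract above describes L2-normalizing the output of a fully connected layer before computing softmax loss. A minimal NumPy sketch of that idea, with an assumed scaling parameter alpha and toy data (not the patented implementation), is:

      import numpy as np

      def l2_constrained_softmax_loss(feature, weights, label, alpha=16.0):
          """feature: (d,) FC-layer output; weights: (d, num_classes); label: int."""
          # L2-normalizing layer: place the feature on a hypersphere of radius alpha.
          normalized = alpha * feature / (np.linalg.norm(feature) + 1e-12)
          logits = normalized @ weights                   # class scores
          logits -= logits.max()                          # numerical stability
          probs = np.exp(logits) / np.exp(logits).sum()   # softmax
          return -np.log(probs[label] + 1e-12)            # cross-entropy (softmax loss)

      # Toy usage with random numbers standing in for a real face embedding.
      rng = np.random.default_rng(0)
      loss = l2_constrained_softmax_loss(rng.normal(size=128), rng.normal(size=(128, 10)), label=3)
      print(f"loss: {loss:.4f}")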
  • Patent number: 11023711
    Abstract: Various facial recognition systems may benefit from appropriate use of computer systems. For example, certain face analysis systems may benefit from an all-in-one convolutional neural network that has been appropriately configured. A method can include obtaining an image of a face. The method can also include processing the image of the face using a first set of convolutional network layers configured to perform subject-independent tasks. The method can further include subsequently processing the image of the face using a second set of convolutional network layers configured to perform subject-dependent tasks. The second set of convolutional network layers can be integrated with the first set of convolutional network layers to form a single convolutional neural network. The method can additionally include outputting facial image detection results based on the processing and subsequent processing.
    Type: Grant
    Filed: October 10, 2017
    Date of Patent: June 1, 2021
    Assignee: UNIVERSITY OF MARYLAND, COLLEGE PARK
    Inventors: Rajeev Ranjan, Swaminathan Sankaranarayanan, Carlos Castillo, Ramalingam Chellappa
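    Illustrative sketch: the abstract above describes a single convolutional neural network whose first set of layers handles subject-independent tasks and whose second set handles subject-dependent tasks. A minimal sketch of that structure, assuming PyTorch and hypothetical task heads (not the patented network), is:

      import torch
      import torch.nn as nn

      class AllInOneSketch(nn.Module):
          def __init__(self):
              super().__init__()
              # First set of layers: subject-independent tasks (e.g. face detection, pose).
              self.independent = nn.Sequential(
                  nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8))
              # Second set of layers: subject-dependent tasks (e.g. identity).
              self.dependent = nn.Sequential(
                  nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
              self.detect_head = nn.Linear(32 * 8 * 8, 1)   # face / non-face score
              self.identity_head = nn.Linear(64, 100)       # hypothetical 100 identities

          def forward(self, image):
              shared = self.independent(image)              # subject-independent features
              detection = self.detect_head(shared.flatten(1))
              identity = self.identity_head(self.dependent(shared).flatten(1))
              return detection, identity                    # outputs of the single network

      det, ident = AllInOneSketch()(torch.randn(1, 3, 64, 64))
      print(det.shape, ident.shape)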
  • Patent number: 10860837
    Abstract: Various image processing tasks may benefit from the application of deep convolutional neural networks. For example, a deep multi-task learning framework may assist face detection, particularly when combined with landmark localization, pose estimation, and gender recognition. An apparatus can include a first module of at least three modules configured to generate class-independent region proposals to provide a region. The apparatus can also include a second module of the at least three modules configured to classify the region as face or non-face using a multi-task analysis. The apparatus can further include a third module configured to perform post-processing on the classified region.
    Type: Grant
    Filed: July 20, 2016
    Date of Patent: December 8, 2020
    Assignee: UNIVERSITY OF MARYLAND, COLLEGE PARK
    Inventors: Rajeev Ranjan, Vishal M. Patel, Ramalingam Chellappa, Carlos D. Castillo
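    Illustrative sketch: the abstract above describes three modules: class-independent region proposals, face/non-face classification of each region, and post-processing. A minimal NumPy sketch with a placeholder scoring function and non-maximum suppression as the post-processing step (not the patented modules) is:

      import numpy as np

      def propose_regions(h, w, size=64, stride=32):
          """Module 1: class-independent sliding-window proposals (x1, y1, x2, y2)."""
          return np.array([(x, y, x + size, y + size)
                           for y in range(0, h - size + 1, stride)
                           for x in range(0, w - size + 1, stride)])

      def score_regions(image, boxes):
          """Module 2: placeholder face/non-face score (here, mean brightness)."""
          return np.array([image[y1:y2, x1:x2].mean() for x1, y1, x2, y2 in boxes])

      def non_max_suppression(boxes, scores, iou_thresh=0.5):
          """Module 3: post-processing that keeps the locally best, non-overlapping boxes."""
          order, keep = scores.argsort()[::-1], []
          area = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
          while order.size:
              i = order[0]
              keep.append(i)
              x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
              y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
              x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
              y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
              inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
              iou = inter / (area[i] + area[order[1:]] - inter)
              order = order[1:][iou < iou_thresh]
          return boxes[keep]

      image = np.random.rand(256, 256)
      boxes = propose_regions(*image.shape)
      print(non_max_suppression(boxes, score_regions(image, boxes)).shape)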
  • Publication number: 20190303754
    Abstract: Various face discrimination systems may benefit from techniques for providing increased accuracy. For example, certain discriminative face verification systems can benefit from L2-constrained softmax loss. A method can include applying an image of a face as an input to a deep convolutional neural network. The method can also include applying an output of a fully connected layer of the deep convolutional neural network to an L2-normalizing layer. The method can further include determining softmax loss based on an output of the L2-normalizing layer.
    Type: Application
    Filed: March 28, 2018
    Publication date: October 3, 2019
    Inventors: Rajeev RANJAN, Carlos CASTILLO, Ramalingam CHELLAPPA
  • Publication number: 20190244014
    Abstract: Various facial recognition systems may benefit from appropriate use of computer systems. For example, certain face analysis systems may benefit from an all-in-one convolutional neural network that has been appropriately configured. A method can include obtaining an image of a face. The method can also include processing the image of the face using a first set of convolutional network layers configured to perform subject-independent tasks. The method can further include subsequently processing the image of the face using a second set of convolutional network layers configured to perform subject-dependent tasks. The second set of convolutional network layers can be integrated with the first set of convolutional network layers to form a single convolutional neural network. The method can additionally include outputting facial image detection results based on the processing and subsequent processing.
    Type: Application
    Filed: October 10, 2017
    Publication date: August 8, 2019
    Inventors: Rajeev RANJAN, Swaminathan SANKARANARAYANAN, Carlos CASTILLO, Ramalingam CHELLAPPA
  • Publication number: 20180211099
    Abstract: Various image processing tasks may benefit from the application of deep convolutional neural networks. For example, a deep multi-task learning framework may assist face detection, particularly when combined with landmark localization, pose estimation, and gender recognition. An apparatus can include a first module of at least three modules configured to generate class-independent region proposals to provide a region. The apparatus can also include a second module of the at least three modules configured to classify the region as face or non-face using a multi-task analysis. The apparatus can further include a third module configured to perform post-processing on the classified region.
    Type: Application
    Filed: July 20, 2016
    Publication date: July 26, 2018
    Inventors: Rajeev RANJAN, Vishal M. PATEL, Ramalingam CHELLAPPA, Carlos D. CASTILLO
  • Publication number: 20170026836
    Abstract: Various devices and systems may benefit from convenient authentication. For example, certain mobile devices may benefit from attribute-based continuous user authentication. A method can include determining attributes of an authorized user of a mobile device. The method can also include obtaining an unconstrained image of a current user of the mobile device. The method can further include processing the unconstrained image to determine at least one characteristic of the current user. The method can additionally include making an authorization determination based on a comparison between the attributes and the determined characteristic.
    Type: Application
    Filed: July 20, 2016
    Publication date: January 26, 2017
    Inventors: Pouya SAMANGOUEI, Vishal M. PATEL, Ramalingam CHELLAPPA, Emily HAND
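    Illustrative sketch: the abstract above describes comparing attributes of the authorized user with characteristics determined from an unconstrained image of the current user. A minimal NumPy sketch of that comparison step, with toy attribute vectors and an assumed cosine-similarity threshold (not the patented method), is:

      import numpy as np

      def authorize(enrolled_attributes, current_attributes, threshold=0.8):
          """Return True when the attribute vectors are similar enough (cosine similarity)."""
          a = np.asarray(enrolled_attributes, dtype=float)
          b = np.asarray(current_attributes, dtype=float)
          similarity = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
          return bool(similarity >= threshold)

      # Toy attribute vectors (e.g. scores for "glasses", "beard", "long hair").
      print(authorize([0.9, 0.1, 0.7], [0.8, 0.2, 0.6]))   # likely the same user -> True
      print(authorize([0.9, 0.1, 0.7], [0.1, 0.9, 0.2]))   # likely a different user -> False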
  • Patent number: 9530052
    Abstract: The sensor adaptation technique, applicable to non-contact biometric authentication and specifically to iris recognition, is designed to handle the sensor mismatch problem that occurs when enrollment iris samples and test iris samples are acquired with different sensors. The present system and method are capable of adapting iris data collected from one sensor to another by transforming the iris samples so that samples belonging to the same person are brought closer together than samples belonging to different persons, irrespective of the sensor acquiring the samples. The sensor adaptation technique is easily incorporated into existing iris recognition systems: it uses training iris samples acquired with different sensors to learn adaptation parameters and subsequently applies those parameters during the verification stage, significantly improving recognition performance.
    Type: Grant
    Filed: March 13, 2014
    Date of Patent: December 27, 2016
    Assignee: University of Maryland
    Inventors: Jaishanker K. Pillai, Maria Puertas-Calvo, Ramalingam Chellappa
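    Illustrative sketch: the abstract above describes learning adaptation parameters from training iris samples acquired with different sensors and applying them during the verification stage. A minimal NumPy sketch using a least-squares linear map as a stand-in for the patented learning rule is:

      import numpy as np

      def learn_adaptation(features_a, features_b):
          """Fit W so that features_b @ W approximates features_a for paired same-subject samples."""
          W, *_ = np.linalg.lstsq(features_b, features_a, rcond=None)
          return W

      def adapt(features_b, W):
          """Apply the learned adaptation before matching against sensor-A templates."""
          return features_b @ W

      # Synthetic training data: the same subjects seen by sensor A and by sensor B.
      rng = np.random.default_rng(1)
      true_map = rng.normal(size=(32, 32))
      a = rng.normal(size=(200, 32))                 # sensor-A iris feature vectors
      b = a @ np.linalg.inv(true_map)                # sensor-B view of the same subjects
      W = learn_adaptation(a, b)
      print(np.abs(adapt(b, W) - a).max())           # adapted features closely match sensor A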
  • Patent number: 9291711
    Abstract: A method, apparatus, and computer-readable medium are provided that can utilize an undersampling method and can produce a radar image of a target. The radar image of the target can be based on a collection of waveform measurements, where the collection can be based on a significantly reduced number of transmitted and received electromagnetic pulse waveforms.
    Type: Grant
    Filed: February 25, 2011
    Date of Patent: March 22, 2016
    Assignee: UNIVERSITY OF MARYLAND, COLLEGE PARK
    Inventors: Dennis M. Healy, Jr., Kathy Hart, Vishal M. Patel, Glenn R. Easley, Ramalingam Chellappa
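    Illustrative sketch: the abstract above describes producing a radar image from a collection of waveform measurements based on a significantly reduced number of pulses. A minimal NumPy sketch of generic compressive-sensing recovery by iterative soft-thresholding (ISTA), with a random measurement matrix standing in for the patented waveform processing, is:

      import numpy as np

      def ista(A, y, lam=0.05, iters=500):
          """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
          x = np.zeros(A.shape[1])
          step = 1.0 / np.linalg.norm(A, 2) ** 2              # 1 / Lipschitz constant
          for _ in range(iters):
              z = x - step * A.T @ (A @ x - y)                 # gradient step
              x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrinkage
          return x

      rng = np.random.default_rng(2)
      n, m, k = 200, 60, 5                           # scene size, measurements, sparsity
      scene = np.zeros(n)
      scene[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # sparse reflectivity
      A = rng.normal(size=(m, n)) / np.sqrt(m)       # randomized undersampled measurements
      estimate = ista(A, A @ scene)
      print(np.sort(np.argsort(np.abs(estimate))[-k:]))   # indices of the strongest recovered scatterers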
  • Patent number: 8379485
    Abstract: Compressive Sensing (CS) is an emerging area which uses a relatively small number of non-traditional samples in the form of randomized projections to reconstruct sparse or compressible signals. Direction-of-arrival (DOA) estimation is performed with an array of sensors using CS. Using random projections of the sensor data, along with a full waveform recording on one reference sensor, a sparse angle space scenario can be reconstructed, giving the number of sources and their DOAs. Signal processing algorithms are also developed and described herein for randomly deployable wireless sensor arrays that are severely constrained in communication bandwidth. There is a focus on the acoustic bearing estimation problem, and it is shown that when the target bearings are modeled as a sparse vector in the angle space, functions of the low-dimensional random projections of the microphone signals can be used to determine multiple source bearings as the solution of an ℓ1-norm minimization problem.
    Type: Grant
    Filed: November 3, 2008
    Date of Patent: February 19, 2013
    Assignees: University of Maryland, Georgia Tech Research Corporation
    Inventors: Volkan Cevher, Ali Cafer Gurbuz, James H. McClellan, Ramalingam Chellappa
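    Illustrative sketch: the abstract above models target bearings as a sparse vector in the angle space. A minimal NumPy sketch that expresses an array snapshot over a grid of steering vectors and recovers the bearings greedily (orthogonal matching pursuit as a simple stand-in for the ℓ1-norm minimization; the random-projection and bandwidth-constraint aspects are omitted) is:

      import numpy as np

      def steering_matrix(angles_deg, n_sensors, spacing=0.5):
          """Columns are uniform-linear-array steering vectors for a grid of candidate bearings."""
          theta = np.deg2rad(angles_deg)
          k = np.arange(n_sensors)[:, None]
          return np.exp(2j * np.pi * spacing * k * np.sin(theta)[None, :])

      def omp(A, y, n_sources):
          """Greedy sparse recovery: pick the steering vectors that best explain the snapshot."""
          support, residual = [], y.copy()
          for _ in range(n_sources):
              support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
              coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
              residual = y - A[:, support] @ coeffs
          return sorted(support)

      grid = np.arange(-90, 91, 1)                   # 1-degree grid over the angle space
      A = steering_matrix(grid, n_sensors=32)
      y = A[:, grid.tolist().index(-20)] + 0.7 * A[:, grid.tolist().index(35)]   # two sources
      print(grid[omp(A, y, n_sources=2)])            # expected to recover bearings -20 and 35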
  • Patent number: 8179440
    Abstract: A method and system for object surveillance and real-time activity recognition is based on analysis of spatio-temporal images of individuals under surveillance, where the spatio-temporal volume occupied by each individual is decomposed by crossing it at specific heights to form 2-dimensional slices, each containing a representation of the trajectory of the motion of the corresponding portion of the individual's body. The symmetry of the trajectories (Gait DNA) is analyzed and classified to generate data indicative of the type of activity of the individual based on the symmetry or asymmetry of the Gait DNA in each 2-dimensional slice. An effective occlusion handling capability is implemented which permits restoration of the occluded silhouette of an individual.
    Type: Grant
    Filed: December 5, 2006
    Date of Patent: May 15, 2012
    Assignee: University of Maryland
    Inventors: Yang Ran, Ramalingam Chellappa, Qinfen Zheng
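    Illustrative sketch: the abstract above describes cutting the spatio-temporal volume occupied by an individual at specific heights into 2-dimensional slices and analyzing the symmetry of the motion trajectories in each slice. A minimal NumPy sketch with a synthetic silhouette volume and a simple symmetry measure (not the patented Gait DNA classifier) is:

      import numpy as np

      def height_slice(volume, height):
          """2-D slice of a (time, height, width) silhouette volume at one body height."""
          return volume[:, height, :]

      def trajectory(slice_2d):
          """Horizontal centroid of the silhouette in each frame of the slice."""
          s = slice_2d.astype(float)
          cols = np.arange(s.shape[1])
          mass = s.sum(axis=1)
          return s @ cols / np.maximum(mass, 1)

      def symmetry_score(traj):
          """Correlation of the centered trajectory with its time-reversed copy."""
          t = traj - traj.mean()
          return float(np.sum(t * t[::-1]) / np.sum(t * t))

      # Synthetic walking-like volume: a small blob oscillating left-right over time.
      T, H, W = 60, 40, 64
      volume = np.zeros((T, H, W), dtype=bool)
      for f in range(T):
          x = int(32 + 10 * np.sin(2 * np.pi * f / 30))
          volume[f, 10:30, x - 2:x + 3] = True
      print(symmetry_score(trajectory(height_slice(volume, 20))))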
  • Patent number: 8023726
    Abstract: A completely automated, end-to-end method and system for markerless motion capture performs segmentation of articulating objects in Laplacian Eigenspace and is applicable to handling poses of some complexity. 3D voxel representations of acquired images are mapped to a higher-dimensional space of dimension k, where k depends on the number of articulated chains of the subject's body, so as to extract 1-D representations of the articulating chains. A bottom-up approach is suggested to build a parametric (spline-based) representation of a general articulated body in the high-dimensional space, followed by a top-down probabilistic approach that registers the segments to an average human body model. The parameters of the model are further optimized using the segmented and registered voxels.
    Type: Grant
    Filed: November 9, 2007
    Date of Patent: September 20, 2011
    Assignee: University of Maryland
    Inventors: Aravind Sundaresan, Ramalingam Chellappa
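    Illustrative sketch: the abstract above describes mapping a 3D voxel representation into a higher-dimensional space, with the dimension depending on the number of articulated chains, so that the chains become essentially one-dimensional. A minimal NumPy sketch of a Laplacian-eigenmaps embedding of a toy two-chain voxel body (spline fitting, registration, and optimization omitted) is:

      import numpy as np

      def laplacian_eigenspace(points, radius=1.5, k=6):
          """Embed N x 3 voxel coordinates into the first k non-trivial Laplacian eigenvectors."""
          d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
          W = ((d > 0) & (d <= radius)).astype(float)    # neighborhood graph of the voxels
          L = np.diag(W.sum(axis=1)) - W                 # unnormalized graph Laplacian
          eigvals, eigvecs = np.linalg.eigh(L)
          return eigvecs[:, 1:k + 1]                     # skip the constant eigenvector

      # Toy "body": two straight voxel chains meeting near the origin, like two limbs.
      chain1 = np.stack([np.arange(20), np.zeros(20), np.zeros(20)], axis=1)
      chain2 = np.stack([np.zeros(20), np.arange(1, 21), np.zeros(20)], axis=1)
      embedding = laplacian_eigenspace(np.vstack([chain1, chain2]).astype(float))
      print(embedding.shape)                             # (40, 6): 40 voxels in a 6-D eigenspace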
  • Publication number: 20100033574
    Abstract: A method and system for object surveillance and real-time activity recognition is based on analysis of spatio-temporal images of individuals under surveillance, where the spatio-temporal volume occupied by each individual is decomposed by crossing it at specific heights to form 2-dimensional slices, each containing a representation of the trajectory of the motion of the corresponding portion of the individual's body. The symmetry of the trajectories (Gait DNA) is analyzed and classified to generate data indicative of the type of activity of the individual based on the symmetry or asymmetry of the Gait DNA in each 2-dimensional slice. An effective occlusion handling capability is implemented which permits restoration of the occluded silhouette of an individual.
    Type: Application
    Filed: December 5, 2006
    Publication date: February 11, 2010
    Inventors: Yang Ran, Ramalingam Chellappa, Qinfen Zheng
  • Publication number: 20090232353
    Abstract: A completely automated, end-to-end method and system for markerless motion capture performs segmentation of articulating objects in Laplacian Eigenspace and is applicable to handling poses of some complexity. 3D voxel representations of acquired images are mapped to a higher-dimensional space of dimension k, where k depends on the number of articulated chains of the subject's body, so as to extract 1-D representations of the articulating chains. A bottom-up approach is suggested to build a parametric (spline-based) representation of a general articulated body in the high-dimensional space, followed by a top-down probabilistic approach that registers the segments to an average human body model. The parameters of the model are further optimized using the segmented and registered voxels.
    Type: Application
    Filed: November 9, 2007
    Publication date: September 17, 2009
    Applicant: UNIVERSITY OF MARYLAND
    Inventors: Aravind SUNDARESAN, Ramalingam CHELLAPPA
  • Patent number: 7184071
    Abstract: In a novel method of 3D modeling of an object from a video sequence using an SfM (structure from motion) algorithm and a generic object model, the generic model is incorporated after the SfM algorithm generates a 3D estimate of the object model purely and directly from the input video sequence. An optimization framework provides for comparison of the local trends of the 3D estimate and the generic model so that errors in the 3D estimate are corrected. The 3D estimate is obtained by fusing intermediate 3D reconstructions of pairs of frames of the video sequence after computing the uncertainty of the two-frame solutions. The quality of the fusion algorithm is tracked using a rate-distortion function. In order to combine the generic model with the 3D estimate, an energy function minimization procedure is applied to the 3D estimate. The optimization is performed using a Metropolis-Hastings sampling strategy.
    Type: Grant
    Filed: August 21, 2003
    Date of Patent: February 27, 2007
    Assignee: University of Maryland
    Inventors: Ramalingam Chellappa, Amit K. Roy Chowdhury, Sridhar Srinivasan
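    Illustrative sketch: the abstract above describes applying an energy function minimization to the 3D estimate, combining it with the generic model, and performing the optimization with Metropolis-Hastings sampling. A minimal NumPy sketch with stand-in quadratic energy terms (not the patented formulation) is:

      import numpy as np

      def energy(x, sfm_estimate, generic_model, prior_weight=0.5):
          """Data term pulls toward the SfM estimate; prior term pulls toward the generic model."""
          return np.sum((x - sfm_estimate) ** 2) + prior_weight * np.sum((x - generic_model) ** 2)

      def metropolis_hastings(sfm_estimate, generic_model, steps=5000, scale=0.05, temp=0.1):
          """Sample corrections with a symmetric Gaussian proposal, favoring lower energy."""
          rng = np.random.default_rng(3)
          x = sfm_estimate.copy()
          for _ in range(steps):
              proposal = x + rng.normal(scale=scale, size=x.shape)
              delta = energy(proposal, sfm_estimate, generic_model) - energy(x, sfm_estimate, generic_model)
              if delta < 0 or rng.random() < np.exp(-delta / temp):   # accept / reject
                  x = proposal
          return x

      sfm = np.array([1.0, 2.0, 3.0])        # toy depths from the fused two-frame SfM estimate
      generic = np.array([1.2, 1.8, 3.3])    # corresponding depths of a generic model
      print(metropolis_hastings(sfm, generic))   # corrected estimate lies between the two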
  • Publication number: 20040051783
    Abstract: In a novel method of 3D modeling of an object from a video sequence using an SfM (structure from motion) algorithm and a generic object model, the generic model is incorporated after the SfM algorithm generates a 3D estimate of the object model purely and directly from the input video sequence. An optimization framework provides for comparison of the local trends of the 3D estimate and the generic model so that errors in the 3D estimate are corrected. The 3D estimate is obtained by fusing intermediate 3D reconstructions of pairs of frames of the video sequence after computing the uncertainty of the two-frame solutions. The quality of the fusion algorithm is tracked using a rate-distortion function. In order to combine the generic model with the 3D estimate, an energy function minimization procedure is applied to the 3D estimate. The optimization is performed using a Metropolis-Hastings sampling strategy.
    Type: Application
    Filed: August 21, 2003
    Publication date: March 18, 2004
    Inventors: Ramalingam Chellappa, Amit K. Roy Chowdhury, Sridhar Srinivasan