Patents by Inventor Ramalingam Chellappa
Ramalingam Chellappa has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11636328
Abstract: Various face discrimination systems may benefit from techniques for providing increased accuracy. For example, certain discriminative face verification systems can benefit from L2-constrained softmax loss. A method can include applying an image of a face as an input to a deep convolutional neural network. The method can also include applying an output of a fully connected layer of the deep convolutional neural network to an L2-normalizing layer. The method can further include determining softmax loss based on an output of the L2-normalizing layer.
Type: Grant
Filed: March 28, 2018
Date of Patent: April 25, 2023
Assignee: UNIVERSITY OF MARYLAND, COLLEGE PARK
Inventors: Rajeev Ranjan, Carlos Castillo, Ramalingam Chellappa
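The L2-constrained softmax idea can be sketched in a few lines: the feature vector is projected onto a hypersphere of fixed radius alpha before the usual softmax cross-entropy is computed, which makes the loss invariant to the feature's scale. The toy weights and alpha value below are illustrative assumptions, not values from the patent:

```python
import math

def l2_softmax_loss(features, weights, label, alpha=16.0):
    # L2-normalize the feature vector, then rescale to a fixed norm alpha,
    # so every feature lies on a hypersphere of radius alpha.
    norm = math.sqrt(sum(f * f for f in features))
    scaled = [alpha * f / norm for f in features]
    # Class logits from a plain fully connected (linear) layer.
    logits = [sum(w * s for w, s in zip(row, scaled)) for row in weights]
    # Numerically stable softmax cross-entropy for the true class.
    m = max(logits)
    log_sum_exp = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum_exp - logits[label]

# Because of the normalization, rescaling the input feature leaves the
# loss unchanged -- the property the L2-constrained formulation targets.
loss = l2_softmax_loss([0.5, -1.0, 2.0],
                       [[0.1, 0.2, 0.3], [-0.1, 0.0, 0.4]],
                       label=0)
```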
-
Patent number: 11023711
Abstract: Various facial recognition systems may benefit from appropriate use of computer systems. For example, certain face analysis systems may benefit from an all-in-one convolutional neural network that has been appropriately configured. A method can include obtaining an image of a face. The method can also include processing the image of the face using a first set of convolutional network layers configured to perform subject-independent tasks. The method can further include subsequently processing the image of the face using a second set of convolutional network layers configured to perform subject-dependent tasks. The second set of convolutional network layers can be integrated with the first set of convolutional network layers to form a single convolutional neural network. The method can additionally include outputting facial image detection results based on the processing and subsequent processing.
Type: Grant
Filed: October 10, 2017
Date of Patent: June 1, 2021
Assignee: UNIVERSITY OF MARYLAND, COLLEGE PARK
Inventors: Rajeev Ranjan, Swaminathan Sankaranarayanan, Carlos Castillo, Ramalingam Chellappa
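The structural idea, a shared trunk of subject-independent layers whose output feeds subject-dependent layers in one network, can be sketched with plain callables standing in for convolutional blocks. The class name and toy "layers" below are hypothetical, not from the patent:

```python
class AllInOneNet:
    """Toy stand-in for an all-in-one network: subject-independent
    layers form a shared trunk that feeds subject-dependent layers."""

    def __init__(self, independent_layers, dependent_layers):
        self.independent_layers = independent_layers  # e.g. detection, pose
        self.dependent_layers = dependent_layers      # e.g. identity

    def forward(self, x):
        for layer in self.independent_layers:
            x = layer(x)
        shared = x              # features reused by both groups of tasks
        for layer in self.dependent_layers:
            x = layer(x)
        return shared, x

# Simple numeric transforms stand in for convolutional blocks.
net = AllInOneNet(
    independent_layers=[lambda v: [2 * e for e in v]],
    dependent_layers=[lambda v: sum(v)],
)
shared, identity = net.forward([1.0, 2.0, 3.0])
```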
-
Patent number: 10860837
Abstract: Various image processing tasks may benefit from the application of deep convolutional neural networks. For example, a deep multi-task learning framework may assist face detection, for example when combined with landmark localization, pose estimation, and gender recognition. An apparatus can include a first module of at least three modules configured to generate class-independent region proposals to provide a region. The apparatus can also include a second module of the at least three modules configured to classify the region as face or non-face using a multi-task analysis. The apparatus can further include a third module configured to perform post-processing on the classified region.
Type: Grant
Filed: July 20, 2016
Date of Patent: December 8, 2020
Assignee: UNIVERSITY OF MARYLAND, COLLEGE PARK
Inventors: Rajeev Ranjan, Vishal M. Patel, Ramalingam Chellappa, Carlos D. Castillo
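The abstract does not spell out the third module's post-processing, but a standard final step in detection pipelines of this kind is non-maximum suppression over the classified regions. The sketch below is a generic NMS, offered as an assumed example rather than the patented method:

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2); intersection-over-union of two boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Keep highest-scoring boxes, dropping any box whose overlap with
    an already-kept box exceeds ``thresh``."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep
```

For example, two heavily overlapping face boxes collapse to the higher-scoring one, while a distant box survives.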
-
Publication number: 20190303754
Abstract: Various face discrimination systems may benefit from techniques for providing increased accuracy. For example, certain discriminative face verification systems can benefit from L2-constrained softmax loss. A method can include applying an image of a face as an input to a deep convolutional neural network. The method can also include applying an output of a fully connected layer of the deep convolutional neural network to an L2-normalizing layer. The method can further include determining softmax loss based on an output of the L2-normalizing layer.
Type: Application
Filed: March 28, 2018
Publication date: October 3, 2019
Inventors: Rajeev RANJAN, Carlos CASTILLO, Ramalingam CHELLAPPA
-
Publication number: 20190244014
Abstract: Various facial recognition systems may benefit from appropriate use of computer systems. For example, certain face analysis systems may benefit from an all-in-one convolutional neural network that has been appropriately configured. A method can include obtaining an image of a face. The method can also include processing the image of the face using a first set of convolutional network layers configured to perform subject-independent tasks. The method can further include subsequently processing the image of the face using a second set of convolutional network layers configured to perform subject-dependent tasks. The second set of convolutional network layers can be integrated with the first set of convolutional network layers to form a single convolutional neural network. The method can additionally include outputting facial image detection results based on the processing and subsequent processing.
Type: Application
Filed: October 10, 2017
Publication date: August 8, 2019
Inventors: Rajeev RANJAN, Swaminathan SANKARANARAYANAN, Carlos CASTILLO, Ramalingam CHELLAPPA
-
Publication number: 20180211099
Abstract: Various image processing tasks may benefit from the application of deep convolutional neural networks. For example, a deep multi-task learning framework may assist face detection, for example when combined with landmark localization, pose estimation, and gender recognition. An apparatus can include a first module of at least three modules configured to generate class-independent region proposals to provide a region. The apparatus can also include a second module of the at least three modules configured to classify the region as face or non-face using a multi-task analysis. The apparatus can further include a third module configured to perform post-processing on the classified region.
Type: Application
Filed: July 20, 2016
Publication date: July 26, 2018
Inventors: Rajeev RANJAN, Vishal M. PATEL, Ramalingam CHELLAPPA, Carlos D. CASTILLO
-
Publication number: 20170026836
Abstract: Various devices and systems may benefit from convenient authentication. For example, certain mobile devices may benefit from attribute-based continuous user authentication. A method can include determining attributes of an authorized user of a mobile device. The method can also include obtaining an unconstrained image of a current user of the mobile device. The method can further include processing the unconstrained image to determine at least one characteristic of the current user. The method can additionally include making an authorization determination based on a comparison between the attributes and the determined characteristic.
Type: Application
Filed: July 20, 2016
Publication date: January 26, 2017
Inventors: Pouya SAMANGOUEI, Vishal M. PATEL, Ramalingam CHELLAPPA, Emily HAND
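The final comparison step can be illustrated with a toy attribute matcher: enrolled attributes of the authorized user are compared against attributes extracted from the current image, and access is granted when the agreement exceeds a threshold. The attribute names and the 0.7 threshold are purely illustrative assumptions:

```python
def attribute_match(enrolled, observed):
    """Fraction of the enrolled user's attributes that the current
    observation agrees with (both dicts map attribute name -> value)."""
    shared = set(enrolled) & set(observed)
    agree = sum(1 for k in shared if enrolled[k] == observed[k])
    return agree / len(enrolled) if enrolled else 0.0

def authorize(enrolled, observed, threshold=0.7):
    # Continuous authentication: grant access only while the observed
    # attributes match the enrolled ones closely enough.
    return attribute_match(enrolled, observed) >= threshold

enrolled = {"gender": "male", "glasses": True, "beard": False}
granted = authorize(enrolled, {"gender": "male", "glasses": True,
                               "beard": False})
```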
-
Patent number: 9530052
Abstract: The sensor adaptation technique, applicable to non-contact biometric authentication and specifically to iris recognition, is designed to handle the sensor mismatch problem that occurs when enrollment iris samples and test iris samples are acquired with different sensors. The present system and method can adapt iris data collected from one sensor to another by transforming the iris samples so that samples belonging to the same person are brought closer together than samples belonging to different persons, irrespective of the sensor acquiring the samples. The sensor adaptation technique can be easily incorporated into existing iris recognition systems; it uses training iris samples acquired with different sensors to learn adaptation parameters, and then applies those parameters for sensor adaptation during the verification stage to significantly improve recognition performance.
Type: Grant
Filed: March 13, 2014
Date of Patent: December 27, 2016
Assignee: University of Maryland
Inventors: Jaishanker K. Pillai, Maria Puertas-Calvo, Ramalingam Chellappa
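The flavor of learning adaptation parameters from cross-sensor training data and applying them at verification time can be shown with a deliberately simplified stand-in: a per-dimension offset that aligns the mean of sensor B's samples to sensor A's. The patent's learned transformation is more sophisticated; this is only a minimal sketch of the learn-then-apply pattern:

```python
def learn_adaptation(samples_a, samples_b):
    """Learn a per-dimension offset aligning sensor B's feature
    distribution to sensor A's (a toy stand-in for the learned
    adaptation parameters)."""
    dims = len(samples_a[0])
    mean = lambda samples, d: sum(s[d] for s in samples) / len(samples)
    return [mean(samples_a, d) - mean(samples_b, d) for d in range(dims)]

def adapt(sample, offset):
    # Apply the learned parameters to a test sample at verification time.
    return [v + o for v, o in zip(sample, offset)]

# Training samples from two sensors with a systematic shift between them.
offset = learn_adaptation([[1.0, 2.0], [3.0, 4.0]],
                          [[11.0, 12.0], [13.0, 14.0]])
```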
-
Patent number: 9291711
Abstract: A method, apparatus, and computer-readable medium are provided that can utilize an undersampling method and produce a radar image of a target. The radar image of the target can be based on a collection of waveform measurements, where the collection can be based on a significantly reduced number of transmitted and received electromagnetic pulse waveforms.
Type: Grant
Filed: February 25, 2011
Date of Patent: March 22, 2016
Assignee: UNIVERSITY OF MARYLAND, COLLEGE PARK
Inventors: Dennis M. Healy, Jr., Kathy Hart, Vishal M. Patel, Glenn R. Easley, Ramalingam Chellappa
-
Patent number: 8379485
Abstract: Compressive Sensing (CS) is an emerging area that uses a relatively small number of non-traditional samples, in the form of randomized projections, to reconstruct sparse or compressible signals. Direction-of-arrival (DOA) estimation is performed with an array of sensors using CS. Using random projections of the sensor data, along with a full waveform recording on one reference sensor, a sparse angle-space scenario can be reconstructed, giving the number of sources and their DOAs. Signal processing algorithms are also developed and described herein for randomly deployable wireless sensor arrays that are severely constrained in communication bandwidth. The focus is on the acoustic bearing estimation problem: when the target bearings are modeled as a sparse vector in the angle space, functions of the low-dimensional random projections of the microphone signals can be used to determine multiple source bearings as the solution of an l1-norm minimization problem.
Type: Grant
Filed: November 3, 2008
Date of Patent: February 19, 2013
Assignees: University of Maryland, Georgia Tech Research Corporation
Inventors: Volkan Cevher, Ali Cafer Gurbuz, James H. McClellan, Ramalingam Chellappa
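The sparse angle-space recovery can be illustrated with a greedy matching-pursuit sketch: scan a grid of candidate steering vectors, repeatedly pick the angle best correlated with the measurements, and subtract its contribution. This greedy loop is a simple stand-in for the l1-norm minimization in the abstract, not the patented algorithm itself, and the angle grid below is a toy assumption:

```python
def bearing_estimate(projections, steering, n_sources=1):
    """Greedy matching-pursuit stand-in for l1-norm sparse recovery:
    ``steering`` is a list of columns (one per candidate angle), each
    the same length as the measurement vector ``projections``."""
    residual = projections[:]
    picks = []
    for _ in range(n_sources):
        # Correlate the residual with every candidate steering column.
        corr = [abs(sum(r * a for r, a in zip(residual, col)))
                for col in steering]
        best = max(range(len(steering)), key=lambda i: corr[i])
        picks.append(best)
        # Subtract the matched component before looking for more sources.
        col = steering[best]
        scale = (sum(r * a for r, a in zip(residual, col))
                 / sum(a * a for a in col))
        residual = [r - scale * a for r, a in zip(residual, col)]
    return sorted(picks)
```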
-
Patent number: 8179440
Abstract: A method and system for object surveillance and real-time activity recognition are based on analysis of spatio-temporal images of individuals under surveillance: the spatio-temporal volume occupied by each individual is decomposed by slicing it at specific heights to form 2-dimensional slices, each containing a representation of the trajectory of the motion of the corresponding portion of the individual's body. The symmetry of the trajectories (Gait DNA) is analyzed and classified to generate data indicative of the individual's type of activity, based on the symmetry or asymmetry of the Gait DNA in each 2-dimensional slice. An effective occlusion handling ability is implemented that makes it possible to restore the occluded silhouette of an individual.
Type: Grant
Filed: December 5, 2006
Date of Patent: May 15, 2012
Assignee: University of Maryland
Inventors: Yang Ran, Ramalingam Chellappa, Qinfen Zheng
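The slicing step can be sketched directly: treat the spatio-temporal volume as a sequence of silhouette frames, cut it at one height, and record the silhouette's horizontal extent at that row over time, which traces the motion trajectory in that 2-D slice. The pixel-set frame representation below is an assumption made for illustration:

```python
def height_slice(volume, y):
    """Cut a spatio-temporal volume at height ``y``: for each frame
    (a set of (x, y) silhouette pixels), record the horizontal extent
    of the silhouette at that row, giving a 2-D time-vs-x slice."""
    slice_2d = []
    for t, frame in enumerate(volume):
        xs = [x for (x, yy) in frame if yy == y]
        if xs:
            slice_2d.append((t, min(xs), max(xs)))
        else:
            slice_2d.append((t, None, None))  # silhouette absent at this row
    return slice_2d
```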
-
Patent number: 8023726
Abstract: A completely automated end-to-end method and system for markerless motion capture performs segmentation of articulating objects in Laplacian Eigenspace and can handle poses of some complexity. The 3D voxel representation of the acquired images is mapped to a higher-dimensional space (k), where k depends on the number of articulated chains of the subject's body, so as to extract 1-D representations of the articulating chains. A bottom-up approach is used to build a parametric (spline-based) representation of a general articulated body in the high-dimensional space, followed by a top-down probabilistic approach that registers the segments to an average human body model. The parameters of the model are further optimized using the segmented and registered voxels.
Type: Grant
Filed: November 9, 2007
Date of Patent: September 20, 2011
Assignee: University of Maryland
Inventors: Aravind Sundaresan, Ramalingam Chellappa
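The Laplacian Eigenspace embedding starts from the graph Laplacian of a neighborhood graph over the voxels; the smallest eigenvectors of that matrix give the higher-dimensional embedding. The sketch below builds only the unnormalized Laplacian L = D - W (the eigendecomposition is omitted); the radius-graph construction is an assumed simplification:

```python
def graph_laplacian(points, radius):
    """Unnormalized graph Laplacian L = D - W for a radius-neighborhood
    graph over voxel centers. Its smallest eigenvectors define the
    Laplacian Eigenspace embedding used for segmentation."""
    n = len(points)
    dist2 = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    # Binary adjacency: connect voxels closer than ``radius``.
    W = [[1.0 if i != j and dist2(points[i], points[j]) <= radius ** 2
          else 0.0 for j in range(n)] for i in range(n)]
    # Diagonal degree minus adjacency.
    return [[(sum(W[i]) if i == j else 0.0) - W[i][j] for j in range(n)]
            for i in range(n)]
```

Every row of a graph Laplacian sums to zero, which the test below checks.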
-
Publication number: 20100033574
Abstract: A method and system for object surveillance and real-time activity recognition are based on analysis of spatio-temporal images of individuals under surveillance: the spatio-temporal volume occupied by each individual is decomposed by slicing it at specific heights to form 2-dimensional slices, each containing a representation of the trajectory of the motion of the corresponding portion of the individual's body. The symmetry of the trajectories (Gait DNA) is analyzed and classified to generate data indicative of the individual's type of activity, based on the symmetry or asymmetry of the Gait DNA in each 2-dimensional slice. An effective occlusion handling ability is implemented that makes it possible to restore the occluded silhouette of an individual.
Type: Application
Filed: December 5, 2006
Publication date: February 11, 2010
Inventors: Yang Ran, Ramalingam Chellappa, Qinfen Zheng
-
Publication number: 20090232353
Abstract: A completely automated end-to-end method and system for markerless motion capture performs segmentation of articulating objects in Laplacian Eigenspace and can handle poses of some complexity. The 3D voxel representation of the acquired images is mapped to a higher-dimensional space (k), where k depends on the number of articulated chains of the subject's body, so as to extract 1-D representations of the articulating chains. A bottom-up approach is used to build a parametric (spline-based) representation of a general articulated body in the high-dimensional space, followed by a top-down probabilistic approach that registers the segments to an average human body model. The parameters of the model are further optimized using the segmented and registered voxels.
Type: Application
Filed: November 9, 2007
Publication date: September 17, 2009
Applicant: UNIVERSITY OF MARYLAND
Inventors: ARAVIND SUNDARESAN, RAMALINGAM CHELLAPPA
-
Patent number: 7184071
Abstract: In a novel method of 3D modeling of an object from a video sequence using an SfM algorithm and a generic object model, the generic model is incorporated after the SfM algorithm generates a 3D estimate of the object model purely and directly from the input video sequence. An optimization framework provides for comparison of the local trends of the 3D estimate and the generic model so that errors in the 3D estimate are corrected. The 3D estimate is obtained by fusing intermediate 3D reconstructions of pairs of frames of the video sequence after computing the uncertainty of the two-frame solutions. The quality of the fusion algorithm is tracked using a rate-distortion function. In order to combine the generic model with the 3D estimate, an energy function minimization procedure is applied to the 3D estimate. The optimization is performed using a Metropolis-Hastings sampling strategy.
Type: Grant
Filed: August 21, 2003
Date of Patent: February 27, 2007
Assignee: University of Maryland
Inventors: Ramalingam Chellappa, Amit K. Roy Chowdhury, Sridhar Srinivasan
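A Metropolis-Hastings sampler explores an energy landscape by proposing random moves and accepting them with probability exp(-ΔE) when the energy rises, always when it falls. The generic one-dimensional sampler below, with a toy quadratic energy, is only a sketch of the sampling strategy named in the abstract, not the patent's actual energy function or state space:

```python
import math
import random

def metropolis_hastings(energy, x0, steps=1000, step_size=0.5, seed=0):
    """Sample from exp(-energy(x)) with symmetric uniform proposals,
    tracking the lowest-energy state seen (for energy minimization)."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    for _ in range(steps):
        cand = x + rng.uniform(-step_size, step_size)
        ce = energy(cand)
        # Accept downhill moves always, uphill moves with prob exp(-dE).
        if ce <= e or rng.random() < math.exp(e - ce):
            x, e = cand, ce
            if e < best_e:
                best_x, best_e = x, e
    return best_x

# Toy energy with its minimum at 3.0; the sampler should settle near it.
xhat = metropolis_hastings(lambda v: (v - 3.0) ** 2, x0=0.0)
```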
-
Publication number: 20040051783
Abstract: In a novel method of 3D modeling of an object from a video sequence using an SfM algorithm and a generic object model, the generic model is incorporated after the SfM algorithm generates a 3D estimate of the object model purely and directly from the input video sequence. An optimization framework provides for comparison of the local trends of the 3D estimate and the generic model so that errors in the 3D estimate are corrected. The 3D estimate is obtained by fusing intermediate 3D reconstructions of pairs of frames of the video sequence after computing the uncertainty of the two-frame solutions. The quality of the fusion algorithm is tracked using a rate-distortion function. In order to combine the generic model with the 3D estimate, an energy function minimization procedure is applied to the 3D estimate. The optimization is performed using a Metropolis-Hastings sampling strategy.
Type: Application
Filed: August 21, 2003
Publication date: March 18, 2004
Inventors: Ramalingam Chellappa, Amit K. Roy Chowdhury, Sridhar Srinivasan