Patents by Inventor Jayant Kumar

Jayant Kumar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10347141
    Abstract: A method for providing obstacle alerts to an in-flight aircraft has been developed. First, parameters of the in-flight aircraft are transmitted to a ground-based processor station. The station calculates an aircraft safety envelope based on these parameters. It then accesses the characteristics of obstacles stored in a terrain database and calculates an obstacle safety envelope. Finally, the station determines whether the aircraft safety envelope conflicts with the obstacle safety envelope and generates an alert for the aircraft if a conflict exists.
    Type: Grant
    Filed: April 26, 2017
    Date of Patent: July 9, 2019
    Assignee: HONEYWELL INTERNATIONAL INC.
    Inventors: Jayant Kumar Singh, Nithin Ambika, Subhadeep Pal, Saurabh Gohil
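The conflict determination this abstract describes reduces to a geometric intersection test between two envelopes. The sketch below assumes spherical envelopes and a hypothetical alert format; the patent specifies neither, so this is only an illustration of the idea:

```python
import math
from dataclasses import dataclass

@dataclass
class Envelope:
    """A spherical safety envelope: a center (x, y, z, in meters) and a radius."""
    x: float
    y: float
    z: float
    radius: float

def envelopes_conflict(a: Envelope, b: Envelope) -> bool:
    """Two envelopes conflict when the distance between their centers
    is less than the sum of their radii."""
    return math.dist((a.x, a.y, a.z), (b.x, b.y, b.z)) < a.radius + b.radius

def obstacle_alerts(aircraft: Envelope, obstacles: list[Envelope]) -> list[str]:
    """Generate an alert for every obstacle envelope that conflicts with
    the aircraft safety envelope (the station would uplink these)."""
    return [f"OBSTACLE CONFLICT near ({o.x}, {o.y}, {o.z})"
            for o in obstacles if envelopes_conflict(aircraft, o)]
```

In a real system the envelopes would be built from the uplinked aircraft parameters and the terrain database respectively; here both are passed in directly.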
  • Publication number: 20190179755
    Abstract: Systems and methods for controlling cache usage are described and include associating, by a server computing system, a tenant in a multi-tenant environment with a cache cluster formed by a group of cache instances; associating, by the server computing system, a memory threshold and a burst memory threshold with the tenant; enabling, by the server computing system, each of the cache instances to collect metrics information based on the tenant accessing the cache cluster, the metrics information used to determine memory usage information and burst memory usage information of the cache cluster by the tenant; and controlling, by the server computing system, usage of the cache cluster by the tenant based on comparing the memory usage information with the memory threshold and comparing the burst memory usage information with the burst memory threshold.
    Type: Application
    Filed: December 13, 2017
    Publication date: June 13, 2019
    Inventors: Gopi Krishna Mudumbai, Jayant Kumar
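The control loop in this abstract hinges on comparing two aggregated usage figures against two per-tenant thresholds. A minimal sketch of that comparison follows; the metric keys and the action names (`allow`, `throttle`, `evict`) are hypothetical, not from the patent:

```python
from dataclasses import dataclass

@dataclass
class TenantLimits:
    """Per-tenant budgets associated by the server computing system."""
    memory_threshold: int        # steady-state memory budget (bytes)
    burst_memory_threshold: int  # short-lived burst budget (bytes)

def cluster_usage(instance_metrics: list[dict]) -> tuple[int, int]:
    """Aggregate the metrics each cache instance collected for one tenant
    into cluster-wide memory usage and burst memory usage."""
    memory = sum(m["memory_bytes"] for m in instance_metrics)
    burst = sum(m["burst_bytes"] for m in instance_metrics)
    return memory, burst

def control_usage(limits: TenantLimits, metrics: list[dict]) -> str:
    """Compare cluster-wide usage against both thresholds and pick an action."""
    memory, burst = cluster_usage(metrics)
    over_memory = memory > limits.memory_threshold
    over_burst = burst > limits.burst_memory_threshold
    if over_memory and over_burst:
        return "evict"      # over both budgets: reclaim the tenant's entries
    if over_memory or over_burst:
        return "throttle"   # over one budget: slow the tenant's writes
    return "allow"
```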
  • Patent number: 10227458
    Abstract: Methods of forming a polymeric nanocomposite are provided. The methods include combining one or more monomers to form a mixture and adding a plurality of carbon fibers to the mixture prior to or concurrently with formation of a polymer from the monomers. The methods can also include polymerizing the monomers to form the polymer and adding a hydrophobic agent and a plasticizer to the mixture to form the polymer nanocomposite.
    Type: Grant
    Filed: October 1, 2013
    Date of Patent: March 12, 2019
    Assignee: INDIAN INSTITUTE OF TECHNOLOGY KANPUR
    Inventors: Nishith Verma, Jayant Kumar Singh, Ajit Kumar Sharma
  • Publication number: 20180315324
    Abstract: A method for providing obstacle alerts to an in-flight aircraft has been developed. First, parameters of the in-flight aircraft are transmitted to a ground-based processor station. The station calculates an aircraft safety envelope based on these parameters. It then accesses the characteristics of obstacles stored in a terrain database and calculates an obstacle safety envelope. Finally, the station determines whether the aircraft safety envelope conflicts with the obstacle safety envelope and generates an alert for the aircraft if a conflict exists.
    Type: Application
    Filed: April 26, 2017
    Publication date: November 1, 2018
    Applicant: HONEYWELL INTERNATIONAL INC.
    Inventors: Jayant Kumar Singh, Nithin Ambika, Subhadeep Pal, Saurabh Gohil
  • Patent number: 10068171
    Abstract: A method and system for domain adaptation based on multi-layer fusion in a convolutional neural network architecture for feature extraction, with a two-step training and fine-tuning scheme. The architecture concatenates features extracted at different depths of the network to form a fully connected layer before the classification step. First, the network is trained as a feature extractor with a large set of images from a source domain. Second, for each new domain (including the source domain), the classification step is fine-tuned with images collected from the corresponding site. The features from different depths are concatenated and fine-tuned, with weights adjusted for the specific task. The architecture is used for classifying high-occupancy vehicle images.
    Type: Grant
    Filed: June 10, 2016
    Date of Patent: September 4, 2018
    Assignee: Conduent Business Services, LLC
    Inventors: Safwan Wshah, Beilei Xu, Orhan Bulan, Jayant Kumar, Peter Paul
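The core of the multi-layer fusion idea is concatenating features taken at different network depths before a single classification layer. The toy sketch below shows only that concatenation-plus-classifier structure with plain lists; the real system would use a deep-learning framework, and the linear classifier here is a stand-in for the fine-tuned classification step:

```python
def fuse_depths(features_by_depth: list[list[float]]) -> list[float]:
    """Concatenate feature vectors extracted at different network depths
    into one fused vector for the fully connected classification layer."""
    fused: list[float] = []
    for features in features_by_depth:
        fused.extend(features)
    return fused

def linear_classifier(fused: list[float], weights: list[float], bias: float) -> int:
    """The classification step. Per-site fine-tuning would adjust only
    `weights` and `bias` while the feature extractor stays fixed."""
    score = sum(f * w for f, w in zip(fused, weights)) + bias
    return 1 if score > 0 else 0
```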
  • Patent number: 10042031
    Abstract: A mobile electronic device processes a sequence of images to identify and re-identify an object of interest in the sequence. An image sensor of the device receives a sequence of images. The device detects an object in a first image, along with positional parameters of the device that correspond to the object in that image. The device then determines a range of positional parameters within which the object may appear in its field of view. When the device detects that the object of interest has exited the field of view and subsequently uses motion sensor data to determine that the object has likely re-entered it, the device analyzes the current frame to confirm that the object of interest has re-entered the field of view.
    Type: Grant
    Filed: February 11, 2015
    Date of Patent: August 7, 2018
    Assignee: Xerox Corporation
    Inventors: Jayant Kumar, Qun Li, Edgar A. Bernal, Raja Bala
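The exit/re-entry logic in this abstract is essentially a small state machine over a recorded parameter range. A sketch under simplifying assumptions follows: the positional parameter is reduced to one scalar, and `analyze_frame` is a hypothetical stand-in for the pixel-level confirmation step:

```python
class ReIdentifier:
    """Tracks whether an object of interest is in the field of view using
    a range of positional parameters (e.g. device orientation) recorded
    when the object was first detected."""

    def __init__(self, param_min: float, param_max: float):
        self.param_min = param_min
        self.param_max = param_max
        self.in_view = True  # the object was just detected

    def update(self, param: float, analyze_frame) -> bool:
        """Update with the current motion-sensor reading; only when the
        parameter re-enters the recorded range is the (expensive) frame
        analysis invoked to confirm re-identification on pixels."""
        inside_range = self.param_min <= param <= self.param_max
        if not inside_range:
            self.in_view = False            # object exited the field of view
        elif not self.in_view:
            self.in_view = analyze_frame()  # likely re-entered: confirm on pixels
        return self.in_view
```

The design point the abstract implies is that cheap motion-sensor checks gate the expensive image analysis, which runs only on likely re-entry.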
  • Publication number: 20180217223
    Abstract: A mobile electronic device processes a sequence of images to identify and re-identify an object of interest in the sequence. An image sensor of the device receives a sequence of images. The device detects an object in a first image, along with positional parameters of the device that correspond to the object in that image. The device then determines a range of positional parameters within which the object may appear in its field of view. When the device detects that the object of interest has exited the field of view and subsequently uses motion sensor data to determine that the object has likely re-entered it, the device analyzes the current frame to confirm that the object of interest has re-entered the field of view.
    Type: Application
    Filed: March 23, 2018
    Publication date: August 2, 2018
    Inventors: Jayant Kumar, Qun Li, Edgar A. Bernal, Raja Bala
  • Patent number: 9977968
    Abstract: A method and system for identifying content relevance comprises acquiring video data, mapping the acquired video data to a feature space to obtain a feature representation of the video data, assigning the acquired video data to at least one action class based on the feature representation of the video data, and determining a relevance of the acquired video data.
    Type: Grant
    Filed: March 4, 2016
    Date of Patent: May 22, 2018
    Assignee: Xerox Corporation
    Inventors: Edgar A. Bernal, Qun Li, Yun Zhang, Jayant Kumar, Raja Bala
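The assign-to-action-class and relevance steps above can be sketched as a nearest-centroid assignment in feature space. Both the centroid representation and the distance-based relevance score are illustrative assumptions, not the patent's specified method:

```python
import math

def assign_action_class(feature: list[float],
                        centroids: dict[str, list[float]]) -> tuple[str, float]:
    """Assign a feature-space representation of video data to the nearest
    action-class centroid, and derive a relevance score that decays with
    distance (1.0 exactly at the centroid)."""
    best_class, best_dist = "", float("inf")
    for name, centroid in centroids.items():
        d = math.dist(feature, centroid)
        if d < best_dist:
            best_class, best_dist = name, d
    return best_class, 1.0 / (1.0 + best_dist)
```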
  • Publication number: 20180016445
    Abstract: A method of forming a crosslinked polyphenol, the method comprising: reacting a bio-based phenolic compound comprising at least one phenolic hydroxyl group, with a crosslinking agent comprising at least two functional groups reactive with the phenolic hydroxyl group, wherein the at least two functional groups are each independently a halogen group, acid halide group, sulfonyl halide group, glycidyl group, anhydride group, or a combination comprising at least one of the foregoing, to provide the crosslinked polyphenol.
    Type: Application
    Filed: July 12, 2017
    Publication date: January 18, 2018
    Inventors: Ramaswamy Nagarajan, Jayant Kumar, Ravi Mosurkal, Zhiyu Xia
  • Patent number: 9864931
    Abstract: Methods, systems, and processor-readable media for training data augmentation. A source domain and a target domain are provided, and thereafter an operation is performed to augment data in the source domain with transformations utilizing characteristics learned from the target domain. The augmented data is then used to improve image classification accuracy in a new domain.
    Type: Grant
    Filed: April 13, 2016
    Date of Patent: January 9, 2018
    Assignee: Conduent Business Services, LLC
    Inventors: Jayant Kumar, Beilei Xu, Peter Paul
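The patent concerns image transformations, but the core idea of augmenting source data "with transformations utilizing characteristics learned from the target domain" can be illustrated in one dimension as moment matching. This sketch, an assumption-laden simplification rather than the patented method, shifts source feature values to match target-domain statistics:

```python
import statistics

def augment_with_target_stats(source: list[float], target: list[float]) -> list[float]:
    """Transform source-domain feature values so their mean and spread
    match statistics learned from the target domain; the transformed
    values can then augment the source training set."""
    src_mean, src_std = statistics.mean(source), statistics.pstdev(source)
    tgt_mean, tgt_std = statistics.mean(target), statistics.pstdev(target)
    scale = tgt_std / src_std if src_std else 1.0
    return [(v - src_mean) * scale + tgt_mean for v in source]
```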
  • Patent number: 9805255
    Abstract: A multimodal sensing system includes various devices that work together to automatically classify an action. A video camera captures a sequence of digital images. At least one other sensor device captures other sensed data (e.g., motion data). The system will extract video features from the digital images so that each extracted image feature is associated with a time period. It will extract other features from the other sensed data so that each extracted other feature is associated with a time period. The system will fuse a group of the extracted video features and a group of the extracted other features to create a fused feature representation for a time period. It will then analyze the fused feature representation to identify a class, access a data store of classes and actions to identify an action that is associated with the class, and save the identified action to a memory device.
    Type: Grant
    Filed: January 29, 2016
    Date of Patent: October 31, 2017
    Assignee: Conduent Business Services, LLC
    Inventors: Xitong Yang, Edgar A. Bernal, Sriganesh Madhvanath, Raja Bala, Palghat S. Ramesh, Qun Li, Jayant Kumar
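The fusion step this abstract describes pairs features by their shared time period before classification. A minimal sketch, assuming features are lists keyed by integer time period and fusion is concatenation (the patent does not fix the fusion operator):

```python
def fuse_by_time_period(video_feats: dict[int, list[float]],
                        other_feats: dict[int, list[float]]) -> dict[int, list[float]]:
    """Concatenate video features and other sensor features that share a
    time period; periods seen by only one modality are dropped."""
    shared = video_feats.keys() & other_feats.keys()
    return {t: video_feats[t] + other_feats[t] for t in shared}

def action_for(fused: list[float], classify, class_to_action: dict[str, str]) -> str:
    """Identify a class from the fused representation, then look up the
    action associated with that class in the data store."""
    return class_to_action[classify(fused)]
```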
  • Patent number: 9807269
    Abstract: The embodiments include systems and methods for guiding a user to capture two flash images of a document page and selectively fusing the images to produce a binary image of high quality without loss of any content. Each individual image may have a flash-spot region (FSR) where content is degraded or lost due to the flash light. The idea is to first guide the user to take two images such that the flash spots do not overlap in the document regions. The flash spots are then detected and assessed for quality and extent of degradation in both images. The image with lower degradation is chosen as the primary image and the other as the secondary, to minimize fusing artifacts. The region in the secondary image corresponding to the FSR in the primary is aligned to the primary region using a multiscale alignment technique.
    Type: Grant
    Filed: May 15, 2015
    Date of Patent: October 31, 2017
    Assignee: Xerox Corporation
    Inventors: Jayant Kumar, Raja Bala, Martin S. Maltz, Phillip J. Emmett
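The primary/secondary selection and patching described above can be sketched with images as 2D lists. This is a toy illustration: the multiscale alignment step is omitted (the secondary region is assumed already registered), and the degradation scores are given rather than estimated from the flash spots:

```python
def fuse_flash_pair(img_a, img_b, spot_a, spot_b, degradation_a, degradation_b):
    """Choose the less-degraded image as primary, then patch its
    flash-spot region (FSR) with pixels from the secondary image.

    Images are 2D lists of pixel values; a spot is (top, bottom, left,
    right), half-open in both axes."""
    if degradation_a <= degradation_b:
        primary, secondary, spot = img_a, img_b, spot_a
    else:
        primary, secondary, spot = img_b, img_a, spot_b
    top, bottom, left, right = spot
    fused = [row[:] for row in primary]  # copy so the inputs stay untouched
    for r in range(top, bottom):
        for c in range(left, right):
            fused[r][c] = secondary[r][c]
    return fused
```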
  • Publication number: 20170300783
    Abstract: Methods, systems, and processor-readable media for training data augmentation. A source domain and a target domain are provided, and thereafter an operation is performed to augment data in the source domain with transformations utilizing characteristics learned from the target domain. The augmented data is then used to improve image classification accuracy in a new domain.
    Type: Application
    Filed: April 13, 2016
    Publication date: October 19, 2017
    Inventors: Jayant Kumar, Beilei Xu, Peter Paul
  • Patent number: 9778750
    Abstract: A method, non-transitory computer readable medium, and apparatus for localizing a region of interest using a hand gesture are disclosed. For example, the method acquires an image containing the hand gesture from an ego-centric video, detects pixels that correspond to one or more hands in the image using a hand segmentation algorithm, identifies a hand enclosure in the pixels that are detected within the image, localizes a region of interest based on the hand enclosure, and performs an action based on an object in the region of interest.
    Type: Grant
    Filed: September 30, 2014
    Date of Patent: October 3, 2017
    Assignee: XEROX CORPORATION
    Inventors: Jayant Kumar, Xiaodong Yang, Qun Li, Edgar A. Bernal, Raja Bala
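One simple way to realize "localize a region of interest based on the hand enclosure" is to take the non-hand pixels enclosed inside the hand's own bounding box. The sketch below assumes a binary segmentation mask and is only an approximation of enclosure detection, not the patented algorithm:

```python
def localize_roi(mask: list[list[bool]]):
    """Localize a region of interest from a hand-segmentation mask:
    find the hand's bounding box, then return the bounding box of the
    non-hand pixels inside it (the enclosure's interior).
    Returns (top, left, bottom, right), or None if there is no interior."""
    hand = [(r, c) for r, row in enumerate(mask)
            for c, is_hand in enumerate(row) if is_hand]
    if not hand:
        return None
    r0, r1 = min(r for r, _ in hand), max(r for r, _ in hand)
    c0, c1 = min(c for _, c in hand), max(c for _, c in hand)
    interior = [(r, c) for r in range(r0, r1 + 1)
                for c in range(c0, c1 + 1) if not mask[r][c]]
    if not interior:
        return None
    return (min(r for r, _ in interior), min(c for _, c in interior),
            max(r for r, _ in interior), max(c for _, c in interior))
```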
  • Patent number: 9767349
    Abstract: A method for determining an emotional state of a subject taking an assessment. The method includes eliciting predicted facial expressions from a subject who is administered questions, each intended to elicit a certain facial expression that conveys a baseline characteristic of the subject; receiving a video sequence capturing the subject answering the questions; determining an observable physical behavior exhibited by the subject across the series of frames corresponding to each question; associating the observed behavior with the emotional state that corresponds with the facial expression; and training a classifier using these associations. The method further includes receiving a second video sequence capturing the subject during an assessment and applying features extracted from the second video sequence to the classifier to determine the emotional state of the subject in response to an assessment item administered during the assessment.
    Type: Grant
    Filed: May 9, 2016
    Date of Patent: September 19, 2017
    Assignee: XEROX CORPORATION
    Inventors: Matthew Adam Shreve, Jayant Kumar, Raja Bala, Phillip J. Emmett, Megan Clar, Jeyasri Subramanian, Eric Harte
  • Publication number: 20170255831
    Abstract: A method and system for identifying content relevance comprises acquiring video data, mapping the acquired video data to a feature space to obtain a feature representation of the video data, assigning the acquired video data to at least one action class based on the feature representation of the video data, and determining a relevance of the acquired video data.
    Type: Application
    Filed: March 4, 2016
    Publication date: September 7, 2017
    Inventors: Edgar A. Bernal, Qun Li, Yun Zhang, Jayant Kumar, Raja Bala
  • Publication number: 20170220854
    Abstract: A multimodal sensing system includes various devices that work together to automatically classify an action. A video camera captures a sequence of digital images. At least one other sensor device captures other sensed data (e.g., motion data). The system will extract video features from the digital images so that each extracted image feature is associated with a time period. It will extract other features from the other sensed data so that each extracted other feature is associated with a time period. The system will fuse a group of the extracted video features and a group of the extracted other features to create a fused feature representation for a time period. It will then analyze the fused feature representation to identify a class, access a data store of classes and actions to identify an action that is associated with the class, and save the identified action to a memory device.
    Type: Application
    Filed: January 29, 2016
    Publication date: August 3, 2017
    Inventors: Xitong Yang, Edgar A. Bernal, Sriganesh Madhvanath, Raja Bala, Palghat S. Ramesh, Qun Li, Jayant Kumar
  • Publication number: 20170140253
    Abstract: A method and system for domain adaptation based on multi-layer fusion in a convolutional neural network architecture for feature extraction, with a two-step training and fine-tuning scheme. The architecture concatenates features extracted at different depths of the network to form a fully connected layer before the classification step. First, the network is trained as a feature extractor with a large set of images from a source domain. Second, for each new domain (including the source domain), the classification step is fine-tuned with images collected from the corresponding site. The features from different depths are concatenated and fine-tuned, with weights adjusted for the specific task. The architecture is used for classifying high-occupancy vehicle images.
    Type: Application
    Filed: June 10, 2016
    Publication date: May 18, 2017
    Applicant: Xerox Corporation
    Inventors: Safwan Wshah, Beilei Xu, Orhan Bulan, Jayant Kumar, Peter Paul
  • Patent number: 9594949
    Abstract: A method, computer readable medium and apparatus for verifying an identity of an individual based upon facial expressions as exhibited in a query video of the individual are disclosed. The method includes receiving a reference video for each one of a plurality of different individuals, wherein a plurality of facial gesture encoders is extracted from at least one frame of the reference video describing one or more facial expressions of each one of the plurality of different individuals, receiving the query video, calculating a similarity score for the reference video for the each one of the plurality of different individuals based on an analysis that compares the plurality of facial gesture encoders of the at least one frame of the reference video for the each one of the plurality of different individuals to a plurality of facial gesture encoders extracted from at least one frame of the query video.
    Type: Grant
    Filed: August 31, 2015
    Date of Patent: March 14, 2017
    Assignee: Xerox Corporation
    Inventors: Matthew Adam Shreve, Jayant Kumar, Qun Li, Edgar A. Bernal, Raja Bala
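The similarity-score computation above compares per-frame facial gesture encoders between a query video and each reference video. The sketch below assumes the encoders are numeric vectors, frames are already aligned pairwise, and cosine similarity is the comparison function; the patent specifies none of these details:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two encoder vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def similarity_score(query_encoders: list[list[float]],
                     reference_encoders: list[list[float]]) -> float:
    """Average per-frame similarity between the facial gesture encoders
    of the query video and one individual's reference video."""
    sims = [cosine(q, r) for q, r in zip(query_encoders, reference_encoders)]
    return sum(sims) / len(sims)

def best_identity(query_encoders, references_by_person):
    """Score the query against each individual's reference encoders and
    return the identity with the highest similarity score."""
    return max(references_by_person,
               key=lambda p: similarity_score(query_encoders, references_by_person[p]))
```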
  • Publication number: 20170061202
    Abstract: A method, computer readable medium and apparatus for verifying an identity of an individual based upon facial expressions as exhibited in a query video of the individual are disclosed. The method includes receiving a reference video for each one of a plurality of different individuals, wherein a plurality of facial gesture encoders is extracted from at least one frame of the reference video describing one or more facial expressions of each one of the plurality of different individuals, receiving the query video, calculating a similarity score for the reference video for the each one of the plurality of different individuals based on an analysis that compares the plurality of facial gesture encoders of the at least one frame of the reference video for the each one of the plurality of different individuals to a plurality of facial gesture encoders extracted from at least one frame of the query video.
    Type: Application
    Filed: August 31, 2015
    Publication date: March 2, 2017
    Inventors: Matthew Adam Shreve, Jayant Kumar, Qun Li, Edgar A. Bernal, Raja Bala