Patents by Inventor Deepak Khosla

Deepak Khosla has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10410092
    Abstract: A system and method of automated classification of rock types includes: partitioning, by a processing device, an image into partitions; extracting, by the processing device, sub-images from each of the partitions; first-level classifying, by an automated classifier, the sub-images into corresponding first classes; and second-level classifying, by the processing device, the partitions into corresponding second classes by, for each partition of the partitions, selecting a most numerous one of the corresponding first classes of the sub-images extracted from the partition. A method of displaying automated classification results on a display device is provided. The method includes: receiving, by a processing device, an image partitioned into partitions and classified into corresponding classes; and manipulating, by the processing device, the display device to display the image together with visual identification of the partitions and their corresponding classes.
    Type: Grant
    Filed: December 15, 2016
    Date of Patent: September 10, 2019
    Assignee: HRL Laboratories, LLC
    Inventors: Yang Chen, Deepak Khosla, Fredy Monterroza, Ryan M. Uhlenbrock
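The two-level scheme in the abstract above can be sketched as follows. This is a minimal illustration only: the toy classifier and the rock-class names are hypothetical stand-ins, not the patented implementation.

```python
# Sketch of two-level classification: sub-images from each partition are
# classified individually (first level), then each partition is assigned
# the most numerous sub-image class (second level, majority vote).
from collections import Counter

def classify_partition(sub_image_classes):
    """Second level: select the most numerous first-level class."""
    return Counter(sub_image_classes).most_common(1)[0][0]

def classify_image(partitions, first_level_classifier):
    """First-level classify every sub-image, then vote per partition."""
    results = {}
    for name, sub_images in partitions.items():
        first_classes = [first_level_classifier(s) for s in sub_images]
        results[name] = classify_partition(first_classes)
    return results

# Toy stand-in classifier: labels a sub-image by its summed intensity.
toy_classifier = lambda sub: "shale" if sum(sub) > 10 else "sandstone"
partitions = {"p0": [[9, 9], [1, 1], [8, 8]],
              "p1": [[0, 1], [2, 2], [9, 9]]}
print(classify_image(partitions, toy_classifier))
# {'p0': 'shale', 'p1': 'sandstone'}
```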
  • Patent number: 10402699
    Abstract: A method for training an automated classifier of input images includes: receiving, by a processing device, a convolution neural network (CNN) model; receiving, by the processing device, training images and corresponding classes, each of the corresponding classes being associated with several ones of the training images; preparing, by the processing device, the training images, including separating the training images into a training set of the training images and a testing set of the training images; and training, by the processing device, the CNN model utilizing the training set, the testing set, and the corresponding classes to generate the automated classifier.
    Type: Grant
    Filed: December 15, 2016
    Date of Patent: September 3, 2019
    Assignee: HRL Laboratories, LLC
    Inventors: Yang Chen, Deepak Khosla, Fredy Monterroza, Ryan M. Uhlenbrock
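The preparation step described above (separating training images into a training set and a testing set) can be sketched as below; the fraction and seed are illustrative defaults, not values from the patent.

```python
# Sketch of preparing training images: shuffle image/label pairs, then
# split them into a training set and a held-out testing set.
import random

def prepare_training_images(images, labels, test_fraction=0.2, seed=0):
    """Shuffle image/label pairs and split into train and test sets."""
    pairs = list(zip(images, labels))
    random.Random(seed).shuffle(pairs)  # deterministic shuffle
    n_test = max(1, int(len(pairs) * test_fraction))
    test, train = pairs[:n_test], pairs[n_test:]
    return train, test

images = [f"img_{i}" for i in range(10)]
labels = [i % 2 for i in range(10)]
train, test = prepare_training_images(images, labels)
print(len(train), len(test))  # 8 2
```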
  • Patent number: 10387774
    Abstract: Described is a system for converting convolutional neural networks to spiking neural networks. A convolutional neural network (CNN) is adapted to fit a set of requirements of a spiking neural network (SNN), resulting in an adapted CNN. The adapted CNN is trained to obtain a set of learned weights, and the set of learned weights is then applied to a converted SNN having an architecture similar to the adapted CNN. The converted SNN is then implemented on neuromorphic hardware, resulting in reduced power consumption.
    Type: Grant
    Filed: January 30, 2015
    Date of Patent: August 20, 2019
    Assignee: HRL Laboratories, LLC
    Inventors: Yongqiang Cao, Yang Chen, Deepak Khosla
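The core idea of rate-based CNN-to-SNN conversion can be illustrated at the single-neuron level: a ReLU activation is approximated by the firing rate of an integrate-and-fire neuron driven by the same weighted input. This is a generic sketch of the conversion principle, not the patented method; the threshold and window length are arbitrary.

```python
# A ReLU unit versus an integrate-and-fire (IF) spiking neuron: over a
# long simulation window, the IF neuron's firing rate approximates the
# ReLU output for the same constant weighted input.
def relu(x):
    return max(0.0, x)

def if_neuron_rate(weighted_input, timesteps=1000, threshold=1.0):
    """Count spikes of an IF neuron over a simulation window."""
    membrane, spikes = 0.0, 0
    for _ in range(timesteps):
        membrane += weighted_input
        if membrane >= threshold:
            membrane -= threshold  # reset by subtraction
            spikes += 1
    return spikes / timesteps  # firing rate ~= ReLU output

for x in (0.3, 0.7, -0.2):
    print(relu(x), round(if_neuron_rate(x), 2))
```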
  • Publication number: 20190251358
    Abstract: Described is a system for visual activity recognition. In operation, the system detects a set of objects of interest (OI) in video data and determines an object classification for each object in the set of OI, the set including at least one OI. A corresponding activity track is formed for each object in the set of OI by tracking each object across frames. Using a feature extractor, the system determines a corresponding feature in the video data for each OI, which is then used to determine a corresponding initial activity classification for each OI. One or more OI are then detected in each activity track via foveation, with the initial object detection and foveated object detection thereafter being appended into a new detected-objects list. Finally, a final classification is provided for each activity track using the new detected-objects list and filtering the initial activity classification results using contextual logic.
    Type: Application
    Filed: January 14, 2019
    Publication date: August 15, 2019
    Inventors: Deepak Khosla, Ryan M. Uhlenbrock, Huapeng Su, Yang Chen
  • Patent number: 10373335
    Abstract: Described is a system for location recognition for mobile platforms, such as autonomous robotic exploration. In operation, an image in front of the platform is converted into a high-dimensional feature vector. The image reflects a scene proximate the mobile platform. A candidate location identification of the scene is then determined. The candidate location identification is then stored in a history buffer. Upon receiving a cue, the system then determines if the candidate location identification is a known location or a new location.
    Type: Grant
    Filed: January 5, 2017
    Date of Patent: August 6, 2019
    Assignee: HRL Laboratories, LLC
    Inventors: Yang Chen, Jiejun Xu, Deepak Khosla
  • Publication number: 20190180119
    Abstract: Described is an object recognition system. Using an integral channel features (ICF) detector, the system extracts a candidate target region (having an associated original confidence score representing a candidate object) from an input image of a scene surrounding a platform. A modified confidence score is generated based on a location and height of detection of the candidate object. The candidate target regions are classified based on the modified confidence score using a trained convolutional neural network (CNN) classifier, resulting in classified objects. The classified objects are tracked using a multi-target tracker for final classification of each classified object as a target or non-target. If the classified object is a target, a device can be controlled based on the target.
    Type: Application
    Filed: February 14, 2019
    Publication date: June 13, 2019
    Inventors: Yang Chen, Deepak Khosla, Ryan M. Uhlenbrock
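The confidence-modification step above can be sketched as follows. The weighting function is purely illustrative (the abstract does not specify it): here a detection is penalized when its box height departs from a size prior tied to its image row.

```python
# Hedged sketch: adjust a detection's original confidence score by how
# plausible its height is for its image location (illustrative prior).
def modified_confidence(score, box_height, box_y, expected_height_at):
    expected = expected_height_at(box_y)
    # Penalize detections whose height departs from the expected size.
    ratio = min(box_height, expected) / max(box_height, expected)
    return score * ratio

# Illustrative size prior: expected pixel height proportional to row.
expected_height_at = lambda y: y / 8.0
print(modified_confidence(0.9, 25.0, 200.0, expected_height_at))  # 0.9
```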
  • Patent number: 10289910
    Abstract: Described is a system for real-time object recognition. During operation, the system extracts convolutional neural network (CNN) feature vectors from an input image. The input image reflects a scene proximate the system, with the feature vector representing an object in the input image. The CNN feature vector is matched against feature vectors stored in a feature dictionary to identify k nearest neighbors for each object class stored in the feature dictionary. The matching results in a probability distribution over object classes stored in the feature dictionary. The probability distribution provides a confidence score that each of the object classes in the feature dictionary is representative of the object in the input image. Based on the confidence scores, the object in the input image is then recognized as being a particular object class when the confidence score for the particular object class exceeds a threshold.
    Type: Grant
    Filed: January 5, 2017
    Date of Patent: May 14, 2019
    Assignee: HRL Laboratories, LLC
    Inventors: Yang Chen, Deepak Khosla
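The k-nearest-neighbor matching and thresholding described above can be sketched as below; the feature dictionary, distances, and class names are illustrative, and a real system would use high-dimensional CNN features rather than 2-D points.

```python
# Sketch of k-NN matching against a feature dictionary: votes among the
# k nearest entries give a per-class confidence, and recognition succeeds
# only when the top confidence exceeds a threshold.
import math

def knn_confidences(query, dictionary, k=3):
    """dictionary: list of (feature_vector, class_label) pairs."""
    dists = sorted((math.dist(query, vec), label)
                   for vec, label in dictionary)
    votes = {}
    for _, label in dists[:k]:
        votes[label] = votes.get(label, 0) + 1
    return {label: count / k for label, count in votes.items()}

def recognize(query, dictionary, k=3, threshold=0.5):
    conf = knn_confidences(query, dictionary, k)
    best = max(conf, key=conf.get)
    return best if conf[best] > threshold else None

dictionary = [([0.0, 0.0], "cup"), ([0.1, 0.1], "cup"),
              ([1.0, 1.0], "phone"), ([0.9, 1.1], "phone")]
print(recognize([0.05, 0.0], dictionary))  # prints "cup"
```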
  • Patent number: 10275668
    Abstract: Described is a system for collision detection and avoidance estimation using sub-region based optical flow. During operation, the system estimates time-to-contact (TTC) values for an obstacle in multiple regions-of-interest (ROI) in successive image frames as obtained from a monocular camera. Based on the TTC values, the system detects if there is an imminent obstacle. If there is an imminent obstacle, a path for avoiding the obstacle is determined based on the TTC values in the multiple ROI. Finally, a mobile platform is caused to move in the path as determined to avoid the obstacle.
    Type: Grant
    Filed: September 20, 2016
    Date of Patent: April 30, 2019
    Assignee: HRL Laboratories, LLC
    Inventors: Benjamin Nuernberger, Deepak Khosla, Kyungnam Kim, Yang Chen, Fredy Monterroza
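A common way to estimate time-to-contact from a monocular camera — which the sub-region logic above builds on — is from apparent size change between frames: if an obstacle's image size s grows by ds over time dt, then TTC ≈ s·dt/ds. The sketch below uses that standard relation and an illustrative "steer toward the largest TTC" rule; neither is claimed to be the patented formulation.

```python
# TTC from apparent expansion between two frames, plus a simple
# avoidance rule: move toward the region of interest with the most time.
def time_to_contact(size_prev, size_curr, dt=1.0):
    expansion = size_curr - size_prev
    if expansion <= 0:
        return float("inf")  # not approaching
    return size_curr * dt / expansion

def pick_escape_direction(roi_ttcs):
    """roi_ttcs: dict mapping direction name -> TTC estimate."""
    return max(roi_ttcs, key=roi_ttcs.get)

ttcs = {"left": time_to_contact(10, 12),    # 6.0 frames to contact
        "center": time_to_contact(10, 20),  # 2.0 frames to contact
        "right": time_to_contact(10, 10)}   # inf (no expansion)
print(pick_escape_direction(ttcs))  # prints "right"
```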
  • Patent number: 10255480
    Abstract: A system includes a processor configured to generate a registered first 3D point cloud based on a first 3D point cloud and a second 3D point cloud. The processor is configured to generate a registered second 3D point cloud based on the first 3D point cloud and a third 3D point cloud. The processor is configured to generate a combined 3D point cloud based on the registered first 3D point cloud and the registered second 3D point cloud. The processor is configured to compare the combined 3D point cloud with a mesh model of the object. The processor is configured to generate, based on the comparison, output data indicating differences between the object as represented by the combined 3D point cloud and the object as represented by the 3D model. The system includes a display configured to display a graphical display of the differences.
    Type: Grant
    Filed: May 15, 2017
    Date of Patent: April 9, 2019
    Assignee: THE BOEING COMPANY
    Inventors: Ryan Uhlenbrock, Deepak Khosla, Yang Chen, Kevin R. Martin
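The comparison step above can be sketched with a simple point-to-model deviation check: after registration, each point of the combined cloud is measured against the reference model, and large deviations are flagged as differences. The brute-force nearest-neighbor search and tolerance below are illustrative only (a real pipeline would compare against mesh surfaces with a spatial index).

```python
# Nearest-neighbor deviation of each cloud point from a set of model
# points; points beyond a tolerance are flagged as differences.
import math

def cloud_to_model_deviations(cloud, model_points):
    return [min(math.dist(p, m) for m in model_points) for p in cloud]

def flag_differences(cloud, model_points, tolerance=0.05):
    devs = cloud_to_model_deviations(cloud, model_points)
    return [p for p, d in zip(cloud, devs) if d > tolerance]

model = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
cloud = [(0.01, 0.0, 0.0), (1.0, 0.2, 0.0)]
print(flag_differences(cloud, model))  # [(1.0, 0.2, 0.0)]
```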
  • Patent number: 10198689
    Abstract: Described is a system for object detection in images or videos using spiking neural networks. An intensity saliency map is generated from an intensity of an input image having color components using a spiking neural network. Additionally, a color saliency map is generated from a plurality of colors in the input image using a spiking neural network. An object detection model is generated by combining the intensity saliency map and multiple color saliency maps. The object detection model is used to detect multiple objects of interest in the input image.
    Type: Grant
    Filed: September 19, 2016
    Date of Patent: February 5, 2019
    Assignee: HRL Laboratories, LLC
    Inventors: Yongqiang Cao, Qin Jiang, Yang Chen, Deepak Khosla
  • Publication number: 20190005330
    Abstract: Described is a system and method for accurate image and/or video scene classification. More specifically, described is a system that makes use of a specialized convolutional neural network (CNN)-based technique for the fusion of bottom-up whole-image features and top-down entity classification. When the two parallel and independent processing paths are fused, the system provides an accurate classification of the scene as depicted in the image or video.
    Type: Application
    Filed: February 8, 2017
    Publication date: January 3, 2019
    Inventors: Ryan M. Uhlenbrock, Deepak Khosla, Yang Chen, Fredy Monterroza
  • Publication number: 20180341832
    Abstract: Described is a system for converting a convolutional neural network (CNN) designed and trained for color (RGB) images to one that works on infrared (IR) or grayscale images. The converted CNN comprises a series of convolution layers of neurons arranged in a set of kernels having corresponding depth slices. The converted CNN is used for performing object detection. A mechanical component of an autonomous device is controlled based on the object detection.
    Type: Application
    Filed: March 23, 2018
    Publication date: November 29, 2018
    Inventors: Ryan M. Uhlenbrock, Yang Chen, Deepak Khosla
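One well-known way to adapt an RGB first layer to single-channel input — consistent with the depth-slice framing above, though not necessarily the patented procedure — is to collapse each first-layer kernel across its three color depth slices:

```python
# Sketch: collapse an RGB kernel of shape (3, k, k) for single-channel
# (IR/grayscale) input by averaging its weights across the depth slices.
def collapse_rgb_kernel(kernel_rgb):
    """kernel_rgb: list of 3 depth slices, each a k x k weight grid."""
    r, g, b = kernel_rgb
    return [[(r[i][j] + g[i][j] + b[i][j]) / 3.0
             for j in range(len(r[0]))] for i in range(len(r))]

kernel = [[[0.25, 0.0], [0.0, 0.25]],    # R slice
          [[0.5, 0.0], [0.0, 0.5]],      # G slice
          [[0.75, 0.75], [0.75, 0.75]]]  # B slice
print(collapse_rgb_kernel(kernel))  # [[0.5, 0.25], [0.25, 0.5]]
```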
  • Patent number: 10134135
    Abstract: Described is a system for finding open space for robot exploration. During operation, the system designates a straight forward direction as a default direction. The system determines whether a filtered 3D point cloud has a sufficient number of cloud points. If not, the system checks whether an obstacle was too close at a previous time; if so, a backward direction is set as the target direction, and if not, the straight forward direction is set as the target direction. Alternatively, if there are a sufficient number of points, the system calculates the distance of each point to the sensor center and determines whether the number of points closer than a distance threshold exceeds a fixed number. The system either sets the backward direction as the target direction or estimates an openness of each candidate direction until a candidate direction is set as the target direction.
    Type: Grant
    Filed: August 29, 2016
    Date of Patent: November 20, 2018
    Assignee: HRL Laboratories, LLC
    Inventors: Lei Zhang, Kyungnam Kim, Deepak Khosla, Jiejun Xu
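The decision flow in the abstract above can be sketched as a small function. The thresholds, counts, and the final "estimate openness" branch placeholder are illustrative assumptions, not values from the patent.

```python
# Hedged sketch of the open-space decision flow: fall back to a default
# direction when the filtered cloud is too sparse, back off when too many
# points are closer than a distance threshold, otherwise go on to score
# candidate directions by openness.
import math

def choose_direction(points, sensor=(0.0, 0.0, 0.0),
                     min_points=5, dist_threshold=1.0, max_close=3,
                     obstacle_was_close=False):
    if len(points) < min_points:
        return "backward" if obstacle_was_close else "forward"
    close = sum(1 for p in points if math.dist(p, sensor) < dist_threshold)
    if close > max_close:
        return "backward"
    return "estimate-openness"  # evaluate candidate directions next

far = [(2.0, 0.0, 0.0)] * 6
near = [(0.1, 0.0, 0.0)] * 6
print(choose_direction(far))   # estimate-openness
print(choose_direction(near))  # backward
print(choose_direction([]))    # forward
```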
  • Publication number: 20180330149
    Abstract: A system includes a processor configured to generate a registered first 3D point cloud based on a first 3D point cloud and a second 3D point cloud. The processor is configured to generate a registered second 3D point cloud based on the first 3D point cloud and a third 3D point cloud. The processor is configured to generate a combined 3D point cloud based on the registered first 3D point cloud and the registered second 3D point cloud. The processor is configured to compare the combined 3D point cloud with a mesh model of the object. The processor is configured to generate, based on the comparison, output data indicating differences between the object as represented by the combined 3D point cloud and the object as represented by the 3D model. The system includes a display configured to display a graphical display of the differences.
    Type: Application
    Filed: May 15, 2017
    Publication date: November 15, 2018
    Inventors: Ryan Uhlenbrock, Deepak Khosla, Yang Chen, Kevin R. Martin
  • Publication number: 20180300553
    Abstract: Described is a system for visual activity recognition that includes one or more processors and a memory, the memory being a non-transitory computer-readable medium having executable instructions encoded thereon, such that upon execution of the instructions, the one or more processors perform operations including detecting a set of objects of interest in video data and determining an object classification for each object in the set of objects of interest, the set including at least one object of interest. The one or more processors further perform operations including forming a corresponding activity track for each object in the set of objects of interest by tracking each object across frames. The one or more processors further perform operations including, for each object of interest and using a feature extractor, determining a corresponding feature in the video data. The system may provide a report to a user's cell phone or central monitoring facility.
    Type: Application
    Filed: April 6, 2018
    Publication date: October 18, 2018
    Inventors: Deepak Khosla, Ryan M. Uhlenbrock, Yang Chen
  • Patent number: 10068336
    Abstract: Described is a system for frontal and side doorway detection. Salient line segments are extracted from an image frame captured of an indoor environment. Existence of a vanishing point in the image frame is determined. If a vanishing point is detected with a confidence score that meets or exceeds a predetermined confidence score, then the system performs side doorway detection via a side doorway detection module. If a vanishing point is detected with a confidence score below the predetermined confidence score, then the system performs frontal doorway detection via a frontal doorway detection module. A description of detected doorways is output and used by a mobile robot (e.g., an unmanned aerial vehicle) to autonomously navigate the indoor environment.
    Type: Grant
    Filed: August 29, 2016
    Date of Patent: September 4, 2018
    Assignee: HRL Laboratories, LLC
    Inventors: Lei Zhang, Fredy Monterroza, Kyungnam Kim, Jiejun Xu, Yang Chen, Deepak Khosla
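The confidence-based branching above reduces to a small dispatcher; the threshold value and branch names below are illustrative stand-ins for the two detection modules.

```python
# Sketch of the frontal/side branching: side-doorway detection runs when
# a vanishing point is found with sufficient confidence, otherwise the
# frontal-doorway detector runs.
def detect_doorways(vanishing_point_confidence, confidence_threshold=0.6):
    if vanishing_point_confidence >= confidence_threshold:
        return "side-doorway-detection"
    return "frontal-doorway-detection"

print(detect_doorways(0.8))  # side-doorway-detection
print(detect_doorways(0.3))  # frontal-doorway-detection
```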
  • Patent number: 9984326
    Abstract: Described is a system for simulating spiking neural networks for image and video processing. The system processes an image with a spiking neural network simulator having a plurality of inter-connected modules. Each module comprises a plurality of neuron elements. Processing the image further comprises performing a neuron state update for each module, which includes aggregating input spikes and updating neuron membrane potentials, and performing spike propagation for each module, which includes transferring spikes generated in a current time step. Finally, an analysis result is output.
    Type: Grant
    Filed: April 6, 2015
    Date of Patent: May 29, 2018
    Assignee: HRL Laboratories, LLC
    Inventors: Yang Chen, Yongqiang Cao, Deepak Khosla
  • Patent number: 9978149
    Abstract: Described is a system for door detection for use with an unmanned aerial vehicle (UAV). The system receives a video input image from a single monocular camera. Edge points are detected in the video, with the edge points connected to form long edge lines. Orientations of the long edge lines are determined, such that long edge lines having a substantially vertical orientation are designated as initial door line candidates and long edge lines having a non-vertical, non-horizontal orientation are designated for use in detecting a vanishing point. A vanishing point is then detected in the video frame. Thereafter, intensity profile and line properties of the door line candidates are calculated. Finally, it is verified whether the door line candidates are real-world door lines and, if so, an area between the real-world door lines is designated as an open door.
    Type: Grant
    Filed: December 28, 2015
    Date of Patent: May 22, 2018
    Assignee: HRL Laboratories, LLC
    Inventors: Lei Zhang, Kyungnam Kim, Deepak Khosla
  • Patent number: 9933264
    Abstract: Described is a robotic system for detecting obstacles reliably with their ranges by a combination of two-dimensional and three-dimensional sensing. In operation, the system receives an image from a monocular video and range depth data from a range sensor of a scene proximate a mobile platform. The image is segmented into multiple object regions of interest and time-to-contact (TTC) values are calculated by estimating motion field and operating on image intensities. A two-dimensional (2D) TTC map is then generated by estimating average TTC values over the multiple object regions of interest. A three-dimensional (3D) TTC map is then generated by fusing the range depth data with the image. Finally, a range-fused TTC map is generated by averaging the 2D TTC map and the 3D TTC map.
    Type: Grant
    Filed: February 9, 2017
    Date of Patent: April 3, 2018
    Assignee: HRL Laboratories, LLC
    Inventors: Fredy Monterroza, Kyungnam Kim, Deepak Khosla
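The final fusion step above — averaging the 2D and 3D TTC maps — can be sketched per region as follows; the region names and TTC values are illustrative.

```python
# Sketch of range fusion: element-wise average of the optical-flow (2D)
# TTC map and the range-sensor (3D) TTC map over the same regions.
def fuse_ttc_maps(ttc_2d, ttc_3d):
    return {region: (ttc_2d[region] + ttc_3d[region]) / 2.0
            for region in ttc_2d}

ttc_2d = {"left": 4.0, "center": 2.0, "right": 6.0}
ttc_3d = {"left": 6.0, "center": 2.0, "right": 4.0}
print(fuse_ttc_maps(ttc_2d, ttc_3d))
# {'left': 5.0, 'center': 2.0, 'right': 5.0}
```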
  • Patent number: 9934437
    Abstract: Described is a system for collision detection. The system divides an image in a sequence of images into multiple sub-fields comprising complementary visual sub-fields. For each visual sub-field, motion is detected in a direction corresponding to the visual sub-field using a spiking Reichardt detector with a spiking neural network. Motion in a direction complementary to the visual sub-field is also detected using the spiking Reichardt detector. Outputs of the spiking Reichardt detector, comprising data corresponding to one direction of movement from two complementary visual sub-fields, are processed using a movement detector. Based on the output of the movement detector, an impending collision is signaled.
    Type: Grant
    Filed: July 9, 2015
    Date of Patent: April 3, 2018
    Assignee: HRL Laboratories, LLC
    Inventors: Yongqiang Cao, Deepak Khosla, Yang Chen, David J. Huber
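The classical (non-spiking) Reichardt correlator underlying the detector above can be sketched in a few lines: each half-detector multiplies one sensor's signal by a delayed copy of its neighbor's, and the difference of the two mirrored halves is signed by motion direction. The signals and delay below are illustrative; the patent's spiking-network realization is not reproduced here.

```python
# Minimal Reichardt correlator: preferred-direction correlation minus the
# mirrored null-direction correlation; positive output means motion from
# the "left" sensor toward the "right" sensor.
def reichardt_response(left_signal, right_signal, delay=1):
    total = 0.0
    for t in range(delay, len(left_signal)):
        total += (left_signal[t - delay] * right_signal[t]
                  - right_signal[t - delay] * left_signal[t])
    return total

# A bright edge passing left-to-right: the left sensor fires one step
# before the right sensor, so the preferred correlation dominates.
left = [0, 1, 0, 0]
right = [0, 0, 1, 0]
print(reichardt_response(left, right))   # 1.0 (rightward motion)
print(reichardt_response(right, left))   # -1.0 (leftward motion)
```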