Patents by Inventor Deepak Khosla
Deepak Khosla has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240135167
Abstract: An example includes a method for training an agent to control an aircraft. The method includes: selecting, by the agent, first actions for the aircraft to perform within a first environment respectively during first time intervals based on first states of the first environment during the first time intervals, updating the agent based on first rewards that correspond respectively to the first states, selecting, by the agent, second actions for the aircraft to perform within a second environment respectively during second time intervals based on second states of the second environment during the second time intervals, and updating the agent based on second rewards that correspond respectively to the second states. At least one first rule of the first environment is different from at least one rule of the second environment.
Type: Application
Filed: October 24, 2022
Publication date: April 25, 2024
Inventors: Yang Chen, Fan Hung, Deepak Khosla, Sean Soleyman, Joshua G. Fadaie
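As a rough illustration of the two-environment training scheme this abstract describes (not the patented implementation), the sketch below trains a simple Q-learning agent in two toy environments whose rules differ. The agent, the corridor environments, and the reward values are all hypothetical stand-ins.

```python
import random

class TabularAgent:
    """Minimal Q-learning agent; an illustrative stand-in for the claimed agent."""
    def __init__(self, actions, lr=0.1, gamma=0.9):
        self.q = {}              # (state, action) -> estimated value
        self.actions = actions
        self.lr = lr
        self.gamma = gamma

    def select_action(self, state, epsilon=0.1):
        if random.random() < epsilon:
            return random.choice(self.actions)   # occasional exploration
        return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        # One-step Q-learning update driven by the per-state reward.
        best_next = max(self.q.get((next_state, a), 0.0) for a in self.actions)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.lr * (reward + self.gamma * best_next - old)

class CorridorEnv:
    """Toy 1-D corridor; `goal` stands in for a rule that differs between environments."""
    def __init__(self, goal):
        self.goal = goal
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, action):                      # action is -1 or +1
        self.pos = max(-5, min(5, self.pos + action))
        done = self.pos == self.goal
        return self.pos, (1.0 if done else -0.01), done

def train_in_environment(agent, env, episodes=50, horizon=20):
    """One training phase: select actions over time intervals, update on rewards."""
    for _ in range(episodes):
        state = env.reset()
        for _ in range(horizon):
            action = agent.select_action(state)
            next_state, reward, done = env.step(action)
            agent.update(state, action, reward, next_state)
            state = next_state
            if done:
                break

random.seed(0)
agent = TabularAgent(actions=[-1, 1])
train_in_environment(agent, CorridorEnv(goal=4))    # first environment
train_in_environment(agent, CorridorEnv(goal=-4))   # second environment, rule differs
```

The same agent object is updated across both phases, mirroring the claim's sequence of selecting actions and updating on rewards under differing environment rules.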
-
Patent number: 11941840
Abstract: Apparatuses and methods determine the three-dimensional position and orientation of a fiducial marker and track the three-dimensional position and orientation across different fields-of-view. Methods include: capturing an image of a first space in which the fiducial marker is disposed with a first sensor having a first field-of-view; determining the three-dimensional location and orientation of the fiducial marker within the first space based on the image of the first space in which the fiducial marker is disposed; capturing an image of a second space in which the fiducial marker is disposed with a second sensor having a second field-of-view; calculating pan and tilt information for the second sensor to move the second field-of-view of the second sensor to acquire an image of the fiducial marker; and determining the three-dimensional location and orientation of the fiducial marker within the second space based on the image of the second space.
Type: Grant
Filed: September 21, 2021
Date of Patent: March 26, 2024
Assignee: THE BOEING COMPANY
Inventors: Yang Chen, Deepak Khosla, David Huber, Brandon M. Courter, Shane E. Arthur, Chris A. Cantrell, Anthony W. Baker
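The pan-and-tilt calculation named in this abstract can be sketched geometrically. The sketch below assumes a hypothetical sensor frame (z along the boresight, x to the right, y down) and is not taken from the patent.

```python
import math

def pan_tilt_to_target(target_xyz):
    """Angles (radians) that aim the sensor boresight at target_xyz.

    Assumed frame: z forward along the boresight, x right, y down.
    """
    x, y, z = target_xyz
    pan = math.atan2(x, z)                       # rotation about the vertical axis
    tilt = math.atan2(-y, math.hypot(x, z))      # elevation toward the target
    return pan, tilt

# Marker located 1 m forward and 1 m to the right: pan 45 degrees, no tilt.
pan, tilt = pan_tilt_to_target((1.0, 0.0, 1.0))
```

In the patented workflow, the marker location estimated by the first sensor would feed a computation of this kind so the second sensor's field of view can be driven onto the marker.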
-
Patent number: 11734924
Abstract: Described is a system for onboard, real-time activity detection and classification. During operation, the system detects one or more objects in a scene using a mobile platform and tracks each of the objects as the objects move in the scene to generate tracks for each object. The tracks are transformed using a fuzzy-logic based mapping to semantics that define group activities of the one or more objects in the scene. Finally, a state machine is used to determine whether the defined group activities are normal or abnormal phases of a predetermined group operation.
Type: Grant
Filed: March 8, 2021
Date of Patent: August 22, 2023
Assignee: HRL LABORATORIES, LLC
Inventors: Leon Nguyen, Deepak Khosla
-
Publication number: 20230215042
Abstract: Aspects of the disclosure provide fuel receptacle position estimation for aerial refueling (derived from aircraft position estimation). A video stream comprising a plurality of video frames, each showing an aircraft to be refueled, is received from a single camera. An initial position estimate is determined for the aircraft for the plurality of video frames, generating an estimated flight history for the aircraft. The estimated flight history for the aircraft is used to determine a temporally consistent refined position estimate, based on known aircraft flight path trajectories in an aerial refueling setting. The position of a fuel receptacle on the aircraft is determined, based on the refined position estimate for the aircraft, and an aerial refueling boom may be controlled to engage the fuel receptacle. Examples may use a deep learning neural network (NN) or optimization (e.g., bundle adjustment) to determine the refined position estimate from the estimated flight history.
Type: Application
Filed: January 5, 2022
Publication date: July 6, 2023
Inventors: Leon Nhat Nguyen, Haden Harrison Smith, Fan Hin Hung, Deepak Khosla
-
Publication number: 20230215041
Abstract: Aspects of the disclosure provide fuel receptacle position/pose estimation for aerial refueling (derived from aircraft position and pose estimation). A video frame, showing an aircraft to be refueled, is received from a single camera. An initial position/pose estimate is determined for the aircraft, which is used to generate an initial rendering of an aircraft model. The video frame and the initial rendering are used to determine refinement parameters (e.g., a translation refinement and a rotational refinement) for the initial position/pose estimate, providing a refined position/pose estimate for the aircraft. The position/pose of a fuel receptacle on the aircraft is determined, based on the refined position/pose estimate for the aircraft, and an aerial refueling boom may be controlled to engage the fuel receptacle. Examples extract features from the aircraft in the video frame and the aircraft model rendering, and use a deep learning neural network (NN) to determine the refinement parameters.
Type: Application
Filed: January 5, 2022
Publication date: July 6, 2023
Inventors: Leon Nhat Nguyen, Haden Harrison Smith, Fan Hin Hung, Deepak Khosla, Taraneh Sadjadpour
-
Publication number: 20230090757
Abstract: Apparatuses and methods determine the three-dimensional position and orientation of a fiducial marker and track the three-dimensional position and orientation across different fields-of-view. Methods include: capturing an image of a first space in which the fiducial marker is disposed with a first sensor having a first field-of-view; determining the three-dimensional location and orientation of the fiducial marker within the first space based on the image of the first space in which the fiducial marker is disposed; capturing an image of a second space in which the fiducial marker is disposed with a second sensor having a second field-of-view; calculating pan and tilt information for the second sensor to move the second field-of-view of the second sensor to acquire an image of the fiducial marker; and determining the three-dimensional location and orientation of the fiducial marker within the second space based on the image of the second space.
Type: Application
Filed: September 21, 2021
Publication date: March 23, 2023
Inventors: Yang CHEN, Deepak KHOSLA, David HUBER, Brandon M. COURTER, Shane E. ARTHUR, Chris A. CANTRELL, Anthony W. BAKER
-
Publication number: 20230086050
Abstract: Apparatuses, systems, and methods dynamically model intrinsic parameters of a camera. Methods include: collecting, using a camera having a focus motor, calibration data at a series of discrete focus motor positions; generating, from the calibration data, a set of constant point intrinsic parameters; determining, from the set of constant point intrinsic parameters, a subset of intrinsic parameters to model dynamically; performing, for each intrinsic parameter of the subset of intrinsic parameters, a fit of the point intrinsic parameter values against focus motor positions; generating a model of the intrinsic parameters for the camera based, at least in part, on the fit of the point intrinsic parameter values against the focus motor positions; and determining a position of a fiducial marker within a field of view of the camera based, at least in part, on the model of the intrinsic parameters for the camera.
Type: Application
Filed: September 21, 2021
Publication date: March 23, 2023
Inventors: Aaron FELDMAN, Deepak KHOSLA, Yang CHEN, David HUBER
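A minimal sketch of the fitting step described in this abstract, using synthetic calibration numbers and an assumed quadratic model (neither is from the patent): each point-calibrated intrinsic parameter, here focal length, is fit against focus motor position so it can be predicted at uncalibrated positions.

```python
import numpy as np

# Discrete focus motor positions at which calibration was performed (assumed).
motor_positions = np.array([0.0, 100.0, 200.0, 300.0, 400.0])

# Point estimates of focal length (pixels) from per-position calibration (synthetic).
focal_lengths = np.array([1500.0, 1512.0, 1527.0, 1545.0, 1566.0])

# Fit the intrinsic parameter against motor position; degree 2 is an assumption.
coeffs = np.polyfit(motor_positions, focal_lengths, deg=2)
focal_model = np.poly1d(coeffs)

# The model now predicts the intrinsic at an uncalibrated motor position.
f_at_250 = focal_model(250.0)
```

The same fit-and-evaluate pattern would be repeated for each intrinsic parameter chosen for dynamic modeling, and the predicted intrinsics would then feed the fiducial-marker position computation.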
-
Patent number: 11586200
Abstract: A method includes receiving, by machine-learning logic, observations indicative of states associated with first and second groups of vehicles arranged within an engagement zone during a first interval of an engagement between the first and the second group of vehicles. The machine-learning logic determines actions based on the observations that, when taken simultaneously by the first group of vehicles during the first interval, are predicted by the machine-learning logic to result in removal of one or more vehicles of the second group of vehicles from the engagement zone during the engagement. The machine-learning logic is trained using a reinforcement learning technique and on simulated engagements between the first and second group of vehicles to determine sequences of actions that are predicted to result in one or more vehicles of the second group being removed from the engagement zone. The machine-learning logic communicates the actions to the first group of vehicles.
Type: Grant
Filed: June 22, 2020
Date of Patent: February 21, 2023
Assignees: The Boeing Company, HRL Laboratories LLC
Inventors: Joshua G. Fadaie, Richard Hanes, Chun Kit Chung, Sean Soleyman, Deepak Khosla
-
Publication number: 20220413496
Abstract: Training adversarial aircraft controllers is provided. The method comprises inputting current observed states of a number of aircraft into a world model encoder, wherein each current state represents a state of a different aircraft, and wherein each current state comprises a missing parameter value. A number of adversarial control actions for the aircraft are input into the world model encoder concurrently with the current observed states, wherein the adversarial control actions are generated by competing neural network controllers. The world model encoder generates a learned observation from the current observed states and adversarial control actions, wherein the learned observation represents the missing parameter value from the current observed states. The learned observation and current observed states are input into the competing neural network controllers, wherein each current observed state is fed into a respective controller.
Type: Application
Filed: March 18, 2022
Publication date: December 29, 2022
Inventors: Sean Soleyman, Yang Chen, Fan Hin Hung, Deepak Khosla, Navid Naderializadeh
-
Publication number: 20220414422
Abstract: A computer-implemented method for predicting behavior of aircraft is provided. The method comprises inputting a current state of a number of aircraft into a number of hidden layers of a neural network, wherein the neural network is fully connected. An action applied to the aircraft is input into the hidden layers concurrently with the current state. The hidden layers, according to the current state and current action, determine a residual output that comprises an incremental difference in the state of the aircraft resulting from the current action. A skip connection feeds forward the current state of the aircraft, and the residual output is added to the current state to determine a next state of the aircraft.
Type: Application
Filed: March 18, 2022
Publication date: December 29, 2022
Inventors: Sean Soleyman, Yang Chen, Fan Hin Hung, Deepak Khosla, Navid Naderializadeh
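The residual-plus-skip-connection structure this abstract describes can be sketched in a few lines of NumPy. The layer sizes and the (untrained) random weights below are illustrative assumptions, not the patented network.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM, HIDDEN = 6, 2, 16     # assumed dimensions

# Untrained weights standing in for the fully connected hidden layers.
W1 = rng.normal(scale=0.1, size=(STATE_DIM + ACTION_DIM, HIDDEN))
W2 = rng.normal(scale=0.1, size=(HIDDEN, STATE_DIM))

def predict_next_state(state, action):
    """Next state = current state + learned residual (skip connection)."""
    x = np.concatenate([state, action])      # state and action fed in concurrently
    hidden = np.tanh(x @ W1)                 # fully connected hidden layer
    residual = hidden @ W2                   # incremental difference in the state
    return state + residual                  # skip connection adds it back

state = np.zeros(STATE_DIM)
action = np.array([1.0, -1.0])
next_state = predict_next_state(state, action)
```

Predicting the increment rather than the full next state is a common trick for dynamics models: the network only has to learn the (usually small) change caused by the action.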
-
Publication number: 20220414460
Abstract: Training an encoder is provided. The method comprises inputting a current state of a number of aircraft into a recurrent layer of a neural network, wherein the current state comprises a reduced state in which a value of a specified parameter is missing. An action applied to the aircraft is input into the recurrent layer concurrently with the current state. The recurrent layer learns a value for the parameter missing from the current state, and the output of the recurrent layer is input into a number of fully connected hidden layers. The hidden layers, according to the current state, learned value, and current action, determine a residual output that comprises an incremental difference in the state of the aircraft resulting from the current action.
Type: Application
Filed: March 18, 2022
Publication date: December 29, 2022
Inventors: Sean Soleyman, Yang Chen, Fan Hin Hung, Deepak Khosla, Navid Naderializadeh
-
Publication number: 20220414283
Abstract: Training a compressive encoder is provided. The method comprises calculating a difference between a current state of an aircraft and a previous state. The current state comprises a reduced state wherein the value of a specified parameter is missing. The difference is input into compressive layers of a neural network comprising an encoder. The compressive layers learn, according to the difference, a value for the missing parameter. The current state and learned value are concurrently fed into hidden layers of a fully connected neural network comprising a decoder. An action applied to the aircraft is input into the hidden layers concurrently with the current state and learned value. The hidden layers, according to the current state, learned value, and current action, determine a residual output that comprises an incremental difference in the state of the aircraft resulting from the current action.
Type: Application
Filed: March 18, 2022
Publication date: December 29, 2022
Inventors: Sean Soleyman, Yang Chen, Fan Hin Hung, Deepak Khosla, Navid Naderializadeh
-
Publication number: 20220404490
Abstract: A method, apparatus and computer program product are provided to generate a model of one or more objects relative to a vehicle. In the context of a method, radar information is received in the form of in-phase quadrature (IQ) data and the IQ data is converted to one or more first range-doppler maps. The method further includes evaluating the one or more first range-doppler maps with a machine learning model to generate the model that captures the detection of the one or more objects relative to the vehicle. A corresponding apparatus and computer program product are also provided.
Type: Application
Filed: March 8, 2022
Publication date: December 22, 2022
Applicant: THE BOEING COMPANY
Inventors: Nick Shadbeh EVANS, William K. LEACH, Deepak KHOSLA, Leon NGUYEN, Michelle D. WARREN
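The IQ-to-range-Doppler conversion this abstract mentions is, in its standard form, a pair of FFTs. The sketch below assumes a frame layout of pulses (slow time) by range samples (fast time) and uses a synthetic single-target frame; it is not code or data from the patent.

```python
import numpy as np

def iq_to_range_doppler(iq_frame):
    """iq_frame: complex array of shape (num_pulses, num_range_samples)."""
    # Range FFT along fast time, then Doppler FFT along slow time,
    # shifted so zero Doppler sits at the center row.
    range_fft = np.fft.fft(iq_frame, axis=1)
    doppler_fft = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)
    return np.abs(doppler_fft)               # magnitude map for the ML model

# Synthetic frame: one target at range bin 10 with a constant Doppler shift.
num_pulses, num_samples = 64, 128
t_fast = np.arange(num_samples)
t_slow = np.arange(num_pulses)[:, None]
iq = np.exp(2j * np.pi * (10 / num_samples) * t_fast) \
   * np.exp(2j * np.pi * 0.1 * t_slow)

rd_map = iq_to_range_doppler(iq)
peak_doppler, peak_range = np.unravel_index(np.argmax(rd_map), rd_map.shape)
```

The resulting magnitude map concentrates the target's energy at its range and Doppler bins, which is the representation the machine-learning model would then evaluate.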
-
Publication number: 20220404831
Abstract: An example method for training a machine learning algorithm (MLA) to control a first aircraft in an environment that comprises the first aircraft and a second aircraft can involve: determining a first-aircraft action for the first aircraft to take within the environment; sending the first-aircraft action to a simulated environment; generating and sending to both the simulated environment and the MLA, randomly-sampled values for each of a set of parameters of the second aircraft different from predetermined fixed values for the set of parameters; receiving an observation of the simulated environment and a reward signal at the MLA, the observation including information about the simulated environment after the first aircraft has taken the first-aircraft action and the second aircraft has taken a second-aircraft action based on the randomly-sampled values; and updating the MLA based on the observation of the simulated environment, the reward signal, and the randomly-sampled values.
Type: Application
Filed: May 11, 2022
Publication date: December 22, 2022
Inventors: Sean Soleyman, Deepak Khosla, Ram Longman
-
Publication number: 20220375222
Abstract: Described is a system and method for accurate image and/or video scene classification. More specifically, described is a system that makes use of a specialized convolutional neural network (hereafter CNN) based technique for the fusion of bottom-up whole-image features and top-down entity classification. When the two parallel and independent processing paths are fused, the system provides an accurate classification of the scene as depicted in the image or video.
Type: Application
Filed: July 14, 2022
Publication date: November 24, 2022
Inventors: Ryan M. Uhlenbrock, Deepak Khosla, Yang Chen, Fredy Monterroza
-
Patent number: 11481634
Abstract: A device includes a control input generator and a neural network trainer. A flight simulator is configured to generate first state data responsive to a first control input from the control input generator and to provide the first state data to a first neural network to generate a candidate second control input. The control input generator is also configured to select, based on a random value, a second control input from between the candidate second control input and a randomized offset control input that is based on a random offset applied to the first control input. The flight simulator is configured to generate second state data responsive to the second control input from the control input generator. The neural network trainer is configured to update weights of the first neural network based, at least in part, on the first state data and the second state data.
Type: Grant
Filed: August 29, 2019
Date of Patent: October 25, 2022
Assignee: THE BOEING COMPANY
Inventors: Yang Chen, Deepak Khosla, Kevin Martin
-
Patent number: 11455893
Abstract: A method includes obtaining multiple sets of trajectory data, each descriptive of trajectories of two or more objects (e.g., first and second objects). The method also includes generating transformed trajectory data based on the trajectory data. Each set of transformed trajectory data is descriptive of the trajectories of the two or more objects in a normalized reference frame in which a movement path of the first object is constrained. The method further includes generating feature data, performing a clustering operation based on the feature data to generate a set of trajectory clusters, and generating training data based on the set of trajectory clusters. The method further includes using the training data to train a machine learning classifier to classify particular trajectory patterns.
Type: Grant
Filed: March 12, 2020
Date of Patent: September 27, 2022
Assignee: THE BOEING COMPANY
Inventors: Nigel Stepp, Sean Soleyman, Deepak Khosla
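The normalized-reference-frame idea in this abstract can be sketched for 2-D trajectories: anchor the frame to the first object's start and initial heading so the second object's relative geometry becomes comparable across data sets. The particular anchoring and the feature vector below are illustrative assumptions, not the patented transformation.

```python
import numpy as np

def normalize_pair(traj_a, traj_b):
    """traj_*: arrays of shape (T, 2). Frame is anchored to traj_a's start,
    with traj_a's initial heading rotated onto the +x axis."""
    origin = traj_a[0]
    heading = traj_a[1] - traj_a[0]
    angle = np.arctan2(heading[1], heading[0])
    c, s = np.cos(-angle), np.sin(-angle)
    rot = np.array([[c, -s], [s, c]])
    return (traj_a - origin) @ rot.T, (traj_b - origin) @ rot.T

def features(traj_a, traj_b):
    """Simple relative-geometry features that could feed a clustering step."""
    dist = np.linalg.norm(traj_b - traj_a, axis=1)
    return np.array([dist.mean(), dist.min(), dist[-1] - dist[0]])

# Toy example: object A heads northeast; object B closes on it head-on.
a = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
b = np.array([[5.0, 0.0], [4.0, 1.0], [3.0, 2.0]])
na, nb = normalize_pair(a, b)
feats = features(na, nb)
```

Features extracted in this frame are invariant to where and in which direction an encounter happened, which is what makes clustering and downstream classifier training meaningful across many recorded trajectories.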
-
Patent number: 11423651
Abstract: Described is a system and method for accurate image and/or video scene classification. More specifically, described is a system that makes use of a specialized convolutional neural network (hereafter CNN) based technique for the fusion of bottom-up whole-image features and top-down entity classification. When the two parallel and independent processing paths are fused, the system provides an accurate classification of the scene as depicted in the image or video.
Type: Grant
Filed: February 8, 2017
Date of Patent: August 23, 2022
Assignee: HRL LABORATORIES, LLC
Inventors: Ryan M. Uhlenbrock, Deepak Khosla, Yang Chen, Fredy Monterroza
-
Publication number: 20220230348
Abstract: Apparatuses and methods train a model and then use the trained model to determine a global three-dimensional (3D) position and orientation of a fiducial marker. In the context of an apparatus for training a model, a wider field-of-view sensor is configured to acquire a static image of a space in which the fiducial marker is disposed and a narrower field-of-view sensor is configured to acquire a plurality of images of at least a portion of the fiducial marker. The apparatus also includes a pan-tilt unit configured to controllably alter pan and tilt angles of the narrower field-of-view sensor during image acquisition. The apparatus further includes a control system configured to determine a transformation of position and orientation information determined from the images acquired by the narrower field-of-view sensor to a coordinate system for the space for which the static image is acquired by the wider field-of-view sensor.
Type: Application
Filed: October 1, 2021
Publication date: July 21, 2022
Applicant: THE BOEING COMPANY
Inventors: David James HUBER, Deepak KHOSLA, Yang CHEN, Brandon COURTER, Luke Charles INGRAM, Jacob MOORMAN, Scott RAD, Anthony Wayne BAKER
-
Publication number: 20220215571
Abstract: A system for refining a six degrees of freedom pose estimate of a target object based on a one-dimensional measurement includes a camera and a range-sensing device. The range-sensing device is configured to determine an actual distance measured between the range-sensing device and an actual point of intersection. The range-sensing device projects a line-of-sight that intersects with the target object at the actual point of intersection. The system also includes one or more processors in electronic communication with the camera and the range-sensing device and a memory coupled to the processors. The memory stores data into one or more databases and program code that, when executed by the processors, causes the system to predict the six degrees of freedom pose estimate of the target object. The system also determines a revised six degrees of freedom pose estimate of the target object based on at least an absolute error.
Type: Application
Filed: December 20, 2021
Publication date: July 7, 2022
Inventors: William K. Leach, Leon Nguyen, Fan Hung, Yang Chen, Deepak Khosla, Haden H. Smith
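One simple way to picture the correction this abstract describes (a hypothetical update rule, not the patented algorithm) is to compare the range implied by the camera-based pose estimate with the range sensor's one-dimensional measurement, then rescale the estimated translation along the line of sight to cancel the absolute error.

```python
import numpy as np

def refine_translation(t_est, measured_range):
    """t_est: estimated object translation in the range sensor's frame (assumed).

    Returns a revised translation whose implied range matches the measurement,
    together with the absolute error of the original estimate.
    """
    predicted_range = np.linalg.norm(t_est)      # range implied by the pose estimate
    abs_error = abs(predicted_range - measured_range)
    # Rescale along the line of sight so the revised estimate agrees with the sensor.
    t_revised = t_est * (measured_range / predicted_range)
    return t_revised, abs_error

# Camera-based estimate puts the target 10 m away; the range sensor reads 9.5 m.
t_est = np.array([0.0, 0.0, 10.0])
t_rev, err = refine_translation(t_est, measured_range=9.5)
```

A single range measurement constrains only depth, which is exactly the component monocular pose estimation measures worst, so even this scalar correction can meaningfully tighten the six-degrees-of-freedom estimate.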