Patents by Inventor Kevin R. Martin
Kevin R. Martin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240081802
Abstract: Various methods and devices are provided for allowing multiple surgical instruments to be inserted into sealing elements of a single surgical access device. The sealing elements can be movable along predefined pathways within the device to allow surgical instruments inserted through the sealing elements to be moved laterally, rotationally, angularly, and vertically relative to a central longitudinal axis of the device for ease of manipulation within a patient's body while maintaining insufflation.
Type: Application
Filed: November 16, 2023
Publication date: March 14, 2024
Inventors: Mark S. Ortiz, David T. Martin, Matthew C. Miller, Mark J. Reese, Wells D. Haberstich, Carl Shurtleff, Charles J. Scheib, Frederick E. Shelton, IV, Jerome R. Morgan, Daniel H. Duke, Daniel J. Mumaw, Gregory W. Johnson, Kevin L. Houser
-
Patent number: 11657147
Abstract: Described is a system for detecting adversarial activities. During operation, the system generates a multi-layer temporal graph tensor (MTGT) representation based on an input tag stream of activities. The MTGT representation is decomposed to identify normal activities and abnormal activities, with the abnormal activities being designated as adversarial activities. A device can then be controlled based on the designation of the adversarial activities.
Type: Grant
Filed: April 24, 2018
Date of Patent: May 23, 2023
Assignee: HRL Laboratories, LLC
Inventors: Kang-Yu Ni, Charles E. Martin, Kevin R. Martin, Brian L. Burns
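The decompose-and-score idea in this abstract can be illustrated with a toy low-rank decomposition. The sketch below is an assumption-laden stand-in, not the patented MTGT construction: it approximates a (time x actor x activity) tensor with a truncated SVD along the time mode and flags time slices whose reconstruction residual is a statistical outlier.

```python
import numpy as np

def flag_abnormal(tensor, rank=2, z_thresh=3.0):
    """Illustrative sketch only (not the patented MTGT method): flag
    time slices of an activity tensor whose low-rank reconstruction
    residual is an outlier."""
    T = tensor.shape[0]
    # Unfold along the time mode and take a truncated SVD.
    unfolded = tensor.reshape(T, -1)
    U, s, Vt = np.linalg.svd(unfolded, full_matrices=False)
    approx = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]
    # Per-slice residual: how poorly each time step fits the normal pattern.
    residual = np.linalg.norm(unfolded - approx, axis=1)
    z = (residual - residual.mean()) / (residual.std() + 1e-9)
    # High-residual slices are candidate abnormal (adversarial) activity.
    return np.where(z > z_thresh)[0]
```

With mostly repetitive "normal" slices and one deviating slice, the deviating slice dominates the residual and is flagged.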
-
Patent number: 11150670
Abstract: Apparatus and methods for training a machine learning algorithm (MLA) to control a first aircraft in an environment that comprises the first aircraft and a second aircraft are described. Training of the MLA can include: the MLA determining a first-aircraft action for the first aircraft to take within the environment; sending the first-aircraft action from the MLA; after sending the first-aircraft action, receiving an observation of the environment and a reward signal at the MLA, the observation including information about the environment after the first aircraft has taken the first-aircraft action and the second aircraft has taken a second-aircraft action, the reward signal indicating a score of performance of the first-aircraft action based on dynamic and kinematic properties of the second aircraft; and updating the MLA based on the observation of the environment and the reward signal.
Type: Grant
Filed: May 28, 2019
Date of Patent: October 19, 2021
Assignee: The Boeing Company
Inventors: Deepak Khosla, Kevin R. Martin, Sean Soleyman, Ignacio M. Soriano, Michael A. Warren, Joshua G. Fadaie, Charles Tullock, Yang Chen, Shawn Moffit, Calvin Chung
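The loop described above (agent sends an action, receives an observation and a reward that depends on the other aircraft's state, then updates) can be sketched with tabular Q-learning on a toy one-dimensional pursuit environment. Everything here, including the environment, action set, and hyperparameters, is invented for illustration; the patent does not specify which learning algorithm is used.

```python
import random

class PursuitEnv:
    """Toy 1-D stand-in for the two-aircraft environment (an assumption,
    not the patented simulator): aircraft 2 drifts right each step, and
    aircraft 1 is rewarded for closing the gap."""
    def __init__(self):
        self.p1, self.p2 = 0, 5

    def step(self, action):
        self.p1 += action
        self.p2 += 1                  # the second aircraft's own action
        gap = self.p2 - self.p1
        return gap, -abs(gap)         # observation, reward signal

def train(episodes=300, steps=20, alpha=0.5, gamma=0.9, eps=0.2):
    actions = (-1, 0, 1, 2)           # only a=2 actually closes the gap
    q = {}                            # tabular Q[(observation, action)]
    for _ in range(episodes):
        env = PursuitEnv()
        gap = env.p2 - env.p1
        for _ in range(steps):
            # Epsilon-greedy: determine the first-aircraft action.
            if random.random() < eps:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda x: q.get((gap, x), 0.0))
            # Send the action; receive observation and reward.
            nxt, r = env.step(a)
            # Update the learner from the observation and reward.
            best = max(q.get((nxt, x), 0.0) for x in actions)
            old = q.get((gap, a), 0.0)
            q[(gap, a)] = old + alpha * (r + gamma * best - old)
            gap = nxt
    return q
```

After training, the greedy policy at the starting gap should be the gap-closing action.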
-
Publication number: 20200379486
Abstract: Apparatus and methods for training a machine learning algorithm (MLA) to control a first aircraft in an environment that comprises the first aircraft and a second aircraft are described. Training of the MLA can include: the MLA determining a first-aircraft action for the first aircraft to take within the environment; sending the first-aircraft action from the MLA; after sending the first-aircraft action, receiving an observation of the environment and a reward signal at the MLA, the observation including information about the environment after the first aircraft has taken the first-aircraft action and the second aircraft has taken a second-aircraft action, the reward signal indicating a score of performance of the first-aircraft action based on dynamic and kinematic properties of the second aircraft; and updating the MLA based on the observation of the environment and the reward signal.
Type: Application
Filed: May 28, 2019
Publication date: December 3, 2020
Inventors: Deepak Khosla, Kevin R. Martin, Sean Soleyman, Ignacio M. Soriano, Michael A. Warren, Joshua G. Fadaie, Charles Tullock, Yang Chen, Shawn Moffit, Calvin Chung
-
Publication number: 20200310422
Abstract: An autonomous vehicle, cognitive system for operating an autonomous vehicle and method of operating an autonomous vehicle. The cognitive system includes one or more hypothesizer modules, a hypothesis resolver, one or more decider modules, and a decision resolver. Data related to an agent is received at the cognitive system. The one or more hypothesizer modules create a plurality of hypotheses for a trajectory of the agent based on the received data. The hypothesis resolver selects a single hypothesis for the trajectory of the agent from the plurality of hypotheses based on a selection criterion. The one or more decider modules create a plurality of decisions for a trajectory of the autonomous vehicle based on the selected hypothesis for the agent. The decision resolver selects a trajectory for the autonomous vehicle from the plurality of decisions. The autonomous vehicle is operated based on the selected trajectory.
Type: Application
Filed: March 26, 2019
Publication date: October 1, 2020
Inventors: Rajan Bhattacharyya, Chong Ding, Vincent De Sapio, Michael J. Daily, Kyungnam Kim, Gavin D. Holland, Alexander S. Graber-Tilton, Kevin R. Martin
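The hypothesizer/resolver pipeline described here can be sketched as plain classes. The two hypothesizers, the confidence-based selection rule, and the clearance-based decision score below are all hypothetical choices for illustration, not taken from the patent.

```python
class ConstantVelocityHypothesizer:
    """A hypothetical hypothesizer module: extrapolates the agent's
    track at constant velocity over three time steps."""
    def hypothesize(self, agent):
        x, v = agent["pos"], agent["vel"]
        return {"trajectory": [x + v * t for t in range(3)], "confidence": 0.9}

class StoppedHypothesizer:
    """A second hypothetical hypothesizer: assumes the agent stays put."""
    def hypothesize(self, agent):
        return {"trajectory": [agent["pos"]] * 3, "confidence": 0.4}

def resolve_hypotheses(hypos):
    # Hypothesis resolver: selection criterion here is simply
    # highest confidence (an assumed rule).
    return max(hypos, key=lambda h: h["confidence"])

def decide(agent_traj, candidates):
    """Decider + decision resolver collapsed into one step for brevity:
    score each candidate ego trajectory by its closest approach to the
    agent's predicted trajectory, and pick the safest."""
    def clearance(traj):
        return min(abs(a - b) for a, b in zip(traj, agent_traj))
    return max(candidates, key=clearance)
```

For an agent at position 0 moving at velocity 1, the high-confidence hypothesis wins, and the ego trajectory farthest from it is selected.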
-
Patent number: 10671917
Abstract: Described is a system for neural decoding of neural activity. Using at least one neural feature extraction method, neural data that is correlated with a set of behavioral data is transformed into sparse neural representations. Semantic features are extracted from a set of semantic data. Using a combination of distinct classification modes, the set of semantic data is mapped to the sparse neural representations, and new input neural data can be interpreted.
Type: Grant
Filed: October 26, 2016
Date of Patent: June 2, 2020
Assignee: HRL Laboratories, LLC
Inventors: Rajan Bhattacharyya, James Benvenuto, Vincent De Sapio, Michael J. O'Brien, Kang-Yu Ni, Kevin R. Martin, Ryan M. Uhlenbrock, Rachel Millin, Matthew E. Phillips, Hankyu Moon, Qin Jiang, Brian L. Burns
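A minimal sketch of the decode path, assuming a top-k sparsification step for the neural representation and a nearest-centroid mapping as one "classification mode"; the patent does not disclose its actual feature extraction or classifiers, so both choices here are illustrative.

```python
import numpy as np

def sparsify(x, k=2):
    """Keep only the k largest-magnitude responses -- a simple stand-in
    for the patent's (unspecified) neural feature extraction."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def decode(x, centroids):
    """Nearest-centroid mapping from a sparse neural vector to a
    semantic label -- one simple, assumed 'classification mode'."""
    s = sparsify(x)
    return min(centroids, key=lambda label: np.linalg.norm(s - centroids[label]))
```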
-
Graphical display and user-interface for high-speed triage of potential items of interest in imagery
Patent number: 10621461
Abstract: The present invention relates to a surveillance system and, more particularly, to a graphical display and user interface system that provides high-speed triage of potential items of interest in imagery. The system receives at least one image of a scene from a sensor. The image is pre-processed to identify a plurality of potential objects of interest (OI) in the image. The potential OI are presented to the user as a series of chips on a threat chip display (TCD), where each chip is a region extracted from the image that corresponds to a potential OI. Finally, the system allows the user to designate, via the TCD, any one of the chips as an actual OI.
Type: Grant
Filed: March 10, 2014
Date of Patent: April 14, 2020
Assignee: HRL Laboratories, LLC
Inventors: David J. Huber, Deepak Khosla, Kevin R. Martin, Yang Chen
-
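The chip-extraction step from the triage-display entry above (cutting a small region around each potential OI so the user can review a series of chips) might look like the following sketch; the (row, col)-center convention and half-width parameter are assumptions for illustration.

```python
import numpy as np

def extract_chips(image, detections, size=2):
    """Cut a small region ('chip') around each potential object of
    interest for fast visual triage. `detections` holds (row, col)
    centers; `size` is the chip half-width (assumed conventions)."""
    chips = []
    for r, c in detections:
        # Clamp at the image border so edge detections still yield chips.
        r0, c0 = max(r - size, 0), max(c - size, 0)
        chips.append(image[r0:r + size + 1, c0:c + size + 1])
    return chips
```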
Patent number: 10255480
Abstract: A system includes a processor configured to generate a registered first 3D point cloud based on a first 3D point cloud and a second 3D point cloud. The processor is configured to generate a registered second 3D point cloud based on the first 3D point cloud and a third 3D point cloud. The processor is configured to generate a combined 3D point cloud based on the registered first 3D point cloud and the registered second 3D point cloud. The processor is configured to compare the combined 3D point cloud with a mesh model of the object. The processor is configured to generate, based on the comparison, output data indicating differences between the object as represented by the combined 3D point cloud and the object as represented by the 3D model. The system includes a display configured to display a graphical display of the differences.
Type: Grant
Filed: May 15, 2017
Date of Patent: April 9, 2019
Assignee: The Boeing Company
Inventors: Ryan Uhlenbrock, Deepak Khosla, Yang Chen, Kevin R. Martin
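Once the scans are registered and combined, the comparison against the model reduces to nearest-point distances. This sketch assumes registration (e.g. via ICP) has already been done, samples the mesh as a point set, and uses a brute-force nearest-neighbor search for clarity; none of these choices come from the patent.

```python
import numpy as np

def deviations(combined_cloud, model_points, tol=0.1):
    """For each point in the combined scan, compute the distance to the
    nearest model point; points farther than `tol` are reported as
    differences between the as-scanned object and the 3D model."""
    diffs = []
    for p in combined_cloud:
        d = np.min(np.linalg.norm(model_points - p, axis=1))
        if d > tol:
            diffs.append((tuple(p), float(d)))
    return diffs
```

The returned list would feed the graphical display of differences.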
-
Publication number: 20180330149
Abstract: A system includes a processor configured to generate a registered first 3D point cloud based on a first 3D point cloud and a second 3D point cloud. The processor is configured to generate a registered second 3D point cloud based on the first 3D point cloud and a third 3D point cloud. The processor is configured to generate a combined 3D point cloud based on the registered first 3D point cloud and the registered second 3D point cloud. The processor is configured to compare the combined 3D point cloud with a mesh model of the object. The processor is configured to generate, based on the comparison, output data indicating differences between the object as represented by the combined 3D point cloud and the object as represented by the 3D model. The system includes a display configured to display a graphical display of the differences.
Type: Application
Filed: May 15, 2017
Publication date: November 15, 2018
Inventors: Ryan Uhlenbrock, Deepak Khosla, Yang Chen, Kevin R. Martin
-
Patent number: 10052062
Abstract: Described is a system for gait intervention and fall prevention. The system is incorporated into a body suit having a plurality of distributed sensors and a vestibulo-muscular biostim array. The sensors are operable for providing biosensor data to an analytics module, while the vestibulo-muscular biostim array includes a plurality of distributed effectors. The analytics module is connected with the body suit and sensors and is operable for receiving biosensor data and analyzing a particular user's gait and predicting falls. Finally, a closed-loop biostim control module is included for activating the vestibulo-muscular biostim array to compensate for a risk of a predicted fall.
Type: Grant
Filed: February 11, 2016
Date of Patent: August 21, 2018
Assignee: HRL Laboratories, LLC
Inventors: Vincent De Sapio, Michael D. Howard, Suhas E. Chelian, Matthias Ziegler, Matthew E. Phillips, Kevin R. Martin, Heiko Hoffmann, David W. Payton
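A closed-loop control step of this kind could be sketched as follows. The stride-variability risk measure and the threshold are illustrative assumptions, not the patented analytics; in the real system the analytics module consumes distributed biosensor data.

```python
def stride_variability(stride_times):
    """Standard deviation of stride-to-stride timing -- an assumed,
    simplistic proxy for gait instability."""
    mean = sum(stride_times) / len(stride_times)
    return (sum((t - mean) ** 2 for t in stride_times) / len(stride_times)) ** 0.5

def control_step(stride_times, threshold=0.05):
    """One closed-loop iteration (illustrative only): if gait
    variability crosses the threshold, predict a fall and signal the
    biostim array to activate."""
    at_risk = stride_variability(stride_times) > threshold
    return {"fall_predicted": at_risk, "activate_biostim": at_risk}
```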
-
Publication number: 20170303849
Abstract: Described is a system for gait intervention and fall prevention. The system is incorporated into a body suit having a plurality of distributed sensors and a vestibulo-muscular biostim array. The sensors are operable for providing biosensor data to an analytics module, while the vestibulo-muscular biostim array includes a plurality of distributed effectors. The analytics module is connected with the body suit and sensors and is operable for receiving biosensor data and analyzing a particular user's gait and predicting falls. Finally, a closed-loop biostim control module is included for activating the vestibulo-muscular biostim array to compensate for a risk of a predicted fall.
Type: Application
Filed: February 11, 2016
Publication date: October 26, 2017
Inventors: Vincent De Sapio, Michael D. Howard, Suhas E. Chelian, Matthias Ziegler, Matthew E. Phillips, Kevin R. Martin, Heiko Hoffmann, David W. Payton
-
Patent number: 9778351
Abstract: Described is a system for surveillance that integrates radar with a panoramic staring sensor. The system captures image frames of a field-of-view of a scene using a multi-camera panoramic staring sensor. The field-of-view is scanned with a radar sensor to detect an object of interest. A radar detection is received when the radar sensor detects the object of interest. A radar message indicating the presence of the object of interest is generated. Each image frame is marked with a timestamp. The image frames are stored in a frame storage database. The set of radar-based coordinates from the radar message is converted into a set of multi-camera panoramic sensor coordinates. A video clip comprising a sequence of image frames corresponding in time to the radar message is created. Finally, the video clip is displayed, showing the object of interest.
Type: Grant
Filed: September 30, 2014
Date of Patent: October 3, 2017
Assignee: HRL Laboratories, LLC
Inventors: Deepak Khosla, David J. Huber, Yang Chen, Darrel J. VanBuer, Kevin R. Martin
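The two core steps here, converting radar coordinates into panoramic-sensor coordinates and pulling timestamped frames around a detection, can be sketched under an assumed geometry (six cameras evenly covering 360 degrees); a real sensor would use its own calibration.

```python
def azimuth_to_camera(azimuth_deg, num_cameras=6, width_px=1920):
    """Map a radar azimuth (0-360 deg) to a camera index and horizontal
    pixel on the panoramic ring. The even angular split and frame width
    are assumptions for illustration, not the patented calibration."""
    per_cam = 360.0 / num_cameras
    cam = int(azimuth_deg // per_cam) % num_cameras
    frac = (azimuth_deg % per_cam) / per_cam
    return cam, int(frac * width_px)

def clip_for_detection(frames, t_detect, window=2.0):
    """Collect the timestamped frames bracketing a radar detection to
    build the displayed video clip."""
    return [f for f in frames if abs(f["t"] - t_detect) <= window]
```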
-
Patent number: 9740949
Abstract: Described is a system for detecting objects of interest in imagery. The system is configured to receive an input video and generate an attention map. The attention map represents features found in the input video that represent potential objects-of-interest (OI). An eye-fixation map is generated based on a subject's eye fixations. The eye-fixation map also represents features found in the input video that are potential OI. A brain-enhanced synergistic attention map is generated by fusing the attention map with the eye-fixation map. The potential OI in the brain-enhanced synergistic attention map are scored, with scores that cross a predetermined threshold being used to designate potential OI as actual or final OI.
Type: Grant
Filed: October 15, 2013
Date of Patent: August 22, 2017
Assignee: HRL Laboratories, LLC
Inventors: Deepak Khosla, Matthew Keegan, Lei Zhang, Kevin R. Martin, Darrel J. VanBuer, David J. Huber
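The fusion step can be illustrated with a simple weighted average of the two normalized maps followed by thresholding. The weighting and threshold are assumptions; the patent's actual fusion and scoring rules are not reproduced here.

```python
import numpy as np

def fuse_attention(algo_map, fixation_map, w=0.5, thresh=0.6):
    """Fuse a computed attention map with an eye-fixation map via a
    weighted average (an assumed fusion rule), then keep locations
    whose fused score crosses the threshold as final OI candidates."""
    def norm(m):
        m = m - m.min()
        return m / (m.max() + 1e-9)
    fused = w * norm(algo_map) + (1 - w) * norm(fixation_map)
    # Return (row, col) coordinates of above-threshold peaks.
    return np.argwhere(fused >= thresh)
```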
-
Patent number: 9489596
Abstract: Described is a system for optimizing rapid serial visual presentation (RSVP) spacing and fusion. The system receives a sequence of a plurality of RSVP image chips. The plurality of RSVP image chips are generated from an image via a pre-processing step and have a high probability of containing a target of interest. The system alters the order of the sequence of the plurality of RSVP image chips to increase the probability of detection of a true target of interest when presented to a human subject.
Type: Grant
Filed: December 9, 2014
Date of Patent: November 8, 2016
Assignee: HRL Laboratories, LLC
Inventors: Deepak Khosla, Matthew S. Keegan, Kevin R. Martin
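One plausible reordering strategy, spacing likely-target chips apart so consecutive targets do not fall inside the viewer's attentional-blink window, can be sketched as a greedy interleave. The `min_gap` rule is an assumption for illustration; the patent's actual ordering criterion is not reproduced here.

```python
def reorder_rsvp(chips, min_gap=2):
    """Greedy reordering sketch: chips flagged as likely targets are
    separated by at least `min_gap` filler chips in the presented
    sequence (an assumed spacing rule, not the patented one)."""
    targets = [c for c in chips if c["likely_target"]]
    fillers = [c for c in chips if not c["likely_target"]]
    out = []
    while targets:
        out.append(targets.pop(0))
        for _ in range(min_gap):
            if fillers:
                out.append(fillers.pop(0))
    out.extend(fillers)               # any leftover non-targets go last
    return out
```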
-
Patent number: 9489732
Abstract: Described is a system for improved electroencephalograph (EEG) rapid serial visual presentation (RSVP) target stimuli detection through visual attention distractor insertion. A first RSVP sequence is created comprising a set of image chips. The image chips are a combination of target images containing target events and non-target images containing non-target events. A number of visual attention distractors to optimize target event detection is determined, and the determined number of visual attention distractors is inserted into the first RSVP sequence to generate a second RSVP sequence. The second RSVP sequence is reordered to generate a third RSVP sequence. The third RSVP sequence is presented to a user, and an EEG signal is received from the user. Finally, the EEG signal is decoded to identify a true target event via a P300 detection in the EEG signal.
Type: Grant
Filed: March 12, 2014
Date of Patent: November 8, 2016
Assignee: HRL Laboratories, LLC
Inventors: Deepak Khosla, Kevin R. Martin, David J. Huber
-
Patent number: 7831433
Abstract: Described is a navigation system. The navigation system comprises a route planning module and a route guidance module. The route planning module is configured to receive a request from a user for guidance to a particular destination. Based on a starting point, the route planning module determines a route from the starting point to the particular destination. The route guidance module is configured to receive the route, and based on the route and current location of the user, provide location-specific instructions to the user. The location-specific instructions include reference to specific visible objects within the vicinity of the user.
Type: Grant
Filed: February 9, 2007
Date of Patent: November 9, 2010
Assignee: HRL Laboratories, LLC
Inventors: Robert Belvin, Michael Daily, Narayan Srinivasa, Kevin R. Martin, Craig A. Lee, Cheryl Hein
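Instructions that reference visible objects near the user could be generated as in this sketch. The landmark records, the distance-based visibility cutoff, and the phrasing are all invented for illustration; the patent does not specify how visibility is determined.

```python
def landmark_instruction(user_pos, turn, landmarks, max_dist=50.0):
    """Sketch of landmark-referenced guidance (illustrative only):
    phrase the next maneuver relative to the closest visible object;
    fall back to a plain instruction when nothing is in view."""
    def dist(lm):
        return ((lm["x"] - user_pos[0]) ** 2 + (lm["y"] - user_pos[1]) ** 2) ** 0.5
    visible = [lm for lm in landmarks if lm["visible"] and dist(lm) <= max_dist]
    if not visible:
        return f"Turn {turn} ahead"
    nearest = min(visible, key=dist)
    return f"Turn {turn} at the {nearest['name']}"
```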
-
Patent number: 6198462
Abstract: Disclosed is a computerized data display system including a computer operating in accord with a window display management system for the display and control of a plurality of data windows on a display screen of a display device. The data windows are displayed on the display screen in a spatial relation corresponding to the field of view seen from a preselected viewing location selected by means of a control signal provided as an input to the computer. A head coupled image display device, coupled to the display screen of the display device, is adapted to display the data windows appearing on the display screen of the display device separately to each eye of a user to create a binocular, stereoscopic virtual screen image to the user that has a virtual screen size independent of the size of the display screen of the display device.
Type: Grant
Filed: October 14, 1994
Date of Patent: March 6, 2001
Assignee: Hughes Electronics Corporation
Inventors: Michael J. Daily, Michael D. Howard, Kevin R. Martin, Robert L. Seliger
-
Patent number: 5684935
Abstract: A method and system (10) for generating a plurality of images of a three-dimensional scene from a database and a specified eye point and field of view. The method includes rendering an image frame, and warping the image frame by changing the eye point and field of view. The warping process is continued on the same image frame in accordance with a predetermined criterion, such as an evaluation of the distortion between the initial image displayed and the last image displayed for this image frame. The warping process utilizes convolution to change the eye point and field of view. The database includes a traversal vector for enabling the eye point and field of view to be determined, and a plurality of voxel elements of at least two different types.
Type: Grant
Filed: November 27, 1995
Date of Patent: November 4, 1997
Assignee: Hughes Electronics
Inventors: Nicanor P. Demesa, III, Kenneth S. Herberger, Kevin R. Martin, John R. Wright
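The render-once-then-warp idea (reuse a rendered frame as the eye point moves, until a distortion criterion forces a re-render) can be illustrated with a crude column shift and a per-pixel distortion measure. The patent's actual warping uses convolution over voxel data, which this sketch does not attempt to reproduce.

```python
def warp_shift(frame, dx):
    """Crude stand-in for frame warping: reuse a rendered frame for a
    laterally shifted eye point by shifting columns (edge pixels are
    clamped). Illustrative only -- not the patented convolution warp."""
    w = len(frame[0])
    return [[row[max(0, min(w - 1, c - dx))] for c in range(w)] for row in frame]

def distortion(a, b):
    """Sum of absolute per-pixel differences between the initially
    displayed image and the latest warped image; a criterion like this
    could decide when to stop warping and re-render from the database."""
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))
```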