Patents by Inventor Benjamin Laxton

Benjamin Laxton has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240036295
    Abstract: A digital adaptive optics encoder module includes an input mounting flange, a collimating lens, a bandpass filter, digital adaptive optic elements, a refocusing lens, an output mounting flange, and a housing. The input mounting flange is capable of attaching to a telescope. The collimating lens is capable of expanding light from a target to fill a plurality of primary apertures. The bandpass filter has a bandwidth ranging from about 40 nm to about 100 nm. The digital adaptive optic elements include the plurality of primary apertures, an optical spreader, a focusing optic, and a detector. The refocusing lens is capable of refocusing an output from the digital adaptive optic elements onto a sensor plane. The output mounting flange is capable of attaching to an output connection. The housing encloses all of the interior components of the digital adaptive optics encoder module.
    Type: Application
    Filed: July 26, 2022
    Publication date: February 1, 2024
    Applicant: THE UNITED STATES OF AMERICA AS REPRESENTED BY THE SECRETARY OF THE NAVY
    Inventors: Benjamin Laxton, Kyle Drexler, Skylar D. Lilledahl
  • Publication number: 20240031546
    Abstract: A multi-band homodyne encoder encodes light sampled from an object with respective primary apertures for each spectral band. The encoder includes an optical spreader and a focusing optic. The optical spreader spreads apart the light from the respective primary apertures for each spectral band to respective secondary apertures for each spectral band. The optical spreader is arranged to spread, for each spectral band, the light from each one of the primary apertures for the spectral band to a respective one of the secondary apertures for the spectral band. The focusing optic focuses the light from the secondary apertures for all of the spectral bands into one or more composite images of the object. For each spectral band, every pairing of two of the primary apertures for the spectral band contributes distinct spatial frequencies to the composite image or images of the object. (An illustrative sketch of this aperture-pairing property follows this listing.)
    Type: Application
    Filed: July 25, 2022
    Publication date: January 25, 2024
    Inventors: Kyle Robert Drexler, Benjamin Laxton
  • Patent number: 8165405
    Abstract: A system and a method are disclosed for recognizing and representing activities in a video sequence. The system includes an activity dynamic Bayesian network (ADBN), an object/action dictionary, an activity inference engine and a state output unit. The activity dynamic Bayesian network encodes the prior information of a selected activity domain. The prior information of the selected activity domain describes the ordering, temporal constraints and contextual cues among the expected actions. The object/action dictionary detects activities in each frame of the input video stream, represents the activities hierarchically, and generates an estimated observation probability for each detected action. The activity inference engine estimates a likely activity state for each frame based on the evidence provided by the object/action dictionary and the ADBN. The state output unit outputs the likely activity state generated by the activity inference engine. (An illustrative sketch of this per-frame inference follows this listing.)
    Type: Grant
    Filed: October 22, 2007
    Date of Patent: April 24, 2012
    Assignee: Honda Motor Co., Ltd.
    Inventors: Jongwoo Lim, Benjamin Laxton
  • Publication number: 20080144937
    Abstract: A system and a method are disclosed for recognizing and representing activities in a video sequence. The system includes an activity dynamic Bayesian network (ADBN), an object/action dictionary, an activity inference engine and a state output unit. The activity dynamic Bayesian network encodes the prior information of a selected activity domain. The prior information of the selected activity domain describes the ordering, temporal constraints and contextual cues among the expected actions. The object/action dictionary detects activities in each frame of the input video stream, represents the activities hierarchically, and generates an estimated observation probability for each detected action. The activity inference engine estimates a likely activity state for each frame based on the evidence provided by the object/action dictionary and the ADBN. The state output unit outputs the likely activity state generated by the activity inference engine.
    Type: Application
    Filed: October 22, 2007
    Publication date: June 19, 2008
    Applicant: HONDA MOTOR CO., LTD.
    Inventors: Jongwoo Lim, Benjamin Laxton
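
The aperture-pairing property described in publication 20240031546 can be illustrated with a short sketch. The Python below is not taken from the patent; the aperture coordinates, the band-center wavelength, and the baseline-to-spatial-frequency mapping are assumptions chosen only to show why every pairing of two primary apertures in a band contributes a distinct spatial frequency when the pairwise baseline vectors are all different.

```python
# Illustrative sketch (not from the patent text): for one spectral band,
# each pair of primary apertures defines a baseline vector, and each
# distinct baseline samples a distinct spatial frequency in the composite
# image. The aperture coordinates and wavelength below are hypothetical.
from itertools import combinations

# Hypothetical primary-aperture centers for a single spectral band (meters).
apertures = [(0.0, 0.0), (0.10, 0.0), (0.0, 0.25), (0.17, 0.12)]

wavelength = 550e-9  # assumed band-center wavelength, meters

baselines = {}
for (i, a), (j, b) in combinations(enumerate(apertures), 2):
    # Baseline vector between the pair; its length and orientation set the
    # spatial frequency (cycles per radian) that this pairing contributes.
    bx, by = b[0] - a[0], b[1] - a[1]
    baselines[(i, j)] = (bx / wavelength, by / wavelength)

# Every pairing should contribute a distinct spatial frequency.
unique = len(set(baselines.values())) == len(baselines)
print(f"{len(baselines)} aperture pairings, all distinct: {unique}")
for pair, (fx, fy) in baselines.items():
    print(f"apertures {pair}: spatial frequency ~ ({fx:.2e}, {fy:.2e}) cycles/rad")
```

With four hypothetical apertures there are six pairings, so the sketch prints six distinct spatial-frequency samples, one per pairing.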
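
The per-frame inference loop described in patent 8165405 and publication 20080144937 can be approximated with a much simpler stand-in. The sketch below is not the patented activity dynamic Bayesian network; it uses a plain HMM-style forward filter, with a hypothetical transition matrix standing in for the ordering and temporal constraints of the activity domain, and hypothetical per-frame observation probabilities standing in for the object/action dictionary's evidence.

```python
# Simplified stand-in (assumptions, not the patented method): an HMM-style
# forward filter over activity states. The transition matrix encodes an
# assumed ordering of actions, and the per-frame observation probabilities
# play the role of the object/action dictionary's evidence.
import numpy as np

states = ["reach", "grasp", "pour"]          # hypothetical activity states
# Mostly stay in a state, occasionally advance to the next one (ordering prior).
transition = np.array([[0.8, 0.2, 0.0],
                       [0.0, 0.8, 0.2],
                       [0.0, 0.0, 1.0]])
prior = np.array([1.0, 0.0, 0.0])            # activities assumed to start with "reach"

# Hypothetical P(observation | state) for each frame.
frame_evidence = np.array([[0.7, 0.2, 0.1],
                           [0.6, 0.3, 0.1],
                           [0.2, 0.7, 0.1],
                           [0.1, 0.3, 0.6]])

belief = prior
for t, obs in enumerate(frame_evidence):
    # Predict with the ordering prior, then reweight by the frame's evidence.
    belief = (belief @ transition) * obs
    belief /= belief.sum()
    print(f"frame {t}: likely state = {states[int(belief.argmax())]}, belief = {belief.round(3)}")
```

Each iteration predicts with the ordering prior and then reweights by the frame's evidence, which is the general shape of estimating a likely activity state for every frame.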