Patents by Inventor Yuri Owechko

Yuri Owechko has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10677713
    Abstract: A gas-phase chemical analyzer has at least one gas chromatography column in gas-flow communication with at least one gas carrying tube of an optical absorption cell, a laser for illuminating molecules in a gas mixture flowing through the at least one gas carrying tube of the optical absorption cell, and a photodetector or photodetecting apparatus for measuring absorption spectra of the gas mixture illuminated by the laser. A first module is provided for statically identifying particular molecules in the gas mixture from other molecules in said gas mixture, and a second module is provided for comparing at least selected ones of the particular molecules in the gas mixture with a reference library of absorption spectra of previously identified molecules and for determining the likelihood of a correct match between the particular molecules in the gas mixture and the previously identified molecules in the reference library.
    Type: Grant
    Filed: August 4, 2017
    Date of Patent: June 9, 2020
    Assignee: HRL Laboratories, LLC
    Inventors: Daniel Yap, Yuri Owechko
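The comparison step in the second module amounts to scoring a measured spectrum against every entry of a reference library. A minimal numpy sketch, assuming cosine similarity as a stand-in for the patent's likelihood measure (the molecule names and Gaussian line shapes below are made up):

```python
import numpy as np

def match_spectrum(measured, library):
    """Score a measured absorption spectrum against each library entry.

    `library` maps molecule name -> reference spectrum on the same
    wavelength grid as `measured`.  Returns (best_name, scores), with
    cosine similarity used as a stand-in likelihood of a correct match.
    """
    m = measured / (np.linalg.norm(measured) + 1e-12)
    scores = {}
    for name, ref in library.items():
        r = ref / (np.linalg.norm(ref) + 1e-12)
        scores[name] = float(np.dot(m, r))
    best = max(scores, key=scores.get)
    return best, scores

# Toy library: two Gaussian absorption lines on a shared grid
x = np.linspace(0, 10, 200)
lib = {"molecule_A": np.exp(-(x - 3) ** 2),
       "molecule_B": np.exp(-(x - 7) ** 2)}
measured = (0.9 * np.exp(-(x - 3.05) ** 2)
            + 0.05 * np.random.default_rng(0).normal(size=x.size))
print(match_spectrum(measured, lib)[0])   # molecule_A
```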
  • Patent number: 10592788
    Abstract: Described is a system for recognition of unseen and untrained patterns. A graph is generated based on visual features from input data, the input data including labeled instances and unseen instances. Semantic representations of the input data are assigned as graph signals based on the visual features. The semantic representations are aligned with visual representations of the input data using a regularization method applied directly in the spectral graph wavelet (SGW) domain. The semantic representations are then used to generate labels for the unseen instances. The unseen instances may represent unknown conditions for an autonomous vehicle.
    Type: Grant
    Filed: December 19, 2017
    Date of Patent: March 17, 2020
    Assignee: HRL Laboratories, LLC
    Inventors: Shay Deutsch, Kyungnam Kim, Yuri Owechko
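A full spectral-graph-wavelet treatment is beyond a short example, but the core idea of generating labels for unseen instances by spreading known labels over a graph of visual features can be sketched with plain label propagation (a simplified stand-in for the SGW regularization; the toy clusters and Gaussian affinity kernel are assumptions):

```python
import numpy as np

def propagate_labels(W, labels, n_iter=50, alpha=0.9):
    """Graph label propagation over affinity matrix W.

    `labels` holds a class index for labeled nodes and -1 for unseen
    nodes.  Known labels diffuse along graph edges until every node
    can be assigned a class by argmax.
    """
    n = W.shape[0]
    classes = sorted(set(labels) - {-1})
    Y = np.zeros((n, len(classes)))
    for i, c in enumerate(labels):
        if c != -1:
            Y[i, classes.index(c)] = 1.0
    D = W.sum(axis=1)
    S = W / np.sqrt(np.outer(D, D))          # symmetrically normalized affinity
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * S @ F + (1 - alpha) * Y  # diffuse, re-inject known labels
    return [classes[j] for j in F.argmax(axis=1)]

# Two clusters of visual features; only one labeled node per class
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (5, 2)), rng.normal(3, 0.3, (5, 2))])
W = np.exp(-np.square(X[:, None] - X[None]).sum(-1))
labels = [0, -1, -1, -1, -1, 1, -1, -1, -1, -1]
print(propagate_labels(W, labels))   # [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
```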
  • Patent number: 10549853
    Abstract: Described herein is a method of displaying an object that includes detecting a first location of the object on a first image frame of image data. The method also includes determining a second location of the object on a second image frame of the image data on which the object is not detectable due to an obstruction. The method includes displaying a representation of the object on the second image frame.
    Type: Grant
    Filed: May 26, 2017
    Date of Patent: February 4, 2020
    Assignee: The Boeing Company
    Inventors: Qin Jiang, Yuri Owechko, Kyungnam Kim
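The second step of the method must estimate where the object is while the obstruction makes it undetectable. The patent does not specify a motion model; a constant-velocity extrapolation from the last detections is one minimal, hypothetical choice:

```python
import numpy as np

def predict_occluded(track, frames_ahead=1):
    """Estimate an occluded object's location by constant-velocity
    extrapolation.  `track` is a list of (x, y) detections from the
    frames in which the object was still visible."""
    p = np.asarray(track, dtype=float)
    v = p[-1] - p[-2]                     # velocity from the last two detections
    return tuple((p[-1] + frames_ahead * v).tolist())

# Object moved right 5 px/frame before a pole occluded it
track = [(10, 40), (15, 40), (20, 40)]
print(predict_occluded(track))            # (25.0, 40.0)
```

The predicted location is where a representation of the object would be drawn on the second frame.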
  • Patent number: 10528818
    Abstract: Described is a video scene analysis system. The system includes a salience module that receives a video stream having one or more pairs of frames (each frame having a background and a foreground) and detects salient regions in the video stream to generate salient motion estimates. The salient regions are regions that move differently from the dominant motion in the pairs of video frames. A scene modeling module generates a sparse foreground model based on salient motion estimates from a plurality of consecutive frames. A foreground refinement module then generates a Task-Aware Foreground by refining the sparse foreground model based on task knowledge. The Task-Aware Foreground can then be used for further processing such as object detection, tracking or recognition.
    Type: Grant
    Filed: April 29, 2016
    Date of Patent: January 7, 2020
    Assignee: HRL Laboratories, LLC
    Inventors: Shankar R. Rao, Kang-Yu Ni, Yuri Owechko
  • Publication number: 20190313570
    Abstract: Described is a system for determining crop residue fraction. The system includes a color video camera mounted on a mobile platform for generating a two-dimensional (2D) color video image of a scene in front of or behind the mobile platform. In operation, the system separates the 2D color video image into three separate one-dimensional (1D) mixture signals for red, green, and blue channels. The three 1D mixture signals are then separated into pure 1D component signals using blind source separation. The 1D component signals are thresholded and converted to 2D binary, pixel-level abundance maps, which can then be integrated to allow the system to determine a total component fractional abundance of crop in the scene. Finally, the system can control a mobile platform, such as a harvesting machine, based on the total component fractional abundance of crop in the scene.
    Type: Application
    Filed: March 11, 2019
    Publication date: October 17, 2019
    Inventor: Yuri Owechko
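The separation of the 1-D mixture signals into pure component signals can be sketched with FastICA, one common blind-source-separation algorithm (the patent does not name a specific one; the two toy sources and the mixing matrix below are made up):

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """Minimal symmetric FastICA with a tanh nonlinearity.

    X has shape (channels, samples) -- e.g. color-channel rows of an
    image flattened into 1-D mixture signals.  Returns the estimated
    pure component signals (up to sign and order)."""
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))
    Xw = np.diag(1.0 / np.sqrt(d)) @ E.T @ X       # whitened mixtures
    n, m = Xw.shape
    W = np.linalg.qr(np.random.default_rng(seed).normal(size=(n, n)))[0]
    for _ in range(n_iter):
        g = np.tanh(W @ Xw)
        W1 = g @ Xw.T / m - np.diag((1 - g ** 2).mean(axis=1)) @ W
        u, _, vt = np.linalg.svd(W1)               # symmetric decorrelation
        W = u @ vt
    return W @ Xw

# Two toy sources (square wave + sine) mixed into two channels
t = np.linspace(0, 1, 500)
sources = np.vstack([np.sign(np.sin(14 * np.pi * t)), np.sin(6 * np.pi * t)])
A = np.array([[0.6, 0.4], [0.3, 0.7]])
recovered = fastica(A @ sources)
corr = np.abs(np.corrcoef(np.vstack([recovered, sources]))[:2, 2:])
print(np.round(corr, 2))   # each recovered row correlates strongly with one source
# Thresholding |component| and reshaping to 2-D would give the
# patent's binary, pixel-level abundance maps.
```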
  • Patent number: 10438408
    Abstract: A method for generating a resolution adaptive mesh for 3-D metrology of an object includes receiving point cloud data from a plurality of sensors. The point cloud data from each sensor defines a point cloud that represents the object. Each point cloud includes a multiplicity of points and each point includes at least location information for the point on the object. The method also includes determining a resolution of each sensor in each of three orthogonal dimensions based on a position of each sensor relative to the object and physical properties of each sensor. The method further includes generating a surface representation of the object from the point clouds using the resolutions of each sensor. The surface representation of the object includes a resolution adaptive mesh corresponding to the object for 3-D metrology of the object.
    Type: Grant
    Filed: July 28, 2017
    Date of Patent: October 8, 2019
    Assignee: The Boeing Company
    Inventor: Yuri Owechko
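One way the per-sensor, per-axis resolutions can enter the surface fit is as inverse-variance weights when fusing overlapping measurements of the same surface point. A hypothetical two-sensor sketch:

```python
import numpy as np

def fuse_points(points_a, sigma_a, points_b, sigma_b):
    """Inverse-variance fusion of two sensors' estimates of the same
    surface points.  sigma_* are per-axis standard deviations
    (3-vectors) standing in for the resolutions derived from each
    sensor's pose and physical properties."""
    wa, wb = 1.0 / np.square(sigma_a), 1.0 / np.square(sigma_b)
    return (points_a * wa + points_b * wb) / (wa + wb)

# Sensor A is sharp in x/y but poor in depth z; sensor B the opposite
a = np.array([[1.00, 2.00, 3.3]]); sa = np.array([0.01, 0.01, 0.5])
b = np.array([[1.20, 1.80, 3.0]]); sb = np.array([0.5, 0.5, 0.01])
print(fuse_points(a, sa, b, sb))   # ≈ [[1.0, 2.0, 3.0]]
```

Each axis of the fused point is dominated by the sensor that resolves that axis best, which is the intent of using per-dimension resolutions in the mesh fit.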
  • Patent number: 10402675
    Abstract: Examples include methods, systems, and articles for localizing a vehicle relative to an imaged surface configuration. Localizing the vehicle may include selecting pairs of features in an image acquired from a sensor supported by the vehicle having corresponding identified pairs of features in a reference representation of the surface configuration. A three-dimensional geoarc may be generated based on an angle of view of the sensor and the selected feature pair in the reference representation. In some examples, a selected portion of the geoarc disposed a known distance of the vehicle away from the portion of the physical surface configuration may be determined. Locations where the selected portions of geoarcs for selected feature pairs overlap may be identified. In some examples, the reference representation may be defined in a three-dimensional space of volume elements (voxels), and voxels that are included in the highest number of geoarcs may be determined.
    Type: Grant
    Filed: August 30, 2016
    Date of Patent: September 3, 2019
    Assignee: The Boeing Company
    Inventor: Yuri Owechko
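A geoarc is an inscribed-angle locus: the set of points from which a feature pair subtends the measured angle of view. The voxel-voting idea can be sketched as follows (the grid, tolerance, and landmark positions are made up, and the search is restricted to one side of the ground plane to break the mirror ambiguity):

```python
import numpy as np

def geoarc_votes(voxels, feature_pairs, angles, tol=np.radians(0.5)):
    """Each voxel receives one vote per feature pair whose subtended
    angle matches the measured angle of view within `tol`; voxels where
    the most geoarcs overlap accumulate the most votes."""
    votes = np.zeros(len(voxels))
    for (p, q), theta in zip(feature_pairs, angles):
        u, v = p - voxels, q - voxels
        cosang = (u * v).sum(axis=1) / (
            np.linalg.norm(u, axis=1) * np.linalg.norm(v, axis=1))
        votes += np.abs(np.arccos(np.clip(cosang, -1.0, 1.0)) - theta) < tol
    return votes

# Hypothetical setup: vehicle at (0, 0, 5) viewing three ground landmarks
cam = np.array([0.0, 0.0, 5.0])
feats = [np.array([3.0, 0.0, 0.0]), np.array([0.0, 4.0, 0.0]),
         np.array([-2.0, -2.0, 0.0])]
pairs = [(feats[0], feats[1]), (feats[0], feats[2]), (feats[1], feats[2])]

def subtended(c, p, q):
    u, v = p - c, q - c
    return np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

angles = [subtended(cam, p, q) for p, q in pairs]

# Coarse voxel grid above the ground plane
gxy, gz = np.arange(-5.0, 6.0), np.arange(1.0, 9.0)
voxels = np.array([[x, y, z] for x in gxy for y in gxy for z in gz])
votes = geoarc_votes(voxels, pairs, angles)
print(voxels[np.argmax(votes)])   # the true position (0, 0, 5) collects all 3 votes
```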
  • Patent number: 10354444
    Abstract: A method for generating a resolution adaptive mesh for 3-D metrology of an object includes receiving point cloud data from a plurality of sensors. The point cloud data from each sensor defines a point cloud that represents the object. Each point cloud includes a multiplicity of points and each point includes at least location information for the point on the object. The method also includes determining a resolution of each sensor in each orthogonal dimension based on a position of each sensor relative to the object. The method additionally includes generating a surface representation of the object from the point clouds using the resolutions of each sensor. The surface representation of the object includes a resolution adaptive mesh corresponding to the object for metrology of the object. Generating the surface representation includes fitting a mesh to the point clouds using an intermediate implicit representation of each point cloud.
    Type: Grant
    Filed: July 28, 2017
    Date of Patent: July 16, 2019
    Assignee: The Boeing Company
    Inventor: Yuri Owechko
  • Publication number: 20190205696
    Abstract: Described is a system for controlling a device based on streaming data analysis using blind source separation. The system updates a set of parallel processing pipelines for two-dimensional (2D) tensor slices of streaming tensor data in different orientations, where the streaming tensor data includes incomplete sensor data. In updating the parallel processing pipelines, the system replaces a first tensor slice with a new tensor slice resulting in an updated set of tensor slices in different orientations. At each time step, a cycle of demixing, transitive matching, and tensor factor weight calculations is performed on the updated set of tensor slices. The tensor factor weight calculations are used for sensor data reconstruction, and based on the sensor data reconstruction, hidden sensor data is extracted. Upon recognition of an object in the extracted hidden sensor data, the device is caused to perform a maneuver to avoid a collision with the object.
    Type: Application
    Filed: March 11, 2019
    Publication date: July 4, 2019
    Inventor: Yuri Owechko
  • Patent number: 10295462
    Abstract: A sensor system including a spectrometer with a light source having a plurality of selectable wavelengths; a controller for controlling the sensor system, selecting wavelengths of illumination light produced by the light source, and controlling the light source to illuminate a spatial location; a photodetector aligned to detect light received from the spatial location; a blind demixer coupled to the photodetector for separating received spectra in the detected light into a set of sample spectra associated with different demixed or partially demixed chemical components; a memory having a plurality of stored reference spectra; a non-blind demixer coupled to the blind demixer and to the memory for non-blind demixing of the sample spectra using the reference spectra; and a classifier coupled to the non-blind demixer for classifying the set of demixed sample spectra into chemical components using the reference spectra.
    Type: Grant
    Filed: February 28, 2017
    Date of Patent: May 21, 2019
    Assignee: HRL Laboratories, LLC
    Inventors: Daniel Yap, Yuri Owechko, Richard M. Kremer, Shankar R. Rao
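The non-blind demixing stage expresses a (possibly partially demixed) sample spectrum as a combination of stored reference spectra. A sketch using ordinary least squares with clipping as a crude stand-in for a proper non-negative solver (the reference names and line shapes are made up):

```python
import numpy as np

def nonblind_demix(sample, reference_library):
    """Estimate component abundances of `sample` against a library of
    reference spectra (dict: name -> spectrum on the same grid).
    Unconstrained least squares with clipping stands in here for a
    true non-negative least-squares solver."""
    R = np.column_stack(list(reference_library.values()))
    coeffs, *_ = np.linalg.lstsq(R, sample, rcond=None)
    coeffs = np.clip(coeffs, 0, None)     # abundances cannot be negative
    return dict(zip(reference_library.keys(), coeffs))

x = np.linspace(0, 10, 300)
refs = {"chem_A": np.exp(-(x - 2) ** 2), "chem_B": np.exp(-(x - 6) ** 2)}
sample = 0.7 * refs["chem_A"] + 0.3 * refs["chem_B"]
print(nonblind_demix(sample, refs))   # abundances ≈ chem_A: 0.7, chem_B: 0.3
```

The classifier stage would then threshold or compare these abundances against the reference library to name the chemical components present.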
  • Patent number: 10268914
    Abstract: Described is a blind sensing system for hyperspectral surveillance. During operation, hyperspectral data is captured using a hyperspectral camera mounted on a mobile platform. The system then forms a signal mixture of a plurality of multi-dimensional signals; these signals are the captured hyperspectral data of a wide area having a background and an object. The plurality of multi-dimensional signals are then demixed using blind source separation, resulting in separated spectra. Finally, the system detects and recognizes a spectral signature of the object in the separated spectra.
    Type: Grant
    Filed: January 9, 2017
    Date of Patent: April 23, 2019
    Assignee: HRL Laboratories, LLC
    Inventor: Yuri Owechko
  • Patent number: 10234377
    Abstract: Described is a system for remote analysis of spectral data. A set of measured spectral mixtures are separated using a blind demixing process, resulting in demixed outputs. A demixed output is selected for further processing, and a spectral library is selected from a set of spectral libraries that is specialized for the selected demixed output. Individual components in the selected demixed output are classified via a non-blind demixing process using the selected spectral library. Trace chemical residues are detected in the set of measured spectral mixtures.
    Type: Grant
    Filed: September 29, 2016
    Date of Patent: March 19, 2019
    Assignee: HRL Laboratories, LLC
    Inventors: Yuri Owechko, Shankar R. Rao
  • Publication number: 20190080210
    Abstract: Described is a system for sensor data fusion and reconstruction. The system extracts slices from a tensor having multiple tensor modes. Each tensor mode represents a different sensor data stream of incomplete sensor signals. The tensor slices are processed into demixed outputs. The demixed outputs are converted back into tensor slices, and the tensor slices are decomposed into mode factors using matrix decomposition. Mode factors are determined for all of the tensor modes, and the mode factors are assigned to tensor factors by matching mode factors common to two or more demixings. Tensor weight factors are determined and used for fusing the sensor data streams for sensor data reconstruction. Based on the sensor data reconstruction, hidden sensor data is extracted.
    Type: Application
    Filed: July 13, 2018
    Publication date: March 14, 2019
    Inventor: Yuri Owechko
  • Patent number: 10198790
    Abstract: Described in this disclosure is a space-variant Multi-domain Foveated Compressive Sensing (MFCS) system for adaptive imaging with variable resolution in spatial, polarization, and spectral domains simultaneously and with very low latency between multiple adaptable regions of interest (ROIs) across the field of view (FOV). The MFCS system combines space-variant foveated compressive sensing (FCS) imaging covered by a previous disclosure with a unique dual-path high efficiency optical architecture for parallel multi-domain compressive sensing (CS) processing. A single programmable Digital Micromirror Device (DMD) micro-mirror array is used at the input aperture to adaptively define and vary the resolution of multiple variable-sized ROIs across the FOV, encode the light for CS reconstruction, and adaptively divide the input light among multiple optical paths using complementary measurement codes, which can then be reconstructed as desired.
    Type: Grant
    Filed: July 18, 2016
    Date of Patent: February 5, 2019
    Assignee: HRL Laboratories, LLC
    Inventors: Yuri Owechko, Daniel Yap, Oleg M. Efimov
  • Publication number: 20190035150
    Abstract: A method for generating a resolution adaptive mesh for 3-D metrology of an object includes receiving point cloud data from a plurality of sensors. The point cloud data from each sensor defines a point cloud that represents the object. Each point cloud includes a multiplicity of points and each point includes at least location information for the point on the object. The method also includes determining a resolution of each sensor in each of three orthogonal dimensions based on a position of each sensor relative to the object and physical properties of each sensor. The method further includes generating a surface representation of the object from the point clouds using the resolutions of each sensor. The surface representation of the object includes a resolution adaptive mesh corresponding to the object for 3-D metrology of the object.
    Type: Application
    Filed: July 28, 2017
    Publication date: January 31, 2019
    Inventor: Yuri Owechko
  • Publication number: 20190035148
    Abstract: A method for generating a resolution adaptive mesh for 3-D metrology of an object includes receiving point cloud data from a plurality of sensors. The point cloud data from each sensor defines a point cloud that represents the object. Each point cloud includes a multiplicity of points and each point includes at least location information for the point on the object. The method also includes determining a resolution of each sensor in each orthogonal dimension based on a position of each sensor relative to the object. The method additionally includes generating a surface representation of the object from the point clouds using the resolutions of each sensor. The surface representation of the object includes a resolution adaptive mesh corresponding to the object for metrology of the object. Generating the surface representation includes fitting a mesh to the point clouds using an intermediate implicit representation of each point cloud.
    Type: Application
    Filed: July 28, 2017
    Publication date: January 31, 2019
    Inventor: Yuri Owechko
  • Publication number: 20190025848
    Abstract: Described is a system for object recognition. The system generates a training image set of object images from multiple image classes. Using a training image set and annotated semantic attributes, a model is trained that maps visual features from known images to the annotated semantic attributes using joint sparse representations with respect to dictionaries of visual features and semantic attributes. The trained model is used for mapping visual features of an unseen input image to its semantic attributes. The unseen input image is classified as belonging to an image class, and a device is controlled based on the classification of the unseen input image.
    Type: Application
    Filed: July 12, 2018
    Publication date: January 24, 2019
    Inventors: Soheil Kolouri, Mohammad Rostami, Kyungnam Kim, Yuri Owechko
  • Patent number: 10176557
    Abstract: Described herein is a method for enhancing image data that includes dividing an image into multiple regions. The method includes measuring variations in the pixel intensity distribution of the image to identify an intensity-changing region with high pixel intensity variation. The method includes calculating a histogram of intensity distribution of pixel intensity values for the intensity-changing region without calculating a histogram of intensity distribution of pixel intensity values for each region of the multiple regions. The method also includes determining a transformation function based on the intensity distribution for the intensity-changing region. The method includes applying the transformation function to modify an intensity for each pixel in the image to produce an enhanced image in real time. The method also includes detecting in the enhanced image a horizon for providing to an operator of a vehicle an indication of the horizon in the image on a display in the vehicle.
    Type: Grant
    Filed: September 7, 2016
    Date of Patent: January 8, 2019
    Assignee: The Boeing Company
    Inventors: Qin Jiang, Yuri Owechko
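The histogram and transformation steps can be sketched as region-restricted histogram equalization: build the intensity histogram only over the intensity-changing region, derive a CDF-based transformation function, and apply it to every pixel of the image (the horizon-detection step is omitted; the toy image and mask are made up):

```python
import numpy as np

def equalize_region(image, region_mask):
    """Enhance an 8-bit image using a histogram computed only over the
    intensity-changing region selected by `region_mask`."""
    region = image[region_mask]
    hist, _ = np.histogram(region, bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min() + 1e-12)  # normalize to [0, 1]
    lut = np.round(255 * cdf).astype(np.uint8)                 # transformation function
    return lut[image]                                          # applied to every pixel

# Low-contrast 8-bit image; the lower half is the intensity-changing region
rng = np.random.default_rng(0)
img = rng.integers(100, 140, size=(64, 64), dtype=np.uint8)
mask = np.zeros_like(img, dtype=bool); mask[32:] = True
out = equalize_region(img, mask)
print(img.std(), "->", out.std())   # contrast increases
```

Computing the histogram over one region rather than all regions is what makes the approach cheap enough for real-time use.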
  • Patent number: 10176382
    Abstract: Described is a system and method for visual media reasoning. An input image is filtered using a first series of kernels tuned to represent objects of general categories, followed by a second series of sparse coding filter kernels tuned to represent objects of specialized categories, resulting in a set of sparse codes. Object recognition is performed on the set of sparse codes to generate object and semantic labels for the set of sparse codes. Pattern completion is performed on the object and semantic labels to recall relevant meta-data in the input image. Bi-directional feedback is used to fuse the input data with the relevant meta-data. An annotated image with information related to who is in the input image, what is in the input image, when the input image was captured, and where the input image was captured is generated.
    Type: Grant
    Filed: September 14, 2016
    Date of Patent: January 8, 2019
    Assignee: HRL Laboratories, LLC
    Inventors: Yuri Owechko, Shankar R. Rao, Shinko Y. Cheng, Suhas E. Chelian, Rajan Bhattacharyya, Michael D. Howard
  • Patent number: 10176407
    Abstract: Described is a system for library-based spectral demixing. The system simultaneously separates and identifies spectral elements in a set of noisy, cluttered spectral elements using Sparse Representation-based Classification (SRC) by modeling the set of noisy, cluttered spectral elements. The spectral library models each spectral element in the set of noisy, cluttered spectral elements, each spectral element having a corresponding wavenumber measurement. Wavenumber measurements are classified, resulting in salient wavenumber measurements. Target spectral elements representing a target of interest are identified in the set of noisy, cluttered spectral elements using the salient wavenumber measurements.
    Type: Grant
    Filed: October 1, 2016
    Date of Patent: January 8, 2019
    Assignee: HRL Laboratories, LLC
    Inventors: Shankar R. Rao, Yuri Owechko
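Sparse Representation-based Classification can be sketched as greedy sparse coding over a labeled spectral library followed by a per-class residual comparison (orthogonal matching pursuit is used here as the sparse solver; the library and labels are made up):

```python
import numpy as np

def src_classify(y, D, labels, k=3):
    """SRC sketch: greedily select up to `k` dictionary atoms (OMP),
    then assign the class whose atoms alone give the smallest
    reconstruction residual for the measured spectrum `y`."""
    resid, support = y.astype(float), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ resid)))    # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        resid = y - D[:, support] @ coef
    best, best_err = None, np.inf
    for c in set(labels):
        idx = [j for j in support if labels[j] == c]
        if not idx:
            continue
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        err = np.linalg.norm(y - D[:, idx] @ coef)
        if err < best_err:
            best, best_err = c, err
    return best

# Toy spectral library: narrow lines for a target class and a clutter class
x = np.linspace(0, 10, 200)
def line(c): return np.exp(-(x - c) ** 2 / 0.5)
cols = [line(2.0), line(2.2), line(2.4), line(7.0), line(7.3)]
D = np.column_stack([c / np.linalg.norm(c) for c in cols])
labels = ["target"] * 3 + ["clutter"] * 2
y = line(2.1) + 0.01 * np.random.default_rng(0).normal(size=x.size)
print(src_classify(y, D, labels))   # target
```

The selected (salient) atoms play the role of the salient wavenumber measurements in the abstract: only a few library entries are needed to explain the noisy, cluttered measurement.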