Patents by Inventor Yuri Owechko
Yuri Owechko has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11317870
Abstract: Described is a system for health assessment. The system is implemented on a mobile device having at least one of an accelerometer, a geographic location sensor, and a camera. In operation, the system obtains sensor data related to an operator of the mobile device from one of the sensors. A network of networks (NoN) is generated based on the sensor data, the NoN having a plurality of layers with linked nodes. Tuples are thereafter generated. Each tuple contains a node from each layer that optimizes importance, diversity, and coherence. Storylines are created based on the tuples by solving a longest path problem for each tuple. The storylines track multiple symptom progressions of the operator. Finally, a disease prediction of the operator is provided based on the storylines.
Type: Grant
Filed: February 4, 2019
Date of Patent: May 3, 2022
Assignee: HRL Laboratories, LLC
Inventors: Vincent De Sapio, Jaehoon Choe, Iman Mohammadrezazadeh, Kang-Yu Ni, Heiko Hoffmann, Charles E. Martin, Yuri Owechko
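The storyline step in the abstract above amounts to a longest-path search through a layered graph. As a rough illustration only (not the patented method), the sketch below runs dynamic programming over hypothetical symptom layers with made-up node names and link scores:

```python
def longest_storyline(layers, edge_weight):
    """Longest path through a layered DAG: layers is a list of lists of
    node ids; edge_weight(u, v) scores a link between adjacent layers."""
    score = {n: 0.0 for n in layers[0]}   # best path score ending at node
    pred = {}                             # back-pointers for traceback
    for prev, curr in zip(layers, layers[1:]):
        for v in curr:
            s, u = max((score[u] + edge_weight(u, v), u) for u in prev)
            score[v], pred[v] = s, u
    end = max(layers[-1], key=score.get)  # best-scoring final node
    path = [end]
    while path[-1] in pred:               # walk back-pointers to layer 0
        path.append(pred[path[-1]])
    return list(reversed(path)), score[end]

# Hypothetical symptom layers and link scores, purely illustrative.
layers = [["a", "b"], ["c", "d"], ["e"]]
weights = {("a", "c"): 1.0, ("b", "c"): 2.0, ("a", "d"): 0.5,
           ("b", "d"): 1.0, ("c", "e"): 1.0, ("d", "e"): 3.0}
storyline, total = longest_storyline(layers, lambda u, v: weights[(u, v)])
```

Because edges only connect adjacent layers, the graph is acyclic and the longest path is found in one forward pass per layer.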
-
Patent number: 11025865
Abstract: A method is disclosed. The method receives a two dimensional or a three dimensional computer generated representation of an area, receives a plurality of images of the area captured by one or more video cameras, detects a first moving object in the plurality of images, generates a computer representation of the first moving object, correlates the images of the area captured by the one or more video cameras with the two dimensional or three dimensional computer generated representation of the area, and displays the computer representation of the first moving object in the two dimensional or the three dimensional computer generated representation of the area.
Type: Grant
Filed: June 17, 2011
Date of Patent: June 1, 2021
Assignee: HRL Laboratories, LLC
Inventors: Swarup S. Medasani, Yuri Owechko, Kyungnam Kim, Alexander Krayner
-
Publication number: 20210080952
Abstract: A method for obstacle detection and navigation of a vehicle using resolution-adaptive fusion includes performing, by a processor, a resolution-adaptive fusion of at least a first three-dimensional (3D) point cloud and a second 3D point cloud to generate a fused, denoised, and resolution-optimized 3D point cloud that represents an environment associated with the vehicle. The first 3D point cloud is generated by a first-type 3D scanning sensor, and the second 3D point cloud is generated by a second-type 3D scanning sensor. The second-type 3D scanning sensor includes a different resolution in each of a plurality of different measurement dimensions relative to the first-type 3D scanning sensor. The method also includes detecting obstacles and navigating the vehicle using the fused, denoised, and resolution-optimized 3D point cloud.
Type: Application
Filed: September 13, 2019
Publication date: March 18, 2021
Inventor: Yuri Owechko
-
Patent number: 10908616
Abstract: Described is a system for object recognition. The system generates a training image set of object images from multiple image classes. Using the training image set and annotated semantic attributes, a model is trained that maps visual features from known images to the annotated semantic attributes using joint sparse representations with respect to dictionaries of visual features and semantic attributes. The trained model is used for mapping visual features of an unseen input image to its semantic attributes. The unseen input image is classified as belonging to an image class, and a device is controlled based on the classification of the unseen input image.
Type: Grant
Filed: July 12, 2018
Date of Patent: February 2, 2021
Assignee: HRL Laboratories, LLC
Inventors: Soheil Kolouri, Mohammad Rostami, Kyungnam Kim, Yuri Owechko
-
Patent number: 10885928
Abstract: A method for increasing accuracy and reducing computational requirements for blind source separation of mixtures of signals in multi-path environments includes receiving a plurality of channel inputs, each channel input comprising a mixture of signals from a plurality of sources, performing a short time Fourier transform on each channel input of the plurality of channel inputs, wherein a respective output of a respective short time Fourier transform on a respective channel is a respective time-frequency distribution for the respective channel, vectorizing each respective time-frequency distribution into a respective mixed frequency and time vector, combining each respective mixed frequency and time vector into a mixed frequency and time matrix, and performing blind source separation on the mixed frequency and time matrix to separate the mixture of signals from the plurality of sources into a plurality of signal source channels, each respective signal source channel comprising signals from a respective source.
Type: Grant
Filed: September 11, 2018
Date of Patent: January 5, 2021
Assignee: HRL Laboratories, LLC
Inventor: Yuri Owechko
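The data-marshaling steps named in the abstract (STFT per channel, vectorize, stack into a mixed frequency-and-time matrix) can be sketched with NumPy. This is a minimal illustration under assumed window sizes and synthetic tone mixtures; the blind source separation algorithm itself is not shown:

```python
import numpy as np

def stft_magnitude(x, win=64, hop=32):
    """Short-time Fourier transform magnitude: returns a
    (time, frequency) distribution for one channel."""
    frames = [x[i:i + win] * np.hanning(win)
              for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1))

def mixtures_to_matrix(channels, win=64, hop=32):
    """Vectorize each channel's time-frequency distribution and stack
    the vectors into the mixed frequency-and-time matrix fed to BSS."""
    return np.vstack([stft_magnitude(c, win, hop).ravel() for c in channels])

# Two illustrative mixtures of two tones.
t = np.arange(2048) / 2048.0
s1, s2 = np.sin(2 * np.pi * 40 * t), np.sin(2 * np.pi * 90 * t)
X = mixtures_to_matrix([0.7 * s1 + 0.3 * s2, 0.4 * s1 + 0.6 * s2])
# A BSS algorithm (e.g., FastICA) would now factor X into per-source rows.
```

With a 64-sample window and 32-sample hop over 2048 samples, each channel yields 63 frames of 33 real-FFT bins, so X has one row of 2079 values per channel.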
-
Patent number: 10872425
Abstract: A method includes receiving image data at a first tracking system. The image data represents a region in an image of a sequence of images. The method also includes generating a first tracking fingerprint based on the image data. The method further includes comparing the first tracking fingerprint and a second tracking fingerprint. The method also includes providing an output from the first tracking system to a second tracking system based on a result of the comparison of the first tracking fingerprint and the second tracking fingerprint. The output includes an instruction associated with an object model stored at the second tracking system.
Type: Grant
Filed: March 20, 2018
Date of Patent: December 22, 2020
Assignee: THE BOEING COMPANY
Inventors: Kyungnam Kim, Changsoo Jeong, Terrell N. Mundhenk, Yuri Owechko
-
Publication number: 20200348997
Abstract: Described is a system for detection of network activities using transitive tensor analysis. The system divides a tensor into multiple subtensors, where the tensor represents streaming communications data on a communications network. Each subtensor is decomposed, separately and independently, into subtensor mode factors. Using transitive mode factor matching, orderings of the subtensor mode factors are determined. A set of subtensor factor coefficients is determined for the subtensor mode factors, the subtensor factor coefficients are used to determine the relative weighting of the subtensor mode factors, and activity patterns represented by the subtensor mode factors are detected. Based on the detection, an alert is generated, indicating an anomaly in the communications network and a time of occurrence.
Type: Application
Filed: July 22, 2020
Publication date: November 5, 2020
Inventor: Yuri Owechko
-
Patent number: 10789682
Abstract: Described herein is a method of enhancing an image that includes determining a level of environmental artifacts at a plurality of positions on an image frame of image data. The method also includes adjusting local area processing of the image frame, to generate an adjusted image frame of image data, based on the level of environmental artifacts at each position of the plurality of positions. The method includes displaying the adjusted image frame.
Type: Grant
Filed: June 16, 2017
Date of Patent: September 29, 2020
Assignee: The Boeing Company
Inventors: Yuri Owechko, Qin Jiang
-
Patent number: 10785903
Abstract: Described is a system for determining crop residue fraction. The system includes a color video camera mounted on a mobile platform for generating a two-dimensional (2D) color video image of a scene in front of or behind the mobile platform. In operation, the system separates the 2D color video image into three separate one-dimensional (1D) mixture signals for red, green, and blue channels. The three 1D mixture signals are then separated into pure 1D component signals using blind source separation. The 1D component signals are thresholded and converted to 2D binary, pixel-level abundance maps, which can then be integrated to allow the system to determine a total component fractional abundance of crop in the scene. Finally, the system can control a mobile platform, such as a harvesting machine, based on the total component fractional abundance of crop in the scene.
Type: Grant
Filed: March 11, 2019
Date of Patent: September 29, 2020
Assignee: HRL Laboratories, LLC
Inventor: Yuri Owechko
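The final thresholding step above (1D component signal to 2D binary abundance map to fractional abundance) is straightforward to sketch. The component values and threshold below are invented for illustration, and the blind source separation stage is omitted:

```python
import numpy as np

def abundance_map(component_1d, shape, thresh=0.5):
    """Threshold a 1D component signal into a 2D binary, pixel-level
    abundance map and return the map with its fractional abundance."""
    binary = (np.asarray(component_1d).reshape(shape) > thresh).astype(np.uint8)
    return binary, float(binary.mean())

# Illustrative 4x4 "component" with five values above the threshold.
component = np.array([0.9, 0.1, 0.8, 0.2,
                      0.1, 0.7, 0.1, 0.1,
                      0.6, 0.1, 0.1, 0.1,
                      0.1, 0.1, 0.1, 0.9])
crop_map, crop_fraction = abundance_map(component, (4, 4))
```

The fractional abundance is simply the mean of the binary map, i.e., the fraction of pixels attributed to that component.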
-
Patent number: 10755141
Abstract: Described is a system for controlling a device based on streaming data analysis using blind source separation. The system updates a set of parallel processing pipelines for two-dimensional (2D) tensor slices of streaming tensor data in different orientations, where the streaming tensor data includes incomplete sensor data. In updating the parallel processing pipelines, the system replaces a first tensor slice with a new tensor slice resulting in an updated set of tensor slices in different orientations. At each time step, a cycle of demixing, transitive matching, and tensor factor weight calculations is performed on the updated set of tensor slices. The tensor factor weight calculations are used for sensor data reconstruction, and based on the sensor data reconstruction, hidden sensor data is extracted. Upon recognition of an object in the extracted hidden sensor data, the device is caused to perform a maneuver to avoid a collision with the object.
Type: Grant
Filed: March 11, 2019
Date of Patent: August 25, 2020
Assignee: HRL Laboratories, LLC
Inventor: Yuri Owechko
-
Patent number: 10732277
Abstract: A method for automatic target recognition in synthetic aperture radar (SAR) data, comprising: capturing a real SAR image of a potential target at a real aspect angle and a real grazing angle; generating a synthetic SAR image of the potential target by inputting, from a potential target database, at least one three-dimensional potential target model at the real aspect angle and the real grazing angle into a SAR regression renderer; and classifying the potential target with a target label by comparing at least a portion of the synthetic SAR image with a corresponding portion of the real SAR image using a processor.
Type: Grant
Filed: April 29, 2016
Date of Patent: August 4, 2020
Assignee: THE BOEING COMPANY
Inventors: Dmitriy V. Korchev, Yuri Owechko, Mark A. Curry
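The classification step above compares a rendered synthetic SAR image against the real one. The patent does not specify the comparison metric; a common, simple stand-in is normalized cross-correlation between patches, sketched here with invented patch data and labels:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-size image patches."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def label_target(real_patch, rendered_patches, labels, min_score=0.5):
    """Assign the label of the best-matching synthetic rendering, or
    None when no rendering matches the real SAR patch well enough."""
    scores = [ncc(real_patch, r) for r in rendered_patches]
    k = int(np.argmax(scores))
    return labels[k] if scores[k] >= min_score else None

# Illustrative patches: the "real" image matches the first rendering.
real = np.arange(16, dtype=float).reshape(4, 4)
renders = [real + 0.01, -real]
label = label_target(real, renders, ["model-A", "model-B"])
```

Because NCC normalizes out mean and contrast, a rendering that differs from the real patch only by a brightness offset still scores near 1.0.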
-
Patent number: 10726311
Abstract: Described is a system for sensor data fusion and reconstruction. The system extracts slices from a tensor having multiple tensor modes. Each tensor mode represents a different sensor data stream of incomplete sensor signals. The tensor slices are processed into demixed outputs. The demixed outputs are converted back into tensor slices, and the tensor slices are decomposed into mode factors using matrix decomposition. Mode factors are determined for all of the tensor modes, and the mode factors are assigned to tensor factors by matching mode factors common to two or more demixings. Tensor weight factors are determined and used for fusing the sensor data streams for sensor data reconstruction. Based on the sensor data reconstruction, hidden sensor data is extracted.
Type: Grant
Filed: July 13, 2018
Date of Patent: July 28, 2020
Assignee: HRL Laboratories, LLC
Inventor: Yuri Owechko
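The "mode factors via matrix decomposition" step can be illustrated with the standard trick of unfolding a tensor along each mode and taking leading singular vectors. This is a generic sketch with an invented rank-1 test tensor, not the patent's specific procedure:

```python
import numpy as np

def mode_factors(tensor, rank=1):
    """Leading left singular vectors of each mode unfolding: a simple
    stand-in for per-mode factor estimation by matrix decomposition."""
    factors = []
    for mode in range(tensor.ndim):
        # Unfold: move the chosen mode to the front, flatten the rest.
        unfolding = np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)
        u, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(u[:, :rank])
    return factors

# Rank-1 test tensor built from known mode vectors a, b, c.
a = np.array([1.0, 2.0, 3.0])
b = np.array([1.0, 1.0])
c = np.array([2.0, 1.0, 1.0, 1.0])
T = np.einsum("i,j,k->ijk", a, b, c)
F = mode_factors(T)
```

For an exactly rank-1 tensor, each mode's leading singular vector recovers the corresponding generating vector up to sign and scale.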
-
Patent number: 10706617
Abstract: Examples include methods, systems, and articles for localizing a vehicle relative to an imaged surface configuration. Localizing the vehicle may include selecting pairs of features in an image acquired from a sensor supported by the vehicle having corresponding identified pairs of features in a reference representation of the surface configuration. A three-dimensional geoarc may be generated based on an angle of view of the sensor and the selected feature pair in the reference representation. In some examples, a selected portion of the geoarc disposed a known distance of the vehicle away from the portion of the physical surface configuration may be determined. Locations where the selected portions of geoarcs for selected feature pairs overlap may be identified. In some examples, the reference representation may be defined in a three-dimensional space of volume elements (voxels), and voxels that are included in the highest number of geoarcs may be determined.
Type: Grant
Filed: July 2, 2018
Date of Patent: July 7, 2020
Assignee: The Boeing Company
Inventor: Yuri Owechko
-
Patent number: 10677713
Abstract: A gas-phase chemical analyzer has at least one gas chromatography column in gas-flow communication with at least one gas carrying tube of an optical absorption cell, a laser for illuminating molecules in a gas mixture flowing through the at least one gas carrying tube of the optical absorption cell, and a photodetector or photodetecting apparatus for measuring absorption spectra of the gas mixture illuminated by the laser. A first module is provided for statically identifying particular molecules in the gas mixture from other molecules in said gas mixture and a second module is provided for comparing at least selected ones of the particular molecules in the gas mixture with a reference library of absorption spectra of previously identified molecules and for determining the likelihood of a correct identification of the particular molecules in the gas mixture and the previously identified molecules in the reference library.
Type: Grant
Filed: August 4, 2017
Date of Patent: June 9, 2020
Assignee: HRL Laboratories, LLC
Inventors: Daniel Yap, Yuri Owechko
-
Patent number: 10592788
Abstract: Described is a system for recognition of unseen and untrained patterns. A graph is generated based on visual features from input data, the input data including labeled instances and unseen instances. Semantic representations of the input data are assigned as graph signals based on the visual features. The semantic representations are aligned with visual representations of the input data using a regularization method applied directly in a spectral graph wavelets (SGW) domain. The semantic representations are then used to generate labels for the unseen instances. The unseen instances may represent unknown conditions for an autonomous vehicle.
Type: Grant
Filed: December 19, 2017
Date of Patent: March 17, 2020
Assignee: HRL Laboratories, LLC
Inventors: Shay Deutsch, Kyungnam Kim, Yuri Owechko
-
Patent number: 10549853
Abstract: Described herein is a method of displaying an object that includes detecting a first location of the object on a first image frame of image data. The method also includes determining a second location of the object on a second image frame of the image data on which the object is not detectable due to an obstruction. The method includes displaying a representation of the object on the second image frame.
Type: Grant
Filed: May 26, 2017
Date of Patent: February 4, 2020
Assignee: The Boeing Company
Inventors: Qin Jiang, Yuri Owechko, Kyungnam Kim
-
Patent number: 10528818
Abstract: Described is a video scene analysis system. The system includes a salience module that receives a video stream having one or more pairs of frames (each frame having a background and a foreground) and detects salient regions in the video stream to generate salient motion estimates. The salient regions are regions that move differently than dominant motion in the pairs of video frames. A scene modeling module generates a sparse foreground model based on salient motion estimates from a plurality of consecutive frames. A foreground refinement module then generates a Task-Aware Foreground by refining the sparse foreground model based on task knowledge. The Task-Aware Foreground can then be used for further processing such as object detection, tracking, or recognition.
Type: Grant
Filed: April 29, 2016
Date of Patent: January 7, 2020
Assignee: HRL Laboratories, LLC
Inventors: Shankar R. Rao, Kang-Yu Ni, Yuri Owechko
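"Regions that move differently than dominant motion" can be illustrated crudely: subtract the dominant global change between two frames (here approximated by the median frame difference) and threshold the residual. The frames and threshold below are invented, and this is far simpler than the patented pipeline:

```python
import numpy as np

def salient_motion_mask(frame_a, frame_b, thresh=2.0):
    """Flag pixels whose change differs from the dominant motion.
    Crude stand-in: subtract the median frame difference (the
    dominant global change) and threshold the residual."""
    diff = frame_b.astype(float) - frame_a.astype(float)
    residual = np.abs(diff - np.median(diff))
    return residual > thresh

# Illustrative pair: a global brightness shift plus one moving pixel.
a = np.zeros((8, 8))
b = a + 1.0       # dominant change affects every pixel equally
b[2, 3] += 10.0   # one pixel changes differently (salient)
mask = salient_motion_mask(a, b)
```

The global shift cancels out in the residual, so only the one pixel that changed differently survives the threshold.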
-
Publication number: 20190313570
Abstract: Described is a system for determining crop residue fraction. The system includes a color video camera mounted on a mobile platform for generating a two-dimensional (2D) color video image of a scene in front of or behind the mobile platform. In operation, the system separates the 2D color video image into three separate one-dimensional (1D) mixture signals for red, green, and blue channels. The three 1D mixture signals are then separated into pure 1D component signals using blind source separation. The 1D component signals are thresholded and converted to 2D binary, pixel-level abundance maps, which can then be integrated to allow the system to determine a total component fractional abundance of crop in the scene. Finally, the system can control a mobile platform, such as a harvesting machine, based on the total component fractional abundance of crop in the scene.
Type: Application
Filed: March 11, 2019
Publication date: October 17, 2019
Inventor: Yuri Owechko
-
Patent number: 10438408
Abstract: A method for generating a resolution adaptive mesh for 3-D metrology of an object includes receiving point cloud data from a plurality of sensors. The point cloud data from each sensor defines a point cloud that represents the object. Each point cloud includes a multiplicity of points and each point includes at least location information for the point on the object. The method also includes determining a resolution of each sensor in each of three orthogonal dimensions based on a position of each sensor relative to the object and physical properties of each sensor. The method further includes generating a surface representation of the object from the point clouds using the resolutions of each sensor. The surface representation of the object includes a resolution adaptive mesh corresponding to the object for 3-D metrology of the object.
Type: Grant
Filed: July 28, 2017
Date of Patent: October 8, 2019
Assignee: The Boeing Company
Inventor: Yuri Owechko
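Using each sensor's per-axis resolution to weight its contribution can be illustrated with inverse-variance fusion of corresponding points. This is a generic stand-in for resolution-adaptive weighting, with invented sensor resolutions, not the patent's mesh-generation procedure:

```python
import numpy as np

def fuse_points(points_a, sigma_a, points_b, sigma_b):
    """Inverse-variance fusion of corresponding points from two sensors.
    sigma_*: per-axis resolution (std. dev.) of each sensor; the finer
    sensor on each axis dominates the fused coordinate on that axis."""
    wa = 1.0 / np.square(np.asarray(sigma_a, dtype=float))
    wb = 1.0 / np.square(np.asarray(sigma_b, dtype=float))
    return (np.asarray(points_a) * wa + np.asarray(points_b) * wb) / (wa + wb)

# Sensor A is finer in y, sensor B is finer in z; x is a tie.
fused = fuse_points([[0.0, 0.0, 0.0]], [1.0, 1.0, 2.0],
                    [[1.0, 1.0, 1.0]], [1.0, 2.0, 1.0])
```

On the tied x axis the result is the midpoint; on y and z the fused point is pulled toward whichever sensor has the smaller sigma.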
-
Patent number: 10402675
Abstract: Examples include methods, systems, and articles for localizing a vehicle relative to an imaged surface configuration. Localizing the vehicle may include selecting pairs of features in an image acquired from a sensor supported by the vehicle having corresponding identified pairs of features in a reference representation of the surface configuration. A three-dimensional geoarc may be generated based on an angle of view of the sensor and the selected feature pair in the reference representation. In some examples, a selected portion of the geoarc disposed a known distance of the vehicle away from the portion of the physical surface configuration may be determined. Locations where the selected portions of geoarcs for selected feature pairs overlap may be identified. In some examples, the reference representation may be defined in a three-dimensional space of volume elements (voxels), and voxels that are included in the highest number of geoarcs may be determined.
Type: Grant
Filed: August 30, 2016
Date of Patent: September 3, 2019
Assignee: The Boeing Company
Inventor: Yuri Owechko
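The voxel-voting idea in this abstract (and in related patent 10706617 above) can be sketched by treating each geoarc as the set of voxels from which a feature pair subtends the measured angle, then counting how many such constraints each voxel satisfies. The features, voxels, and tolerance below are invented for illustration:

```python
import numpy as np

def subtended_angle(p, f1, f2):
    """Angle at point p subtended by landmark features f1 and f2."""
    v1, v2 = f1 - p, f2 - p
    c = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.arccos(np.clip(c, -1.0, 1.0))

def geoarc_votes(voxels, feature_pairs, measured_angles, tol=0.05):
    """Count, per voxel, how many geoarc constraints (measured subtended
    angles) it satisfies; the most-voted voxels localize the sensor."""
    votes = np.zeros(len(voxels), dtype=int)
    for (f1, f2), ang in zip(feature_pairs, measured_angles):
        for i, p in enumerate(voxels):
            if abs(subtended_angle(p, f1, f2) - ang) < tol:
                votes[i] += 1
    return votes

# Illustrative setup: angles "measured" from a true position at the origin.
f1, f2, f3 = (np.array([10.0, 0, 0]), np.array([0, 10.0, 0]),
              np.array([0, 0, 10.0]))
true_p = np.array([0.0, 0.0, 0.0])
pairs = [(f1, f2), (f1, f3)]
angles = [subtended_angle(true_p, *pr) for pr in pairs]
voxels = [true_p, np.array([5.0, 5.0, 5.0]), np.array([2.0, 0.0, 0.0])]
votes = geoarc_votes(voxels, pairs, angles)
```

The true position satisfies every angle constraint exactly, so it collects the maximum vote count; a real implementation would intersect geometric geoarc surfaces over a dense voxel grid rather than loop over a handful of candidates.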