Patents by Inventor Garbis Salgian
Garbis Salgian has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Publication number: 20230419410
  Abstract: Systems and methods for providing remote farm damage assessment are provided herein. In some embodiments, a system and method for providing remote farm damage assessment may include determining a set of damage assessment locales for damage assessment; incorporating the set of damage assessment locales into a workflow; providing the workflow to a user device; receiving a first set of damage assessment images from the user device based on the workflow provided, wherein each of the first set of damage assessment images includes geolocation information and camera information; determining a damage assessment based on the first set of damage assessment images using a damage assessment machine learning model; and outputting a damage assessment indication including one or more of whether there is damage, a confidence level of assessing the damage, or a confidence level associated with the level of damage.
  Type: Application
  Filed: December 15, 2021
  Publication date: December 28, 2023
  Inventors: Supun Samarasekera, Rakesh Kumar, Garbis Salgian, Qiao Wang, Glenn A. Murray, Avijit Basu, Alison Polkinhorne
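The claimed workflow (geo-tagged images in, damage indication plus confidence out) can be pictured with a minimal sketch. This is not the patented system: the `AssessmentImage` fields and the `assess_damage` aggregation are hypothetical stand-ins, with the machine learning model reduced to a precomputed per-image score.

```python
from dataclasses import dataclass

@dataclass
class AssessmentImage:
    # Geolocation and camera info accompany each image in the claim;
    # damage_score stands in for a real ML model's per-image output.
    lat: float
    lon: float
    camera_id: str
    damage_score: float

def assess_damage(images, threshold=0.5):
    """Aggregate per-image scores into a damage indication with a
    confidence level, mirroring the claimed output step."""
    if not images:
        return {"damaged": False, "confidence": 0.0}
    mean = sum(im.damage_score for im in images) / len(images)
    return {"damaged": mean >= threshold, "confidence": round(mean, 2)}
```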
- Patent number: 11270426
  Abstract: Computer aided inspection systems (CAIS) and methods for inspection, error analysis and comparison of structures are presented herein. In some embodiments, a CAIS may include a SLAM system configured to determine real-world global localization information of a user in relation to a structure being inspected using information obtained from a first sensor package; a model alignment system configured to: use the determined global localization information to index into a corresponding location in a 3D computer model of the structure being inspected, and align observations and/or information obtained from the first sensor package to the extracted local area of the 3D computer model of the structure; a second sensor package configured to obtain fine-level measurements of the structure; and a model recognition system configured to compare the fine-level measurements and information obtained about the structure from the second sensor package to the 3D computer model.
  Type: Grant
  Filed: May 14, 2019
  Date of Patent: March 8, 2022
  Assignee: SRI International
  Inventors: Garbis Salgian, Bogdan C. Matei, Taragay Oskiper, Mikhail Sizintsev, Rakesh Kumar, Supun Samarasekera
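The "index into a corresponding location in a 3D computer model" step can be sketched as a nearest-neighbor lookup keyed by the SLAM-derived global position. The function name and the point-list model representation are illustrative assumptions, not the patent's actual data structures.

```python
def index_into_model(global_pose, model_points):
    """Pick the model location closest to the SLAM-derived global
    position (nearest neighbor by squared Euclidean distance)."""
    return min(model_points,
               key=lambda p: sum((a - b) ** 2 for a, b in zip(global_pose, p)))
```

In a real system the model would be spatially indexed (e.g. an octree) rather than scanned linearly.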
- Patent number: 10810734
  Abstract: Embodiments of the present invention generally relate to computer aided rebar measurement and inspection systems. In some embodiments, the system may include a data acquisition system configured to obtain fine-level rebar measurements, images or videos of rebar structures, a 3D point cloud model generation system configured to generate a 3D point cloud model representation of the rebar structure from information acquired by the data acquisition system, a rebar detection system configured to detect rebar within the 3D point cloud model generated or the rebar images or videos of the rebar structures, a rebar measurement system to measure features of the rebar and rebar structures detected by the rebar detection system, and a discrepancy detection system configured to compare the measured features of the rebar structures detected by the rebar detection system with a 3D Building Information Model (BIM) of the rebar structures, and determine any discrepancies between them.
  Type: Grant
  Filed: June 28, 2019
  Date of Patent: October 20, 2020
  Assignee: SRI International
  Inventors: Garbis Salgian, Bogdan C. Matei, Matthieu Henri Lecce, Abhinav Rajvanshi, Supun Samarasekera, Rakesh Kumar, Tamaki Horii, Yuichi Ikeda, Hidefumi Takenaka
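A toy version of the discrepancy-detection idea: measure spacings between detected rebar positions (simplified here to 1-D coordinates along one direction) and flag intervals that deviate from the BIM-specified spacing. All names and the 1-D reduction are assumptions for illustration, not the patented pipeline.

```python
def rebar_spacings(positions):
    """Spacings between consecutive detected rebar positions (1-D)."""
    pos = sorted(positions)
    return [b - a for a, b in zip(pos, pos[1:])]

def spacing_discrepancies(positions, spec_spacing, tol=0.01):
    """Indices of intervals whose measured spacing deviates from the
    BIM-specified spacing by more than the tolerance (in meters)."""
    return [i for i, s in enumerate(rebar_spacings(positions))
            if abs(s - spec_spacing) > tol]
```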
- Publication number: 20200005447
  Abstract: Embodiments of the present invention generally relate to computer aided rebar measurement and inspection systems. In some embodiments, the system may include a data acquisition system configured to obtain fine-level rebar measurements, images or videos of rebar structures, a 3D point cloud model generation system configured to generate a 3D point cloud model representation of the rebar structure from information acquired by the data acquisition system, a rebar detection system configured to detect rebar within the 3D point cloud model generated or the rebar images or videos of the rebar structures, a rebar measurement system to measure features of the rebar and rebar structures detected by the rebar detection system, and a discrepancy detection system configured to compare the measured features of the rebar structures detected by the rebar detection system with a 3D Building Information Model (BIM) of the rebar structures, and determine any discrepancies between them.
  Type: Application
  Filed: June 28, 2019
  Publication date: January 2, 2020
  Inventors: Garbis Salgian, Bogdan C. Matei, Matthieu Henri Lecce, Abhinav Rajvanshi, Supun Samarasekera, Rakesh Kumar, Tamaki Horii, Yuichi Ikeda, Hidefumi Takenaka
- Publication number: 20190347783
  Abstract: Computer aided inspection systems (CAIS) and methods for inspection, error analysis and comparison of structures are presented herein. In some embodiments, a CAIS may include a SLAM system configured to determine real-world global localization information of a user in relation to a structure being inspected using information obtained from a first sensor package; a model alignment system configured to: use the determined global localization information to index into a corresponding location in a 3D computer model of the structure being inspected, and align observations and/or information obtained from the first sensor package to the extracted local area of the 3D computer model of the structure; a second sensor package configured to obtain fine-level measurements of the structure; and a model recognition system configured to compare the fine-level measurements and information obtained about the structure from the second sensor package to the 3D computer model.
  Type: Application
  Filed: May 14, 2019
  Publication date: November 14, 2019
  Inventors: Garbis Salgian, Bogdan C. Matei, Taragay Oskiper, Mikhail Sizintsev, Rakesh Kumar, Supun Samarasekera
- Patent number: 8744122
  Abstract: The present invention relates to a system and method for detecting one or more targets belonging to a first class (e.g., moving and/or stationary people) from a moving platform in a 3D-rich environment. The framework described here is implemented using a number of monocular or stereo cameras distributed around the vehicle to provide 360-degree coverage. Furthermore, the framework utilizes numerous filters to reduce the number of false positive identifications of the targets.
  Type: Grant
  Filed: October 22, 2009
  Date of Patent: June 3, 2014
  Assignee: SRI International
  Inventors: Garbis Salgian, John Benjamin Southall, Sang-Hack Jung, Vlad Branzoi, Jiangjian Xiao, Feng Han, Supun Samarasekera, Rakesh Kumar, Jayan Eledath
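The "numerous filters to reduce the number of false positive identifications" can be pictured as a predicate cascade over candidate detections; this generic sketch assumes dictionary-shaped detections and made-up predicates, and is not the patent's actual filter set.

```python
def apply_filter_cascade(detections, filters):
    """A detection survives only if every filter predicate passes,
    thinning out false positives stage by stage."""
    return [d for d in detections if all(f(d) for f in filters)]
```

Usage with hypothetical score and size filters:

```python
fs = [lambda d: d["score"] > 0.5,   # confidence gate
      lambda d: d["height"] < 2.5]  # implausibly tall -> not a person
```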
- Patent number: 8428344
  Abstract: The present invention provides an improved method for estimating the range of objects in images taken from various distances, comprising receiving a set of images of a scene having multiple objects from at least one camera in motion. Due to the motion of the camera, each of the images is obtained at a different camera location. Then an object visible in multiple images is selected. Data related to approximate camera positions and orientations, together with the images of the visible object, are used to estimate the location of the object relative to a reference coordinate system. Based on the computed data, a projected location of the visible object is computed and the orientation angle of the camera for each image is refined. Additionally, pairs of cameras at various locations can obtain dense stereo for regions of the image at various ranges.
  Type: Grant
  Filed: September 23, 2011
  Date of Patent: April 23, 2013
  Assignee: SRI International
  Inventors: John Richard Fields, James Russell Bergen, Garbis Salgian
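In the simplest two-view case, locating an object from known camera positions reduces to intersecting the two viewing rays. A 2-D sketch via Cramer's rule, with all names assumed and none of the patent's refinement loop:

```python
def triangulate_2d(c1, d1, c2, d2):
    """Intersect rays c1 + t*d1 and c2 + s*d2 in the plane;
    returns the 2-D point where they meet."""
    a, b = d1, (-d2[0], -d2[1])
    det = a[0] * b[1] - a[1] * b[0]  # zero when the rays are parallel
    rx, ry = c2[0] - c1[0], c2[1] - c1[1]
    t = (rx * b[1] - ry * b[0]) / det
    return (c1[0] + t * d1[0], c1[1] + t * d1[1])
```

The range to the object is then the distance from a camera center to the returned point; with noisy directions a least-squares variant would replace the exact intersection.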
- Patent number: 8340349
  Abstract: A method for detecting a moving target is disclosed that receives a plurality of images from at least one camera; receives a measurement of scale from one of a measurement device and a second camera; calculates the pose of the at least one camera over time based on the plurality of images and the measurement of scale; selects a reference image and an inspection image from the plurality of images of the at least one camera; detects a moving target from the reference image and the inspection image based on the orientation of corresponding portions in the reference image and the inspection image relative to the location of an epipolar direction common to the reference image and the inspection image; and displays any detected moving target on a display. The measurement of scale can be derived from a second camera or, for example, a wheel odometer.
  Type: Grant
  Filed: June 15, 2007
  Date of Patent: December 25, 2012
  Assignee: SRI International
  Inventors: Garbis Salgian, Supun Samarasekera, Jiangjian Xiao, James Russell Bergen, Rakesh Kumar, Feng Han
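The epipolar idea behind this detection: for a static scene point, the image motion between reference and inspection images lies along the epipolar direction, so a correspondence whose motion deviates strongly in angle suggests an independent mover. The threshold and function name below are illustrative assumptions, not the patent's criterion.

```python
import math

def deviates_from_epipolar(flow, epipolar_dir, angle_tol_deg=10.0):
    """True if the image-motion vector's angle to the epipolar
    direction exceeds the tolerance. Either direction along the
    epipolar line counts as consistent, hence the abs()."""
    dot = flow[0] * epipolar_dir[0] + flow[1] * epipolar_dir[1]
    norm = math.hypot(*flow) * math.hypot(*epipolar_dir)
    if norm == 0.0:
        return False  # no motion: nothing to flag
    cos_a = max(-1.0, min(1.0, abs(dot) / norm))
    return math.degrees(math.acos(cos_a)) > angle_tol_deg
```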
- Publication number: 20120082340
  Abstract: The present invention provides an improved method for estimating the range of objects in images taken from various distances, comprising receiving a set of images of a scene having multiple objects from at least one camera in motion. Due to the motion of the camera, each of the images is obtained at a different camera location. Then an object visible in multiple images is selected. Data related to approximate camera positions and orientations, together with the images of the visible object, are used to estimate the location of the object relative to a reference coordinate system. Based on the computed data, a projected location of the visible object is computed and the orientation angle of the camera for each image is refined. Additionally, pairs of cameras at various locations can obtain dense stereo for regions of the image at various ranges.
  Type: Application
  Filed: September 23, 2011
  Publication date: April 5, 2012
  Applicant: SRI International
  Inventors: John Richard Fields, James Russell Bergen, Garbis Salgian
- Patent number: 8059887
  Abstract: The present invention provides an improved system and method for estimating the range of objects in images taken from various distances. The method comprises receiving a set of images of a scene having multiple objects from at least one camera in motion. Due to the motion of the camera, each of the images is obtained at a different camera location. Then an object visible in multiple images is selected. Data related to approximate camera positions and orientations, together with the images of the visible object, are used to estimate the location of the object relative to a reference coordinate system. Based on the computed data, a projected location of the visible object is computed and the orientation angle of the camera for each image is refined. Additionally, pairs of cameras at various locations can then be chosen to obtain dense stereo for regions of the image at various ranges.
  Type: Grant
  Filed: September 25, 2007
  Date of Patent: November 15, 2011
  Assignee: SRI International
  Inventors: John Richard Fields, James Russell Bergen, Garbis Salgian
- Publication number: 20100202657
  Abstract: The present invention relates to a system and method for detecting one or more targets belonging to a first class (e.g., moving and/or stationary people) from a moving platform in a 3D-rich environment. The framework described here is implemented using a number of monocular or stereo cameras distributed around the vehicle to provide 360-degree coverage. Furthermore, the framework utilizes numerous filters to reduce the number of false positive identifications of the targets.
  Type: Application
  Filed: October 22, 2009
  Publication date: August 12, 2010
  Inventors: Garbis Salgian, John Benjamin Southall, Sang-Hack Jung, Vlad Branzoi, Jiangjian Xiao, Feng Han, Supun Samarasekera, Rakesh Kumar, Jayan Eledath
- Publication number: 20080247602
  Abstract: The present invention provides an improved system and method for estimating the range of objects in images taken from various distances. The method comprises receiving a set of images of a scene having multiple objects from at least one camera in motion. Due to the motion of the camera, each of the images is obtained at a different camera location. Then an object visible in multiple images is selected. Data related to approximate camera positions and orientations, together with the images of the visible object, are used to estimate the location of the object relative to a reference coordinate system. Based on the computed data, a projected location of the visible object is computed and the orientation angle of the camera for each image is refined. Additionally, pairs of cameras at various locations can then be chosen to obtain dense stereo for regions of the image at various ranges.
  Type: Application
  Filed: September 25, 2007
  Publication date: October 9, 2008
  Applicant: Sarnoff Corporation
  Inventors: John Richard Fields, James Russell Bergen, Garbis Salgian
- Publication number: 20080089556
  Abstract: A method for detecting a moving target is disclosed that receives a plurality of images from at least one camera; receives a measurement of scale from one of a measurement device and a second camera; calculates the pose of the at least one camera over time based on the plurality of images and the measurement of scale; selects a reference image and an inspection image from the plurality of images of the at least one camera; detects a moving target from the reference image and the inspection image based on the orientation of corresponding portions in the reference image and the inspection image relative to the location of an epipolar direction common to the reference image and the inspection image; and displays any detected moving target on a display. The measurement of scale can be derived from a second camera or, for example, a wheel odometer.
  Type: Application
  Filed: June 15, 2007
  Publication date: April 17, 2008
  Inventors: Garbis Salgian, Supun Samarasekera, Jiangjian Xiao, James Bergen, Rakesh Kumar, Feng Han
- Patent number: 6307959
  Abstract: A system that estimates both the ego-motion of a camera through a scene and the structure of the scene by analyzing a batch of images of the scene obtained by the camera employs a correlation-based, iterative, multi-resolution algorithm. The system defines a global ego-motion constraint to refine estimates of inter-frame camera rotation and translation. It also uses local window-based correlation to refine the current estimate of scene structure. The batch of images is divided into a reference image and a group of inspection images. Each inspection image in the batch of images is aligned to the reference image by a warping transformation. The correlation is determined by analyzing respective Gaussian/Laplacian decompositions of the reference image and warped inspection images. The ego-motion constraint includes both rotation and translation parameters. These parameters are determined by globally correlating surfaces in the respective inspection images to the reference image.
  Type: Grant
  Filed: July 13, 2000
  Date of Patent: October 23, 2001
  Assignee: Sarnoff Corporation
  Inventors: Robert Mandelbaum, Garbis Salgian, Harpreet Singh Sawhney
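The Gaussian decomposition that underlies the multi-resolution correlation starts from an image pyramid. The sketch below uses a 2x2 box average as a crude stand-in for a Gaussian filter (an assumption, not the patent's kernel) and assumes even image dimensions.

```python
def downsample(img):
    """One pyramid level: 2x2 box average (assumes even dimensions)."""
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4.0
             for x in range(0, len(img[0]), 2)]
            for y in range(0, len(img), 2)]

def pyramid(img, levels):
    """Coarse-to-fine stack: level 0 is the input image; each later
    level halves the resolution, as in a Gaussian decomposition."""
    out = [img]
    for _ in range(levels - 1):
        out.append(downsample(out[-1]))
    return out
```

Ego-motion estimates would be refined coarse-to-fine across such levels, with correlation at each resolution.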