Patents by Inventor Sarthak Sahu

Sarthak Sahu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210314484
    Abstract: Systems and methods are disclosed for directed image capture of a subject of interest, such as a home. Directed image capture can produce higher quality images, such as images more centrally located within a display and/or viewfinder of an image capture device; higher quality images have greater value for subsequent uses of captured images, such as information extraction or model reconstruction. Graphical guide(s) facilitate content placement at certain positions, and quality assessments for the content of interest can be calculated, such as the pixel distance of the content of interest to a centroid of the display or viewfinder, or the effect of obscuring objects. Quality assessments can further include instructions for improving the quality of the image capture for the content of interest.
    Type: Application
    Filed: June 20, 2021
    Publication date: October 7, 2021
    Applicant: Hover Inc.
    Inventors: William Castillo, Adam J. Altman, Ioannis Pavlidis, Sarthak Sahu, Manish Upendran
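    The quality assessment described in the abstract above lends itself to a short illustration. The sketch below is a hedged interpretation, not Hover's implementation; names such as assess_frame and the 0.6/0.4 weighting are assumptions. It scores a candidate frame by how centered the subject is and how much of it is obscured, and suggests a corrective instruction when the score is low.

      # Hypothetical sketch: score a frame by subject centering and visibility.
      from dataclasses import dataclass
      import math

      @dataclass
      class Box:
          x: float   # left edge, pixels
          y: float   # top edge, pixels
          w: float
          h: float

          @property
          def centroid(self):
              return (self.x + self.w / 2.0, self.y + self.h / 2.0)

      def assess_frame(frame_w, frame_h, subject: Box, occluded_fraction: float):
          """Return (quality in [0, 1], instruction or None)."""
          fx, fy = frame_w / 2.0, frame_h / 2.0
          sx, sy = subject.centroid
          # Pixel distance from the subject centroid to the display centroid,
          # normalized by the half-diagonal so the term is resolution independent.
          dist = math.hypot(sx - fx, sy - fy)
          centering = 1.0 - min(dist / math.hypot(fx, fy), 1.0)
          visibility = 1.0 - max(0.0, min(occluded_fraction, 1.0))
          quality = 0.6 * centering + 0.4 * visibility  # illustrative weights

          instruction = None
          if quality < 0.7:
              if occluded_fraction > 0.3:
                  instruction = "move to a viewpoint with fewer obstructions"
              elif abs(sx - fx) >= abs(sy - fy):
                  instruction = "pan left" if sx < fx else "pan right"
              else:
                  instruction = "tilt up" if sy < fy else "tilt down"
          return quality, instruction

      if __name__ == "__main__":
          quality, hint = assess_frame(1920, 1080, Box(1300, 200, 500, 400), 0.1)
          print(round(quality, 2), hint)   # e.g. 0.63 pan right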
  • Patent number: 11107227
    Abstract: Distance measurements are received from one or more distance measurement sensors, which may be coupled to a vehicle. A three-dimensional (3D) point cloud is generated based on the distance measurements. In some cases, 3D point clouds corresponding to distance measurements from different distance measurement sensors may be combined into one 3D point cloud. A voxelized model is generated based on the 3D point cloud. An object may be detected within the voxelized model, and in some cases may be classified by object type. If the distance measurement sensors are coupled to a vehicle, the vehicle may avoid the detected object.
    Type: Grant
    Filed: February 19, 2020
    Date of Patent: August 31, 2021
    Assignee: GM Cruise Holdings, LLC
    Inventors: Sandeep Gangundi, Sarthak Sahu, Nathan Harada, Phil Ferriere
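    As an illustration of the pipeline in the abstract above (a sketch under assumed interfaces, not GM Cruise's implementation), the snippet below merges per-sensor point clouds using each sensor's pose, voxelizes the result into an occupancy grid, and reports occupied voxels as candidate object cells.

      # Sketch: merge point clouds, voxelize, and list occupied voxels.
      import numpy as np

      def merge_point_clouds(clouds, poses):
          """Transform each Nx3 cloud by its sensor's 4x4 pose and concatenate."""
          merged = []
          for cloud, pose in zip(clouds, poses):
              homogeneous = np.hstack([cloud, np.ones((cloud.shape[0], 1))])
              merged.append((homogeneous @ pose.T)[:, :3])
          return np.vstack(merged)

      def voxelize(points, voxel_size=0.2, min_points=3):
          """Indices of voxels containing at least min_points points."""
          indices = np.floor(points / voxel_size).astype(np.int64)
          keys, counts = np.unique(indices, axis=0, return_counts=True)
          return keys[counts >= min_points]

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          lidar_a = rng.normal(loc=[5.0, 0.0, 0.5], scale=0.1, size=(200, 3))
          lidar_b = rng.normal(loc=[5.0, 0.0, 0.5], scale=0.1, size=(150, 3))
          cloud = merge_point_clouds([lidar_a, lidar_b], [np.eye(4), np.eye(4)])
          occupied = voxelize(cloud)
          print(f"{len(occupied)} occupied voxels near (5.0, 0.0, 0.5)")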
  • Publication number: 20210256718
    Abstract: Distance measurements are received from one or more distance measurement sensors, which may be coupled to a vehicle. A three-dimensional (3D) point cloud is generated based on the distance measurements. In some cases, 3D point clouds corresponding to distance measurements from different distance measurement sensors may be combined into one 3D point cloud. A voxelized model is generated based on the 3D point cloud. An object may be detected within the voxelized model, and in some cases may be classified by object type. If the distance measurement sensors are coupled to a vehicle, the vehicle may avoid the detected object.
    Type: Application
    Filed: February 19, 2020
    Publication date: August 19, 2021
    Inventors: Sandeep Gangundi, Sarthak Sahu, Nathan Harada, Phil Ferriere
  • Publication number: 20210233295
    Abstract: Systems and methods for data visualization and network extraction in accordance with embodiments of the invention are illustrated. One embodiment includes a method including obtaining a graph comprising a plurality of nodes and a plurality of edges, identifying a plurality of communities in the graph, where each community includes nodes from the plurality of nodes, generating a community graph structure based on the identified communities, where the community graph includes a plurality of supernodes and a plurality of superedges, spatializing the community graph structure, unpacking the spatialized community graph structure into an unpacked graph structure comprising the plurality of nodes and the plurality of edges, where each node in the plurality of nodes is located at approximately the position of the supernode that represented it, spatializing the unpacked graph structure, and providing the spatialized unpacked graph structure.
    Type: Application
    Filed: January 25, 2021
    Publication date: July 29, 2021
    Applicant: Virtualitics, Inc.
    Inventors: Aakash Indurkhya, Ciro Donalek, Michael Amori, Sarthak Sahu, Vaibhav Anand, Justin Gantenberg
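    A minimal sketch of the coarsen, spatialize, and unpack sequence in the abstract above, assuming the networkx library; the community detector, layout algorithm, and function names here are stand-ins rather than the patented method.

      # Sketch: community-graph layout followed by unpacking to the full graph.
      import networkx as nx
      from networkx.algorithms import community

      def two_stage_layout(G, seed=42):
          # 1. Identify communities in the graph.
          communities = list(community.greedy_modularity_communities(G))
          node_to_comm = {n: i for i, c in enumerate(communities) for n in c}

          # 2. Build the community graph: one supernode per community, and a
          #    superedge wherever two communities share at least one edge.
          C = nx.Graph()
          C.add_nodes_from(range(len(communities)))
          for u, v in G.edges():
              if node_to_comm[u] != node_to_comm[v]:
                  C.add_edge(node_to_comm[u], node_to_comm[v])

          # 3. Spatialize the community graph.
          super_pos = nx.spring_layout(C, seed=seed)

          # 4. Unpack: place every node at its supernode's position, then
          # 5. spatialize the full graph starting from that initialization.
          initial = {n: super_pos[node_to_comm[n]] for n in G.nodes()}
          return nx.spring_layout(G, pos=initial, seed=seed, iterations=30)

      if __name__ == "__main__":
          positions = two_stage_layout(nx.karate_club_graph())
          print(f"{len(positions)} node positions computed")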
  • Patent number: 11070720
    Abstract: Systems and methods are disclosed for directed image capture of a subject of interest, such as a home. Directed image capture can produce higher quality images, such as images more centrally located within a display and/or viewfinder of an image capture device; higher quality images have greater value for subsequent uses of captured images, such as information extraction or model reconstruction. Graphical guide(s) facilitate content placement at certain positions, and quality assessments for the content of interest can be calculated, such as the pixel distance of the content of interest to a centroid of the display or viewfinder, or the effect of obscuring objects. Quality assessments can further include instructions for improving the quality of the image capture for the content of interest.
    Type: Grant
    Filed: April 30, 2020
    Date of Patent: July 20, 2021
    Assignee: Hover Inc.
    Inventors: William Castillo, Adam J. Altman, Ioannis Pavlidis, Sarthak Sahu, Manish Upendran
  • Publication number: 20210217231
    Abstract: A system and method are provided for measurements of building façade elements by combining ground-level and orthogonal imagery. The measurements of the dimensions of building façade elements are based on ground-level imagery that is scaled and geo-referenced using orthogonal imagery. The method continues by creating a tabular dataset of measurements for one or more architectural elements such as siding (e.g., aluminum, vinyl, wood, brick and/or paint), windows or doors. The tabular dataset can be part of an estimate report.
    Type: Application
    Filed: March 30, 2021
    Publication date: July 15, 2021
    Applicant: Hover Inc.
    Inventors: Bo Hu, Sarthak Sahu
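    A hedged illustration of the measurement flow in the abstract above: a geo-referenced orthogonal image supplies a real-world length for a wall span, the same span's pixel length in the ground-level photo yields a scale, and façade-element pixel boxes are converted into a tabular dataset. The helper names and CSV layout below are assumptions, not part of the patent.

      # Sketch: scale a ground-level image from orthogonal imagery, then
      # tabulate real-world dimensions of facade elements.
      import csv

      def ground_image_scale(wall_span_meters, wall_span_pixels):
          """Meters per pixel along the facade plane of the ground-level image."""
          return wall_span_meters / wall_span_pixels

      def measure_elements(elements_px, meters_per_pixel):
          """elements_px: iterable of (label, width_px, height_px) tuples."""
          return [
              {
                  "element": label,
                  "width_m": round(w_px * meters_per_pixel, 2),
                  "height_m": round(h_px * meters_per_pixel, 2),
              }
              for label, w_px, h_px in elements_px
          ]

      if __name__ == "__main__":
          # Wall span measured as 9.6 m in the orthogonal image, 1200 px in the photo.
          scale = ground_image_scale(9.6, 1200)
          table = measure_elements(
              [("window", 150, 200), ("door", 120, 280), ("siding panel", 900, 350)],
              scale,
          )
          with open("facade_measurements.csv", "w", newline="") as f:
              writer = csv.DictWriter(f, fieldnames=["element", "width_m", "height_m"])
              writer.writeheader()
              writer.writerows(table)
          print(table)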
  • Publication number: 20210183119
    Abstract: Data visualization processes can utilize machine learning algorithms applied to visualization data structures to determine visualization parameters that most effectively provide insight into the data, and to suggest meaningful correlations for further investigation by users. In numerous embodiments, data visualization processes can automatically generate parameters that can be used to display the data in ways that will provide enhanced value. For example, dimensions can be chosen to be associated with specific visualization parameters that are easily digestible based on their importance, e.g. with higher value dimensions placed on more easily understood visualization aspects (color, coordinate, size, etc.). In a variety of embodiments, data visualization processes can automatically describe the graph using natural language by identifying regions of interest in the visualization, and generating text using natural language generation processes.
    Type: Application
    Filed: December 21, 2020
    Publication date: June 17, 2021
    Applicant: Virtualitics, Inc.
    Inventors: Ciro Donalek, Michael Amori, Justin Gantenberg, Sarthak Sahu, Aakash Indurkhya
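    A minimal sketch, assuming scikit-learn, of the idea above that higher-value dimensions should be placed on more easily understood visualization aspects; the channel ordering and the use of random-forest feature importance are illustrative assumptions.

      # Sketch: map the most important features to the most perceptible channels.
      from sklearn.datasets import load_wine
      from sklearn.ensemble import RandomForestClassifier

      # Visual channels ordered from most to least easily perceived (assumed order).
      CHANNELS = ["x", "y", "z", "color", "size", "shape"]

      def suggest_mapping(X, y, feature_names):
          model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
          ranked = sorted(zip(feature_names, model.feature_importances_),
                          key=lambda pair: pair[1], reverse=True)
          return {channel: name for channel, (name, _) in zip(CHANNELS, ranked)}

      if __name__ == "__main__":
          data = load_wine()
          for channel, feature in suggest_mapping(data.data, data.target,
                                                  data.feature_names).items():
              print(f"{channel:>5} <- {feature}")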
  • Patent number: 11004259
    Abstract: A system and method are provided for measurements of building façade elements by combining ground-level (201) and orthogonal imagery (906). The measurements of the dimensions of building façade elements are based on ground-level imagery that is scaled and geo-referenced using orthogonal imagery (209). The method continues by creating a tabular dataset (1002) of measurements for one or more architectural elements such as siding (e.g., aluminum, vinyl, wood, brick and/or paint), windows or doors. The tabular dataset can be part of an estimate report (1002).
    Type: Grant
    Filed: October 24, 2014
    Date of Patent: May 11, 2021
    Inventors: Bo Hu, Sarthak Sahu
  • Patent number: 10943134
    Abstract: A disambiguation system for an autonomous vehicle is described herein that disambiguates a traffic light framed by a plurality of regions of interest. The autonomous vehicle includes a localization system that defines the plurality of regions of interest around traffic lights captured in a sensor signal and provides an input to a disambiguation system. When the captured traffic lights are in close proximity, the plurality of regions of interest overlap each other such that a traffic light disposed in the overlapping region is ambiguous to an object detector because it is framed by more than one region of interest. A disambiguation system associates the traffic light with the correct region of interest to disambiguate a relationship thereof and generates a disambiguated directive for controlling the autonomous vehicle. Disambiguation can be achieved according to any of an edge-based technique, a vertex-based technique, and a region of interest distance-based technique.
    Type: Grant
    Filed: August 31, 2018
    Date of Patent: March 9, 2021
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Clement Creusot, Sarthak Sahu
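    The entry above names edge-based, vertex-based, and region of interest distance-based techniques; the sketch below is a hedged interpretation of the distance-based variant only, with illustrative names, attributing a detected light that sits in overlapping regions of interest to the region whose center is nearest.

      # Sketch: assign a detected traffic light to the nearest-centered ROI.
      import math
      from typing import NamedTuple, Optional

      class Rect(NamedTuple):
          x1: float
          y1: float
          x2: float
          y2: float

          @property
          def center(self):
              return ((self.x1 + self.x2) / 2.0, (self.y1 + self.y2) / 2.0)

      def contains(roi: Rect, point) -> bool:
          px, py = point
          return roi.x1 <= px <= roi.x2 and roi.y1 <= py <= roi.y2

      def disambiguate(light: Rect, rois: dict) -> Optional[str]:
          """Return the id of the ROI the detected light most plausibly belongs to."""
          candidates = {rid: roi for rid, roi in rois.items()
                        if contains(roi, light.center)}
          if not candidates:
              return None
          return min(candidates,
                     key=lambda rid: math.dist(light.center, candidates[rid].center))

      if __name__ == "__main__":
          rois = {"left_signal": Rect(0, 0, 120, 100),
                  "right_signal": Rect(80, 0, 200, 100)}
          light = Rect(95, 30, 110, 70)     # lies in the overlap of both ROIs
          print(disambiguate(light, rois))  # -> "right_signal" (nearer center)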
  • Publication number: 20210048830
    Abstract: An autonomous vehicle incorporating a multimodal multi-technique signal fusion system is described herein. The signal fusion system is configured to receive at least one sensor signal that is output by at least one sensor system (multimodal), such as at least one image sensor signal from at least one camera. The at least one sensor signal is provided to a plurality of object detector modules of different types (multi-technique), such as an absolute detector module and a relative activation detector module, that generate independent directives based on the at least one sensor signal. The independent directives are fused by a signal fusion module to output a fused directive for controlling the autonomous vehicle.
    Type: Application
    Filed: October 31, 2020
    Publication date: February 18, 2021
    Inventors: Clement Creusot, Sarthak Sahu, Matthais Wisniowski
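    A hedged sketch of fusing independent directives from detector modules of different types, as the abstract above describes; the detector outputs here are stubs and the conservative-vote fusion rule is an assumption for illustration only.

      # Sketch: fuse independent detector directives into one vehicle directive.
      from dataclasses import dataclass
      from enum import Enum

      class Directive(Enum):
          STOP = 0
          PROCEED = 1

      @dataclass
      class DetectorOutput:
          source: str
          directive: Directive
          confidence: float

      def fuse(outputs, stop_threshold=0.4):
          """Conservative fusion: any sufficiently confident STOP wins;
          otherwise take the highest-confidence directive."""
          confident_stops = [o for o in outputs
                             if o.directive is Directive.STOP
                             and o.confidence >= stop_threshold]
          if confident_stops:
              return Directive.STOP
          return max(outputs, key=lambda o: o.confidence).directive

      if __name__ == "__main__":
          outputs = [
              DetectorOutput("absolute_detector", Directive.PROCEED, 0.85),
              DetectorOutput("relative_activation_detector", Directive.STOP, 0.55),
          ]
          print(fuse(outputs))   # Directive.STOP overrides the confident PROCEED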
  • Patent number: 10872446
    Abstract: Data visualization processes can utilize machine learning algorithms applied to visualization data structures to determine visualization parameters that most effectively provide insight into the data, and to suggest meaningful correlations for further investigation by users. In numerous embodiments, data visualization processes can automatically generate parameters that can be used to display the data in ways that will provide enhanced value. For example, dimensions can be chosen to be associated with specific visualization parameters that are easily digestible based on their importance, e.g. with higher value dimensions placed on more easily understood visualization aspects (color, coordinate, size, etc.). In a variety of embodiments, data visualization processes can automatically describe the graph using natural language by identifying regions of interest in the visualization, and generating text using natural language generation processes.
    Type: Grant
    Filed: April 9, 2020
    Date of Patent: December 22, 2020
    Assignee: Virtualitics, Inc.
    Inventors: Ciro Donalek, Michael Amori, Justin Gantenberg, Sarthak Sahu, Aakash Indurkhya
  • Patent number: 10852743
    Abstract: An autonomous vehicle incorporating a multimodal multi-technique signal fusion system is described herein. The signal fusion system is configured to receive at least one sensor signal that is output by at least one sensor system (multimodal), such as at least one image sensor signal from at least one camera. The at least one sensor signal is provided to a plurality of object detector modules of different types (multi-technique), such as an absolute detector module and a relative activation detector module, that generate independent directives based on the at least one sensor signal. The independent directives are fused by a signal fusion module to output a fused directive for controlling the autonomous vehicle.
    Type: Grant
    Filed: September 7, 2018
    Date of Patent: December 1, 2020
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Clement Creusot, Sarthak Sahu, Matthais Wisniowski
  • Publication number: 20200302663
    Abstract: Data visualization processes can utilize machine learning algorithms applied to visualization data structures to determine visualization parameters that most effectively provide insight into the data, and to suggest meaningful correlations for further investigation by users. In numerous embodiments, data visualization processes can automatically generate parameters that can be used to display the data in ways that will provide enhanced value. For example, dimensions can be chosen to be associated with specific visualization parameters that are easily digestible based on their importance, e.g. with higher value dimensions placed on more easily understood visualization aspects (color, coordinate, size, etc.). In a variety of embodiments, data visualization processes can automatically describe the graph using natural language by identifying regions of interest in the visualization, and generating text using natural language generation processes.
    Type: Application
    Filed: April 9, 2020
    Publication date: September 24, 2020
    Applicant: Virtualitics, Inc.
    Inventors: Ciro Donalek, Michael Amori, Justin Gantenberg, Sarthak Sahu, Aakash Indurkhya
  • Publication number: 20200260000
    Abstract: Systems and methods are disclosed for directed image capture of a subject of interest, such as a home. Directed image capture can produce higher quality images, such as images more centrally located within a display and/or viewfinder of an image capture device; higher quality images have greater value for subsequent uses of captured images, such as information extraction or model reconstruction. Graphical guide(s) facilitate content placement at certain positions, and quality assessments for the content of interest can be calculated, such as the pixel distance of the content of interest to a centroid of the display or viewfinder, or the effect of obscuring objects. Quality assessments can further include instructions for improving the quality of the image capture for the content of interest.
    Type: Application
    Filed: April 30, 2020
    Publication date: August 13, 2020
    Applicant: HOVER, Inc.
    Inventors: William Castillo, Adam J. Altman, Ioannis Pavlidis, Sarthak Sahu, Manish Upendran
  • Patent number: 10681264
    Abstract: A process is provided for graphically guiding a user of a capture device (e.g., smartphone) to more accurately capture a series of images of a building. Images are captured as the picture taker moves around the building—taking a plurality (e.g., 4-16) of images from multiple angles and distances. Before capturing an image, a quality of the image may be determined to prevent low quality images from being captured or to provide instructions on how to improve the quality of the image capture. The series of captured images are uploaded to an image processing system to generate a 3D building model that is returned to the user. The returned 3D building model may incorporate scaled measurements of building architectural elements and may include a dataset of measurements for one or more architectural elements such as siding (e.g., aluminum, vinyl, wood, brick and/or paint), windows, doors or roofing.
    Type: Grant
    Filed: July 20, 2018
    Date of Patent: June 9, 2020
    Assignee: HOVER, Inc.
    Inventors: William Castillo, Adam J. Altman, Ioannis Pavlidis, Sarthak Sahu, Manish Upendran
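    As a small illustration of the multi-angle requirement above (the abstract mentions a plurality, e.g., 4-16 images from multiple angles and distances), the sketch below checks, under assumed thresholds, that a capture session's camera headings cover the building before the series is uploaded for reconstruction.

      # Sketch: verify angular coverage of a capture session around a building.
      def angular_coverage_ok(headings_deg, min_images=4, max_gap_deg=120.0):
          """headings_deg: compass heading of the camera for each captured image."""
          if len(headings_deg) < min_images:
              return False
          ordered = sorted(h % 360.0 for h in headings_deg)
          gaps = [b - a for a, b in zip(ordered, ordered[1:])]
          gaps.append(360.0 - ordered[-1] + ordered[0])   # wrap-around gap
          return max(gaps) <= max_gap_deg

      if __name__ == "__main__":
          print(angular_coverage_ok([10, 55, 140, 200, 265, 330]))   # True
          print(angular_coverage_ok([10, 20, 30]))   # False: too few viewpoints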
  • Patent number: 10621762
    Abstract: Data visualization processes can utilize machine learning algorithms applied to visualization data structures to determine visualization parameters that most effectively provide insight into the data, and to suggest meaningful correlations for further investigation by users. In numerous embodiments, data visualization processes can automatically generate parameters that can be used to display the data in ways that will provide enhanced value. For example, dimensions can be chosen to be associated with specific visualization parameters that are easily digestible based on their importance, e.g. with higher value dimensions placed on more easily understood visualization aspects (color, coordinate, size, etc.). In a variety of embodiments, data visualization processes can automatically describe the graph using natural language by identifying regions of interest in the visualization, and generating text using natural language generation processes.
    Type: Grant
    Filed: September 17, 2018
    Date of Patent: April 14, 2020
    Assignee: Virtualitics, Inc.
    Inventors: Ciro Donalek, Michael Amori, Justin Gantenberg, Sarthak Sahu, Aakash Indurkhya
  • Publication number: 20200081450
    Abstract: An autonomous vehicle incorporating a multimodal multi-technique signal fusion system is described herein. The signal fusion system is configured to receive at least one sensor signal that is output by at least one sensor system (multimodal), such as at least one image sensor signal from at least one camera. The at least one sensor signal is provided to a plurality of object detector modules of different types (multi-technique), such as an absolute detector module and a relative activation detector module, that generate independent directives based on the at least one sensor signal. The independent directives are fused by a signal fusion module to output a fused directive for controlling the autonomous vehicle.
    Type: Application
    Filed: September 7, 2018
    Publication date: March 12, 2020
    Inventors: Clement Creusot, Sarthak Sahu, Matthais Wisniowski
  • Publication number: 20200074194
    Abstract: A disambiguation system for an autonomous vehicle is described herein that disambiguates a traffic light framed by a plurality of regions of interest. The autonomous vehicle includes a localization system that defines the plurality of regions of interest around traffic lights captured in a sensor signal and provides an input to a disambiguation system. When the captured traffic lights are in close proximity, the plurality of regions of interest overlap each other such that a traffic light disposed in the overlapping region is ambiguous to an object detector because it is framed by more than one region of interest. A disambiguation system associates the traffic light with the correct region of interest to disambiguate a relationship thereof and generates a disambiguated directive for controlling the autonomous vehicle. Disambiguation can be achieved according to any of an edge-based technique, a vertex-based technique, and a region of interest distance-based technique.
    Type: Application
    Filed: August 31, 2018
    Publication date: March 5, 2020
    Inventors: Clement Creusot, Sarthak Sahu
  • Publication number: 20190347837
    Abstract: Data visualization processes can utilize machine learning algorithms applied to visualization data structures to determine visualization parameters that most effectively provide insight into the data, and to suggest meaningful correlations for further investigation by users. In numerous embodiments, data visualization processes can automatically generate parameters that can be used to display the data in ways that will provide enhanced value. For example, dimensions can be chosen to be associated with specific visualization parameters that are easily digestible based on their importance, e.g. with higher value dimensions placed on more easily understood visualization aspects (color, coordinate, size, etc.). In a variety of embodiments, data visualization processes can automatically describe the graph using natural language by identifying regions of interest in the visualization, and generating text using natural language generation processes.
    Type: Application
    Filed: September 17, 2018
    Publication date: November 14, 2019
    Applicant: Virtualitics, Inc.
    Inventors: Ciro Donalek, Michael Amori, Justin Gantenberg, Sarthak Sahu, Aakash Indurkhya
  • Patent number: 10454597
    Abstract: Systems and methods for locating telecommunication cell sites in accordance with embodiments of the invention are illustrated. One embodiment includes a method for locating cell sites, including obtaining a plurality of observations, where each observation includes a timestamp, a coordinate, an active record, and a set of passive records, uniquely identifying secondary cell sites in the passive records by cross-matching active records from a first observation with passive records from a second observation, annotating the observations with unique identifiers for each secondary cell site, time-smoothing the received signal strength values, estimating the distance from each observation to the primary cell site and secondary cell sites associated with the observation by providing a machine learning model with at least the time-smoothed signal strength values and the plurality of annotated observations, and locating the primary cell sites based on the estimated distances.
    Type: Grant
    Filed: May 21, 2019
    Date of Patent: October 22, 2019
    Assignee: Virtualitics, Inc.
    Inventors: Aakash Indurkhya, Sarthak Sahu, Michael Amori, Ciro Donalek, Yuankun David Wang
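    A minimal sketch, assuming numpy, of the pipeline in the abstract above: time-smooth received signal strength, convert it to distance estimates, and locate a site by least-squares multilateration. A fixed log-distance path-loss formula stands in here for the machine learning distance model, and all names are illustrative.

      # Sketch: smooth RSSI, estimate distances, and multilaterate a cell site.
      import numpy as np

      def ema_smooth(rssi, alpha=0.3):
          """Exponential moving average over a time-ordered RSSI series."""
          smoothed = [rssi[0]]
          for value in rssi[1:]:
              smoothed.append(alpha * value + (1 - alpha) * smoothed[-1])
          return np.array(smoothed)

      def rssi_to_distance(rssi_dbm, tx_power_dbm=-30.0, path_loss_exp=2.7):
          """Invert a log-distance path-loss model (stand-in for the ML model)."""
          return 10.0 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

      def multilaterate(coords, distances):
          """Least-squares (x, y) of a site from observation points and distances."""
          (x1, y1), d1 = coords[0], distances[0]
          A, b = [], []
          for (xi, yi), di in zip(coords[1:], distances[1:]):
              A.append([2.0 * (xi - x1), 2.0 * (yi - y1)])
              b.append(d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)
          solution, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
          return solution

      if __name__ == "__main__":
          smoothed = ema_smooth([-72.0, -70.5, -74.0, -71.0])
          print(round(float(rssi_to_distance(smoothed[-1])), 1), "m (smoothed estimate)")
          coords = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]
          true_site = np.array([60.0, 40.0])
          distances = [float(np.linalg.norm(true_site - np.array(c))) for c in coords]
          print(multilaterate(coords, distances))   # approximately [60. 40.]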