Patents by Inventor Abhijit S. Ogale

Abhijit S. Ogale has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9463794
    Abstract: Aspects of the disclosure relate to detecting and responding to stop signs. An object with location coordinates detected in a vehicle's environment may be identified as a stop sign, and it may be determined whether those location coordinates correspond to the location of a stop sign in detailed map information. Whether the identified stop sign applies to the vehicle may then be determined based on the detailed map information or on a number of other factors. If the identified stop sign is determined to apply to the vehicle, the vehicle's responses to the stop sign may be determined and the vehicle may be controlled based on those responses.
    Type: Grant
    Filed: September 4, 2015
    Date of Patent: October 11, 2016
    Assignee: Google Inc.
    Inventors: David Harrison Silver, David Ian Franklin Ferguson, Abhijit S. Ogale, Wan-Yen Lo
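    Illustrative sketch: a minimal Python sketch of the general idea in the abstract above, matching a detected sign against mapped stop signs and checking whether it governs traffic in the vehicle's direction of travel. This is not the patented method; the coordinates, thresholds, and names are hypothetical.
      import math

      # Hypothetical record for a mapped stop sign: position plus the heading
      # (degrees) of traffic that the sign governs.
      MAPPED_STOP_SIGNS = [
          {"lat": 37.4220, "lon": -122.0841, "governs_heading": 90.0},
      ]

      def _distance_m(lat1, lon1, lat2, lon2):
          # Small-angle approximation; adequate for matching within tens of meters.
          dlat = math.radians(lat2 - lat1)
          dlon = math.radians(lon2 - lon1) * math.cos(math.radians(lat1))
          return 6371000.0 * math.hypot(dlat, dlon)

      def sign_applies_to_vehicle(detected_lat, detected_lon, vehicle_heading,
                                  match_radius_m=15.0, heading_tolerance=45.0):
          """Return True if a detected stop sign matches map data and governs
          traffic travelling in roughly the vehicle's direction."""
          for sign in MAPPED_STOP_SIGNS:
              if _distance_m(detected_lat, detected_lon,
                             sign["lat"], sign["lon"]) > match_radius_m:
                  continue
              diff = abs((vehicle_heading - sign["governs_heading"] + 180.0) % 360.0 - 180.0)
              if diff <= heading_tolerance:
                  return True
          return False

      # Example: a sign detected near the mapped location, vehicle heading east.
      print(sign_applies_to_vehicle(37.42201, -122.08412, vehicle_heading=85.0))  # True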
  • Patent number: 9438795
    Abstract: A computer-implemented method for detecting features in an image. The method includes receiving first and second images at one or more processors. The method also includes processing the first and second images to detect one or more features within the first and second images, respectively. The method further includes generating a third image based on processed portions of the first and second images and outputting the third image to another processor. A mobile computing device and GPU are also provided.
    Type: Grant
    Filed: June 29, 2015
    Date of Patent: September 6, 2016
    Assignee: Google Inc.
    Inventor: Abhijit S. Ogale
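    Illustrative sketch: a minimal Python sketch, under simple assumptions, of detecting features in two images and combining the processed portions into a third image, as the abstract above describes at a high level. The toy gradient-based detector and all names are hypothetical, not the patented method.
      import numpy as np

      def detect_features(image):
          """Toy 'feature' response: gradient magnitude of a grayscale image."""
          gy, gx = np.gradient(image.astype(np.float32))
          return np.hypot(gx, gy)

      def combine(first, second):
          """Generate a third image from processed portions of the first two,
          here simply the element-wise maximum of their feature responses."""
          return np.maximum(detect_features(first), detect_features(second))

      # Two synthetic 64x64 grayscale frames stand in for camera input.
      first = np.zeros((64, 64), dtype=np.uint8);  first[16:48, 16:48] = 255
      second = np.zeros((64, 64), dtype=np.uint8); second[8:24, 40:56] = 255
      third = combine(first, second)   # would be handed off to another processor
      print(third.shape, third.max())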
  • Patent number: 9258681
    Abstract: Aspects of the present disclosure provide systems and methods for generating models of a wireless network environment in an indoor space which may be used to predict an indoor location. Wireless network access point identifier information and power levels observed at various locations are collected to generate various signal maps. The signal maps may be used to generate models of the indoor space. In one example, a voting model may use a probability distribution of a plurality of signal maps in order to identify a location with the highest probability of overlap with current signals received at a client device. Once a location has been identified, it may be used to assist with any number of navigational functions, such as providing turn-by-turn directions to another indoor location, for example, a conference room or exit, or simply providing information about the current location.
    Type: Grant
    Filed: October 21, 2013
    Date of Patent: February 9, 2016
    Assignee: Google Inc.
    Inventors: Abhijit S. Ogale, Ehud Rivlin
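    Illustrative sketch: a minimal Python sketch of a voting model over per-location signal maps, as described in the abstract above. The access point identifiers, power levels, and tolerance are hypothetical, and this is not the patented implementation.
      from collections import defaultdict

      # Hypothetical signal maps: location -> {AP identifier: typical RSSI in dBm}.
      SIGNAL_MAPS = {
          "conference_room_a": {"ap:01": -45, "ap:02": -70, "ap:03": -82},
          "lobby":             {"ap:02": -50, "ap:04": -60},
          "east_exit":         {"ap:03": -55, "ap:04": -75, "ap:05": -65},
      }

      def vote_for_location(current_scan, rssi_tolerance=10):
          """Each mapped access point whose recorded power level is close to the
          currently observed level casts one vote for its location; the location
          with the most votes wins."""
          votes = defaultdict(int)
          for location, ap_levels in SIGNAL_MAPS.items():
              for bssid, mapped_rssi in ap_levels.items():
                  observed = current_scan.get(bssid)
                  if observed is not None and abs(observed - mapped_rssi) <= rssi_tolerance:
                      votes[location] += 1
          return max(votes, key=votes.get) if votes else None

      # A client device currently observes these access points.
      scan = {"ap:02": -52, "ap:04": -58, "ap:05": -90}
      print(vote_for_location(scan))  # 'lobby'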
  • Patent number: 9204259
    Abstract: Aspects of this disclosure provide systems and methods for generating models of a wireless network environment in an indoor space which may be used to predict an indoor location. Wireless network access point identifier information and power levels observed at various locations are collected to generate various signal maps. The signal maps may be used to generate models of the indoor space. In one example, a voting model may use a probability distribution of a plurality of signal maps in order to identify a location with the highest probability of overlap with current signals received at a client device. Once a location has been identified, it may be used to assist with any number of navigational functions, such as providing turn-by-turn directions to another indoor location, for example, a conference room or exit, or simply providing information about the current location.
    Type: Grant
    Filed: November 1, 2013
    Date of Patent: December 1, 2015
    Assignee: Google Inc.
    Inventors: Abhijit S. Ogale, Ehud Rivlin
  • Publication number: 20150256980
    Abstract: Aspects of the present disclosure provide systems and methods for generating models of a wireless network environment in an indoor space which may be used to predict an indoor location. Wireless network access point identifier information and power levels observed at various locations are collected to generate various signal maps. The signal maps may be used to generate models of the indoor space. In one example, a voting model may use a probability distribution of a plurality of signal maps in order to identify a location with the highest probability of overlap with current signals received at a client device. Once a location has been identified, it may be used to assist with any number of navigational functions, such as providing turn-by-turn directions to another indoor location, for example, a conference room or exit, or simply providing information about the current location.
    Type: Application
    Filed: October 21, 2013
    Publication date: September 10, 2015
    Applicant: GOOGLE INC.
    Inventors: Abhijit S. Ogale, Ehud Rivlin
  • Publication number: 20150256978
    Abstract: Aspects of this disclosure provide systems and methods for generating models of a wireless network environment in an indoor space which may be used to predict an indoor location. Wireless network access point identifier information and power levels observed at various locations are collected to generate various signal maps. The signal maps may be used to generate models of the indoor space. In one example, a voting model may use a probability distribution of a plurality of signal maps in order to identify a location with the highest probability of overlap with current signals received at a client device. Once a location has been identified, it may be used to assist with any number of navigational functions, such as providing turn-by-turn directions to another indoor location, for example, a conference room or exit, or simply providing information about the current location.
    Type: Application
    Filed: November 1, 2013
    Publication date: September 10, 2015
    Applicant: GOOGLE INC.
    Inventors: Abhijit S. Ogale, Ehud Rivlin
  • Patent number: 9092670
    Abstract: A computer-implemented method for detecting features in an image. The method includes receiving first and second images at one or more processors. The method also includes processing the first and second images to detect one or more features within the first and second images respectively. The method further includes generating a third image based on processed portions of the first and second images and outputting the third image to another processor. A mobile computing device and GPU are also provided.
    Type: Grant
    Filed: January 13, 2014
    Date of Patent: July 28, 2015
    Assignee: Google Inc.
    Inventor: Abhijit S. Ogale
  • Patent number: 9092695
    Abstract: Objects, such as road signs, may be detected in real time using a camera or other image capture device. As images are received through the camera, candidate signs are first detected. The detection of candidate signs employs constant-time normalized cross correlation, which involves generating intermediate and integral images and applying a template of concentric, differently sized shapes over the integral images. From the pool of candidate signs, false positives may be separated out using shape classification to identify actual road signs.
    Type: Grant
    Filed: March 13, 2013
    Date of Patent: July 28, 2015
    Assignee: Google Inc.
    Inventor: Abhijit S. Ogale
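    Illustrative sketch: the abstract above relies on integral images for constant-time region sums and a template of concentric shapes. Below is a minimal Python sketch of those two building blocks; the concrete template, sizes, and scoring are hypothetical, not the patented detector.
      import numpy as np

      def integral_image(img):
          """Summed-area table with a zero row/column so rectangle sums are O(1)."""
          ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
          ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
          return ii

      def rect_sum(ii, top, left, height, width):
          """Sum of pixel values inside a rectangle, in constant time."""
          return (ii[top + height, left + width] - ii[top, left + width]
                  - ii[top + height, left] + ii[top, left])

      def concentric_score(ii, top, left, outer=16, inner=8):
          """Toy response for a template of two concentric squares: mean of the
          inner square minus mean of the surrounding ring. Bright centers on
          dark surrounds (or vice versa) score far from zero."""
          off = (outer - inner) // 2
          inner_sum = rect_sum(ii, top + off, left + off, inner, inner)
          outer_sum = rect_sum(ii, top, left, outer, outer)
          ring_sum = outer_sum - inner_sum
          return inner_sum / inner**2 - ring_sum / (outer**2 - inner**2)

      img = np.zeros((64, 64), dtype=np.float64)
      img[24:32, 24:32] = 255.0                 # a bright 8x8 blob
      ii = integral_image(img)
      print(concentric_score(ii, 20, 20))       # strongly positive at the blob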
  • Patent number: 9057618
    Abstract: Systems and methods provide approximations of the latitude and longitude coordinates of objects, for example a business, in street level images. The images may be collected by a camera. An image of a business is collected along with the GPS coordinates and direction of the camera. Depth maps of the images may be generated, for example, based on laser depth detection or on the displacement of the business between two images caused by a change in the position of the camera. After identifying a business in one or more images, the distance from the camera to a point or area relative to the business in the one or more images may be determined based on the depth maps. Using this distance, the direction of the camera that collected the one or more images, and the GPS coordinates of the camera, the approximate GPS coordinates of the business may be determined.
    Type: Grant
    Filed: September 24, 2013
    Date of Patent: June 16, 2015
    Assignee: Google Inc.
    Inventors: Abhijit S. Ogale, Stephane Lafon, Andrea Frome
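    Illustrative sketch: a minimal Python sketch of projecting a point a known depth away from the camera along its heading to approximate an object's coordinates, as the abstract above describes. The flat-earth approximation and all names and values are hypothetical, not the patented system.
      import math

      EARTH_RADIUS_M = 6371000.0

      def project_point(camera_lat, camera_lon, heading_deg, depth_m):
          """Approximate coordinates of a point seen depth_m metres from the
          camera along its heading (0 deg = north, 90 deg = east). Flat-earth
          approximation, adequate for street-level distances."""
          north = depth_m * math.cos(math.radians(heading_deg))
          east = depth_m * math.sin(math.radians(heading_deg))
          dlat = math.degrees(north / EARTH_RADIUS_M)
          dlon = math.degrees(east / (EARTH_RADIUS_M * math.cos(math.radians(camera_lat))))
          return camera_lat + dlat, camera_lon + dlon

      # Camera at a known GPS fix, facing east; the depth map says the storefront is 22 m away.
      print(project_point(37.4220, -122.0841, heading_deg=90.0, depth_m=22.0))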
  • Publication number: 20150153188
    Abstract: Systems and methods provide approximations of the latitude and longitude coordinates of objects, for example a business, in street level images. The images may be collected by a camera. An image of a business is collected along with the GPS coordinates and direction of the camera. Depth maps of the images may be generated, for example, based on laser depth detection or on the displacement of the business between two images caused by a change in the position of the camera. After identifying a business in one or more images, the distance from the camera to a point or area relative to the business in the one or more images may be determined based on the depth maps. Using this distance, the direction of the camera that collected the one or more images, and the GPS coordinates of the camera, the approximate GPS coordinates of the business may be determined.
    Type: Application
    Filed: September 24, 2013
    Publication date: June 4, 2015
    Applicant: GOOGLE INC.
    Inventors: Abhijit S. Ogale, Stephane Lafon, Andrea Frome
  • Patent number: 9036000
    Abstract: A street-level imagery acquisition and selection process identifies which images are published in a street field view. An imagery database includes panoramas each corresponding to a set of images acquired from a single viewpoint. The panoramas are attached to corresponding positions on a road network graph. The graph is divided into a set of selection paths, each of which includes a topologically linear sequence of road segments. Each selection path is evaluated to select a set of panoramas to be published in the path. Panoramas of interior road segments are selected before panoramas at intersections. Selected panorama identifiers for each interior road segment of the selection paths and each intersection correspond to a position along the road network graph. The selected panorama identifiers are then published in the street field view.
    Type: Grant
    Filed: September 27, 2011
    Date of Patent: May 19, 2015
    Assignee: Google Inc.
    Inventors: Abhijit S. Ogale, Rodrigo L. Carceroni, Carole Dulong, Luc Vincent
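    Illustrative sketch: a minimal Python sketch of selecting panoramas along a selection path, choosing interior-segment panoramas before intersection panoramas, as the abstract above outlines. The data layout and selection rule are hypothetical, not the patented process.
      # Hypothetical records: each panorama knows its id, the road segment it is
      # attached to, and whether that attachment point is an intersection.
      PANORAMAS = [
          {"id": "p1", "segment": "s1", "at_intersection": False, "offset_m": 3.0},
          {"id": "p2", "segment": "s1", "at_intersection": False, "offset_m": 11.0},
          {"id": "p3", "segment": "s1/s2", "at_intersection": True, "offset_m": 0.0},
          {"id": "p4", "segment": "s2", "at_intersection": False, "offset_m": 6.0},
      ]

      def select_for_path(panoramas, segment_lengths):
          """Pick one panorama per interior segment (the one nearest the segment
          midpoint), then pick intersection panoramas afterwards."""
          selected = []
          interior = [p for p in panoramas if not p["at_intersection"]]
          for segment, length in segment_lengths.items():
              candidates = [p for p in interior if p["segment"] == segment]
              if candidates:
                  best = min(candidates, key=lambda p: abs(p["offset_m"] - length / 2.0))
                  selected.append(best["id"])
          # Intersections are handled only after every interior segment has a pick.
          selected += [p["id"] for p in panoramas if p["at_intersection"]]
          return selected

      print(select_for_path(PANORAMAS, {"s1": 20.0, "s2": 12.0}))  # ['p2', 'p4', 'p3']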
  • Patent number: 8913083
    Abstract: Systems and methods are provided for manually finding a view for a geographic object in a street level image and associating the view with the geographic object. Information related to a geographic object and a first image related to the geographic object are displayed. User input indicating the presence of the geographic object in the image and user input indicating a viewpoint within the image are received and processed. An association of the viewpoint, the image, and the geographic object is made, and the association is stored in a database. A second image is determined, based on the association, as a default initial image to be displayed for the geographic object in a mapping application.
    Type: Grant
    Filed: July 13, 2011
    Date of Patent: December 16, 2014
    Assignee: Google Inc.
    Inventors: Abhijit S. Ogale, Augusto Roman, Owen Brydon
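    Illustrative sketch: a minimal Python sketch of storing the association between a geographic object, an image, and a viewpoint, and using it to pick a default initial image, in the spirit of the abstract above. The record layout and names are hypothetical, not the patented system.
      # Hypothetical in-memory store mapping a geographic object to the image and
      # viewpoint an operator confirmed for it.
      associations = {}

      def record_association(object_id, image_id, viewpoint):
          """Store which image, and which viewpoint within it, shows the object."""
          associations[object_id] = {"image_id": image_id, "viewpoint": viewpoint}

      def default_initial_image(object_id):
          """Image a mapping application would show first for the object."""
          entry = associations.get(object_id)
          return entry["image_id"] if entry else None

      record_association("cafe_42", image_id="pano_1183",
                         viewpoint={"heading_deg": 214.0, "pitch_deg": -3.0, "zoom": 1.5})
      print(default_initial_image("cafe_42"))  # 'pano_1183'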
  • Patent number: 8847951
    Abstract: Methods and systems permit automatic matching of videos with images from dense image-based geographic information systems. In some embodiments, video data including image frames is accessed. The video data may be segmented to determine a first image frame of a segment of the video data. Data representing information from the first image frame may be automatically compared with data representing information from a plurality of image frames of an image-based geographic information data system. Such a comparison may, for example, involve a search for a best match between geometric features, histograms, color data, texture data, etc. of the compared images. Based on the automatic comparing, an association between the video and one or more images of the image-based geographic information data system may be generated. The association may represent a geographic correlation between selected images of the system and the video data.
    Type: Grant
    Filed: October 16, 2013
    Date of Patent: September 30, 2014
    Assignee: Google Inc.
    Inventors: Dragomir Anguelov, Abhijit S. Ogale, Ehud Rivlin, Jay Yagnik
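    Illustrative sketch: a minimal Python sketch of matching a video frame against geo-referenced images by comparing color histograms, one of the comparison signals mentioned in the abstract above. The descriptor and data are hypothetical, not the patented system.
      import numpy as np

      def color_histogram(image, bins=16):
          """Normalized per-channel histogram used as a cheap frame descriptor."""
          hists = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
                   for c in range(3)]
          h = np.concatenate(hists).astype(np.float64)
          return h / h.sum()

      def best_geo_match(frame, geo_images):
          """Return the id of the geo-referenced image whose histogram is closest
          (L1 distance) to the video frame's histogram."""
          query = color_histogram(frame)
          return min(geo_images,
                     key=lambda gid: np.abs(query - color_histogram(geo_images[gid])).sum())

      # Synthetic stand-ins: one reddish and one bluish geo-referenced image.
      red = np.zeros((32, 32, 3), dtype=np.uint8);  red[..., 0] = 200
      blue = np.zeros((32, 32, 3), dtype=np.uint8); blue[..., 2] = 200
      frame = np.zeros((32, 32, 3), dtype=np.uint8); frame[..., 0] = 190  # reddish video frame
      print(best_geo_match(frame, {"img_red": red, "img_blue": blue}))  # 'img_red'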
  • Patent number: 8718932
    Abstract: Methods and systems for snapping positions from location aware devices to road segments are provided. Data from the location aware device is received, which includes data about the location and direction of the location aware device. Each of the positions of the location aware device is snapped to a position on a road segment based on various factors, including the log likelihoods of snapping all of the previous positions of the location aware device to other possible positions on road segments, the comparison of direction of the location aware device and the direction of the road segment, and the distance between the location of the location aware device and the location of the road segment. Multiple threads can be generated to determine the most likely path for the location aware device. A most likely path of positions on road segments is determined for the location aware device and stored.
    Type: Grant
    Filed: August 31, 2011
    Date of Patent: May 6, 2014
    Assignee: Google Inc.
    Inventors: Jeremy B. Pack, Abhijit S. Ogale, Rodrigo L. Carceroni
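    Illustrative sketch: a minimal Python sketch of a Viterbi-style search that snaps each position fix to a road segment by combining the previous path's log likelihood with distance and heading terms, as the abstract above describes. The candidate data, penalties, and spreads are hypothetical, not the patented method.
      # Hypothetical candidate road segments per GPS fix: each candidate records its
      # distance from the fix (metres) and the heading difference (degrees) between
      # the device and the segment.
      candidates_per_fix = [
          [{"seg": "A", "dist_m": 4.0, "heading_diff": 5.0},
           {"seg": "B", "dist_m": 6.0, "heading_diff": 80.0}],
          [{"seg": "A", "dist_m": 9.0, "heading_diff": 10.0},
           {"seg": "B", "dist_m": 3.0, "heading_diff": 85.0}],
      ]

      def log_likelihood(c, dist_sigma=10.0, heading_sigma=30.0):
          """Gaussian-style penalties for being far from the segment and for
          pointing in a different direction than the segment."""
          return (-0.5 * (c["dist_m"] / dist_sigma) ** 2
                  - 0.5 * (c["heading_diff"] / heading_sigma) ** 2)

      def most_likely_path(candidates_per_fix, switch_penalty=1.0):
          """Viterbi-style search: each hypothesis ('thread') carries the running
          log likelihood of snapping all previous fixes to a sequence of segments."""
          threads = {c["seg"]: (log_likelihood(c), [c["seg"]]) for c in candidates_per_fix[0]}
          for fixes in candidates_per_fix[1:]:
              new_threads = {}
              for c in fixes:
                  best_score, best_path = max(
                      (score - (switch_penalty if seg != c["seg"] else 0.0), path)
                      for seg, (score, path) in threads.items())
                  new_threads[c["seg"]] = (best_score + log_likelihood(c), best_path + [c["seg"]])
              threads = new_threads
          return max(threads.values())[1]

      print(most_likely_path(candidates_per_fix))  # ['A', 'A']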
  • Patent number: 8666159
    Abstract: A computer-implemented method for detecting features in an image, comprising: receiving a first image at a GPU, wherein the GPU comprises a plurality of memory units and the first image is stored in a first memory unit of the plurality of memory units; processing a second image stored in a second memory unit of the plurality of memory units to detect one or more features within the second image; and writing one or more processed portions of the second image to a third memory unit of the plurality of memory units. In certain aspects, the method further comprises outputting a third image stored in a fourth memory unit of the plurality of memory units. A mobile computing device and GPU are also provided.
    Type: Grant
    Filed: June 4, 2012
    Date of Patent: March 4, 2014
    Assignee: Google Inc.
    Inventor: Abhijit S. Ogale
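    Illustrative sketch: a minimal Python simulation of the multi-buffer arrangement the abstract above describes, where loading, processing, and output proceed out of separate memory units. The staging and the toy detector are hypothetical, not the patented GPU design.
      from collections import deque

      def pipelined_feature_detection(frames, detect):
          """Simulate the described buffer arrangement: while a new frame is being
          loaded into one memory unit, the previously loaded frame is processed out
          of another, and an already-processed frame is output from a third."""
          loaded, processed = deque(), deque()
          outputs = []
          for frame in frames + [None, None]:   # two extra ticks to drain the pipe
              if processed:
                  outputs.append(processed.popleft())        # output stage
              if loaded:
                  processed.append(detect(loaded.popleft()))  # processing stage
              if frame is not None:
                  loaded.append(frame)                        # load stage
          return outputs

      # Toy 'feature detector': just counts bright pixels in a frame (list of ints).
      frames = [[0, 255, 255], [255, 0, 0], [255, 255, 255]]
      print(pipelined_feature_detection(frames, detect=lambda f: sum(v > 128 for v in f)))
      # [2, 1, 3]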
  • Patent number: 8630965
    Abstract: Provided herein are methods, systems, and apparatuses that can utilize a grammar hierarchy to parse out observable activities into a set of distinguishable actions.
    Type: Grant
    Filed: April 6, 2007
    Date of Patent: January 14, 2014
    Assignee: Yale University
    Inventors: Andreas Savvides, Dimitrios Lymberopoulos, Yiannis Aloimonos, Abhijit S. Ogale
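    Illustrative sketch: a minimal Python sketch of a two-level grammar that composes low-level observations into actions and actions into a named activity, in the spirit of the abstract above. The symbols and rules are hypothetical, not the patented grammar hierarchy.
      # Hypothetical two-level grammar: low-level pose observations compose into
      # actions, and sequences of actions compose into a named activity.
      ACTION_RULES = {
          ("stand", "bend", "stand"): "pick_up",
          ("stand", "step", "step"):  "walk",
      }
      ACTIVITY_RULES = {
          ("walk", "pick_up", "walk"): "fetch_object",
      }

      def parse(observations, rules):
          """Greedy left-to-right parse: replace the longest matching prefix of the
          symbol stream with the non-terminal its rule produces."""
          symbols, out = list(observations), []
          while symbols:
              for length in range(len(symbols), 0, -1):
                  key = tuple(symbols[:length])
                  if key in rules:
                      out.append(rules[key])
                      symbols = symbols[length:]
                      break
              else:
                  out.append(symbols.pop(0))   # no rule matched; pass symbol through
          return out

      poses = ["stand", "step", "step", "stand", "bend", "stand", "stand", "step", "step"]
      actions = parse(poses, ACTION_RULES)           # ['walk', 'pick_up', 'walk']
      print(parse(actions, ACTIVITY_RULES))          # ['fetch_object']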
  • Publication number: 20100153321
    Abstract: Provided herein are methods, systems, and apparatuses that can utilize a grammar hierarchy to parse out observable activities into a set of distinguishable actions.
    Type: Application
    Filed: April 6, 2007
    Publication date: June 17, 2010
    Applicant: YALE UNIVERSITY
    Inventors: Andreas Savvides, Dimitrios Lymberopoulos, Yiannis Aloimonos, Abhijit S. Ogale