PERSISTENT VEHICLE LOCATION SERVICE USING GROUND TRUTH IMAGE RENDERING INSTEAD OF GPS

Systems and methods are disclosed herein for a computer-implemented method of determining a location of a vehicle. The systems and methods initialize the location of a vehicle based on GPS data of a client device within the vehicle, and receive, from the client device, a rendering of an image captured subsequent to initialization. The systems and methods determine a geographical area corresponding to the received rendering using the GPS data and data obtained from a sensor within the vehicle, and compare the received rendering to entries that each include a rendering and a respective associated location that is within the geographical area. The systems and methods determine from the comparing whether the received rendering matches a respective rendering included in a respective entry, and if so, responsively determine that the location of the vehicle is the respective associated location included in the respective entry.

Description
CROSS REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application No. 62/795,988, filed Jan. 23, 2019, U.S. Provisional Application No. 62/801,010, filed Feb. 4, 2019, U.S. Provisional Application No. 62/802,145, filed Feb. 6, 2019, U.S. Provisional Application No. 62/812,098, filed Feb. 28, 2019, U.S. Provisional Application No. 62/812,101, filed Feb. 28, 2019, and U.S. Provisional Application No. 62/812,107, filed Feb. 28, 2019, all of which are incorporated by reference in their entirety.

TECHNICAL FIELD

The present disclosure relates to location determination with limited or no reliance on global positioning system (GPS) signals, and in particular to determining a location estimate for a vehicle based on imagery captured by a mobile device mounted within a vehicle.

BACKGROUND

Many systems use global positioning system (GPS) coordinates to estimate the position of vehicles, or to estimate the position of a client device, where the client device's position acts as a proxy for a location of a vehicle in which the client device is located. For example, a person (interchangeably referred to as a "rider" herein) carrying a client device may wish to arrange for transportation from his or her present location to another location, and may execute an application (e.g., a transportation service and/or ridesharing application) on his or her client device to obtain transportation. Existing systems in this scenario match the person with a driver based on estimated locations of various candidate drivers (derived from GPS traces of the drivers' client devices), and show the matched driver's location to the user using indicia on a map corresponding to the location of the matched driver's client device. However, GPS traces are not always accurate. For example, in areas subject to GPS interference or reception problems, such as an urban canyon with tall buildings that distort satellite signals, the GPS traces of the driver's client device may be inaccurate. This causes a practical inconvenience: a sub-optimally located driver may be matched to a rider, whereas the true coordinates of the candidate drivers might have produced a match to a different, better-located driver. Moreover, this may cause frustration in the rider, as the rider may be viewing an indicated position of a driver that does not match the driver's true location.

Existing systems seek to solve the technical problem of improving driver location estimates by determining when a GPS trace is not located on a road, and snapping the GPS trace to a nearest road. However, these snapping technologies suffer the same source of inaccuracy as the GPS traces themselves, as an urban canyon may cause the GPS trace to be far from the actual road on which the vehicle is driving. Moreover, in a location where roads exist in a high density (e.g., in a city where roads are so close as to be within a GPS sensor's margin of error), the existing systems have no tiebreaker mechanism to determine which road within the margin of error to snap the GPS trace to. The technical problem of how to derive exact driver location without the need for GPS signals where GPS signals are distorted or unavailable is not addressed by existing systems.

SUMMARY

Systems and methods are disclosed herein for determining a location of a vehicle (e.g., where effectiveness of a GPS sensor within the vehicle is limited). To this end, a service (e.g., that connects a rider with a driver in the context of a ridesharing application) initializes a determination of a location of a vehicle at a start of a session based on global positioning system (GPS) data of a client device within the vehicle. For example, when a driver of the vehicle first activates an application for accepting rides, or when the driver of the vehicle accepts a ride request, the service initializes the vehicle's location using a GPS sensor of a client device of the driver that is executing the application. As discussed below, in some embodiments, the service is implemented within the client device.

The service receives, from the client device, a rendering of an image that was captured by the client device at a time subsequent to the start of the session (e.g., an image captured automatically by the driver's client device after the vehicle had traveled for ten seconds, or had traveled for one hundred meters past the point where the initial GPS trace was determined). The service then determines a geographical area corresponding to the received rendering using the GPS data and data obtained from a sensor within the vehicle (e.g., a vicinity within which the vehicle is likely to be).

The service compares the received rendering to entries in a database, each respective entry including a respective rendering and a respective associated location that is within the geographical area. In some embodiments, as described below, some or all data of the database is stored at the client device. By limiting the comparison to entries corresponding to locations within the geographical area, processing is efficiently performed, as only a small subset of entries that are likely to correspond to the vehicle's current location are referenced. The service determines from the comparing whether the received rendering matches a respective rendering included in a respective entry in the database of renderings, and when a match is determined, the service determines that the location of the vehicle at the time is the respective associated location included in the respective entry.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram illustrating a location estimation system according to one embodiment.

FIG. 2 is an illustration of GPS traces in a region where GPS signals are inaccurate according to one embodiment.

FIG. 3 is an illustration of a manner in which a vehicle location is initialized and then updated, according to one embodiment.

FIG. 4 is an illustration of a manner in which to identify locations for which renderings do not exist in an image rendering database, according to one embodiment.

FIG. 5 is an illustrative flowchart of a process for estimating vehicle location based on image renderings, according to one embodiment.

FIG. 6 is a block diagram that illustrates a computer system, according to one embodiment.

The figures depict various embodiments for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein.

DETAILED DESCRIPTION

System Environment

FIG. 1 is a block diagram illustrating a location estimation system according to one embodiment. System 100 includes vehicle 101, which includes or is carrying client device 110. The functionality of client device 110 is described in further detail with respect to FIG. 6 below. In some embodiments, client device 110 is integrated into vehicle 101 as a component of vehicle 101. Client device 110 executes an application, such as a transportation service and/or ridesharing application where a rider may request a ride from the rider's current location to a desired destination, and where the rider may be connected to a driver who also uses the ridesharing application and who will provide the ride. While the driver travels toward a rider, and while the driver is transporting a rider to a destination, a map may be viewed by the driver or the rider via the application (e.g., on a client device of the driver or rider), where an indicator of the driver's position is displayed. In an exemplary embodiment, client device 110 is mounted on a dashboard of vehicle 101 and has a forward-facing camera that faces the road. While this exemplary embodiment is referred to throughout, in some embodiments, the application instead commands images to be captured from a stand-alone camera (e.g., embedded in a device that is affixed to a dashboard or windshield of vehicle 101).

In an embodiment, client device 110 automatically captures one or more images based on commands received from the application. For example, client device 110 captures an image upon a driver of vehicle 101 accepting a ride request from a rider, or upon a certain condition being satisfied (e.g., a certain distance has been traveled, or a certain amount of time has passed, from a reference point). Times at which images are automatically captured by client device 110 will be described in further detail below with reference to FIGS. 3-5. Automatic capturing of one or more images may be an opt-in feature, where the application by default does not automatically capture images, and where the application has a setting that, if selected by a user of client device 110 (e.g., a driver of vehicle 101), enables the application to automatically capture the images. While accurate pinpointing of a driver's location using the systems and methods described herein may rely on opting in, the location of the driver may be determined based on GPS traces of client device 110 (even if inaccurate) should a driver of vehicle 101 not opt in.

In some embodiments, client device 110 transmits the image(s) to location determination service 130 over network 120, where location determination service 130 receives the image(s) and compares them to known images, stored at image rendering database 132, to determine the location of client device 110. In some embodiments, the functionality of location determination service 130 and/or image rendering database 132 is located within client device 110, and thus need not be accessed over network 120, as depicted. Functionality of location determination service 130 may be integrated as a module of the application (e.g., the ridesharing application). Image rendering database 132 may be accessed by location determination service 130 directly, or over network 120. Location determination service 130 may be a module of an application, such as a ridesharing application, or may be a component of a transportation service generally, such as a ridesharing service. In some embodiments where location determination service 130 is a module of the application, some or all of the contents of image rendering database 132 are transmitted to the client device for performing localization at the client device. The functionality of location determination service 130 will be described in further detail below with respect to FIGS. 2-5.

Identifying Regions Prone to Erroneous GPS Readings

FIG. 2 is an illustration of GPS traces in a region where GPS signals are inaccurate according to one embodiment. Region 200 includes GPS traces 202 of a client device (e.g., client device 110) as derived from a GPS sensor of client device 110. As an illustrative example, the GPS traces 202 were derived from client device 110 while vehicle 101 was on a road. Because of the existence of tall buildings within region 200, the GPS signals used to derive the GPS traces are distorted and provide inaccurate GPS traces. This is evidenced by the GPS traces being at locations that are not on a road.

Region 200 is exemplary of a location known to location determination service 130 to have or cause erroneous GPS data. The identification of various regions, like region 200, which are associated with erroneous GPS data may be performed automatically by location determination service 130, or may be made based on manual feedback (e.g., performed in advance of executing process 500). For example, location determination service 130 may detect that users of a ridesharing application in a given location set a pickup pin at a location different from their GPS traces at a frequency that exceeds an implementer-defined threshold, and may determine therefrom that GPS data derived from client devices within that region are likely erroneous. As another example, location determination service 130 may detect that GPS traces of users (e.g., drivers) of a ridesharing application are, at a frequency above a threshold, in areas inaccessible to drivers, such as within buildings or parks that do not have road access, and may determine therefrom that GPS data derived from client devices within that region are likely erroneous. As yet another example, location determination service 130 may receive feedback from users that their client devices are determining erroneous locations based on GPS sensors of those client devices, and may determine therefrom that GPS data derived from client devices within that region are likely erroneous.
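By way of non-limiting illustration, the pickup-pin frequency check described above might be sketched in Python as follows, where the helper names, the 75-meter mismatch distance, and the thresholds are assumptions chosen for illustration rather than disclosed values:

    from collections import defaultdict
    from math import radians, sin, cos, asin, sqrt

    def haversine_m(lat1, lon1, lat2, lon2):
        # Great-circle distance in meters between two (lat, lon) points.
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 6371000 * 2 * asin(sqrt(a))

    def flag_unreliable_regions(events, mismatch_m=75.0, min_events=100, freq_threshold=0.2):
        # events: iterable of (region_id, gps_trace, pickup_pin) tuples, where
        # gps_trace and pickup_pin are (lat, lon) pairs. A region is flagged
        # when the fraction of rides whose pin sits far from the GPS trace
        # exceeds the implementer-defined frequency threshold.
        totals, mismatches = defaultdict(int), defaultdict(int)
        for region_id, (glat, glon), (plat, plon) in events:
            totals[region_id] += 1
            if haversine_m(glat, glon, plat, plon) > mismatch_m:
                mismatches[region_id] += 1
        return {r for r, n in totals.items()
                if n >= min_events and mismatches[r] / n > freq_threshold}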

Exemplary Initialization Using GPS and Transition to Image-Based Localization

FIG. 3 is an illustration of a manner in which a vehicle location is initialized and then updated, according to one embodiment. Environment 300 includes vehicle 301, which begins at position 398, and is subsequently at position 399. Vehicle 301 is an example of a vehicle 101 described in FIG. 1. Position 398 indicates the beginning of a session. As used herein, the beginning of a session refers to a time (referenced in FIG. 3 as time “T1”) at which a location of vehicle 301 is initialized based on data acquired from or by client device 110, which is inside of vehicle 301. The application may detect the beginning of a session at the occurrence of any predefined point in time. A non-exhaustive and illustrative set of examples of when a session begins includes the launch of the application, the application detecting that the vehicle is moving (e.g., based on an accelerometer of client device 110), the application detecting that the driver has accepted a ride request, and the like.

In response to detecting the beginning of the session, the application retrieves GPS data acquired using a GPS sensor of client device 110, and determines an initial location of vehicle 301 based on the GPS data. Taking at least an initial GPS reading (even in a region like region 200), before determining location based on image renderings, enables location determination service 130 to determine the location of vehicle 301 by referencing far fewer entries of image rendering database 132 than would be necessary without knowledge of a general vicinity within which vehicle 301 is located. For example, with knowledge that vehicle 301 is within a margin of error of position 398, as determined using a GPS sensor of client device 110 (e.g., within 50 meters of position 398, which may be a known margin of error in region 200), location determination service 130 is able to determine a vicinity within which position 399 is contained, and thus efficiently reference only entries within image rendering database 132 that correspond to that vicinity.

After taking the initial GPS reading, the application monitors data relating to vehicle 301 to detect a scenario where vehicle 301 reaches position 399. Position 399 represents a position of vehicle 301 at a time subsequent to the beginning of the session (referenced in FIG. 3 as time “T2”). The application determines that vehicle 301 has reached position 399 upon the occurrence of a condition (or any condition of a set of predefined conditions). In some embodiments, the condition may be detecting that a predefined amount of time has passed (e.g., five or ten seconds since time T1). In some embodiments, the condition may be detecting that vehicle 301 has traveled a predefined distance, such as distance 315, since the beginning of the session. The application may detect that vehicle 301 has traveled the predefined distance based on data from one or more of an accelerometer, a GPS sensor, or other sensors of client device 110. In some embodiments, the condition may be detecting that vehicle 301 has entered a region where GPS signals are known to be inaccurate, such as region 200.
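By way of non-limiting illustration, the application's condition check might be sketched as follows; the session attribute names, the ten-second and one-hundred-meter defaults, and the contains() region test are illustrative assumptions:

    def should_capture(session, now_s, odometer_m, position, unreliable_regions,
                       time_threshold_s=10.0, distance_threshold_m=100.0):
        # session: object tracking when and where an image was last captured;
        # the attribute names here are hypothetical. Satisfying any one
        # condition triggers an image capture.
        if now_s - session.last_capture_time_s >= time_threshold_s:
            return True
        if odometer_m - session.last_capture_odometer_m >= distance_threshold_m:
            return True
        # Entering a region known to distort GPS (e.g., region 200 of FIG. 2)
        # also triggers a capture; contains() is a hypothetical point-in-region test.
        if any(region.contains(position) for region in unreliable_regions):
            return True
        return False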

Regardless of which condition, or which combination of conditions, is used to determine that vehicle 301 has reached position 399, in response to determining that vehicle 301 has reached position 399, the application commands client device 110 to capture an image. As will be discussed below with respect to FIG. 5, the captured image will be used by location determination service 130 to determine the location of vehicle 301 at position 399 without further use of a GPS sensor of client device 110. As an illustrative example of what will be discussed below with respect to FIG. 5, the application causes client device 110 to transmit to location determination service 130 a rendering of the captured image. Location determination service 130 identifies a subset of entries of image rendering database 132 that correspond to a location determined from the initial GPS reading at position 398 (e.g., including an offset corresponding to direction and distance detected using sensors of client device 110 between times T1 and T2). Location determination service 130 then compares the rendering to renderings of each entry of the subset, and in response to finding a matching rendering, determines the location of vehicle 301 at position 399 to be a location indicated in the entry that includes the matching rendering.

While FIG. 3 only indicates two positions, following determining the location of vehicle 301 at position 399, a similar process may be used by location determination service 130 to determine the position of vehicle 301 at subsequent positions. For example, each subsequent time the application detects a condition, the application may cause client device 110 to capture another image and transmit that image to location determination service 130, which may isolate a subset of entries based on the last known location of vehicle 301 (e.g., the location determined for position 399), and use that subset to find a match for the newly captured rendering. The subset of entries may be determined by adding an offset corresponding to distance and direction detected using sensors of client device 110 between time T2 and a subsequent time, similar to the initial determination described in the prior paragraph. In this manner, following initialization of the location of vehicle 301, the GPS sensor of client device 110 need not be used again to locate the position of vehicle 301 for the remainder of the session.

In some embodiments, position 398 may be determined (and thus the initial location of vehicle 301 at the start of a session may be determined) based on a captured image, instead of GPS data. To this end, location determination service 130 may receive a captured image (in the manner described above) and search image rendering database 132 for a matching image. In some embodiments, rather than using a subset of entries, location determination service 130 may compare the rendering to all entries of image rendering database 132. In some embodiments, location determination service 130 may, during the initialization, identify a subset of entries for comparison based on last-known location during a prior session (rather than GPS data) to preserve computational resources.

Anchoring Unknown Road Segments for Future Database Population

FIG. 4 is an illustration of a manner in which to identify locations for which renderings do not exist in an image rendering database, according to one embodiment. In an embodiment where image localization, rather than GPS sensor data from client device 110, is being used to determine driver location (e.g., in environment 300), there may be times when location determination service 130 is unable to determine a location of vehicle 101. For example, location determination service 130 may be unable to identify a matching rendering in any entry investigated in image rendering database 132. Environment 400 includes route 450. The shaded portions of route 450, as illustrated in FIG. 4, depict locations at which image localization was successful (in the manner described above with reference to FIG. 3, and which will be further detailed below with respect to FIG. 5). A vehicle traveling along route 450 from point A to point B may reach location 460, where location determination service 130 may fail to find, in image rendering database 132, a rendering that matches an image captured at location 460 by client device 110 within the vehicle.

As the vehicle progresses along route 450, location determination service 130 may continue to attempt to determine the location of the vehicle, and, based on an image captured at location 470, location determination service 130 may find a matching rendering, successfully localize the vehicle, and continue to do so for the remainder of route 450 based on further matching renderings. In some embodiments, location determination service 130 may use GPS sensor data of client device 110 to find a subset of entries of image rendering database 132 that correspond to location 470. In other embodiments, location determination service 130 may use sensor data (e.g., a directional sensor in combination with an accelerometer) to determine distance and direction traveled from location 460 in order to find a subset of locations to which location 470 is likely to correspond. These embodiments may be combined (e.g., by first using sensor data in combination with location 460 to identify a subset, and if no rendering matches, going on to use GPS data).

In some embodiments, location determination service 130 may anchor an unknown region, such as the region of route 450 including question marks, with a last known location before the unknown region was entered (e.g., location 460), and with a last known location after the unknown region was exited (e.g., location 470). Location determination service 130 may alert an administrator of the unknown region's existence, and the anchors. The administrator may then take action to update image rendering database 132 with renderings corresponding to the unknown region. For example, the administrator may send employees to capture images of the unknown region, or may incentivize riders to take pictures of portions of the unknown region.
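As a minimal, non-limiting sketch, such an anchoring record and administrator alert might take the following form, where the UnknownRegion structure and the alert_admin callback are hypothetical:

    from dataclasses import dataclass

    @dataclass
    class UnknownRegion:
        # Anchors bracketing a stretch of road for which no renderings matched.
        entry_anchor: tuple  # last localized (lat, lon) before matching failed
        exit_anchor: tuple   # first localized (lat, lon) after matching resumed

    def anchor_unknown_region(last_known, first_reacquired, alert_admin):
        # Record the bracketing anchors and notify an administrator so the
        # database can be populated (e.g., by collecting images between them).
        region = UnknownRegion(last_known, first_reacquired)
        alert_admin(region)
        return region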

Location Determination Service Functionality

FIG. 5 is an illustrative flowchart of a process for estimating vehicle location based on image renderings, in accordance with some embodiments of the disclosure. Process 500 begins by location determination service 130 initializing 502 a location of a vehicle at a start of a session based on GPS data of a client device within the vehicle. The initialization process is described above with respect to FIG. 3, the details of which apply fully hereto. The location determination service receives 504, from the client device (e.g., client device 110 within vehicle 101), a rendering of an image that was captured by the client device at a time subsequent to the start of the session. The rendering may be the image itself, or a transformation of the image. In the case where the rendering is a transformation of the image, the client device may generate the rendering, or a module of location determination service 130 may generate the rendering upon receiving the image. To generate the rendering in the case where the rendering is a transformation of the image, client device 110 or location determination service 130 may generate a three-dimensional model of the captured image, and may register the three-dimensional model to three-dimensional content stored at image rendering database 132.

In some embodiments, the rendering is generated as part of a localization process (e.g., 2D-3D or 3D-3D localization). For example, client device 110 or location determination service 130 extracts 2D image features, e.g., using scale-invariant feature transform ("SIFT"), oriented FAST and rotated BRIEF ("ORB"), speeded-up robust features ("SURF"), or the like. In some embodiments, location determination service 130 or client device 110 builds a three-dimensional model from the captured image using a machine-learned model.
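By way of non-limiting illustration, such 2D feature extraction could be performed with OpenCV's ORB implementation; this sketch is one possible realization, not the disclosed implementation, and the parameter value is an assumption:

    import cv2

    def extract_features(image_path, n_features=1000):
        # Detect ORB keypoints and compute binary descriptors for a captured
        # image; cv2.SIFT_create() could be substituted where SIFT is preferred.
        img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        orb = cv2.ORB_create(nfeatures=n_features)
        keypoints, descriptors = orb.detectAndCompute(img, None)
        return keypoints, descriptors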

In some embodiments, when receiving the rendering of the image that was captured by the client device at the time subsequent to the start of the session, location determination service 130 retrieves a plurality of progress benchmarks and determines that, at a given time (e.g., time T2 depicted in FIG. 3), a progress benchmark of the plurality of progress benchmarks has been reached (e.g., a predetermined time lapse equals the time difference between times T2 and T1 depicted in FIG. 3, or distance 315 equals a predefined distance). In response to determining that the progress benchmark has been reached, location determination service 130 commands the client device to capture the image (e.g., as depicted at position 399 of FIG. 3). As discussed above with respect to the conditions, the plurality of progress benchmarks may be any of, or a combination of, a threshold period of time from either initialization or from a last capture of an image and a threshold distance from a location where either initialization or a last capture of an image occurred. A progress benchmark may also be a threshold change in direction from a direction the vehicle was approaching at either initialization or at a last capture of an image (e.g., a right-angle turn was made, which may trigger a need to capture a new image in the new direction that the vehicle is directed toward).

Process 500 continues with location determination service 130 determining 506 a geographical area corresponding to the received rendering using the GPS data and data obtained from a sensor within the vehicle. In some embodiments, when determining the geographical area, location determination service 130 determines, from the data, a distance and direction in which the vehicle has traveled since the start of the session, and determines a scope of the geographical area based on the distance and the direction. For example, as described above, location determination service 130 may determine a last known location (e.g., position 398 of FIG. 3), and may determine a distance and direction traveled using sensor data of client device 110 to determine a likely location of vehicle 101 at a time a next image was captured. The geographical area may be defined to be a predefined radius (e.g., ten meters, a quarter mile, etc.) surrounding the likely location of vehicle 101 at the time the next image was captured. Further details are described above with respect to FIGS. 3-4. Location determination service 130 may cause a user interface of a driver or rider client device to display the likely location of the vehicle.
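A simplified, non-limiting dead-reckoning sketch of this determination follows, assuming a flat-earth approximation that holds over short distances; the default radius is an illustrative value:

    from math import radians, sin, cos

    def estimate_search_area(last_fix, distance_m, bearing_deg, radius_m=250.0):
        # last_fix: (lat, lon) of the last known location (e.g., position 398).
        # distance_m and bearing_deg describe travel since that fix, as derived
        # from the client device's accelerometer and directional sensors.
        lat, lon = last_fix
        theta = radians(bearing_deg)
        dlat = (distance_m * cos(theta)) / 111320.0  # meters per degree of latitude
        dlon = (distance_m * sin(theta)) / (111320.0 * cos(radians(lat)))
        # Returns the likely current location plus a search radius around it.
        return (lat + dlat, lon + dlon), radius_m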

Location determination service 130 goes on to compare 508 the received rendering to entries in a database, each respective entry including a rendering and a respective associated location that is within the geographical area. For example, keypoints of the received rendering may be extracted and compared to keypoints of candidate renderings to determine whether a threshold number of keypoints match (described in connection with 510 below). In some embodiments, to improve computational efficiency, location determination service 130 compares the received rendering to the entries by extracting geolocation data from the captured rendering (e.g., data corresponding to position 399). For example, even if GPS data obtained by client device 110 is erroneous, it is likely to be within a threshold distance from the actual location of client device 110. Location determination service 130 then determines a subset of the entries corresponding to the geolocation data. For example, location determination service 130 determines a radius of actual GPS coordinates that are within a threshold distance of a location indicated by the geolocation data. Location determination service 130 limits the comparing of the received rendering to the subset of the entries, thus saving processing time and power, as only entries that are within a threshold radius of a given location are searched, as opposed to all entries of image rendering database 132.
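Limiting the comparison to a geolocated subset of entries might then be sketched as follows, reusing the haversine_m helper from the earlier sketch; the entry schema (lat, lon, rendering attributes) is assumed for illustration:

    def candidate_entries(entries, center, radius_m):
        # entries: database rows with lat, lon, and rendering attributes (a
        # hypothetical schema). Only entries whose associated location falls
        # inside the geographical area are kept for comparison.
        clat, clon = center
        return [e for e in entries
                if haversine_m(clat, clon, e.lat, e.lon) <= radius_m]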

Location determination service 130 determines 510 whether the received rendering matches a respective rendering included in a respective entry of the database of renderings. In some embodiments, in order to perform this determination, location determination service 130 determines that the received rendering does not completely match any entry of the entries. For example, when comparing two-dimensional renderings, location determination service 130 may determine that not all keypoints of the received rendering match any candidate rendering. When comparing three-dimensional renderings, location determination service 130 may determine that the keypoints of the image do not match all keypoints of any perspective of any candidate rendering.

Matching can be performed coarsely (e.g., as a first part of a process) by leveraging GPS to reduce the search space (e.g., to reduce the number of database entries to be referenced, as discussed above and below). By using a large radius around a query image's GPS position, the application isolates candidate renderings (e.g., images or 3D sections of the scene to match against). In some embodiments, the application performs further filtering by using the heading direction of the query image or 3D scene coordinates to align them to the base map (e.g., of a 2D or 3D model of known renderings) more readily. Additional techniques like vocabulary trees, bag-of-words models, or even machine learning can be used to quickly retrieve a matching set of images or 3D content.

The process of determining whether a received rendering matches a candidate rendering is also referred to as a process of “alignment” herein. Alignment refers to aligning a captured image to either stored isolated renderings that have known corresponding locations, or to a portion of a “base map” that stitches together known renderings into a model of the world, where each portion of the base map corresponds to a different location and is built from captured images of all locations that are informed by the base map. Location determination service 130 may perform 3D-3D alignment in a variety of ways. In some embodiments, location determination service 130 executes an iterative closest point (ICP) module to determine the 3D-3D alignment. Location determination service 130 may seed the 3D-3D alignment using machine-learned models that generate a segmentation by semantically segmenting the 3D scene of the base map. With that segmentation, location determination service 130 may determine a coarse alignment between similar semantic structures, such as car-to-car alignments, light post-to-light post alignments, and the like. With that coarse alignment, location determination service 130 may then revert to traditional ICP to perform the final precision alignment in an accelerated fashion.
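By way of non-limiting illustration, the final precision ICP step could be run with Open3D's registration pipeline; Open3D is not named in the disclosure, and the correspondence distance and identity initialization here are assumptions chosen for illustration:

    import numpy as np
    import open3d as o3d

    def align_icp(source_points, target_points, init_transform=np.eye(4),
                  max_corr_dist=0.5):
        # source_points / target_points: (N, 3) arrays for the captured scene
        # and the base-map section. The coarse semantic alignment described
        # above would supply init_transform; ICP then refines it.
        source = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(source_points))
        target = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(target_points))
        result = o3d.pipelines.registration.registration_icp(
            source, target, max_corr_dist, init_transform,
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        return result.transformation  # 4x4 rigid transform mapping source onto target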

In response to determining that the received rendering does not completely match any entry of the entries, location determination service 130 determines, for a given entry, a percentage of characteristics of the received rendering that match characteristics of the given entry, and determines whether the percentage exceeds a threshold. In response to determining that the percentage exceeds the threshold, location determination service 130 determines that the received rendering matches the given entry based on the partial match. Likewise, in response to determining that the percentage does not exceed the threshold, location determination service 130 determines that the received rendering does not match the given entry notwithstanding the partial match.
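Combining the descriptor comparison of 508 with the percentage test of 510 might be sketched as follows, where the ratio-test constant and required match fraction are illustrative assumptions rather than disclosed values:

    import cv2

    def is_match(desc_query, desc_candidate, n_query_keypoints,
                 ratio=0.75, match_fraction=0.3):
        # Hamming distance suits binary ORB descriptors; Lowe's ratio test
        # discards ambiguous correspondences before the percentage check.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
        pairs = matcher.knnMatch(desc_query, desc_candidate, k=2)
        good = [p[0] for p in pairs
                if len(p) == 2 and p[0].distance < ratio * p[1].distance]
        return len(good) / max(n_query_keypoints, 1) >= match_fraction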

In response to determining that the received rendering matches a rendering of an entry, location determination service 130 determines 512 that the location of the vehicle (e.g., vehicle 101) is the location associated with the matching rendering. For example, location determination service 130 retrieves a location indicated by the entry that includes the matching rendering, and determines that the location indicated by this entry is the location of client device 110. Location determination service 130 optionally transmits the location to a rider's client device, and causes the location to be generated for display on the rider's client device. In some embodiments, in response to determining that the received rendering does not match the given entry notwithstanding the partial match, location determination service 130 transmits 514 a prompt to an administrator to add an entry corresponding to the location of the client device (e.g., in a scenario like the unknown region of route 450 in FIG. 4, as described above). In a scenario where location determination service 130 has a rendering at a location matching the location of the captured image, location determination service 130 may command image rendering database 132 to mark the rendering as stale and requiring an update.

The above description relates to localization in a rider/driver environment using images. However, the techniques described herein may be used to localize anyone with a client device, such as a smart phone (e.g., as they are moving through an urban canyon where GPS is insufficient). Thus, all examples disclosed herein describing identifying driver and/or rider location may apply to any user with a client device, regardless of whether they are engaged in the above-described rider/driver scenarios.

Computing Hardware

The entities shown in FIG. 1 are implemented using one or more computers. FIG. 6 is a block diagram that illustrates a computer system 600 for acting as client device 110 or location determination service 130, according to one embodiment. Illustrated is at least one processor 602 coupled to a chipset 604. Also coupled to the chipset 604 are a memory 606, a storage device 608, a keyboard 610, a graphics adapter 612, a pointing device 614, and a network adapter 616. A display 618 is coupled to the graphics adapter 612. In one embodiment, the functionality of the chipset 604 is provided by a memory controller hub 620 and an I/O controller hub 622. In another embodiment, the memory 606 is coupled directly to the processor 602 instead of to the chipset 604.

The storage device 608 is any non-transitory computer-readable storage medium, such as a hard drive, compact disk read-only memory (CD-ROM), DVD, or a solid-state memory device. The memory 606 holds instructions and data used by the processor 602. The pointing device 614 may be a mouse, track ball, or other type of pointing device, and is used in combination with the keyboard 610 to input data into the computer system 600. The graphics adapter 612 displays images and other information on the display 618. The network adapter 616 couples the computer system 600 to the network 120.

As is known in the art, a computer 600 can have different and/or other components than those shown in FIG. 6. In addition, the computer 600 can lack certain illustrated components. For example, the computer acting as the location determination service 130 can be formed of multiple blade servers linked together into one or more distributed systems and lack components such as keyboards and displays. Moreover, the storage device 608 can be local and/or remote from the computer 600 (such as embodied within a storage area network (SAN)).

Additional Considerations

The foregoing description of the embodiments of the invention has been presented for the purpose of illustration; it is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Persons skilled in the relevant art can appreciate that many modifications and variations are possible in light of the above disclosure.

The language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments of the invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.

Claims

1. A computer-implemented method for determining a location of a vehicle, the method comprising:

initializing a determination of a location of a vehicle at a start of a session based on global positioning system (GPS) data of a client device within the vehicle;
receiving, from the client device, a rendering of an image that was captured by the client device at a time subsequent to the start of the session;
determining a geographical area corresponding to the received rendering using the GPS data and data obtained from a sensor within the vehicle;
comparing the received rendering to entries in a database, each respective entry including a respective rendering and a respective associated location that is within the geographical area;
determining from the comparing whether the received rendering matches a respective rendering included in a respective entry in the database of renderings; and
in response to determining that the received rendering matches the respective rendering included in the respective entry, determining that the location of the vehicle at the time subsequent to the start of the session is the respective associated location included in the respective entry.

2. The computer-implemented method of claim 1, wherein receiving the rendering of the image that was captured by the client device at the time subsequent to the start of the session comprises:

retrieving a plurality of progress benchmarks;
determining that, at the time subsequent to the start of the session, a progress benchmark of the plurality of progress benchmarks has been reached; and
in response to determining that the progress benchmark has been reached, instructing the client device to capture the image.

3. The computer-implemented method of claim 2, wherein the plurality of progress benchmarks comprises at least one of a threshold period of time from either initialization or from a last capture of an image, a threshold distance from a location where either initialization or a last capture of an image occurred, or a threshold change in direction from a direction the vehicle was approaching at either initialization or at a last capture of an image.

4. The computer-implemented method of claim 1, wherein determining the geographical area comprises:

determining, from the data, a distance and direction in which the vehicle has traveled since the start of the session; and
determining a scope of the geographical area based on the distance and the direction.

5. The computer-implemented method of claim 4, further comprising:

causing a user interface of a rider client device to display the location of the vehicle.

6. The computer-implemented method of claim 1, further comprising:

receiving updated renderings of updated images as the vehicle progresses;
determining whether each respective updated rendering of the updated renderings matches an image rendering of an entry of the entries; and
in response to determining that a respective updated rendering does not match an image rendering of an entry of the entries, designating a respective geographical area corresponding to the respective updated rendering as having stale information.

7. The computer-implemented method of claim 6, further comprising pin-pointing a subsection of the respective geographical area as having the stale information by:

determining a most recent known location of the vehicle prior to when the respective updated rendering was received;
determining an oldest known location of the vehicle subsequent to when the respective updated rendering was received; and
determining the subsection to include an area between the most recent known location and the oldest known location.

8. A non-transitory computer-readable storage medium storing computer program instructions executable by a processor to perform operations for determining a location of a vehicle, the operations comprising:

initializing a determination of a location of a vehicle at a start of a session based on global positioning system (GPS) data of a client device within the vehicle;
receiving, from the client device, a rendering of an image that was captured by the client device at a time subsequent to the start of the session;
determining a geographical area corresponding to the received rendering using the GPS data and data obtained from a sensor within the vehicle;
comparing the received rendering to entries in a database, each respective entry including a respective rendering and a respective associated location that is within the geographical area;
determining from the comparing whether the received rendering matches a respective rendering included in a respective entry in the database of renderings; and
in response to determining that the received rendering matches the respective rendering included in the respective entry, determining that the location of the vehicle at the time subsequent to the start of the session is the respective associated location included in the respective entry.

9. The non-transitory computer-readable storage medium of claim 8, wherein receiving the rendering of the image that was captured by the client device at the time subsequent to the start of the session comprises:

retrieving a plurality of progress benchmarks;
determining that, at the time subsequent to the start of the session, a progress benchmark of the plurality of progress benchmarks has been reached; and
in response to determining that the progress benchmark has been reached, instructing the client device to capture the image.

10. The non-transitory computer-readable storage medium of claim 9, wherein the plurality of progress benchmarks comprises at least one of a threshold period of time from either initialization or from a last capture of an image, a threshold distance from a location where either initialization or a last capture of an image occurred, or a threshold change in direction from a direction the vehicle was approaching at either initialization or at a last capture of an image.

11. The non-transitory computer-readable storage medium of claim 8, wherein determining the geographical area comprises:

determining, from the data, a distance and direction in which the vehicle has traveled since the start of the session; and
determining a scope of the geographical area based on the distance and the direction.

12. The non-transitory computer-readable storage medium of claim 11, the operations further comprising:

causing a user interface of a rider client device to display the location of the vehicle.

13. The non-transitory computer-readable storage medium of claim 8, the operations further comprising:

receiving updated renderings of updated images as the vehicle progresses;
determining whether each respective updated rendering of the updated renderings matches an image rendering of an entry of the entries; and
in response to determining that a respective updated rendering does not match an image rendering of an entry of the entries, designating a respective geographical area corresponding to the respective updated rendering as having stale information.

14. The non-transitory computer-readable storage medium of claim 13, the operations further comprising pin-pointing a subsection of the respective geographical area as having the stale information by:

determining a most recent known location of the vehicle prior to when the respective updated rendering was received;
determining an oldest known location of the vehicle subsequent to when the respective updated rendering was received; and
determining the subsection to include an area between the most recent known location and the oldest known location.

15. A system for determining a location of a vehicle, comprising:

a processor for executing computer program instructions; and
a non-transitory computer-readable storage medium storing computer program instructions executable by the processor to perform operations for estimating a location of a client device, the operations comprising:
initializing a determination of a location of a vehicle at a start of a session based on global positioning system (GPS) data of a client device within the vehicle;
receiving, from the client device, a rendering of an image that was captured by the client device at a time subsequent to the start of the session;
determining a geographical area corresponding to the received rendering using the GPS data and data obtained from a sensor within the vehicle;
comparing the received rendering to entries in a database, each respective entry including a respective rendering and a respective associated location that is within the geographical area;
determining from the comparing whether the received rendering matches a respective rendering included in a respective entry in the database of renderings; and
in response to determining that the received rendering matches the respective rendering included in the respective entry, determining that the location of the vehicle at the time subsequent to the start of the session is the respective associated location included in the respective entry.

16. The system of claim 15, wherein receiving the rendering of the image that was captured by the client device at the time subsequent to the start of the session comprises:

retrieving a plurality of progress benchmarks;
determining that, at the time subsequent to the start of the session, a progress benchmark of the plurality of progress benchmarks has been reached; and
in response to determining that the progress benchmark has been reached, instructing the client device to capture the image.

17. The system of claim 16, wherein the plurality of progress benchmarks comprises at least one of a threshold period of time from either initialization or from a last capture of an image, a threshold distance from a location where either initialization or a last capture of an image occurred, or a threshold change in direction from a direction the vehicle was approaching at either initialization or at a last capture of an image.

18. The system of claim 15, wherein determining the geographical area comprises:

determining, from the data, a distance and direction in which the vehicle has traveled since the start of the session; and
determining a scope of the geographical area based on the distance and the direction.

19. The system of claim 18, the operations further comprising:

causing a user interface of a rider client device to display the location of the vehicle.

20. The system of claim 15, the operations further comprising:

receiving updated renderings of updated images as the vehicle progresses;
determining whether each respective updated rendering of the updated renderings matches an image rendering of an entry of the entries; and
in response to determining that a respective updated rendering does not match an image rendering of an entry of the entries, designating a respective geographical area corresponding to the respective updated rendering as having stale information.
Patent History
Publication number: 20200234062
Type: Application
Filed: Dec 12, 2019
Publication Date: Jul 23, 2020
Inventor: Aaron Matthew Rogan (Westminster, CO)
Application Number: 16/712,883
Classifications
International Classification: G06K 9/00 (20060101); G06K 9/62 (20060101); G01S 19/48 (20060101);