METHOD FOR COMMUNICATING INTENT OF AN AUTONOMOUS VEHICLE

One variation of a method for communicating intent includes, at an autonomous vehicle: autonomously approaching an intersection; autonomously navigating to a stop proximal the intersection; while stopped proximal the intersection, projecting an intent icon onto pavement at a first distance ahead of the autonomous vehicle; calculating a confidence score for possession of right of way of the autonomous vehicle to enter the intersection based on objects detected in a field around the autonomous vehicle; projecting the intent icon onto pavement at distances ahead of the autonomous vehicle proportional to the confidence score and greater than the first distance; and in response to the confidence score exceeding a threshold score, autonomously entering the intersection.

Description
TECHNICAL FIELD

This invention relates generally to the field of autonomous vehicles and more specifically to a new and useful method for communicating intent of an autonomous vehicle in the field of autonomous vehicles.

BRIEF DESCRIPTION OF THE FIGURES

FIG. 1 is a flowchart representation of a method.

DESCRIPTION OF THE EMBODIMENTS

The following description of embodiments of the invention is not intended to limit the invention to these embodiments but rather to enable a person skilled in the art to make and use this invention. Variations, configurations, implementations, example implementations, and examples described herein are optional and are not exclusive to the variations, configurations, implementations, example implementations, and examples they describe. The invention described herein can include any and all permutations of these variations, configurations, implementations, example implementations, and examples.

1. Method

As shown in FIG. 1, a method S100 for communicating intent includes, at an autonomous vehicle: autonomously approaching an intersection in Block S110; autonomously navigating to a stop proximal the intersection in Block S112; while stopped proximal the intersection, projecting an intent icon onto pavement at a first distance ahead of the autonomous vehicle in Block S120; calculating a confidence score for possession of right of way of the autonomous vehicle to enter the intersection based on objects detected in a field around the autonomous vehicle in Block S130; projecting the intent icon onto pavement at distances ahead of the autonomous vehicle proportional to the confidence score and greater than the first distance in Block S122; and, in response to the confidence score exceeding a threshold score, autonomously entering the intersection in Block S140.

One variation of the method S100 includes, at the autonomous vehicle: autonomously approaching an intersection in Block S110; while slowing upon approach to the intersection, projecting an intent icon onto pavement at a distance ahead of the autonomous vehicle proportional to a speed of the autonomous vehicle in Block S120; while stopped proximal the intersection, projecting an intent icon onto pavement at a first distance ahead of the autonomous vehicle in Block S120; predicting possession of right of way of the autonomous vehicle to enter the intersection based on objects detected in a field around the autonomous vehicle in Block S130; in preparation for navigating into the intersection, projecting the intent icon onto pavement at a second distance ahead of the autonomous vehicle greater than the first distance in Block S122; and autonomously entering the intersection in Block S140.

2. Applications

Generally, the method S100 can be executed by an autonomous vehicle to visually indicate its intent—to other vehicles, vehicle operators, and pedestrians nearby—through a simple intent icon projected onto nearby pavement. In particular, rather than project text or scenario-specific symbols onto pavement or rather than render text or scenario-specific symbols on displays integrated into the autonomous vehicle, the autonomous vehicle can instead project a simple intent icon (e.g., a green circle or “dot” approximately 25 centimeters in diameter) onto nearby pavement at distances from the autonomous vehicle that correspond to the autonomous vehicle's intent to advance forward from its current location, such as into an intersection, through a turn lane, or out of a parking space. By projecting a simple intent icon into its surrounding field at positions (i.e., distances from the autonomous vehicle) linked to the autonomous vehicle's intent to remain stopped or advance forward, the autonomous vehicle can enable other vehicle operators and pedestrians nearby to quickly intuit the autonomous vehicle's next action and the autonomous vehicle's perception of the field (i.e., whether the autonomous vehicle “sees” these other vehicles and pedestrians).

The autonomous vehicle can execute Blocks of the method S100 in order to simulate eye contact, body language, and gestures between human operators of road vehicles and between vehicle operators and pedestrians nearby via a simple intent icon projected onto pavement near the autonomous vehicle. For example, a human standing on a sidewalk or in an office may look down—such as at her phone or notebook or focusing downwardly on her current task—if she intends to remain in her current location rather than walk forward. However, this human may naturally look up from her smartphone, notebook, or task when she eventually does intend to walk forward, including looking in the direction she intends to walk. The autonomous vehicle can execute Blocks of the method S100 to mimic this behavior, including: projecting an intent icon onto the pavement just ahead of the autonomous vehicle when the autonomous vehicle intends to remain stopped in its current location (e.g., at a traffic light, a stop sign, a crosswalk, a turn lane, a parking space); and projecting the intent icon further ahead of the front of the autonomous vehicle when the autonomous vehicle intends to move forward. Human operators and pedestrians nearby may: witness this intent icon; intuit the autonomous vehicle's intent to remain stopped when the projected intent icon remains close to the autonomous vehicle, since this intent icon position may simulate a user looking down at her phone, notebook, or other task; and intuit the autonomous vehicle's increasing intent to move forward (or backward, such as out of a parking space) as the autonomous vehicle projects the intent icon at greater distances from the autonomous vehicle, since this intent icon position may simulate a user looking up from her phone, notebook, or other task when preparing to move from her current location.

The autonomous vehicle can therefore execute Blocks of the method S100 in order to enable vehicle operators and pedestrians nearby to quickly comprehend the autonomous vehicle's intent (e.g., its next elected navigational action) without reading textual content, interpreting scenario-specific icons, or otherwise holding prior knowledge of such language or iconography projected or rendered by the autonomous vehicle.

The method S100 is described below as executed by an autonomous vehicle to project an intent icon onto pavement in the field near the autonomous vehicle in order to communicate the autonomous vehicle's perception of its field, its perception of its right of way, and its intent, such as in addition to or in combination with left and right turn signals integrated into the autonomous vehicle. Alternatively, the autonomous vehicle can implement similar methods and techniques to render a similar intent icon on exterior displays integrated into the autonomous vehicle. Furthermore, the autonomous vehicle can include a dedicated autonomous rideshare vehicle, an autonomous personal mobility vehicle, an autonomous fleet vehicle, an autonomous delivery or other commercial-type vehicle, an autonomous truck, etc.

3. Autonomous Vehicle

The autonomous vehicle can include: a suite of sensors configured to collect information about the autonomous vehicle's environment; local memory storing a navigation map defining a route for execution by the autonomous vehicle and a localization map that the autonomous vehicle implements to determine its location in real space; and a controller. The controller can: determine the location of the autonomous vehicle in real space based on sensor data collected from the suite of sensors and the localization map; determine the context of a scene around the autonomous vehicle based on these sensor data; elect a future navigational action (e.g., a navigational decision) based on the context of the scene around the autonomous vehicle, the real location of the autonomous vehicle, and the navigation map, such as by implementing a deep learning and/or artificial intelligence model; and control actuators within the vehicle (e.g., accelerator, brake, and steering actuators) according to elected navigation decisions.

In one implementation, the autonomous vehicle includes a set of 360° LIDAR sensors arranged on the autonomous vehicle, such as one LIDAR sensor arranged at the front of the autonomous vehicle and a second LIDAR sensor arranged at the rear of the autonomous vehicle or a cluster of LIDAR sensors arranged on the roof of the autonomous vehicle. Each LIDAR sensor can output one three-dimensional distance map (or depth image)—such as in the form of a 3D point cloud representing distances between the LIDAR sensor and external surface within the field of view of the LIDAR sensor—per rotation of the LIDAR sensor (i.e., once per scan cycle). The autonomous vehicle can additionally or alternatively include: a set of infrared emitters configured to project structured light into a field near the autonomous vehicle; a set of infrared detectors (e.g., infrared cameras); and a processor configured to transform images output by the infrared detector(s) into a depth map of the field.

The autonomous vehicle can also include one or more color cameras facing outwardly from the front, rear, and left lateral and right lateral sides of the autonomous vehicle. For example, each camera can output a video feed containing a sequence of digital photographic images (or “frames”), such as at a rate of 20 Hz. The autonomous vehicle can also include a set of infrared proximity sensors arranged along the perimeter of the base of the autonomous vehicle and configured to output signals corresponding to proximity of objects and pedestrians within one meter of the autonomous vehicle. The controller in the autonomous vehicle can thus fuse data streams from the LIDAR sensor(s), the color camera(s), and the proximity sensor(s), etc. into one optical scan of the field around the autonomous vehicle—such as in the form of a 3D color map or 3D point cloud of roads, sidewalks, vehicles, pedestrians, etc. in the field around the autonomous vehicle—per scan cycle. The autonomous vehicle can also collect data broadcast by other vehicles and/or static sensor systems nearby and can incorporate these data into an optical scan to determine a state and context of the scene around the vehicle and to elect subsequent actions.

Furthermore, the autonomous vehicle can compare features extracted from this optical scan to like features represented in the localization map—stored in local memory on the autonomous vehicle—in order to determine its geospatial location and orientation in real space and then elect a future navigational action or other navigational decision accordingly. The autonomous vehicle can also: implement a perception model—such as including integrated or discrete vehicle, pedestrian, traffic sign, traffic signal, and lane marker detection models—to detect and identify mutable and immutable objects in its proximity; and implement a navigation or path planning model (e.g., in the form of a convolutional neural network) to elect acceleration, braking, and turning actions based on these mutable and immutable objects and the autonomous vehicle's target destination. For example, the autonomous vehicle can implement a localization map, a navigational model, and a path planning model to: autonomously approach an intersection in Block S110; detect a stop sign, yield sign, or traffic signal near this intersection; detect another vehicle or pedestrian near this intersection; perceive that the other vehicle or pedestrian has right of way to enter the intersection or that the autonomous vehicle is otherwise required to stop at the intersection; and then autonomously navigate to a stop proximal the intersection in Block S112. As the autonomous vehicle slows upon approach to the intersection, while the autonomous vehicle is stopped at the intersection, and/or as the autonomous vehicle then accelerates into the intersection, the autonomous vehicle can execute subsequent Blocks of the method S100 to visually communicate its intent by projecting the intent icon onto pavement nearby, as described below.

However, the autonomous vehicle can include any other sensors and can implement any other scanning, signal processing, and autonomous navigation techniques or models to determine its geospatial position and orientation, to perceive objects in its vicinity, and to elect navigational actions based on sensor data collected through these sensors.

4. Projector

The autonomous vehicle also includes a front projector configured to project light onto pavement ahead of the autonomous vehicle. For example, the front projector can include a DLP or LCD projector arranged under or integrated into the front bumper of the autonomous vehicle and configured to project light over a distance ahead of the autonomous vehicle, such as including from 50 centimeters ahead of the autonomous vehicle to five meters ahead of the autonomous vehicle. Alternatively, the front projector can be configured to output a beam of light (e.g., an approximately-collimated light beam) and can be pivotably mounted to the front of the autonomous vehicle in order to project the beam of light along a length of pavement ahead of the autonomous vehicle. However, the front projector can be mounted to or integrated into the autonomous vehicle in any other way, can include a light source of any other type, and can project light into the field—at varying distances from the front of the autonomous vehicle—in any other way.
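For the pivotably-mounted variant, the pitch angle that aims the beam at a given pavement distance follows directly from the projector's mounting height. The following Python sketch illustrates this geometry; the mounting height and function names are illustrative assumptions, not taken from the disclosure.

```python
import math

# Assumed mounting height of the front projector in the bumper (illustrative).
PROJECTOR_HEIGHT_M = 0.4

def beam_pitch_deg(distance_m: float) -> float:
    """Pitch angle below horizontal (in degrees) that aims a roughly
    collimated beam at a pavement spot `distance_m` ahead of the projector."""
    return math.degrees(math.atan2(PROJECTOR_HEIGHT_M, distance_m))
```

For example, a spot 0.4 meters ahead requires a 45-degree downward pitch, and the angle flattens as the target distance grows toward five meters.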

The autonomous vehicle can similarly include: a rear projector configured to project light onto pavement behind the autonomous vehicle; a right projector configured to project light onto pavement to the right of the autonomous vehicle; and/or a left projector configured to project light onto pavement to the left of the autonomous vehicle.

5. Intent Icon

Block S120 of the method S100 recites, while stopped proximal the intersection, projecting an intent icon onto pavement at a first distance ahead of the autonomous vehicle; Block S130 of the method S100 recites calculating a confidence score for possession of right of way of the autonomous vehicle to enter the intersection based on objects detected in a field around the autonomous vehicle; and Block S122 of the method S100 recites projecting the intent icon onto pavement at distances ahead of the autonomous vehicle proportional to the confidence score and greater than the first distance. Generally, in Blocks S120, S122, and S130, the autonomous vehicle can project the intent icon onto pavement near the autonomous vehicle at a distance from the autonomous vehicle (or at a position relative to the autonomous vehicle) that corresponds to the autonomous vehicle's intent to either remain stopped in its current location or to enter the intersection. In particular, the autonomous vehicle can project a simple intent icon—from which a human nearby may quickly intuit the autonomous vehicle's intent to either remain stopped or to move from its current location—onto pavement in the field near the autonomous vehicle.
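The proportional mapping from confidence score to projection distance recited in Blocks S122 and S130 can be sketched as follows. This is a minimal Python illustration; the specific distance bounds and the clamping of the score to [0, 1] are assumptions for the example, not values recited by the disclosure.

```python
# Assumed bounds (illustrative): the "first distance" just ahead of the front
# bumper while stopped, and a farthest projection distance into the intersection.
FIRST_DISTANCE_M = 0.5
MAX_DISTANCE_M = 5.0

def icon_distance(confidence: float) -> float:
    """Map a right-of-way confidence score in [0, 1] to a projection distance
    ahead of the vehicle: equal to the first distance at zero confidence and
    growing proportionally with the score (Blocks S120 and S122)."""
    confidence = max(0.0, min(1.0, confidence))
    return FIRST_DISTANCE_M + confidence * (MAX_DISTANCE_M - FIRST_DISTANCE_M)
```

Under this mapping, the icon sits at the first distance while the vehicle perceives no right of way and advances smoothly toward the far bound as confidence approaches the threshold for entering the intersection in Block S140.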

In one implementation shown in FIG. 1, the autonomous vehicle projects a circular “dot”—such as approximately 25 centimeters in diameter and in a solid color (e.g., white, orange, green, yellow, violet, pink)—onto pavement near the autonomous vehicle in Blocks S120 and S122.

The autonomous vehicle can also animate the intent icon in order to garner attention from vehicle operators and pedestrians nearby. For example, the autonomous vehicle can pulsate the intent icon, including expanding and contracting the projected intent icon by +/-30% of its nominal diameter, in order to increase likelihood that a human onlooker will notice the intent icon. In this implementation, the autonomous vehicle can also pulsate the intent icon at a rate corresponding to its intended rate of acceleration or deceleration from its current location.
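The pulsation described above can be modeled as a sinusoidal modulation of the dot's diameter. The sketch below assumes the 25-centimeter nominal diameter and +/-30% amplitude from the text; the sinusoidal waveform and function names are illustrative choices.

```python
import math

NOMINAL_DIAMETER_M = 0.25  # ~25-centimeter dot, per the disclosure
PULSE_AMPLITUDE = 0.30     # expand/contract by +/-30% of nominal diameter

def pulsed_diameter(t_s: float, pulse_rate_hz: float) -> float:
    """Diameter of the projected dot at time `t_s`, pulsating sinusoidally at
    `pulse_rate_hz`; the rate may be scaled with the vehicle's intended rate
    of acceleration or deceleration."""
    phase = 2.0 * math.pi * pulse_rate_hz * t_s
    return NOMINAL_DIAMETER_M * (1.0 + PULSE_AMPLITUDE * math.sin(phase))
```

At a 1 Hz pulse rate, the dot grows from 25 centimeters to roughly 32.5 centimeters a quarter-second into each cycle, then contracts symmetrically.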

In another implementation shown in FIG. 1, the autonomous vehicle projects the intent icon in the form of a “dot” into the field with a “tail” trailing the dot back to the autonomous vehicle in order to visually link the projected dot to the autonomous vehicle, such as if other autonomous vehicles nearby are implementing similar methods and techniques to project their own intent icons onto pavement nearby (e.g., when multiple autonomous vehicles converge on one intersection). In a similar implementation, the autonomous vehicle can project the intent icon that defines a tapered line or arc extending from the autonomous vehicle and widening at greater distances from the autonomous vehicle.

The autonomous vehicle can also project the intent icon in a color matched to the autonomous vehicle's exterior color or matched to a graphic on the autonomous vehicle's exterior. For example, the autonomous vehicle: can include green and yellow graphics arranged on its exterior; and can project a circular intent icon with concentric green and yellow rings onto pavement nearby in order to enable humans nearby to visually link this projected intent icon to this autonomous vehicle. In this example, a second autonomous vehicle: can include orange and white graphics arranged on its exterior; and can project a circular intent icon with concentric orange and white rings onto pavement nearby in order to enable humans nearby to visually link this projected intent icon to this second autonomous vehicle and to distinguish this intent icon projected by the second autonomous vehicle from intent icons projected by other autonomous vehicles nearby.

However, the autonomous vehicle can project an intent icon of any other size, geometry, and/or color, etc. and animated in any other way.

6. Intent Icon Activation Scenarios

Generally, the autonomous vehicle can activate the front projector to project the intent icon into the field nearby in select scenarios—that is, when the autonomous vehicle is preparing to execute certain navigational actions.

6.1 Stop Sign

In one implementation, the autonomous vehicle activates the front projector to project the intent icon into the field upon stopping at a stop sign and preparing to move into the intersection ahead. In this implementation, the autonomous vehicle can implement autonomous navigation and perception techniques to: autonomously navigate along a segment of road; detect a stop sign ahead of the autonomous vehicle; slow upon approach to the stop sign; stop at or ahead of the intersection; detect and track other vehicles at the intersection; perceive right of way of these vehicles based on detected arrival times at the intersection; and remain stopped at the intersection while waiting for other vehicles—with right of way preceding that of the autonomous vehicle—to enter the intersection.

While stopped at the stop sign and waiting for other vehicles to enter the intersection before entering the intersection itself, the autonomous vehicle can activate the front projector to project the intent icon just ahead (e.g., within 50 centimeters) of the front bumper of the autonomous vehicle. During this period, the autonomous vehicle can continue to: scan its surrounding field for other vehicles; track locations of these other vehicles within and near the intersection; and derive the autonomous vehicle's right of way to enter the intersection. As the autonomous vehicle detects a last other vehicle with right of way preceding that of the autonomous vehicle entering the intersection, the autonomous vehicle can trigger the front projector to move the intent icon further ahead of the front of the autonomous vehicle in order to visually indicate the autonomous vehicle's intent to enter the intersection. For example, the front projector can “animate” the intent icon moving from its projected position proximal the autonomous vehicle's front bumper to a position further ahead of the autonomous vehicle—such as in the intersection—in order to visually indicate the autonomous vehicle's intent to advance into the intersection soon thereafter.

In this implementation, the autonomous vehicle can also: calculate a confidence score that it possesses right of way to enter the intersection based on presence of other vehicles in or near the intersection in Block S130; project the intent icon into the field at a distance from the front of the autonomous vehicle as a function of this confidence score in Block S122; and enter the intersection and resume navigation once this confidence score exceeds a threshold score in Block S140. Therefore, in this implementation, the autonomous vehicle can project the intent icon at a distance ahead of the autonomous vehicle as a function of the autonomous vehicle's confidence in its right of way, which may correspond to its intent to enter the intersection.
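The disclosure does not specify a formula for the confidence score in Block S130. One illustrative way to score right of way at a multi-way stop—based on arrival order, as described above—is sketched below; the reciprocal form and all names are assumptions for the example.

```python
def right_of_way_confidence(arrival_times, own_arrival, entered):
    """Crude right-of-way score at a multi-way stop (Block S130): the score
    rises toward 1.0 as vehicles that arrived before the autonomous vehicle
    clear the intersection. `arrival_times[i]` is the i-th tracked vehicle's
    arrival time; `entered[i]` is True once it has entered the intersection."""
    waiting_ahead = sum(
        1 for t, done in zip(arrival_times, entered)
        if t < own_arrival and not done)
    return 1.0 / (1.0 + waiting_ahead)
```

With two earlier-arriving vehicles still waiting, the score is 1/3; it climbs to 1/2 when one enters the intersection and reaches 1.0 once both have cleared, at which point the score could exceed the threshold of Block S140.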

6.2 Crosswalk

In another implementation, the autonomous vehicle activates the front projector to project the intent icon into the field upon stopping ahead of a crosswalk and preparing to advance through the crosswalk. In this implementation, the autonomous vehicle can execute methods and techniques similar to those described above to: autonomously navigate along a road segment; detect a crosswalk ahead of the autonomous vehicle; detect and track a pedestrian near the crosswalk; slow to a stop at or ahead of the crosswalk; and remain stopped at the crosswalk while waiting for the pedestrian to enter and then exit the crosswalk. While stopped at the crosswalk and waiting for the pedestrian to exit the crosswalk, the autonomous vehicle can activate the front projector to project the intent icon just ahead of the front bumper of the autonomous vehicle in order to visually communicate—to the pedestrian—that the autonomous vehicle intends to remain stopped at the crosswalk for the pedestrian.

The autonomous vehicle can continue to track the pedestrian moving through the crosswalk. As the pedestrian approaches an adjacent sidewalk (or median) or once the pedestrian steps onto the sidewalk (or median), the autonomous vehicle can move the intent icon further ahead of the autonomous vehicle in order to visually indicate the autonomous vehicle's intent to advance past the crosswalk. For example, the autonomous vehicle can: estimate a remaining time for the pedestrian to reach the adjacent sidewalk based on a speed of the pedestrian and a remaining distance between the pedestrian and the sidewalk; animate the intent icon moving outwardly from the autonomous vehicle (e.g., into or past the crosswalk) once the autonomous vehicle estimates the pedestrian to be within a threshold time (e.g., four seconds) of the sidewalk; and continue to move the intent icon outwardly from the front of the autonomous vehicle as the pedestrian approaches the sidewalk in order to visually communicate a sense of “urgency” and to visually indicate the autonomous vehicle's intent to pass through the crosswalk once the pedestrian enters the sidewalk.
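The time-to-sidewalk estimate in the example above reduces to remaining distance divided by pedestrian speed, compared against the stated threshold (e.g., four seconds). A minimal Python sketch, with function names assumed for illustration:

```python
THRESHOLD_TIME_S = 4.0  # example threshold from the text

def should_advance_icon(dist_to_sidewalk_m: float, ped_speed_mps: float) -> bool:
    """Begin animating the intent icon outward once the pedestrian is
    estimated to be within THRESHOLD_TIME_S of the adjacent sidewalk."""
    if ped_speed_mps <= 0.0:
        return False  # pedestrian stopped: keep the icon near the bumper
    return dist_to_sidewalk_m / ped_speed_mps <= THRESHOLD_TIME_S
```

A pedestrian two meters from the sidewalk walking at one meter per second (two seconds out) triggers the outward animation, while a stopped pedestrian holds the icon at the bumper regardless of remaining distance.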

In this implementation, the autonomous vehicle can: calculate a confidence score that it possesses right of way to enter the crosswalk based on presence of a pedestrian in or near the crosswalk in Block S130; project the intent icon into the field at a distance from the front of the autonomous vehicle as a function of this confidence score in Block S122; and then accelerate through the crosswalk once this confidence score exceeds a threshold score in Block S140. Therefore, in this implementation, the autonomous vehicle can project the intent icon at a distance ahead of the autonomous vehicle as a function of the autonomous vehicle's confidence in its right of way, which may correspond to its intent to enter the crosswalk.

6.3 Right Turn

In yet another implementation, the autonomous vehicle activates the front projector to project the intent icon into the field upon stopping at a right turn lane and preparing to turn right. In this implementation, the autonomous vehicle can execute methods and techniques similar to those described above to: autonomously navigate along a road segment, up to an intersection, and into a right-turn lane; detect and track other vehicles in or approaching the intersection; identify another vehicle heading toward the road segment just ahead of and perpendicular to the autonomous vehicle and with right of way to pass through the intersection; and yield to this other vehicle accordingly. While stopped in this right-turn lane and waiting for this other vehicle to pass the autonomous vehicle, the autonomous vehicle can activate the front projector to project the intent icon just ahead of the front bumper of the autonomous vehicle in order to visually communicate—to the other vehicle and/or its operator—that the autonomous vehicle intends to remain stopped in the right-turn lane for the other vehicle to pass.

The autonomous vehicle can continue to track the other vehicle and can move the intent icon further ahead of the autonomous vehicle—in order to visually indicate the autonomous vehicle's intent to turn right from the right-turn lane—once the other vehicle passes in front of the autonomous vehicle. In a similar example, the autonomous vehicle can: estimate a remaining time for the other vehicle to pass the autonomous vehicle based on the position and speed of the other vehicle relative to the autonomous vehicle; animate the intent icon moving outwardly from the autonomous vehicle (e.g., into the road segment ahead of and perpendicular to the autonomous vehicle) once the autonomous vehicle estimates the other vehicle to be within a threshold time (e.g., two seconds) of passing the autonomous vehicle; and continue to move the intent icon outwardly from the front of the autonomous vehicle as the other vehicle passes and moves beyond the autonomous vehicle. Once the autonomous vehicle determines that the other vehicle has moved beyond the autonomous vehicle by at least a minimum distance and that other vehicles are not approaching the road segment just ahead of the autonomous vehicle, the autonomous vehicle can resume autonomous navigation and execute a right-turn action onto this road segment.

In this implementation, the autonomous vehicle can: calculate a confidence score that it possesses right of way to make a right turn at this intersection based on presence of other vehicles nearby in Block S130; project the intent icon into the field at a distance from the front of the autonomous vehicle as a function of this confidence score in Block S122; and then autonomously execute a right-turn action once this confidence score exceeds a threshold score in Block S140. Therefore, in this implementation, the autonomous vehicle can project the intent icon at a distance ahead of the autonomous vehicle as a function of the autonomous vehicle's confidence in its right of way, which may correspond to its intent to execute a right turn action.

6.4 Parking Space

In another implementation, the autonomous vehicle activates the rear projector to project the intent icon into the field when stopped in a parking space and preparing to back out of the parking space. In this implementation, the autonomous vehicle can remain parked in a parking space with its powertrain “OFF” when not in use (e.g., when “idle”). When a user subsequently enters the autonomous vehicle in preparation for a ride to a dropoff location or when the autonomous vehicle receives a new ride request specifying a pickup location other than the parking space, the autonomous vehicle can: power up its powertrain; activate the rear projector to project the intent icon just behind the rear bumper of the autonomous vehicle in order to visually communicate—to the other vehicles and/or vehicle operators nearby—that the autonomous vehicle is active but intends to remain stopped in its current parking space; and scan the field behind and to the sides of the autonomous vehicle for an approaching vehicle, a pedestrian, and/or other obstacle. If the autonomous vehicle detects an approaching vehicle or a pedestrian within a threshold distance of a planned path out of the parking space, the autonomous vehicle can remain stopped in the parking space and continue to project the intent icon just behind the rear bumper of the autonomous vehicle via the rear projector. As this other vehicle passes the autonomous vehicle's parking space and/or as this pedestrian moves outside of the autonomous vehicle's planned path out of the parking space, the rear projector can project the intent icon further from the rear of the autonomous vehicle in order to visually communicate its intent to back out of the parking space. Once the autonomous vehicle confirms that other vehicles and pedestrians are sufficiently remote from all or a portion of the planned path out of the parking space, the autonomous vehicle can autonomously back out of the parking space along the planned path.
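The check for an approaching vehicle or pedestrian "within a threshold distance of a planned path out of the parking space" amounts to a point-to-polyline clearance test. The following Python sketch illustrates one way to implement it; the clearance value and all names are assumptions for the example.

```python
import math

CLEARANCE_M = 2.0  # assumed threshold distance from the planned path

def _point_segment_dist(p, a, b):
    """Distance from point p to the line segment a-b (2D tuples)."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:
        t = 0.0  # degenerate segment: measure to its single point
    else:
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def path_is_clear(planned_path, obstacles):
    """True if no detected obstacle lies within CLEARANCE_M of any segment
    of the planned path out of the parking space."""
    for p in obstacles:
        for a, b in zip(planned_path, planned_path[1:]):
            if _point_segment_dist(p, a, b) < CLEARANCE_M:
                return False
    return True
```

While `path_is_clear` returns False, the vehicle would hold the intent icon just behind the rear bumper; once it returns True, the rear projector can push the icon outward and the vehicle can begin backing along the planned path.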

In this implementation, the autonomous vehicle can: calculate a confidence score that it possesses right of way to back out of the parking space based on presence of other vehicles and pedestrians nearby in Block S130; project the intent icon into the field at a distance behind the autonomous vehicle as a function of this confidence score in Block S122; and then autonomously navigate a planned path to back out of the parking space once this confidence score exceeds a threshold score in Block S140. Therefore, in this implementation, the autonomous vehicle can project the intent icon at a distance behind the autonomous vehicle as a function of the autonomous vehicle's confidence in its right of way, which may correspond to its intent to back out of the parking space.

Furthermore, as the autonomous vehicle approaches the end of this planned path out of the parking space, the rear projector can shift the intent icon closer to the rear of the autonomous vehicle in order to communicate the autonomous vehicle's intent to slow and then change directions. For example, the rear projector can project the intent icon at a distance from the rear of the autonomous vehicle as an inverse function of the distance remaining along the planned path or as a function of the autonomous vehicle's intended speed.
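One simple realization of this behavior scales the rear projection distance linearly with the fraction of the planned path still remaining. The distance bounds below are illustrative assumptions, not values from the disclosure.

```python
# Assumed bounds (illustrative) for the rear projection distance.
MIN_REAR_M = 0.5
MAX_REAR_M = 3.0

def rear_icon_distance(remaining_m: float, path_length_m: float) -> float:
    """Rear projection distance that shrinks toward the bumper as the
    vehicle nears the end of its planned path out of the parking space."""
    frac = max(0.0, min(1.0, remaining_m / path_length_m))
    return MIN_REAR_M + frac * (MAX_REAR_M - MIN_REAR_M)
```

At the start of a six-meter back-out path the icon sits three meters behind the vehicle; it retreats to half a meter as the remaining distance reaches zero, signaling the imminent stop and direction change.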

As (or once) the autonomous vehicle slows to a stop at the end of this planned path out of the parking space, the autonomous vehicle can deactivate the rear projector and trigger the front projector to project the intent icon proximal the front of the autonomous vehicle while scanning the field ahead of the autonomous vehicle for another vehicle or pedestrian. Once the autonomous vehicle confirms that it has right of way to move forward—such as in the absence of another vehicle or pedestrian in or near a planned route forward from the autonomous vehicle's current location—the front projector can move the intent icon further ahead of the front of the autonomous vehicle in order to communicate the autonomous vehicle's intent to move forward, such as described above.

6.5 Rider Pickup and Dropoff

In another implementation, while stopped at a pickup location and waiting for a rider to enter the autonomous vehicle, such as at a curb or loading zone, the autonomous vehicle (or a side projector specifically) can project the intent icon onto pavement adjacent a door of the autonomous vehicle in order to visually communicate to other vehicle operators and pedestrians nearby that the autonomous vehicle is waiting for a rider to enter the autonomous vehicle. Once the rider has entered the autonomous vehicle, the autonomous vehicle can animate the intent icon moving toward the front of the autonomous vehicle and project the intent icon adjacent the front of the autonomous vehicle while the rider prepares for departure. Once the rider confirms that she is ready to depart and as the autonomous vehicle verifies that it possesses right of way, the autonomous vehicle can project the intent icon at greater distances from the front of the autonomous vehicle in order to visually communicate its intent to depart from the pickup location.

Similarly, as the autonomous vehicle approaches a drop-off location, the autonomous vehicle can project the intent icon ahead of the autonomous vehicle and shift the intent icon closer to the front of the autonomous vehicle in order to visually communicate its intent to slow down. After stopping at the dropoff location and while waiting for a rider to exit the autonomous vehicle, the autonomous vehicle (or the side projector) can shift the intent icon to pavement adjacent a door of the autonomous vehicle (e.g., adjacent the rider's location inside the autonomous vehicle or adjacent a door of the autonomous vehicle opposite a nearby curb) in order to visually communicate to other vehicle operators and pedestrians nearby that the autonomous vehicle is waiting for a rider to exit the autonomous vehicle. The autonomous vehicle can then transition the intent icon back to the front of the autonomous vehicle in preparation for subsequent departure, as described above.
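
The pickup sequence above amounts to a small state-to-placement mapping; the state names and placement labels below are hypothetical, since the text only describes the icon's placements in order:

```python
# Illustrative pickup sequence for projector placement. State names and
# placement labels are assumptions, not terms from the disclosure.

PICKUP_SEQUENCE = {
    "waiting_for_rider": "adjacent_door",   # icon beside the door
    "rider_boarded": "adjacent_front",      # icon just ahead of the bumper
    "cleared_to_depart": "ahead_of_front",  # icon pushed farther forward
}

def pickup_icon_position(state: str) -> str:
    """Map a pickup state to an icon placement; default to 'off'."""
    return PICKUP_SEQUENCE.get(state, "off")
```

The drop-off sequence would mirror this mapping in reverse, ending with the icon back at the front of the autonomous vehicle in preparation for departure.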

6.6 Scenario Approach

The autonomous vehicle can implement similar methods and techniques to activate the front projector when approaching select scenarios or when preparing to execute a navigational change. In one example, as the autonomous vehicle approaches a right turn lane and prepares to turn right, the autonomous vehicle can project the intent icon onto pavement far (e.g., five meters) ahead of the autonomous vehicle once the speed of the autonomous vehicle drops below a speed threshold (e.g., 25 miles per hour). The autonomous vehicle can project the intent icon onto pavement at closer distances to the front of the autonomous vehicle as the autonomous vehicle slows further upon approach to the right turn lane. If the autonomous vehicle then stops in the right turn lane to yield to oncoming traffic, the autonomous vehicle can implement methods and techniques described above to project the intent icon just fore of the front of the autonomous vehicle while stopped and to then shift the intent icon further ahead of the autonomous vehicle as the autonomous vehicle prepares to advance forward and execute a right turn maneuver. However, if the autonomous vehicle determines that no traffic is oncoming and prepares to execute the right turn maneuver as the autonomous vehicle approaches the right turn lane without stopping, the autonomous vehicle can then transition to projecting the intent icon at a further distance from the front of the autonomous vehicle—even as the autonomous vehicle slows upon approach to the right turn lane—in order to visually communicate the autonomous vehicle's intent to execute the right turn maneuver without stopping.
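
The approach behavior in this example can be sketched as a speed-to-distance mapping. The 25-mile-per-hour activation threshold and five-meter "far" distance come from the example above; the linear interpolation and the one-meter near distance are assumptions:

```python
from typing import Optional

SPEED_THRESHOLD_MPH = 25.0  # activate projection below this speed (from the example)
FAR_DISTANCE_M = 5.0        # "far" projection distance (from the example)
NEAR_DISTANCE_M = 1.0       # assumed closest distance when nearly stopped

def approach_projection_distance(speed_mph: float) -> Optional[float]:
    """Return the projection distance ahead of the vehicle, or None while
    the vehicle is still above the activation threshold."""
    if speed_mph >= SPEED_THRESHOLD_MPH:
        return None
    # Bring the icon closer to the bumper as the vehicle slows further.
    fraction = max(0.0, speed_mph) / SPEED_THRESHOLD_MPH
    return NEAR_DISTANCE_M + fraction * (FAR_DISTANCE_M - NEAR_DISTANCE_M)
```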

7. Intent Icon Deactivation

Once the autonomous vehicle begins to navigate forward into an intersection or out of a parking space, etc., the autonomous vehicle can deactivate projection of the intent icon into the field. Alternatively, the autonomous vehicle can continue to project the intent icon until the autonomous vehicle has exited the intersection, exited the parking space, passed the crosswalk, merged into a new lane, or otherwise completed the current navigational action. Yet alternatively, the autonomous vehicle can disable projection of the intent icon at the earlier of: reaching a threshold speed (e.g., 10 miles per hour, 25 miles per hour); and coming within a threshold distance (e.g., five meters) of another vehicle directly ahead of the autonomous vehicle.
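
The "earlier of" deactivation rule can be sketched as a disjunction of the two conditions; the thresholds follow the examples above, while the function and parameter names are illustrative:

```python
# Sketch of the earlier-of deactivation rule: disable the icon once the
# vehicle reaches a threshold speed OR closes within a threshold distance
# of a lead vehicle, whichever occurs first.

SPEED_CUTOFF_MPH = 10.0   # example threshold speed from the text
LEAD_GAP_CUTOFF_M = 5.0   # example threshold distance from the text

def should_disable_icon(speed_mph: float, lead_vehicle_gap_m: float) -> bool:
    """True once either deactivation condition has been met."""
    return (speed_mph >= SPEED_CUTOFF_MPH
            or lead_vehicle_gap_m <= LEAD_GAP_CUTOFF_M)
```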

The autonomous vehicle can also disable projection of the intent icon when the autonomous vehicle is behind another vehicle and approaching or stopped at an intersection; rather, the autonomous vehicle can limit projection of the intent icon into the field to when the autonomous vehicle reaches the front of the intersection (i.e., is the leading vehicle in its lane at the intersection).

However, the autonomous vehicle can selectively disable projection of the intent icon in response to any other event.

8. Direction

In the foregoing implementations, the autonomous vehicle can project the intent icon into the field along the longitudinal axis of the autonomous vehicle and at varying distances from the autonomous vehicle and leverage turn signals integrated into the autonomous vehicle to indicate the autonomous vehicle's intended direction of navigation. Alternatively, when preparing to execute a turn—such as when stopped at a stoplight, traffic signal, or turn lane—the autonomous vehicle can shift the projected intent icon laterally (i.e., off of the longitudinal axis of the autonomous vehicle) in order to visually communicate the autonomous vehicle's intent to execute this turn. In particular, by projecting the intent icon in the direction that the autonomous vehicle intends to navigate while also projecting the intent icon at a distance from the autonomous vehicle corresponding to the autonomous vehicle's intent to execute this navigational action, the autonomous vehicle can move the intent icon closer to another vehicle or pedestrian nearer to the autonomous vehicle's intended path, which may improve perception and comprehension of the intent icon for vehicle operators and pedestrians.
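The lateral-shift variation can be sketched as a projection point in the vehicle's frame, where the longitudinal offset carries intent to proceed and a lateral offset encodes the planned turn. The 1.5-meter lateral magnitude and the frame convention (x forward, y left) are assumptions:

```python
# Illustrative projection point in the vehicle frame: longitudinal offset
# encodes intent to proceed; a lateral offset is added toward the planned
# turn direction. The 1.5 m lateral magnitude is an assumed value.

LATERAL_OFFSET_M = {"left": 1.5, "right": -1.5, "straight": 0.0}

def projection_point(longitudinal_m: float, turn: str) -> tuple:
    """Return (x_forward_m, y_left_m) for the projected intent icon."""
    # Unknown maneuvers default to the longitudinal axis.
    lateral = LATERAL_OFFSET_M.get(turn, 0.0)
    return (longitudinal_m, lateral)
```

Placing the icon off-axis in this way moves it toward the vehicles and pedestrians nearest the intended path, per the rationale above.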

The systems and methods described herein can be embodied and/or implemented at least in part as a machine configured to receive a computer-readable medium storing computer-readable instructions. The instructions can be executed by computer-executable components integrated with the application, applet, host, server, network, website, communication service, communication interface, hardware/firmware/software elements of a user computer or mobile device, wristband, smartphone, or any suitable combination thereof. Other systems and methods of the embodiment can be embodied and/or implemented at least in part as a machine configured to receive a computer-readable medium storing computer-readable instructions. The instructions can be executed by computer-executable components integrated with apparatuses and networks of the type described above. The computer-readable medium can be stored on any suitable computer-readable media such as RAMs, ROMs, flash memory, EEPROMs, optical devices (CD or DVD), hard drives, floppy drives, or any suitable device. The computer-executable component can be a processor, but any suitable dedicated hardware device can (alternatively or additionally) execute the instructions.

As a person skilled in the art will recognize from the previous detailed description and from the figures and claims, modifications and changes can be made to the embodiments of the invention without departing from the scope of this invention as defined in the following claims.

Claims

1. A method for communicating intent comprising, at an autonomous vehicle:

autonomously approaching an intersection;
autonomously navigating to a stop proximal the intersection;
while stopped proximal the intersection, projecting an intent icon onto pavement at a first distance ahead of the autonomous vehicle;
calculating a confidence score for possession of right of way of the autonomous vehicle to enter the intersection based on objects detected in a field around the autonomous vehicle;
projecting the intent icon onto pavement at distances ahead of the autonomous vehicle proportional to the confidence score and greater than the first distance; and
in response to the confidence score exceeding a threshold score, autonomously entering the intersection.

2. A method for communicating intent comprising, at an autonomous vehicle:

autonomously approaching an intersection;
while slowing upon approach to the intersection, projecting an intent icon onto pavement at a distance ahead of the autonomous vehicle decreasing with decreasing speed of the autonomous vehicle;
while stopped proximal the intersection, projecting the intent icon onto pavement at a first distance ahead of the autonomous vehicle;
predicting possession of right of way of the autonomous vehicle to enter the intersection based on objects detected in a field around the autonomous vehicle;
in preparation for navigating into the intersection, projecting the intent icon onto pavement at a second distance ahead of the autonomous vehicle greater than the first distance; and
autonomously entering the intersection.
Patent History
Publication number: 20200001779
Type: Application
Filed: May 23, 2019
Publication Date: Jan 2, 2020
Inventor: Chip J. Alexander (Mountain View, CA)
Application Number: 16/421,423
Classifications
International Classification: B60Q 1/50 (20060101); G05D 1/00 (20060101);