CONDITIONAL AVAILABILITY OF VEHICULAR MIXED-REALITY
Methods, apparatuses, and computer-readable media are presented for providing a mixed-reality scene involving a lead vehicle and a following vehicle. The system may present a sequence of mixed-reality images to a driver of the following vehicle, at least one of which results from merging (a) an image captured by a camera aboard the lead vehicle and (b) an image captured by a camera aboard the following vehicle, to generate a merged image. The system may discontinue or diminish mixed-reality content of the sequence of mixed-reality images in response to one or more detected conditions, which may comprise detection of an object between the lead vehicle and the following vehicle.
Aspects of the disclosure relate to mixed reality and more specifically to providing “see-through” functionality to the driver of a vehicle. In a scenario where a lead vehicle is ahead of a following vehicle, the lead vehicle can often obscure the view of the driver of the following vehicle. This can lead to unsafe conditions. Mixed reality has been proposed as an effective solution to combat such problems, by providing a view to the driver of the following vehicle which simulates an ability to see through the lead vehicle, making objects blocked by the lead vehicle visible. However, many challenges arise in providing such see-through functionality in a safe and effective manner.
BRIEF SUMMARY

Certain embodiments are described for providing a mixed-reality scene involving a lead vehicle and a following vehicle. In one embodiment, the system may present a sequence of mixed-reality images to a driver of the following vehicle, wherein at least one image in the sequence of mixed-reality images results from merging (a) an image captured by a camera aboard the lead vehicle and (b) an image captured by a camera aboard the following vehicle, to generate a merged image. The merging may comprise de-emphasizing an occluded portion of the image captured by the camera aboard the following vehicle, the occluded portion corresponding to occlusion by the lead vehicle, and emphasizing an unoccluded portion of the image captured by the camera aboard the lead vehicle. In response to one or more detected conditions, the system may discontinue or diminish mixed-reality content of the sequence of mixed-reality images presented to the driver of the following vehicle. In particular, the one or more detected conditions may comprise detection of an object between the lead vehicle and the following vehicle, wherein in the merged image, a view of the object is potentially masked as a result of de-emphasizing the occluded portion of the image captured by the camera aboard the following vehicle and emphasizing the unoccluded portion of the image captured by the camera aboard the lead vehicle.
In one embodiment, the de-emphasizing of the occluded portion of the image captured by the camera aboard the following vehicle and the emphasizing of the unoccluded portion of the image captured by the lead vehicle may comprise blending the image captured by the camera aboard the following vehicle and the image captured by the camera aboard the lead vehicle.
In another embodiment, the de-emphasizing of the occluded portion of the image captured by the camera aboard the following vehicle and the emphasizing of the unoccluded portion of the image captured by the lead vehicle may comprise replacing the occluded portion of the image captured by the camera aboard the following vehicle with the unoccluded portion of the image captured by the lead vehicle.
The diminishing of the mixed-reality content of the sequence of mixed-reality images may comprise emphasizing the occluded portion of the image captured by the camera aboard the following vehicle and de-emphasizing the unoccluded portion of the image captured by the camera aboard the lead vehicle.
In addition, the one or more detected conditions may further comprise detection of a braking condition associated with the lead vehicle. The one or more detected conditions may further comprise detection of at least one relevant maneuver performed by the following vehicle. The at least one relevant maneuver may be selected from an over-taking maneuver, a right turn, a mid-block pedestrian crossing event, or an unexpected lead vehicle stop. The one or more detected conditions may further comprise detection of improper camera alignment associated with the camera aboard the lead vehicle and the camera aboard the following vehicle. The one or more detected conditions may further comprise determination that a candidate lead vehicle is not positioned in front of the following vehicle. Finally, the one or more detected conditions may further comprise determination that no vehicle equipped with a camera for supporting see-through functionality is found nearby.
According to various embodiments, discontinuing or diminishing mixed-reality content of the sequence of mixed-reality images comprises, in the merged image, presenting a representation of the object between the lead vehicle and the following vehicle. The representation of the object between the lead vehicle and the following vehicle may be presented by (1) defining a region in the merged image containing the object between the lead vehicle and the following vehicle and (2) in the defined region, presenting the representation of the object between the lead vehicle and the following vehicle instead of the unoccluded portion of image captured by the lead vehicle. In one embodiment, the region in the merged image containing the object between the lead vehicle and the following vehicle is defined to follow contours of the object between the lead vehicle and the following vehicle. In another embodiment, the region in the merged image containing the object between the lead vehicle and the following vehicle is defined as a bounding box. For example, the bounding box may have a rectangular shape.
Aspects of the disclosure are illustrated by way of example. In the accompanying figures, like reference numbers indicate similar elements.
Several illustrative embodiments will now be described with respect to the accompanying drawings, which form a part hereof. While particular embodiments, in which one or more aspects of the disclosure may be implemented, are described below, other embodiments may be used and various modifications may be made without departing from the scope of the disclosure or the spirit of the appended claims.
To address this and similar scenarios, a mixed-reality image 106 can be presented to the driver of the following vehicle to “see through” the lead vehicle. The mixed-reality image 106 can be presented on a display mounted in the dashboard of the following vehicle, integrated into the windshield of the following vehicle, implemented as a “heads-up” display of the following vehicle, etc. For example, the display may be a liquid crystal display (LCD), a head-up display (HUD), or other augmented reality (AR) display. The mixed-reality image 106 can be presented as a single image, e.g., a still frame, or as a part of a sequence of mixed-reality images that make up a video stream presented to the driver of the following vehicle. In various embodiments, the generation and presentation of the mixed-reality images is associated with minimal time lag, such that the video stream may be considered a live video stream and may be used by the driver of the following vehicle as an effective visual aid while driving.
The mixed-reality image 106 can be generated by merging an image captured from a camera aboard the lead vehicle with an image captured from a camera aboard the following vehicle, to form a merged image. In various embodiments, the mixed-reality image 106 may include a see-through region 108. Outside the see-through region 108, the mixed-reality image 106 may simply be the same as the image captured by the camera aboard the following vehicle. Inside the see-through region 108, the mixed-reality image 106 may be formed by de-emphasizing an occluded portion of the image captured by the camera aboard the following vehicle and emphasizing an unoccluded portion of the image captured by the lead vehicle. The occluded portion of the image captured by the camera aboard the following vehicle may be a portion of the image that corresponds to occlusion by the lead vehicle. For example, the occluded portion may be defined as the area in the image occupied by the lead vehicle (or a part of such an area).
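By way of illustration only, the following Python sketch shows one possible way to perform the emphasis and de-emphasis described above by blending two already-aligned camera images inside a rectangular see-through region. The function name, region coordinates, and blending weight are assumptions made for this sketch and are not part of the described embodiments.

```python
import numpy as np

def merge_see_through(following_img: np.ndarray,
                      lead_img: np.ndarray,
                      region: tuple,
                      alpha: float = 0.8) -> np.ndarray:
    """Blend the lead vehicle's view into a see-through region of the
    following vehicle's image.

    following_img, lead_img: HxWx3 uint8 arrays assumed to be already
        aligned to the following vehicle's perspective.
    region: (x0, y0, x1, y1) bounds of the see-through region 108.
    alpha: weight given to the lead vehicle's unoccluded view inside the
        region; 1.0 replaces the occluded portion entirely, while smaller
        values merely de-emphasize it.
    """
    merged = following_img.copy()
    x0, y0, x1, y1 = region
    occluded = following_img[y0:y1, x0:x1].astype(np.float32)
    unoccluded = lead_img[y0:y1, x0:x1].astype(np.float32)
    # Emphasize the unoccluded lead-vehicle view; de-emphasize the portion
    # of the following vehicle's image occluded by the lead vehicle.
    merged[y0:y1, x0:x1] = (alpha * unoccluded
                            + (1.0 - alpha) * occluded).astype(np.uint8)
    return merged

# Example: a 640x480 frame with a see-through region over the lead vehicle.
follow = np.zeros((480, 640, 3), dtype=np.uint8)
lead = np.full((480, 640, 3), 200, dtype=np.uint8)
out = merge_see_through(follow, lead, (200, 150, 440, 330))
```

With alpha set to 1.0, the same routine models the replacement approach; intermediate values model the blended, de-emphasized presentation.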
De-emphasizing and emphasizing may be performed in different ways. In the embodiment shown in
These components aboard the lead vehicle 202 and the following vehicle 222 may work together to communicate data and construct a mixed-reality scene, e.g., a “see-through” video stream, that is presented to the driver of the following vehicle 222. Cameras 204 aboard the lead vehicle 202 may provide a “see-through” view to the driver of the following vehicle 222, so that objects in front of the lead vehicle 202 that would otherwise be occluded from view can become visible. Aboard lead vehicle 202, the raw images from cameras 204 may be forwarded to the video ECU 206 over the vehicle data bus 210. Here, the video ECU 206 may select the appropriate camera view or stitch together views of several of the cameras 204, to form the images provided by the lead vehicle 202. As shown, the video ECU 206 is implemented as a separate device on the vehicle data bus 210. However, in alternative embodiments, the video ECU 206 may be part of one or more of the cameras 204 or integrated into the telematics and GPS ECU 208. Other alternative implementations are also possible for the components shown in
Connectivity between the lead vehicle 202 and the following vehicle 222 may be provided by telematics and GPS ECU 208 aboard the lead vehicle 202 and the telematics and GPS ECU 232 aboard the following vehicle 222. For example, the images provided by the lead vehicle 202 may be forwarded over a vehicle-to-vehicle (V2V) communications link established between telematics and GPS ECUs 208 and 232. Different types of V2V links may be established, such as WLAN V2V (DSRC), cellular V2V, Li-Fi, etc. Also, connectivity between the lead vehicle 202 and the following vehicle 222 is not necessarily restricted to V2V communications. Alternatively or additionally, the connectivity between the two vehicles may be established using vehicle-to-network (V2N) communications, e.g., forwarding data through an intermediate node.
At the following vehicle 222, similar components (e.g., one or more cameras 224, a video ECU 230, a telematics and GPS ECU 232, etc.) and additional components, including the LIDAR and/or RADAR detectors 226 and display 228, may be deployed. The LIDAR and/or RADAR detectors 226 aboard the following vehicle 222 facilitate precise determination of the position of the lead vehicle 202 relative to the following vehicle 222. The relative position determination may be useful in a number of ways. For example, the precise relative position of the lead vehicle 202 may be used to confirm that the lead vehicle is the correct partner with which to establish V2V communications. The precise relative position of the lead vehicle 202 may also be used to enable and disable “see-through” functionality under appropriate circumstances, as well as control how images from the two vehicles are superimposed to form the see-through video stream. The video ECU 230 aboard the following vehicle 222 may perform the merger of the images from the lead vehicle 202 and the images from the following vehicle 222, to generate the see-through video stream. Finally, the see-through video stream is presented to the driver of the following vehicle on the display 228.
At a step 304, the following vehicle may detect the lead vehicle camera availability. For example, the following vehicle may poll available data sources, e.g., registries, for all nearby vehicle camera systems available to support “see-through” functionality. This could be a list received from the cloud based on the following vehicle's current GPS coordinates, or it can be a compiled list of nearby vehicles whose broadcasts have been received by the following vehicle. As mentioned previously, such broadcasts may be received through links such as DSRC, Cellular, Li-Fi, or other V2V communication channels. Next, the following vehicle may compare its own GPS position and heading with the GPS positions and headings of nearby vehicles that have indicated camera availability. By calculating differences in measures such as compass heading, relative bearing, and distance, the list of nearby vehicles with available cameras can be filtered down to a more restricted list of candidate vehicles with cameras that could potentially be in front of the following vehicle. Next, readings from the LIDAR and/or RADAR detectors aboard the following vehicle may be used to select and confirm that a vehicle hypothesized to be the lead vehicle is indeed directly in front of the following vehicle. For example, if a candidate vehicle with an available camera is 100 meters away and traveling at 20 mph, but the LIDAR and/or RADAR detectors of the following vehicle indicate that the vehicle in front of the following vehicle is actually 50 meters away and traveling at 30 mph, then the candidate vehicle may be rejected as a potential lead vehicle. In another embodiment, the license plate number of the candidate vehicle may be compared with the license plate number of the vehicle in front of the following vehicle to verify the selection. Step 304 may be performed by an ECU, e.g., a video ECU and/or a telematics and GPS ECU, aboard the following vehicle.
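As a minimal sketch of the filtering described in step 304, the following Python fragment reduces a list of nearby camera-equipped vehicles to candidates that could plausibly be directly ahead, using compass heading, relative bearing, and distance. The function names, dictionary keys, and tolerance values are illustrative assumptions only.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Approximate initial bearing (degrees clockwise from north) from
    point 1 to point 2; adequate over the short ranges involved here."""
    d_lon = math.radians(lon2 - lon1)
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    x = math.sin(d_lon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(d_lon))
    return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0

def distance_m(lat1, lon1, lat2, lon2):
    """Equirectangular approximation of distance in meters."""
    r = 6371000.0
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2.0))
    y = math.radians(lat2 - lat1)
    return r * math.hypot(x, y)

def angle_diff(a, b):
    """Smallest absolute difference between two compass headings."""
    return abs((a - b + 180.0) % 360.0 - 180.0)

def filter_candidates(own, candidates, max_bearing_err=10.0,
                      max_heading_err=15.0, max_range_m=150.0):
    """Keep only nearby camera-equipped vehicles that could plausibly be
    directly ahead of the following vehicle.  'own' and each candidate are
    dicts with 'lat', 'lon', and 'heading' keys."""
    ahead = []
    for c in candidates:
        brg = bearing_deg(own["lat"], own["lon"], c["lat"], c["lon"])
        rng = distance_m(own["lat"], own["lon"], c["lat"], c["lon"])
        if (angle_diff(brg, own["heading"]) <= max_bearing_err
                and angle_diff(c["heading"], own["heading"]) <= max_heading_err
                and rng <= max_range_m):
            ahead.append(c)
    return ahead
```

A candidate surviving this coarse filter would then be confirmed (or rejected) against the LIDAR and/or RADAR range and speed measurements, as described above.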
At a step 306, the following vehicle may request a transmission of the lead vehicle's video stream, and the lead vehicle may transmit the requested video stream. When the following vehicle determines that it is indeed following a lead vehicle with an available camera, the following vehicle may send a request for the video stream of the lead vehicle. In one embodiment, the following vehicle sends a V2V-based video request directly to the lead vehicle. This may be a request sent from the telematics and GPS ECU of the following vehicle to the telematics and GPS ECU of the lead vehicle. In another embodiment, the following vehicle sends a cloud-based video request to a registry, e.g., a server. Data on how to request the video stream, such as an IP address, may be stored in the cloud along with the lead vehicle's GPS record. Next, the following vehicle may provide the lead vehicle with contextual information for the video request such as following distance, heading, desired camera view, preferred communication protocol, negotiated video quality based on signal strength, etc. In response, the lead vehicle may transmit the requested video stream to the following vehicle. The lead vehicle may do so based on the contextual information provided by the following vehicle, to customize the video stream sent to the following vehicle. The lead vehicle may also adjust the video quality and compression ratio for the transmission based on factors such as available communication bandwidth and signal strength. In addition, the lead vehicle may expand (e.g., using multiple cameras) or crop the video stream field of view to better match the needs of the following vehicle, based on information such as the following distance of the following vehicle. For example, if the following vehicle is very close, the lead vehicle may need a wider field of view to eliminate blind spots. Thus, the lead vehicle may decide to combine views from multiple forward and side cameras to create a customized video stream for the following vehicle. As another example, if the following vehicle is relatively far away, such that the area of interest is only a narrow field of view in the forward direction, the lead vehicle may respond by providing a video stream of a narrower field of view at a higher resolution or bit rate, to accommodate the needs of the following vehicle. In this manner, the lead vehicle may respond to the request by providing an appropriate video stream for the following vehicle. The following vehicle may receive the lead vehicle's video stream. Certain portions of step 306 such as making the request for the lead vehicle video stream and receiving the video stream may be performed by an ECU, e.g., a video ECU and/or a telematics and GPS ECU, aboard the following vehicle. Other portions of step 306 such as responding to the request, generating the lead vehicle video stream, and sending the lead vehicle video stream may be performed by an ECU, e.g., a video ECU and/or a telematics and GPS ECU, aboard the lead vehicle.
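The following sketch illustrates, under stated assumptions, how the contextual information accompanying a video request might be packaged, together with a lead-side heuristic for choosing between a wide stitched view and a narrow high-resolution crop. All field names, the JSON serialization, and the 15-meter threshold are assumptions for illustration, not elements of the disclosed protocol.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class SeeThroughVideoRequest:
    """Contextual information accompanying the video request; the field
    names are illustrative stand-ins for the items mentioned above."""
    requester_id: str
    following_distance_m: float
    heading_deg: float
    desired_camera_view: str       # e.g. "forward-wide" or "forward-narrow"
    preferred_protocol: str        # e.g. "DSRC", "C-V2X", "Li-Fi"
    negotiated_bitrate_kbps: int   # negotiated from measured signal strength

def build_request_payload(req: SeeThroughVideoRequest) -> bytes:
    """Serialize the request for transmission over the V2V or V2N link."""
    return json.dumps(asdict(req)).encode("utf-8")

def choose_lead_view(following_distance_m: float):
    """Lead-side heuristic: a close follower needs a wider field of view
    (possibly stitched from several cameras), while a distant follower can
    be served a narrower crop at a higher resolution or bit rate."""
    if following_distance_m < 15.0:
        return ("stitched-forward-and-side", 1280)
    return ("forward-narrow", 1920)

# Example: a following vehicle 12 m behind the lead vehicle.
payload = build_request_payload(SeeThroughVideoRequest(
    requester_id="FV-1234", following_distance_m=12.0, heading_deg=87.5,
    desired_camera_view="forward-wide", preferred_protocol="DSRC",
    negotiated_bitrate_kbps=4000))
```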
At a step 308, the lead vehicle and following vehicle video streams are merged to form the “see-through” video stream. Each image in the lead vehicle video stream may be overlaid on a corresponding image from the following vehicle's video stream, to generate a sequence of merged images that form a “see-through” video stream which mixes the realities seen by the lead vehicle and the following vehicle. As discussed previously, techniques for blending images, such as digital compositing, may also be used in certain embodiments. The merging or stitching together of a first image from the lead vehicle with a second image from the following vehicle may involve properly shifting, sizing, and/or distorting the first and second images so that features may be properly aligned. This process may take into account vehicle and camera position and orientation information, such as known lead vehicle and following vehicle camera information, known GPS information for both vehicles, and the following vehicle's LIDAR and/or RADAR information on the position of the lead vehicle. Step 308 may be performed by an ECU, e.g., a video ECU and/or a telematics and GPS ECU, aboard the following vehicle.
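A minimal sketch of the shifting and sizing step follows, assuming the see-through window and a pixel shift have already been derived from the vehicle and camera position information described above. The nearest-neighbour resize stands in for a proper warp, and bounds checking is omitted; none of this is mandated by the embodiments.

```python
import numpy as np

def resize_nn(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbour resize, standing in for a proper image warp."""
    rows = np.arange(out_h) * img.shape[0] // out_h
    cols = np.arange(out_w) * img.shape[1] // out_w
    return img[rows][:, cols]

def overlay_lead_view(following_img: np.ndarray, lead_img: np.ndarray,
                      window: tuple, shift=(0, 0)) -> np.ndarray:
    """Size and shift the lead vehicle's frame so that it fills the
    see-through window of the following vehicle's frame.

    window: (x0, y0, x1, y1) region occupied by the lead vehicle in the
        following vehicle's image, e.g. derived from the following
        vehicle's LIDAR/RADAR estimate of the lead vehicle's position and
        the known camera mounting information.
    shift: (dx, dy) pixel correction for residual misalignment between
        the two cameras.
    """
    x0, y0, x1, y1 = window
    dx, dy = shift
    x0, x1, y0, y1 = x0 + dx, x1 + dx, y0 + dy, y1 + dy
    merged = following_img.copy()
    merged[y0:y1, x0:x1] = resize_nn(lead_img, y1 - y0, x1 - x0)
    return merged
```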
At a step 310, the “see-through” video stream is displayed to the driver of the following vehicle. The resulting mixed-reality video stream provides a view that is consistent with the perspective of the driver of the following vehicle. The merged video stream may be displayed to the driver of the following vehicle on a user interface such as an LCD, HUD, or other augmented reality (AR) display. The depiction of the mixed-reality view can be implemented in different ways. In one example, the lead vehicle may be “disappeared” completely from the mixed-reality scene presented to the driver of the following vehicle. In another example, the lead vehicle may appear as a partially transparent object in the mixed-reality scene presented to the driver of the following vehicle. In another example, the lead vehicle may appear as only an outline in the mixed-reality scene presented to the driver of the following vehicle. In yet another example, using a dynamic video point-of-view transition, the mixed-reality scene may “zoom in” on or appear to “fly through” the lead vehicle, to give the viewer (driver of the following vehicle) the impression that the perspective has shifted from that of the following vehicle to that of the lead vehicle. Step 310 may be performed by an ECU, e.g., a video ECU and/or a telematics and GPS ECU, aboard the following vehicle.
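One way to organize the depiction choices just listed is to map each mode to a blending weight for the lead vehicle's own pixels inside the see-through window; the sketch below is purely illustrative, and the mode names and weight values are assumptions.

```python
from enum import Enum

class LeadDepiction(Enum):
    DISAPPEARED = "disappeared"   # lead vehicle removed entirely
    TRANSPARENT = "transparent"   # lead vehicle shown as a ghosted overlay
    OUTLINE = "outline"           # only the lead vehicle's outline is kept

def lead_pixel_weight(mode: LeadDepiction) -> float:
    """Weight given to the lead vehicle's own pixels (as seen by the
    following vehicle) inside the see-through window; the outline mode
    would additionally draw the vehicle's silhouette on top."""
    weights = {LeadDepiction.DISAPPEARED: 0.0,
               LeadDepiction.TRANSPARENT: 0.35,
               LeadDepiction.OUTLINE: 0.0}
    return weights[mode]
```

Such a weight could feed a blending routine like the one sketched earlier, and a dynamic "fly-through" transition would animate the weight and the window geometry over successive frames.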
As another example, the same misalignment problem causes the merged image of
At a step 602, a check is performed to determine whether nearby cameras are available to support providing “see-through” functionality. As discussed previously, this may be done by the following vehicle in various ways, including (1) receiving a broadcast message directly from a lead vehicle announcing that it has camera(s) available for supporting see-through functionality and/or (2) receiving a list of records from a registry, e.g., from a cloud server, of nearby vehicles having camera(s) to support such functionality. The camera availability message or record may include useful data for confirming that a candidate lead vehicle is an appropriate lead vehicle for the following vehicle, as well as data useful for determining whether see-through functionality should otherwise be enabled or disabled. Such data may include, for example:
- Time
- GPS location (latitude, longitude)
- Speed, acceleration, braking flag
- Heading/travel direction
- Vehicle size information (length, width, height)
- Camera mounting information (X, Y, Z mounting location)
- Camera direction and field of view information
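A minimal sketch of how such a camera-availability record might be represented on the following vehicle is shown below; the field names and units are illustrative assumptions and are not prescribed by the description above.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class CameraAvailabilityRecord:
    """One broadcast message or registry entry advertising a camera that
    can support see-through functionality."""
    timestamp: float                 # time of the report
    lat: float                       # GPS latitude
    lon: float                       # GPS longitude
    speed_mps: float
    accel_mps2: float
    braking: bool                    # braking flag
    heading_deg: float               # heading / travel direction
    length_m: float                  # vehicle size information
    width_m: float
    height_m: float
    cam_mount_xyz_m: Tuple[float, float, float]  # camera mounting location
    cam_heading_deg: float           # camera direction
    cam_fov_deg: float               # camera field of view
```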
If no vehicle equipped with a camera to support see-through functionality is found nearby, then the process 600 proceeds to a step 614 to automatically disable the functionality. Otherwise, the process 600 proceeds to a subsequent check step.
At a step 604, a check is performed to determine whether the position and orientation of the candidate lead vehicle indicate that it is indeed the vehicle immediately preceding the following vehicle. Just as an example, the check may involve evaluating whether the relative bearing from the following vehicle to the candidate lead vehicle (e.g., from GPS readings) matches the following vehicle's direction of travel (e.g., also from GPS readings), to within acceptable tolerance limits to attain a level of confidence that the candidate lead vehicle is in front of the following vehicle. As another example, the check may involve determining whether the candidate lead vehicle is detected by forward sensors (e.g., LIDAR, RADAR, and/or camera(s)) aboard the following vehicle. As another example, the check may involve comparing the distance between the candidate lead vehicle and following vehicle, as computed from GPS positions, with the distance between the candidate lead vehicle and the following vehicle, as computed from forward sensors (e.g., LIDAR, RADAR, and/or camera(s)) aboard the following vehicle. The comparison may evaluate whether the difference between such distances is within an acceptable tolerance limit. As another example, the check may involve comparing the speed reported by the candidate lead vehicle with the speed of the candidate lead vehicle as detected using forward sensors aboard the following vehicle; the comparison may evaluate whether the difference between such speeds is within an acceptable tolerance limit. As yet another example, the check may involve evaluating whether the candidate lead vehicle and the following vehicle are traveling in the same lane of roadway, as determined based on GPS traces over time. As a result of step 604, if the position and orientation of the candidate lead vehicle indicate that it is not the vehicle immediately preceding the following vehicle, the process 600 proceeds to step 614 to automatically disable the see-through functionality. Otherwise, the process 600 proceeds to a subsequent check step.
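A compact sketch of how the step-604 tolerance checks might be combined is given below; the tolerance values and parameter names are illustrative assumptions only.

```python
def is_immediately_preceding(gps_distance_m, sensor_distance_m,
                             reported_speed_mps, measured_speed_mps,
                             bearing_to_candidate_deg, own_heading_deg,
                             dist_tol_m=10.0, speed_tol_mps=2.0,
                             bearing_tol_deg=8.0):
    """Accept the candidate lead vehicle only if the GPS-derived and
    forward-sensor distances agree, the reported and measured speeds
    agree, and the candidate lies along the following vehicle's direction
    of travel, each to within a tolerance."""
    bearing_err = abs((bearing_to_candidate_deg - own_heading_deg + 180.0)
                      % 360.0 - 180.0)
    return (abs(gps_distance_m - sensor_distance_m) <= dist_tol_m
            and abs(reported_speed_mps - measured_speed_mps) <= speed_tol_mps
            and bearing_err <= bearing_tol_deg)
```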
At a step 606, a check is performed to determine whether the camera(s) aboard the lead vehicle and the camera(s) aboard the following vehicle are in alignment. Here, proper alignment may not necessarily require the cameras to point in exactly the same direction or the two vehicles to be perfectly aligned. Rather, being in alignment may refer to the cameras being within a tolerable range of relative orientations in view of their position, angle, and field of view. Proper alignment of cameras in this context is explained in more detail with respect to
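Since the figure detailing proper alignment is not reproduced here, the following sketch shows only one plausible way such a step-606 check could be expressed: the camera headings agree to within a tolerance, and the lead vehicle is not offset so far laterally that it falls outside the useful overlap of the two fields of view. The criterion and thresholds are assumptions for illustration.

```python
import math

def cameras_in_alignment(lead_cam_heading_deg, follow_cam_heading_deg,
                         lead_cam_fov_deg, follow_cam_fov_deg,
                         lateral_offset_m, following_distance_m,
                         max_heading_diff_deg=20.0):
    """Rough alignment test: the cameras need not point in exactly the
    same direction, but their headings should agree to within a tolerance,
    and the lead vehicle should not sit (e.g., in an adjacent lane) outside
    the angular overlap of the two fields of view."""
    heading_diff = abs((lead_cam_heading_deg - follow_cam_heading_deg + 180.0)
                       % 360.0 - 180.0)
    # Angle at which the lead vehicle appears off the following camera's axis.
    offset_angle = math.degrees(math.atan2(lateral_offset_m,
                                           following_distance_m))
    return (heading_diff <= max_heading_diff_deg
            and offset_angle <= min(lead_cam_fov_deg, follow_cam_fov_deg) / 2.0)
```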
At a step 608, various relevant vehicle maneuvers are detected. See-through functionality may be useful in these particular vehicle maneuvers. Thus, if any of the relevant maneuvers identified in the non-exhaustive list provided below are detected, the system may allow the process to go forward for enabling see-through functionality:
- Overtaking—following vehicle is behind a slower lead vehicle, e.g., on a two-lane road, and a passing situation is appropriate
- Right turn—following vehicle is behind a lead vehicle that is intending to turn right (e.g., as indicated by right turn signal, which may be communicated to the following vehicle). Here, the lead vehicle may be stopped due to traffic or pedestrians crossing. See-through functionality may allow the following vehicle to see the traffic or pedestrians crossing in front of the lead vehicle. This can help the driver of the following vehicle understand the reason the lead vehicle is stopped, thus easing impatience on the part of the driver of the following vehicle and potentially preventing an unsafe driver reaction (e.g., attempting to overtake the lead vehicle).
- Mid-block pedestrian crossing—lead vehicle comes to a stop in the middle of a block (i.e., not at an intersection) due to a pedestrian or other obstruction in the roadway. See-through functionality may help the driver of the following vehicle see the pedestrian or obstruction and understand the reason why the lead vehicle is stopped, thus easing driver impatience and potentially preventing an unsafe driver reaction.
- After any unexpected lead vehicle stop—any lead vehicle stop that is unexpected may be a situation where see-through functionality may be useful. For example, after the following vehicle has safely stopped, a see-through view may be presented to help the driver of the following vehicle see what is ahead of the lead vehicle and understand the reason for the stop.
- Automated vehicle platooning—the following vehicle is part of an automated vehicle platoon. Vehicles in the platoon may be following one another at a very close distance. See-through functionality may allow a driver of a following vehicle to see through preceding vehicles to the front of the platoon.
The various relevant vehicle maneuvers may be detected by the following vehicle using equipment such as an ECU and sensors such as cameras, LIDAR, and/or RADAR. Computer vision/machine learning techniques may also be employed. In another embodiment, the vehicle may receive an input from the driver about his/her intention to perform a maneuver (e.g., overtaking etc.). As a result of step 608, if none of the relevant vehicle maneuvers are detected, the process 600 proceeds to step 614 to automatically disable see-through functionality. Otherwise, the process 600 proceeds to a subsequent check step.
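A simple gating function for step 608 might look like the sketch below, where the maneuver labels are illustrative stand-ins for whatever the detection logic or driver input produces.

```python
RELEVANT_MANEUVERS = {
    "overtaking",
    "right_turn_behind_lead",
    "mid_block_pedestrian_crossing",
    "unexpected_lead_stop",
    "platooning",
}

def maneuver_gate(detected_maneuvers) -> bool:
    """Allow the process to continue toward enabling see-through
    functionality only if at least one relevant maneuver from the
    non-exhaustive list above has been detected by onboard sensing or
    signalled explicitly by the driver."""
    return bool(RELEVANT_MANEUVERS & set(detected_maneuvers))
```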
At a step 610, a check is performed to determine whether an object may have come between the lead vehicle and the following vehicle. Such an object can potentially be masked by the see-through functionality and thus create a dangerous condition, as illustrated and discussed previously with respect to
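One plausible realization of this step-610 check, sketched under the assumption that the following vehicle's forward sensors report ranges to detected objects, is to flag any detection that lies meaningfully closer than the lead vehicle. The margin value and parameter names are assumptions for illustration.

```python
def object_between(lead_vehicle_range_m, detected_object_ranges_m,
                   margin_m=1.0):
    """Flag any forward detection (from LIDAR, RADAR, or camera-based
    object detection) that lies closer to the following vehicle than the
    lead vehicle by more than a small margin, i.e. an object that has come
    between the two vehicles and could be masked by the see-through view."""
    return any(r < lead_vehicle_range_m - margin_m
               for r in detected_object_ranges_m)
```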
At a step 612, a check is performed to determine if the lead vehicle is braking. As discussed previously with respect to
- Lead vehicle V2V speed data—As mentioned previously, the lead vehicle may broadcast its status information such as current speed, acceleration, and a braking flag. This may be done over a V2V communication link, for example. The following vehicle may receive such status information and use it to determine whether lead vehicle braking has exceeded a threshold value, e.g., a deceleration threshold.
- Following vehicle forward sensor data—Using forward sensor(s), the following vehicle may monitor the lead vehicle's speed, distance, and acceleration. Such information can be used to determine whether lead vehicle braking has exceeded a threshold value.
- Video-based brake light or looming distance measurement—Camera(s) aboard the following vehicle may be used to capture images which can be processed using computer vision/machine learning techniques to watch for signs of lead vehicle braking, such as brake light activation or objects quickly approaching or looming. Such visually-based techniques can also be used to determine whether lead vehicle braking has exceeded a threshold value.
As a result of step 612, if it is determined that the lead vehicle is braking, the process proceeds to step 614 to automatically disable see-through functionality. Otherwise, the process 600 proceeds to step 616 to automatically enable see-through functionality.
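A minimal sketch of how the three braking-information sources discussed above might be combined follows; the deceleration threshold and the simple OR-combination are illustrative assumptions rather than the disclosed decision logic.

```python
def lead_vehicle_braking(v2v_braking_flag, v2v_decel_mps2,
                         sensor_decel_mps2, brake_light_detected,
                         decel_threshold_mps2=2.5):
    """Treat the lead vehicle as braking if any of the three information
    sources reports braking beyond a threshold (deceleration expressed as
    a positive number)."""
    return (bool(v2v_braking_flag)
            or v2v_decel_mps2 >= decel_threshold_mps2
            or sensor_decel_mps2 >= decel_threshold_mps2
            or bool(brake_light_detected))
```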
While not explicitly shown in
The terms “enabling” and “disabling” are used here in a broad sense. For example, disabling see-through functionality may involve discontinuing the merger of images captured by the lead vehicle and the following vehicle, to present only images captured by the following vehicle. Alternatively, disabling see-through functionality may involve diminishing the mixed-reality content of the images presented. A presented image may still be merged, just with less emphasis on the contribution by the image captured by the lead vehicle. For example, a portion of image from the camera of the lead vehicle may be de-emphasized, and a portion of image from the camera of the following vehicle may be emphasized, and the two image portions may be blended to create the merged image. Presentation of such merged image(s) may constitute “disabling” see-through functionality, because the view in front of the lead vehicle has been de-emphasized. Similarly, enabling see-through functionality may involve presenting merged images in which a portion of view as seen by the following vehicle is completely replaced with a portion of view as seen by the lead vehicle. Alternatively, enabling see-through functionality may involve amplifying the mixed-reality content of the images presented to emphasize the contribution of the image captured by the lead vehicle. For example, a portion of image from the camera of the lead vehicle may be emphasized, and a portion of image from the camera of the following vehicle may be de-emphasized, and the two image portions may be blended to create the merged image. Presentation of such merged image(s) may constitute “enabling” see-through functionality, because the view in front of the lead vehicle has been emphasized.
While the resulting mixed-reality image shown in
A representation of object(s) 804 is then presented in the mixed-reality image. This may be accomplished by defining a region in the merged image that contains the object(s) 804 between the following vehicle and the lead vehicle 802. In the defined region, a representation of object(s) 804 is presented instead of the unoccluded portion of the image captured by the lead vehicle 802. In the embodiment shown in
In one implementation, the logic for triggering presentation of a representation of object(s) 804 may be as follows. Upon detection of object(s) 804 between the following vehicle and the lead vehicle 802, the defined region containing object(s) 804 is compared with the first bounding box 806. If there is overlap between the defined region and the first bounding box 806, then see-through functionality is switched off for the overlapping area. The process may also be envisioned as simply changing the shape of the see-through window, to avoid the defined region containing object(s) 804. In other words, the shape of the see-through window, in which the view of the lead vehicle 802 is presented, may be defined as the portion of the first bounding box 806 that does not include the defined region containing object(s) 804. Thus, the mixed-reality image provides both (1) see-through functionality, e.g., the first bounding box 806 which presents a view of the scene in front of the lead vehicle 802 and (2) a representation of object(s) 804 positioned between the following vehicle and the lead vehicle 802.
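A short sketch of this window-shaping logic follows, assuming the lead vehicle's bounding box and a per-pixel mask for the interposed object region are already available; the function names are illustrative.

```python
import numpy as np

def see_through_mask(img_shape, lead_bbox, object_region_mask):
    """Build a per-pixel mask for where the lead vehicle's view is shown:
    the first bounding box around the lead vehicle (e.g. 806), minus the
    defined region containing the interposed object(s) (e.g. 804), so the
    object's representation from the following vehicle's own camera stays
    visible."""
    h, w = img_shape[:2]
    mask = np.zeros((h, w), dtype=bool)
    x0, y0, x1, y1 = lead_bbox
    mask[y0:y1, x0:x1] = True          # the see-through window
    mask &= ~object_region_mask        # carve out the object's region
    return mask

def compose(following_img, aligned_lead_img, mask):
    """Show lead-vehicle pixels where the mask is set and the following
    vehicle's own pixels (including the interposed object) elsewhere."""
    out = following_img.copy()
    out[mask] = aligned_lead_img[mask]
    return out
```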
The ECU 900 is shown comprising hardware elements that can be electrically coupled via a bus 905 (or may otherwise be in communication, as appropriate). The hardware elements may include a processing unit(s) 910 which can include without limitation one or more general-purpose processors, one or more special-purpose processors (such as digital signal processing (DSP) chips, graphics acceleration processors, application specific integrated circuits (ASICs), and/or the like), and/or other processing structure or means. As shown in
The ECU 900 might also include a wireless communication interface 930, which can include without limitation a modem, a network card, an infrared communication device, a wireless communication device, and/or a chipset (such as a Bluetooth device, an IEEE 802.11 device, an IEEE 802.15.4 device, a WiFi device, a WiMax device, cellular communication facilities including 4G, 5G, etc.), and/or the like. The wireless communication interface 930 may permit data to be exchanged with a network, wireless access points, other computer systems, and/or any other electronic devices described herein. The communication can be carried out via one or more wireless communication antenna(s) 932 that send and/or receive wireless signals 934.
Depending on desired functionality, the wireless communication interface 930 can include separate transceivers to communicate with base transceiver stations (e.g., base stations of a cellular network) and/or access point(s). The base stations and access points may provide access to different data networks, which can include various network types. For example, a Wireless Wide Area Network (WWAN) may be a Code Division Multiple Access (CDMA) network, a Time Division Multiple Access (TDMA) network, a Frequency Division Multiple Access (FDMA) network, an Orthogonal Frequency Division Multiple Access (OFDMA) network, a WiMax (IEEE 802.16) network, and so on. A CDMA network may implement one or more radio access technologies (RATs) such as cdma2000, Wideband-CDMA (W-CDMA), and so on. Cdma2000 includes IS-95, IS-2000, and/or IS-856 standards. A TDMA network may implement Global System for Mobile Communications (GSM), Digital Advanced Mobile Phone System (D-AMPS), or some other RAT. An OFDMA network may employ LTE, LTE Advanced, and so on, including 4G and 5G technologies.
The ECU 900 can further include sensor controller(s) 940. Such controllers can control, without limitation, one or more accelerometer(s), gyroscope(s), camera(s), magnetometer(s), altimeter(s), microphone(s), proximity sensor(s), light sensor(s), and the like.
Embodiments of the ECU 900 may also include a Satellite Positioning System (SPS) receiver 980 capable of receiving signals 984 from one or more SPS satellites using an SPS antenna 982. The SPS receiver 980 can extract a position of the device, using conventional techniques, from satellites of an SPS system, such as a global navigation satellite system (GNSS) (e.g., Global Positioning System (GPS)), Galileo, Glonass, Compass, Quasi-Zenith Satellite System (QZSS) over Japan, Indian Regional Navigational Satellite System (IRNSS) over India, Beidou over China, and/or the like. Moreover, the SPS receiver 980 can be used with various augmentation systems (e.g., a Satellite Based Augmentation System (SBAS)) that may be associated with or otherwise enabled for use with one or more global and/or regional navigation satellite systems. By way of example but not limitation, an SBAS may include an augmentation system(s) that provides integrity information, differential corrections, etc., such as, e.g., Wide Area Augmentation System (WAAS), European Geostationary Navigation Overlay Service (EGNOS), Multi-functional Satellite Augmentation System (MSAS), GPS Aided Geo Augmented Navigation or GPS and Geo Augmented Navigation system (GAGAN), and/or the like. Thus, as used herein, an SPS may include any combination of one or more global and/or regional navigation satellite systems and/or augmentation systems, and SPS signals may include SPS, SPS-like, and/or other signals associated with such one or more SPS.
The ECU 900 may further include and/or be in communication with a memory 960. The memory 960 can include, without limitation, local and/or network accessible storage, a disk drive, a drive array, an optical storage device, a solid-state storage device, such as a random access memory (“RAM”), and/or a read-only memory (“ROM”), which can be programmable, flash-updateable, and/or the like. Such storage devices may be configured to implement any appropriate data stores, including without limitation, various file systems, database structures, and/or the like.
The memory 960 of the ECU 900 also can comprise software elements (not shown), including an operating system, device drivers, executable libraries, and/or other code embedded in a computer-readable medium, such as one or more application programs, which may comprise computer programs provided by various embodiments, and/or may be designed to implement methods, and/or configure systems, provided by other embodiments, as described herein. In an aspect, then, such code and/or instructions can be used to configure and/or adapt a general purpose computer (or other device) to perform one or more operations in accordance with the described methods.
It will be apparent to those skilled in the art that substantial variations may be made in accordance with specific requirements. For example, customized hardware might also be used, and/or particular elements might be implemented in hardware, software (including portable software, such as applets, etc.), or both. Further, connection to other computing devices such as network input/output devices may be employed.
With reference to the appended figures, components that can include memory can include non-transitory machine-readable media. The terms “machine-readable medium” and “computer-readable medium,” as used herein, refer to any storage medium that participates in providing data that causes a machine to operate in a specific fashion. In embodiments provided hereinabove, various machine-readable media might be involved in providing instructions/code to processing units and/or other device(s) for execution. Additionally or alternatively, the machine-readable media might be used to store and/or carry such instructions/code. In many implementations, a computer-readable medium is a physical and/or tangible storage medium. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Common forms of computer-readable media include, for example, magnetic and/or optical media, punchcards, papertape, any other physical medium with patterns of holes, a RAM, a PROM, EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read instructions and/or code.
The methods, systems, and devices discussed herein are examples. Various embodiments may omit, substitute, or add various procedures or components as appropriate. For instance, features described with respect to certain embodiments may be combined in various other embodiments. Different aspects and elements of the embodiments may be combined in a similar manner. The various components of the figures provided herein can be embodied in hardware and/or software. Also, technology evolves and, thus, many of the elements are examples that do not limit the scope of the disclosure to those specific examples.
Having described several embodiments, various modifications, alternative constructions, and equivalents may be used without departing from the spirit of the disclosure. For example, the above elements may merely be a component of a larger system, wherein other rules may take precedence over or otherwise modify the application of the invention. Also, a number of steps may be undertaken before, during, or after the above elements are considered. Accordingly, the above description does not limit the scope of the disclosure.
Claims
1. A method for providing a mixed-reality scene involving a lead vehicle and a following vehicle comprising:
- presenting a sequence of mixed-reality images to a driver of the following vehicle, wherein at least one image in the sequence of mixed-reality images results from merging (a) an image captured by a camera aboard the lead vehicle and (b) an image captured by a camera aboard the following vehicle, to generate a merged image;
- wherein the merging comprises de-emphasizing an occluded portion of the image captured by the camera aboard the following vehicle, the occluded portion corresponding to occlusion by the lead vehicle, and emphasizing an unoccluded portion of the image captured by the camera aboard the lead vehicle;
- in response to one or more detected conditions, discontinuing or diminishing mixed-reality content of the sequence of mixed-reality images presented to the driver of the following vehicle, wherein the one or more detected conditions comprise: detection of an object between the lead vehicle and the following vehicle, wherein in the merged image, a view of the object is at least partially masked as a result of de-emphasizing the occluded portion of the image captured by the camera aboard the following vehicle and emphasizing the unoccluded portion of the image captured by the camera aboard the lead vehicle.
2. The method of claim 1, wherein the de-emphasizing of the occluded portion of the image captured by the camera aboard the following vehicle and the emphasizing of the unoccluded portion of the image captured by the lead vehicle comprise:
- blending the image captured by the camera aboard the following vehicle and the image captured by the camera aboard the lead vehicle.
3. The method of claim 1, wherein the de-emphasizing of the occluded portion of the image captured by the camera aboard the following vehicle and the emphasizing of the unoccluded portion of the image captured by the lead vehicle comprise:
- replacing the occluded portion of the image captured by the camera aboard the following vehicle with the unoccluded portion of the image captured by the lead vehicle.
4. The method of claim 1, wherein the diminishing of the mixed-reality content of the sequence of mixed-reality images comprises emphasizing the occluded portion of the image captured by the camera aboard the following vehicle and de-emphasizing the unoccluded portion of the image captured by the camera aboard the lead vehicle.
5. The method of claim 1, wherein the one or more detected conditions further comprise detection of a braking condition associated with the lead vehicle.
6. The method of claim 1,
- wherein the one or more detected conditions further comprise detection of at least one relevant maneuver performed by the following vehicle,
- wherein the at least one relevant maneuver is selected from an over-taking maneuver, a right turn, a mid-block pedestrian crossing event, or an unexpected lead vehicle stop.
7. The method of claim 1, wherein the one or more detected conditions further comprise detection of improper camera alignment associated with the camera aboard the lead vehicle and the camera aboard the following vehicle.
8. The method of claim 1, wherein the one or more detected conditions further comprise determination that a candidate lead vehicle is not positioned in front of the following vehicle.
9. The method of claim 1, wherein discontinuing or diminishing mixed-reality content of the sequence of mixed-reality images comprises:
- in the merged image, presenting a representation of the object between the lead vehicle and the following vehicle.
10. The method of claim 9, wherein the representation of the object between the lead vehicle and the following vehicle is presented by:
- defining a region in the merged image containing the object between the lead vehicle and the following vehicle; and
- in the defined region, presenting the representation of the object between the lead vehicle and the following vehicle instead of the unoccluded portion of image captured by the lead vehicle.
11. An apparatus for providing a mixed-reality scene involving a lead vehicle and a following vehicle comprising:
- an electronic control unit (ECU); and
- a display,
- wherein the ECU is configured to: generate a sequence of mixed-reality images for presentation to a driver of the following vehicle, wherein the ECU is configured to generate at least one image in the sequence of mixed-reality images by merging (a) an image captured by a camera aboard the lead vehicle and (b) an image captured by a camera aboard the following vehicle, to generate a merged image; wherein the merging comprises de-emphasizing an occluded portion of the image captured by the camera aboard the following vehicle, the occluded portion corresponding to occlusion by the lead vehicle, and emphasizing an unoccluded portion of the image captured by the camera aboard the lead vehicle; in response to one or more detected conditions, discontinue or diminish mixed-reality content of the sequence of mixed-reality images presented to the driver of the following vehicle, wherein the one or more detected conditions comprise: detection of an object between the lead vehicle and the following vehicle, wherein in the merged image, a view of the object is at least partially masked as a result of de-emphasizing the occluded portion of the image captured by the camera aboard the following vehicle and emphasizing the unoccluded portion of the image captured by the camera aboard the lead vehicle, and wherein the display is configured to present the sequence of mixed-reality images to the driver of the following vehicle.
12. The apparatus of claim 11, wherein the ECU is configured to de-emphasize the occluded portion of the image captured by the camera aboard the following vehicle and to emphasize the unoccluded portion of the image captured by the lead vehicle by:
- blending the image captured by the camera aboard the following vehicle and the image captured by the camera aboard the lead vehicle.
13. The apparatus of claim 11, wherein the ECU is configured to de-emphasize the occluded portion of the image captured by the camera aboard the following vehicle and to emphasize the unoccluded portion of the image captured by the lead vehicle by:
- replacing the occluded portion of the image captured by the camera aboard the following vehicle with the unoccluded portion of the image captured by the lead vehicle.
14. The apparatus of claim 11, wherein the ECU is configured to diminish the mixed-reality content of the sequence of mixed-reality images by emphasizing the occluded portion of the image captured by the camera aboard the following vehicle and de-emphasizing the unoccluded portion of the image captured by the camera aboard the lead vehicle.
15. The apparatus of claim 11, wherein the one or more detected conditions further comprise detection of a braking condition associated with the lead vehicle.
16. The apparatus of claim 11, wherein the one or more detected conditions further comprise detection of at least one relevant maneuver performed by the following vehicle.
17. The apparatus of claim 11, wherein the one or more detected conditions further comprise detection of improper camera alignment associated with the camera aboard the lead vehicle and the camera aboard the following vehicle.
18. The apparatus of claim 11, wherein the ECU is configured to discontinue or diminish mixed-reality content of the sequence of mixed-reality images by:
- in the merged image, presenting a representation of the object between the lead vehicle and the following vehicle.
19. The apparatus of claim 18, wherein the representation of the object between the lead vehicle and the following vehicle is presented by:
- defining a region in the merged image containing the object between the lead vehicle and the following vehicle; and
- in the defined region, presenting the representation of the object between the lead vehicle and the following vehicle instead of the unoccluded portion of image captured by the lead vehicle.
20. A computer-readable storage medium containing instructions that, when executed by one or more processors of a computer, cause the one or more processors to:
- cause a sequence of mixed-reality images to be presented to a driver of the following vehicle, wherein at least one image in the sequence of mixed-reality images results from merging (a) an image captured by a camera aboard the lead vehicle and (b) an image captured by a camera aboard the following vehicle, to generate a merged image;
- wherein the merging comprises de-emphasizing an occluded portion of the image captured by the camera aboard the following vehicle, the occluded portion corresponding to occlusion by the lead vehicle, and emphasizing an unoccluded portion of the image captured by the camera aboard the lead vehicle;
- in response to one or more detected conditions, discontinue or diminish mixed-reality content of the sequence of mixed-reality images presented to the driver of the following vehicle, wherein the one or more detected conditions comprise: detection of an object between the lead vehicle and the following vehicle, wherein in the merged image, a view of the object is at least partially masked as a result of de-emphasizing the occluded portion of the image captured by the camera aboard the following vehicle and emphasizing the unoccluded portion of the image captured by the camera aboard the lead vehicle.
Type: Application
Filed: Aug 30, 2018
Publication Date: Mar 5, 2020
Patent Grant number: 10607416
Inventors: Christopher Steven NOWAKOWSKI (San Francisco, CA), Siav-Kuong KUOCH (Saint Maur des fossés), Alexandre Jacques GARNAULT (San Mateo, CA), Mohamed Amr Mohamed Nader ABUELFOUTOUH (San Mateo, CA)
Application Number: 16/118,018