Collaborative navigation system with campfire display

- General Motors

A method of using a system for generating a centrally located floating image display includes displaying a first map image with a first display, receiving the first map image with a first reflector, reflecting the first map image with the first reflector, displaying a second map image with a second display, receiving the second map image with a second reflector, reflecting the second map image with the second reflector, displaying first private information to the first passenger and second private information to the second passenger with a transparent display positioned between the first passenger and the first reflector and between the second passenger and the second reflector, receiving input from the first and second passengers with the system controller, and collecting images with an external scene camera.

Description
INTRODUCTION

The present disclosure relates to a system for generating a floating image viewable by a plurality of passengers within a vehicle.

Current vehicle systems include navigation systems that allow an occupant within the vehicle to look at area maps and select points of interest and routes to be taken by the vehicle. However, such systems generally only allow direct interaction by a single occupant. Current virtual holographic systems do not include the ability for annotation, or for information that cannot be embedded within the virtual holographic image to be presented with the virtual holographic image. In addition, current systems do not include tactile properties that allow a passenger to interact with the virtual holographic image, such as by making selections or choosing different images to view. Known systems incorporate inverse head-up-display architectures that use beam splitters that must be attached to structure within the vehicle compartment and must be constantly re-adjusted to accommodate height and position variations of the passengers within the vehicle compartment.

While current systems achieve their intended purpose, there is a need for a new and improved system in communication with a navigation system that provides a floating three-dimensional map image that appears centrally located within the vehicle to all the passengers within the vehicle, allowing collaborative navigation decisions by multiple passengers within the vehicle.

SUMMARY

According to several aspects of the present disclosure, a method of using a system for generating a centrally located floating three-dimensional image display for a plurality of passengers positioned within a vehicle compartment includes displaying, with a first display of an image chamber in communication with a system controller, a first map image from a navigation system, receiving, with a first reflector individually associated with a first passenger, the first map image from the first display, reflecting, with the first reflector, the first map image to the first passenger, wherein the first passenger perceives the first map image floating at a central location within the image chamber, displaying, with a second display of the image chamber in communication with the system controller, a second map image from the navigation system, receiving, with a second reflector individually associated with a second passenger, the second map image from the second display, reflecting, with the second reflector, the second map image to the second passenger, wherein the second passenger perceives the second map image floating at the central location within the image chamber, displaying, with a transparent display in communication with the system controller and positioned between eyes of the first passenger and the first reflector and between the eyes of the second passenger and the second reflector, first private information to the first passenger within an image plane positioned in front of the first image floating at the central location within the image chamber and second private information to the second passenger within an image plane positioned in front of the second image floating at the central location within the image chamber, receiving, with the system controller, input from the first passenger and the second passenger, and collecting, with an external scene camera, images of an external environment outside the vehicle compartment.

According to another aspect, the method further includes displaying, with an augmented reality display in communication with the system controller and positioned within the vehicle compartment remotely from the image chamber, one of the first map image and the second map image.

According to another aspect, the augmented reality display includes a transparent substrate, having light emitting particles dispersed therein, positioned on a window within the vehicle compartment, the displaying, with the augmented reality display positioned within the vehicle compartment remotely from the image chamber, one of the first map image and the second map image including generating, with a primary graphic projection device in communication with the system controller, a first set of images upon the window within the vehicle compartment based on visible light, wherein the first set of images are displayed upon a primary area of the window, and generating, with a secondary graphic projection device in communication with the system controller, a second set of images upon a secondary area of the window based on an excitation light, wherein the light emitting particles in the window emit visible light in response to absorbing the excitation light, and wherein the first set of images displayed upon the primary area of the window cooperate with the second set of images displayed upon the secondary area of the window to create an edge-to-edge augmented reality image.

According to another aspect, the receiving, with the system controller, input from the first passenger and the second passenger, further includes receiving input, with the system controller, via wireless communication with a mobile device, receiving, with the system controller, via the transparent display, input from the first passenger and the second passenger, receiving, with the system controller, via at least one first sensor, input comprising a position of a head and eyes of the first passenger, receiving, with the system controller, via at least one first gesture sensor, information related to gestures made by the first passenger, collecting, with the system controller, via a first microphone, audio input from the first passenger, collecting, with the system controller, via a second microphone, audio input from the second passenger, and the method further including broadcasting, with the system controller, via a first zonal speaker, audio output for the first passenger, and broadcasting, with the system controller, via a second zonal speaker, audio output for the second passenger.

According to another aspect, the receiving, with the system controller, input from the first passenger and the second passenger, further includes receiving, from the first passenger, input to adjust a perspective, zoom level, position and angle of view of the first map image, and modifying, with the system controller, the perspective, zoom level, position and angle of view of the first map image based on the input from the first passenger, and receiving, from the second passenger, input to adjust a perspective, zoom level, position and angle of view of the second map image and modifying, with the system controller, the perspective, zoom level, position and angle of view of the second map image based on the input from the second passenger.
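Purely as an illustrative sketch, and not part of the disclosure, the per-passenger adjustments described above amount to each passenger's input updating only that passenger's view parameters while the underlying map is shared. All names and the data shape below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class MapView:
    """Illustrative per-passenger map view state (names are hypothetical)."""
    perspective: str = "top-down"     # e.g. "top-down" or "bird's-eye"
    zoom: float = 1.0                 # zoom level
    position: tuple = (0.0, 0.0)      # map-center coordinates
    angle: float = 0.0                # angle of view, in degrees

    def apply_input(self, **changes):
        # Modify only the parameters the passenger actually adjusted;
        # unknown keys are ignored.
        for key, value in changes.items():
            if hasattr(self, key):
                setattr(self, key, value)

# Each passenger holds an independent view of the same underlying map.
views = {"first": MapView(), "second": MapView()}
views["first"].apply_input(zoom=2.5, angle=45.0)
```

Under this sketch, the first passenger's zoom and angle change while the second passenger's view is untouched, mirroring the independent modification the aspect describes.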

According to another aspect, the method further includes selectively sharing the first map image, as modified based on input from the first passenger, with the second passenger, and selectively sharing the second map image, as modified based on input from the second passenger, with the first passenger.

According to another aspect, the displaying, with the first display of the image chamber in communication with the system controller, the first map image from the navigation system further includes highlighting, within the first map image, points of interest for the first passenger, and highlighting, within the first map image, at least one route between a current location and a point of interest selected by the first passenger, and, the displaying, with the second display of the image chamber in communication with the system controller, the second map image from the navigation system further includes highlighting, within the second map image, points of interest for the second passenger, and highlighting, within the second map image, at least one route between a current location and a point of interest selected by the second passenger.

According to another aspect, the highlighting, within the first map image, points of interest for the first passenger further includes collecting, with the system controller, from remote data sources, information related to interests of the first passenger, receiving, with the system controller, input from the first passenger related to points of interest that the first passenger is interested in, identifying at least one point of interest that the first passenger may be interested in based on the information collected from remote data sources and input from the first passenger, and the highlighting, within the second map image, points of interest for the second passenger further includes collecting, with the system controller, from remote data sources, information related to interests of the second passenger, receiving, with the system controller, input from the second passenger related to points of interest that the second passenger is interested in, identifying at least one point of interest that the second passenger may be interested in based on the information collected from the remote data sources and input from the second passenger.
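As a hedged illustration only, the identification step above, combining interests collected from remote data sources with the passenger's direct input to pick candidate points of interest, could be modeled as a simple overlap score. The function name, tag sets, and scoring rule are assumptions, not the disclosed method:

```python
def identify_points_of_interest(remote_interests, passenger_input, candidates):
    """Return candidate POIs matching the combined interest profile.

    remote_interests / passenger_input: sets of interest tags
    candidates: list of (name, tags) pairs (illustrative shape)
    """
    # Merge remotely collected interests with the passenger's own input.
    profile = set(remote_interests) | set(passenger_input)
    matches = []
    for name, tags in candidates:
        overlap = profile & set(tags)
        if overlap:  # keep POIs sharing at least one interest
            matches.append((name, len(overlap)))
    # Most relevant (largest overlap) first.
    return sorted(matches, key=lambda m: -m[1])

pois = identify_points_of_interest(
    {"sports", "dining"}, {"football"},
    [("Stadium", {"football", "sports"}), ("Museum", {"art"})])
```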

According to another aspect, the method further includes selectively sharing the highlighted points of interest and the highlighted at least one route between a current location and a point of interest selected by the first passenger, from the first map image, with the second passenger, and selectively sharing the highlighted points of interest and the highlighted at least one route between a current location and a point of interest selected by the second passenger, from the second map image, with the first passenger.

According to another aspect, the receiving, with the system controller, input from the first passenger and the second passenger, further includes receiving, from the first passenger, input selecting at least one of a point of interest within the first map image and a highlighted route within the first map image, and receiving, from the second passenger, input selecting at least one of a point of interest within the second map image and a highlighted route within the second map image, selectively sharing the at least one of a point of interest within the first map image and a highlighted route within the first map image, selected by the first passenger, with the second passenger, and selectively sharing the at least one of a point of interest within the second map image and a highlighted route within the second map image, selected by the second passenger, with the first passenger.

According to another aspect, the receiving, with the system controller, input from the first passenger and the second passenger, further includes receiving, from the first passenger, input for one of voting, ranking, approving and suggesting alternatives related to a selected at least one of a point of interest within the second map image and a highlighted route within the second map image that has been shared, by the second passenger, with the first passenger, and sharing the input from the first passenger with the second passenger, and receiving, from the second passenger, input for one of voting, ranking, approving and suggesting alternatives related to a selected at least one of a point of interest within the first map image and a highlighted route within the first map image that has been shared, by the first passenger, with the second passenger, and sharing the input from the second passenger with the first passenger.
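Outside the disclosure, and only as a sketch, the vote/rank/approve/suggest exchange above can be pictured as tallying each passenger's response against a shared proposal; the response encoding below is an assumption:

```python
from collections import defaultdict

def tally_route_votes(votes):
    """Illustrative tally of shared-route responses.

    votes: iterable of (passenger, route, response) where response is
    'approve', an integer rank score, or ('alternative', other_route).
    """
    scores = defaultdict(int)
    alternatives = []
    for passenger, route, response in votes:
        if response == "approve":
            scores[route] += 1
        elif isinstance(response, tuple) and response[0] == "alternative":
            # Suggested alternatives are surfaced to all passengers.
            alternatives.append((passenger, response[1]))
        elif isinstance(response, int):
            scores[route] += response
    winner = max(scores, key=scores.get) if scores else None
    return winner, alternatives

winner, alts = tally_route_votes([
    ("first", "primary", "approve"),
    ("second", "primary", "approve"),
    ("third", "primary", ("alternative", "scenic")),
])
```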

According to another aspect, the method further includes highlighting, with the system controller, within the first map image and the second map image, at least one drop-off location at a selected point of interest, and receiving, from each of the first and second passengers, input indicating which of the at least one drop-off location the first and second passengers want to be dropped off at.

According to another aspect, the first map image is a view of a map image from the navigation system from a first perspective for the first passenger, and the second map image is a view of the map image from the navigation system from a second perspective for the second passenger.

According to another aspect, the displaying, with the first display of the image chamber in communication with the system controller, the first map image from the navigation system further includes including, within the first map image, an icon representing the second passenger to illustrate the second passenger's perspective.

According to another aspect, the method further includes, with the system controller, receiving, via the first microphone, a command from the first passenger, receiving, via the at least one first sensor and the at least one first gesture sensor, data related to gestures made by the first passenger and the direction of a gaze of the first passenger, identifying, based on the data related to gestures made by the first passenger, the direction of the gaze of the first passenger and images of the external environment outside the vehicle compartment, a location outside of the vehicle compartment that the first passenger is looking at, and highlighting the location within the first map image and the second map image.
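As a purely illustrative 2-D sketch of the gaze-to-location step above (the geometry, coordinate frame, and names are assumptions; a real system would fuse external-scene camera imagery with map data), one could pick the external landmark whose bearing best matches the tracked gaze direction:

```python
import math

def locate_gazed_point(head_pos, gaze_bearing_deg, landmarks):
    """Pick the external landmark closest to the passenger's gaze bearing.

    head_pos: (x, y) of the passenger's head in vehicle coordinates
    gaze_bearing_deg: gaze direction, degrees from the +x axis
    landmarks: list of (name, (x, y)) pairs (hypothetical scene data)
    """
    best, best_err = None, float("inf")
    for name, (lx, ly) in landmarks:
        bearing = math.degrees(math.atan2(ly - head_pos[1], lx - head_pos[0]))
        # Smallest signed angular difference, wrapped to [-180, 180).
        err = abs((bearing - gaze_bearing_deg + 180) % 360 - 180)
        if err < best_err:
            best, best_err = name, err
    return best

# Passenger at the origin looking roughly east (bearing ~0 degrees).
target = locate_gazed_point((0, 0), 5.0,
                            [("stadium", (100, 10)), ("museum", (0, 100))])
```

The identified landmark could then be highlighted within both passengers' map images, as the aspect describes.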

According to several aspects of the present disclosure, a system for generating a centrally located floating three-dimensional image display for a plurality of passengers positioned within a vehicle compartment within a vehicle includes a system controller in communication with a navigation system, an image chamber including a first display adapted to project a first map image from the navigation system, a first reflector individually associated with the first display and a first one of the plurality of passengers, the first reflector adapted to receive the first image from the first display and to reflect the first image to the first passenger, wherein the first passenger perceives the first image floating at a central location within the image chamber, a second display adapted to project a second map image from the navigation system, and a second reflector individually associated with the second display and a second one of the plurality of passengers, the second reflector adapted to receive the second image from the second display and to reflect the second image to the second passenger, wherein, the second passenger perceives the second image floating at the central location within the image chamber, and a transparent touch screen display positioned between the first reflector and the first passenger and between the second reflector and the second passenger and adapted to display first private information to the first passenger within an image plane positioned in front of the first image floating at the central location within the image chamber and to receive input from the first passenger, and adapted to display second private information to the second passenger within an image plane positioned in front of the second image floating at the central location within the image chamber and to receive input from the second passenger, and an external scene camera adapted to collect images of an external environment outside the vehicle compartment.

According to another aspect, the system further includes an augmented reality display positioned within the vehicle compartment remotely from the image chamber and adapted to display one of the first map image and the second map image.

According to another aspect, the augmented reality display includes a transparent substrate, having light emitting particles dispersed therein, positioned on a window within the vehicle compartment, a primary graphic projection device for generating a first set of images upon the window of the vehicle based on visible light, wherein the first set of images are displayed upon a primary area of the window, a secondary graphic projection device for generating a second set of images upon a secondary area of the window of the vehicle based on an excitation light, wherein the light emitting particles in the window emit visible light in response to absorbing the excitation light, and wherein the first set of images displayed upon the primary area of the window cooperate with the second set of images displayed upon the secondary area of the window to create an edge-to-edge augmented reality view of a surrounding environment of the vehicle, a primary graphics processing unit in electronic communication with the primary graphic projection device and the system controller, and a secondary graphics processing unit in electronic communication with the secondary graphic projection device and the system controller.

According to another aspect, the system is selectively moveable vertically up and down along a vertical central axis, the first display and the first reflector are unitarily and selectively rotatable about the vertical central axis, and the second display and the second reflector are unitarily and selectively rotatable about the vertical central axis, the system further including first sensors adapted to monitor a position of a head and eyes of the first passenger, wherein the first display and first reflector are adapted to rotate in response to movement of the head and eyes of the first passenger, and second sensors adapted to monitor a position of a head and eyes of the second passenger, wherein the second display and the second reflector are adapted to rotate in response to movement of the head and eyes of the second passenger, the system adapted to move up and down along the vertical central axis in response to movement of the head and eyes of the first passenger and movement of the head and eyes of the second passenger, and a first gesture sensor adapted to gather information related to gestures made by the first passenger, and a second gesture sensor adapted to gather information related to gestures made by the second passenger, wherein the system is adapted to receive input from the first and second passengers via data collected by the first and second gesture sensors.
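As a minimal sketch only, and not the disclosed control scheme, the head-tracking behavior above can be pictured as each display/reflector pair rotating unitarily toward its passenger's tracked head azimuth, with the chamber's height servoed toward the mean tracked eye height. The class, units, and averaging rule below are assumptions:

```python
class ReflectorUnit:
    """Illustrative display/reflector pair rotating about the central axis."""
    def __init__(self):
        self.azimuth_deg = 0.0

    def track_head(self, head_azimuth_deg):
        # Display and reflector rotate unitarily to face the passenger,
        # normalized to [0, 360) degrees.
        self.azimuth_deg = head_azimuth_deg % 360

def chamber_height(eye_heights_mm):
    # Chamber moves along the vertical central axis toward the
    # average eye height of all tracked passengers (assumed rule).
    return sum(eye_heights_mm) / len(eye_heights_mm)

unit_first, unit_second = ReflectorUnit(), ReflectorUnit()
unit_first.track_head(30.0)
unit_second.track_head(-150.0)
height = chamber_height([1150, 1250])
```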

Further areas of applicability will become apparent from the description provided herein. It should be understood that the description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the present disclosure.

BRIEF DESCRIPTION OF THE DRAWINGS

The drawings described herein are for illustration purposes only and are not intended to limit the scope of the present disclosure in any way.

FIG. 1 is a schematic side view of a system in accordance with an exemplary embodiment of the present disclosure;

FIG. 2A is a schematic top view of a vehicle compartment having a system in accordance with an exemplary embodiment of the present disclosure;

FIG. 2B is a perspective view of a vehicle compartment using a system of the present disclosure wherein the system displays a common map image for all passengers within the vehicle compartment;

FIG. 2C is a perspective view of the vehicle compartment shown in FIG. 2B wherein the system highlights a point of interest and routes to the point of interest within the common map image;

FIG. 2D is a perspective view of the vehicle compartment shown in FIG. 2B wherein the system highlights multiple drop-off points at a selected point of interest;

FIG. 2E is a close-up of the common map image shown in FIG. 2C including icons illustrating the perspective of the second and third passengers;

FIG. 3 is a schematic diagram of the system shown in FIG. 1;

FIG. 4 is a schematic top view of the system shown in FIG. 1 with a first and second passenger;

FIG. 5 is a schematic perspective view of the system shown in FIG. 1;

FIG. 6 is a schematic top view of the system shown in FIG. 3, wherein the position of the second passenger has moved;

FIG. 7 is a schematic view illustrating a passenger viewing an image and annotation information through an associated beam splitter and passenger interface;

FIG. 8 is a schematic diagram of an augmented reality display in accordance with the present disclosure;

FIG. 9 is an enlarged view of a portion of FIG. 8, as indicated by the area labelled “FIG. 9” in FIG. 8;

FIG. 10A is a schematic view of a window having an augmented reality display thereon having a single plane display;

FIG. 10B is a schematic view of a window having an augmented reality display thereon having a dual plane display;

FIG. 11A is a flow chart illustrating a method in accordance with the present disclosure; and

FIG. 11B is a flow chart illustrating an exemplary embodiment of the method shown in FIG. 11A.

The figures are not necessarily to scale, and some features may be exaggerated or minimized, such as to show details of particular components. In some instances, well-known components, systems, materials or methods have not been described in detail in order to avoid obscuring the present disclosure. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present disclosure.

DETAILED DESCRIPTION

The following description is merely exemplary in nature and is not intended to limit the present disclosure, application, or uses. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding technical field, background, brief summary or the following detailed description. It should be understood that throughout the drawings, corresponding reference numerals indicate like or corresponding parts and features. As used herein, the term module refers to any hardware, software, firmware, electronic control component, processing logic, and/or processor device, individually or in any combination, including without limitation: application specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and memory that executes one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality. Although the figures shown herein depict an example with certain arrangements of elements, additional intervening elements, devices, features, or components may be present in actual embodiments. It should also be understood that the figures are merely illustrative and may not be drawn to scale.

As used herein, the term “vehicle” is not limited to automobiles. While the present technology is described primarily herein in connection with automobiles, the technology is not limited to automobiles. The concepts can be used in a wide variety of applications, such as in connection with aircraft, marine craft, other vehicles, and consumer electronic components.

Referring to FIG. 1 and FIG. 3, a system 10 for generating a centrally located floating three-dimensional image 12 display for a plurality of passengers 14 positioned within a vehicle, includes an image chamber 16 that includes a first display 18 in communication with a system controller 19 and is adapted to project a first three-dimensional image 12A and a first reflector 20 individually associated with the first display 18 and a first one 14A of the plurality of passengers 14, and a second display 22 that is adapted to project a second three-dimensional image 12B and a second reflector 24 individually associated with the second display 22 and a second one 14B of the plurality of passengers 14. As shown in FIG. 1, the system 10 includes two displays 18, 22, reflectors 20, 24 and passengers 14A, 14B. It should be understood that the system 10 may be adapted to accommodate any suitable number of passengers 14.

The system controller 19 is a non-generalized, electronic control device having a preprogrammed digital computer or processor, memory or non-transitory computer readable medium used to store data such as control logic, software applications, instructions, computer code, data, lookup tables, etc., and a transceiver or input/output ports. A computer readable medium includes any type of medium capable of being accessed by a computer, such as read only memory (ROM), random access memory (RAM), a hard disk drive, a compact disc (CD), a digital video disc (DVD), or any other type of memory. A “non-transitory” computer readable medium excludes wired, wireless, optical, or other communication links that transport transitory electrical or other signals. A non-transitory computer readable medium includes media where data can be permanently stored and media where data can be stored and later overwritten, such as a rewritable optical disc or an erasable memory device. Computer code includes any type of program code, including source code, object code, and executable code.

Referring to FIG. 2A, a vehicle compartment 26 includes a plurality of seating positions occupied by a plurality of passengers 14A, 14B, 14C, 14D. As shown, the vehicle compartment 26 includes four seating positions for four passengers 14A, 14B, 14C, 14D. Each reflector 20, 24, 28, 30 is adapted to be viewed by one of the passengers 14A, 14B, 14C, 14D. Each reflector 20, 24, 28, 30 is adapted to receive an image from the associated display 18, 22, and to reflect the image to the associated passenger 14. The associated passenger 14 perceives the image 12 floating at a central location within the image chamber 16. Referring again to FIG. 1, the first reflector 20 is adapted to receive the first image 12A from the first display 18, as indicated by arrows 32, and to reflect the first image 12A to the first passenger 14A, as indicated by arrows 34, wherein the first passenger 14A perceives the first image 12A floating at a central location within the image chamber 16, as indicated by arrows 36. The second reflector 24 is adapted to receive the second image 12B from the second display 22, as indicated by arrows 38, and to reflect the second image 12B to the second passenger 14B, as indicated by arrows 40, wherein, the second passenger 14B perceives the second image 12B floating at the central location within the image chamber 16, as indicated by arrows 42.

Referring to FIG. 2A, each of the four passengers 14A, 14B, 14C, 14D perceives an image 12 reflected to them by respective associated reflectors 20, 24, 28, 30 and the passengers 14A, 14B, 14C, 14D perceive the image 12 reflected to them within the image chamber 16, as indicated by lines 44. Each of the displays 18, 22 can project the same image to each of the reflectors 20, 24, 28, 30 and thus to each of the passengers 14A, 14B, 14C, 14D. Alternatively, each of the displays 18, 22 can display a different perspective of the same image, or a different image altogether, to each of the reflectors 20, 24, 28, 30. Thus, the system 10 is capable of presenting the same floating image 12 to all the passengers 14 so they can view it simultaneously, or alternatively, each passenger 14 can view a different perspective of the floating image 12 or a completely different three-dimensional image 12.

Referring to FIG. 3, in an exemplary embodiment, the system controller 19 is in communication with a navigation system 45 within the vehicle. Referring to FIG. 2B, as shown, three passengers are within the vehicle compartment. The first image 12A is a first map image for the first passenger 14A, the second image 12B is a second map image for the second passenger 14B, and the third image 12C is a third map image for the third passenger 14C. The first, second and third map images 12A, 12B, 12C may be identical, as shown in FIG. 2B, wherein the first, second and third passengers 14A, 14B, 14C all see the same perspective of the same map image. Alternatively, each of the first, second and third map images 12A, 12B, 12C may be an image of the same map area from a different perspective for each of the first, second and third passengers 14A, 14B, 14C, based on the position/orientation of the passenger within the vehicle compartment 26 and/or selected preferences of each of the first, second and third passengers 14A, 14B, 14C. Further, each of the first, second and third passengers 14A, 14B, 14C may see a completely unique map image. For example, the first and second passengers 14A, 14B may be looking at the same map image, looking at the current route that is being travelled, while the third passenger 14C is looking at a map image of a different area to identify different points of interest and routes that may be of interest.

A transparent display 46 is positioned between the eyes of each of the plurality of passengers 14 and the reflectors 20, 24, 28, 30. As shown in FIG. 1, the transparent display 46 is positioned between the first reflector 20 and the first passenger 14A and between the second reflector 24 and the second passenger 14B. The transparent display 46 is adapted to display information to the first and second passengers 14A, 14B within an image plane positioned in front of the perceived first and second images 12A, 12B floating at the central location within the image chamber 16. The transparent display 46 presents first private information to the first passenger 14A that appears within a first image plane 48, wherein the first private information displayed on the transparent display 46 to the first passenger 14A appears in front of the image 12A perceived by the first passenger 14A within the image chamber 16. The first private information is information meant to be seen only by the first passenger 14A. The transparent display 46 presents second private information to the second passenger 14B that appears within a second image plane 50, wherein second private information displayed on the transparent display 46 to the second passenger 14B appears in front of the image 12B perceived by the second passenger 14B within the image chamber 16. The second private information is information meant to be seen only by the second passenger 14B.

In an exemplary embodiment, the transparent display 46 is a transparent touch screen that is adapted to allow the plurality of passengers 14 to receive annotated information and to provide input to the system 10. Referring to FIG. 1 and FIG. 2, in an exemplary embodiment, the transparent display 46 includes a clear cylindrical touch screen. The clear cylindrical touch screen encircles the image chamber 16 and is thereby positioned between the eyes of the plurality of passengers 14 and the perceived image 12 floating at the central location within the image chamber 16. In an exemplary embodiment, the transparent display 46 is an organic light-emitting diode (OLED) display. It should be understood that the transparent display 46 may be other types of transparent touch screen displays known in the art.

The transparent display 46 is adapted to present visible displayed information only to the passenger 14 that is directly in front of a portion of the transparent display 46. The nature of the transparent display 46 is such that the displayed information is only displayed on a first side, the outward facing cylindrical surface, of the transparent display 46. A second side, the inward facing cylindrical surface, of the transparent display 46 does not display information, and thus, when viewed by the other passengers 14, allows the other passengers 14 to see through the transparent display 46.

Referring to FIG. 2B, second private information 47 is displayed on the transparent display 46 that is viewable only by the second passenger 14B. The second private information 47 includes information specific to the second passenger 14B and allows the second passenger 14B to provide input to the system controller 19. For example, the second passenger 14B may select points of interest or proposed routes by touching the transparent display 46. In an exemplary embodiment, the system controller 19 is adapted to receive input from the first, second and third passengers 14A, 14B, 14C, via the transparent display 46, to adjust a perspective, zoom level, position and angle of view of the first, second and third map images 12A, 12B, 12C, respectively, and to modify the first, second and third map images 12A, 12B, 12C accordingly. The system controller 19 is further adapted to receive input from the first, second and third passengers 14A, 14B, 14C allowing any one of the first, second and third passengers 14A, 14B, 14C to selectively share an image. For example, the first passenger 14A, after making selections via the transparent display 46 and modifying any or all of the perspective, zoom level, position and angle of view of the first map image 12A, may selectively share the modified first map image 12A with the other passengers so all the passengers see the same map image as the first passenger 14A. Likewise, the second passenger 14B may selectively share a modified second map image 12B with the other passengers.
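The per-passenger view adjustment and selective sharing described above can be sketched as a small state model. This is a minimal illustration, not GM's implementation; all class and method names (`MapView`, `CampfireMapController`, `share_view`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class MapView:
    """Hypothetical per-passenger map view state (names are illustrative)."""
    zoom: float = 1.0
    heading_deg: float = 0.0
    center: tuple = (0.0, 0.0)

class CampfireMapController:
    """Sketch of a controller holding one map view per passenger.

    adjust() modifies only that passenger's private view; share_view()
    copies one passenger's modified view to every other passenger, so
    all passengers see the same map image."""
    def __init__(self, passenger_ids):
        self.views = {pid: MapView() for pid in passenger_ids}

    def adjust(self, pid, zoom=None, heading_deg=None, center=None):
        v = self.views[pid]
        if zoom is not None:
            v.zoom = zoom
        if heading_deg is not None:
            v.heading_deg = heading_deg
        if center is not None:
            v.center = center

    def share_view(self, source_pid):
        src = self.views[source_pid]
        for pid in self.views:
            if pid != source_pid:
                # copy, so later private edits by the source do not leak
                self.views[pid] = MapView(src.zoom, src.heading_deg, src.center)
```

Keeping one `MapView` per passenger, rather than one global view, is what lets each passenger adjust perspective privately until they choose to share.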

In an exemplary embodiment, any of the first, second and third passengers 14A, 14B, 14C are able to provide input via a personal mobile device 170, such as a cell phone, tablet or laptop that is synced to the system 10.

In an exemplary embodiment, the system controller 19 is adapted to include, within the first map image 12A, highlighted points of interest for the first passenger 14A, and at least one highlighted route between a current location and a point of interest selected by the first passenger 14A. Information related to points of interest that may be relevant to the first and second passengers 14A, 14B can be collected based on direct input by the first and second passengers 14A, 14B, or such information may be pulled from a database within the system controller 19 where information related to the first and second passengers 14A, 14B is stored based on past behavior. Further, the system 10 may be adapted to prompt a passenger for personal interests, wherein, when the system 10 identifies multiple potential points of interest, the passenger 14 is allowed to select a point of interest that they are interested in. Referring to FIG. 2C, the first map image 12A includes a highlighted point of interest 172 (as shown, a football stadium), a primary route 174, and an alternate route 176. The system 10 may highlight the primary route 174 in a specific manner, such as by marking the primary route 174 with a thick line or a specific color, to identify the primary route 174 as the route that is quickest to a selected point of interest 172. Further, the system 10 may highlight the alternate route 176 in a specific manner, such as by marking the alternate route 176 with a thinner line or a dashed line of a different color, to identify the alternate route 176 as a viable but, under normal circumstances, less desirable route.
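The primary/alternate distinction above amounts to ranking candidate routes and assigning each a display style. The sketch below illustrates one way to do that, assuming routes arrive as (name, travel-minutes) pairs; the function name and the style strings are invented for illustration.

```python
def classify_routes(routes):
    """Given candidate routes as (name, travel_minutes) pairs, mark the
    quickest as the primary route (drawn with a thick solid line) and the
    rest as alternates (drawn thinner and dashed). Illustrative only."""
    ranked = sorted(routes, key=lambda r: r[1])  # quickest first
    primary, alternates = ranked[0], ranked[1:]
    styles = {primary[0]: "thick-solid"}
    styles.update({name: "thin-dashed" for name, _ in alternates})
    return primary, alternates, styles
```

A production system would rank on richer criteria (live traffic, passenger preference history), but the highlight-by-rank pattern is the same.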

In one example, the system controller 19 highlights the football stadium 172 either in response to data collected from remote sources which indicates the football stadium 172 may be a point of interest for the first passenger 14A or in response to input from the first passenger 14A that the football stadium 172 is a desired point of interest and potential final destination. The first passenger 14A may, via the transparent display 46, select the football stadium 172, indicating that the navigation system 45 should direct the vehicle to the football stadium 172. Subsequently, the system controller 19 displays the primary route 174 and the alternate route 176 to the first passenger 14A so the first passenger 14A may select the preferred route to take to the football stadium 172.

Once the first passenger 14A has selected a point of interest and a route, the first passenger 14A may selectively share the highlighted point of interest, the football stadium 172, and the selected route, either the primary route 174 or the alternate route 176, with the other passengers, and the highlighted point of interest and the selected route will appear on the second and third map images 12B, 12C for the second and third passengers 14B, 14C. Alternatively, prior to making a selection of a point of interest or a route, the first passenger 14A can share highlighted points of interest and routes that appear in the first map image 12A with the second and third passengers 14B, 14C, via the second and third map images 12B, 12C, allowing the second and third passengers 14B, 14C to vote on, rank, approve or input alternate suggestions to the highlighted points of interest and routes shared by the first passenger 14A.

For example, referring again to FIG. 2C, the first passenger 14A shares the highlighted football stadium 172, the primary route 174 and the alternate route 176 with the second and third passengers 14B, 14C. The first passenger 14A may provide input, via the transparent display 46 or otherwise, suggesting that they take the primary route 174 to the football stadium 172. The second passenger 14B may provide input, via the transparent display 46 or otherwise, suggesting that they take the alternate route 176 because the second passenger 14B knows of potential traffic issues along the primary route 174.

In another example, the first passenger 14A may share the location of a friend that needs to be picked up with the second and third passengers 14B, 14C, wherein the second and third passengers 14B, 14C can provide input on other points of interest or friends that need to be picked up, allowing the first, second and third passengers 14A, 14B, 14C to collaborate on prioritizing their stops and selecting an optimized route.

As with the first passenger 14A, the system controller 19 is adapted to allow each of the second and third passengers 14B, 14C to see highlighted points of interest and routes, selected by the system controller 19 for the second and third passengers 14B, 14C, and to select points of interest and routes and selectively share highlighted points of interest and routes for collaborative decision making between the first, second and third passengers 14A, 14B, 14C.

Referring to FIG. 2D, in an exemplary embodiment, the system controller 19 is adapted to highlight multiple drop-off points at a selected point of interest, and to receive input from each of the passengers on where they want to be dropped off. For example, the passengers 14A, 14B, 14C have agreed to go to the football stadium 172 previously mentioned. The system controller 19, accessing remote data from the stadium 172, identifies two drop-off areas 178A, 178B near the stadium 172 that are designated for passenger drop-off/pick-up. The system controller 19 highlights a first drop-off point 178A and a second drop-off point 178B and displays the first and second drop-off points 178A, 178B within the first, second and third map images 12A, 12B, 12C. Each of the passengers 14A, 14B, 14C may select which one of the first and second drop-off points 178A, 178B they want to arrive at.

For example, the first and second passengers 14A, 14B may select the first drop-off point 178A, but the third passenger 14C selects the second drop-off point 178B because the third passenger 14C is meeting another friend there. The system controller 19 and the navigation system 45 will plot a route that goes to the closest of the first and second drop-off points 178A, 178B first, and then goes to the other of the first and second drop off points 178A, 178B after. Alternatively, all three of the first, second and third passengers 14A, 14B, 14C may select the same one of the first and second drop-off points 178A, 178B, wherein, the system controller 19 and the navigation system 45 will select a route directly to the selected drop-off point, ignoring the other drop-off point.
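The drop-off planning just described reduces to grouping passengers by their selected point and visiting the distinct points nearest-first; when everyone picks the same point, it collapses to a single stop. A minimal sketch, assuming straight-line distance stands in for the navigation system's real route cost (all names are illustrative):

```python
import math

def plan_dropoffs(vehicle_pos, selections):
    """selections maps passenger id -> chosen drop-off point (x, y).

    Returns unique drop-off points ordered nearest-first from the
    vehicle's current position, each paired with the passengers who
    selected it; duplicate selections collapse to one stop."""
    unique = {}
    for pid, point in selections.items():
        unique.setdefault(point, []).append(pid)

    def dist(p):
        return math.hypot(p[0] - vehicle_pos[0], p[1] - vehicle_pos[1])

    return [(p, unique[p]) for p in sorted(unique, key=dist)]
```

A real planner would use road-network travel time rather than Euclidean distance, but the grouping and ordering logic is the same.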

Referring to FIG. 2E, in an exemplary embodiment, the system controller 19 is adapted to include, within the first map image 12A, icons 180B, 180C representing the second and third passengers 14B, 14C to illustrate the second and third passengers' perspectives of the displayed map image. For example, when each of the first, second and third passengers 14A, 14B, 14C is viewing the same first, second and third map image 12A, 12B, 12C, the first map image 12A includes a second passenger icon 180B that illustrates the perspective from which the second passenger 14B is seeing the map area within the second map image 12B. Likewise, the first map image 12A includes a third passenger icon 180C that illustrates the perspective from which the third passenger 14C is seeing the map area within the third map image 12C. As shown, the second passenger icon 180B and the third passenger icon 180C are three-dimensional avatars of the heads of the second and third passengers 14B, 14C, illustrating the orientation of the second and third passengers' heads relative to the map area shown in the first map image 12A. Alternatively, the second and third passenger icons 180B, 180C could be a simple shape, such as an arrow, indicating an orientation of the second and third passengers' views.

In an exemplary embodiment, the images from each of the displays 18, 22 are generated via a holographic method, pre-computed and encoded into a hologram generator within the display 18, 22. In an exemplary embodiment, each display 18, 22 is adapted to project a three-dimensional image with variable virtual image distance. Three-dimensional images with variable virtual image distance allow the system 10 to project a floating image 12 to the passengers 14 with the capability of making the floating image 12 appear closer to or further away from the passengers 14.

Referring again to FIG. 1, in an exemplary embodiment, the system 10 is mounted to a support structure suspended from a roof 29 within the vehicle compartment 26. Alternatively, in another exemplary embodiment, the system 10 is mounted to a support structure, such as a pedestal, mounted to a floor 31 within the vehicle compartment 26. In various embodiments, the system 10 may be retractable, wherein, when not in use, the system 10 recesses within the roof 29 or the floor 31 within the vehicle compartment 26.

The transparent display 46 and each of the reflectors 20, 24, 28, 30 are transparent, wherein a passenger 14 can see through the transparent display 46 and an associated reflector 20, 24, 28, 30. This allows the passenger 14 to perceive the floating image 12 at a distance beyond the reflector 20, 24, 28, 30 and, further, allows the passenger 14 to see through the transparent display 46 and the reflectors 20, 24, 28, 30 to the interior of the vehicle compartment 26 and other passengers 14 therein.

In one exemplary embodiment, the transparent display 46 is an autostereoscopic display that is adapted to display stereoscopic, or three-dimensional, images by adding binocular perception of three-dimensional depth without the use of special headgear, glasses, or any other device worn over the viewer's eyes. Because headgear is not required, autostereoscopic displays are also referred to as “glasses-free 3D” or “glassesless 3D”. The autostereoscopic transparent display includes a display panel and a parallax barrier mounted to the display panel, on an outwardly facing side of the display panel facing an associated one of the plurality of passengers 14. In an exemplary embodiment, the parallax barrier that is mounted onto the transparent display 46 includes a plurality of parallel, vertical apertures that divide the displayed image such that a left eye and a right eye of a passenger 14 viewing the autostereoscopic display see different portions of the displayed image and the passenger 14 perceives a three-dimensional image.

In an exemplary embodiment, the parallax barrier that is mounted onto the transparent display 46 is selectively actuatable by a controller adapted to switch between having the parallax barrier off, wherein the parallax barrier is completely transparent and the viewing passenger 14 sees images displayed on the transparent display 46 as two-dimensional images, and having the parallax barrier on, wherein the viewing passenger 14 sees the images displayed on the transparent display 46 as three-dimensional images.

When the parallax barrier is actuated, each of the left and right eyes of the viewing passenger 14 only see half of the displayed image, therefore, the resolution of the three-dimensional image is reduced. To improve resolution, in one exemplary embodiment, the controller is configured to implement time-multiplexing by alternately turning the parallax barrier on and off. Time-multiplexing requires the system 10 to be capable of switching the parallax barrier on and off fast enough to eliminate any perceptible image flicker by the viewing passenger 14. Liquid crystal displays are particularly suitable for such an application.
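The barrier-switching logic described in the last two paragraphs can be summarized in a short state function. This is a toy sketch of the control decision only, following the text's description of alternating the barrier on and off each refresh frame; the mode names are invented for illustration.

```python
def barrier_state(frame_index, mode):
    """Parallax-barrier state for a given display refresh frame.

    mode "2d": barrier always off (fully transparent, full-resolution 2-D).
    mode "3d": barrier always on (stereo views, each eye sees half the pixels).
    mode "time_multiplexed": barrier alternates every frame, as the text
    describes, to recover resolution, provided the refresh rate is high
    enough that the viewing passenger perceives no flicker."""
    if mode == "2d":
        return "off"
    if mode == "3d":
        return "on"
    if mode == "time_multiplexed":
        return "on" if frame_index % 2 == 0 else "off"
    raise ValueError(f"unknown mode: {mode}")
```

The flicker constraint is the practical limit here: time-multiplexing only works on panels, such as liquid crystal displays, that can switch the barrier faster than the eye's flicker-fusion threshold.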

Referring to FIG. 4, the image chamber 16 includes transparent portions 52, 54 to allow the passengers 14 to see their associated reflector 20, 24, 28, 30. As shown, the image chamber 16 includes a first transparent portion 52 that is adapted to allow the first map image 12A reflected by the first reflector 20 to pass from the image chamber 16 outward toward the first passenger 14A, as indicated by arrows 34 in FIG. 1. Further, the image chamber 16 includes a second transparent portion 54 that is adapted to allow the second map image 12B reflected by the second reflector 24 to pass from the image chamber 16 outward toward the second passenger 14B, as indicated by arrows 40 in FIG. 1.

The image chamber 16 further includes solid portions 56, 58 that are adapted to prevent light from entering the image chamber 16 behind the first and second reflectors 20, 24. The image chamber 16 functions much like a Pepper's ghost chamber, wherein the image of an object is perceived by a viewer within a reflective surface adjacent the actual image. As discussed above, in the present disclosure, the image presented by a display 18, 22, which is not within view of a passenger 14, is reflected by a reflector 20, 24, 28, 30 to the passenger 14A, 14B, 14C, 14D such that the passenger “sees” the image within the image chamber 16 and perceives the image 12 to be floating behind the reflective surface of the reflector 20, 24, 28, 30. If the image chamber 16 behind the reflectors 20, 24, 28, 30 is exposed to ambient light, the image will not be viewable by the passengers 14. Thus, referring to FIG. 4, the image chamber 16 includes solid overlapping panels 56, 58 that prevent light from entering the image chamber 16 behind the first and second reflectors 20, 24.

Referring to FIG. 5, in an exemplary embodiment, the system 10 is selectively moveable vertically up and down along a vertical central axis 60, as indicated by arrow 62. Further, each display 18, 22 and the associated reflector 20, 24, 28, 30 are unitarily and selectively rotatable about the vertical central axis 60, as shown by arrows 64. This allows the system 10 to adjust to varying locations of the passengers 14 within the vehicle compartment 26.

Referring to FIG. 6, the first reflector 20 and the first display 18 are rotatable about the vertical central axis 60, as indicated by arrow 66. The second reflector 24 and the second display 22 are rotatable about the vertical central axis 60, as indicated by arrow 68. As shown in FIG. 4, the first and second passengers 14A, 14B are sitting directly across from one another, and the first reflector 20 and first display 18 are positioned 180 degrees from the second reflector 24 and second display 22. As shown in FIG. 6, the position of the head of the second passenger 14B has moved, and the second reflector 24 and the second display 22 have been rotated an angular distance 70 to ensure the second passenger 14B perceives the image 12 from the second display 22 and the second reflector 24.
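The angular distance 70 is, geometrically, the difference between the pair's current heading and the heading from the central axis toward the passenger's tracked head. A minimal sketch of that computation, assuming a top-down 2-D coordinate frame (all function names are illustrative, not from the disclosure):

```python
import math

def reflector_heading_deg(axis_xy, head_xy):
    """Heading (degrees counterclockwise from +x, in [0, 360)) that a
    display/reflector pair should face so it points from the central
    vertical axis toward the tracked passenger's head."""
    dx = head_xy[0] - axis_xy[0]
    dy = head_xy[1] - axis_xy[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0

def rotation_needed_deg(current_deg, target_deg):
    """Smallest signed rotation from current to target, in (-180, 180],
    so the motor always takes the short way around the axis."""
    delta = (target_deg - current_deg) % 360.0
    return delta - 360.0 if delta > 180.0 else delta
```

Wrapping the correction into (-180, 180] matters in practice: a head moving across the 0/360 boundary should produce a small rotation, not a nearly full turn.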

In an exemplary embodiment, the image chamber 16 includes first solid panels 56 positioned adjacent the first reflector 20 on either side and adapted to move unitarily with the first reflector 20 and the first display 18 as the first reflector 20 and the first display 18 rotate about the vertical central axis 60. Second solid panels 58 are positioned adjacent the second reflector 24 on either side and are adapted to move unitarily with the second reflector 24 and the second display 22 as the second reflector 24 and the second display 22 rotate about the vertical central axis 60. The first solid panels 56 overlap the second solid panels 58 to allow movement of the first solid panels 56 relative to the second solid panels 58 and to ensure that ambient light is blocked from entering the image chamber 16 behind the first and second reflectors 20, 24 at all times.

In an exemplary embodiment, each of the displays 18, 22 and associated reflectors 20, 24, 28, 30 are equipped with head tracking capability, wherein an orientation of each display 18, 22 and associated reflector 20, 24, 28, 30 changes automatically in response to movement of a head and eyes of a passenger 14 detected by a monitoring system 72. Monitoring systems 72 within a vehicle include sensors 74 that monitor head and eye movement of a driver/passenger within the vehicle.

In an exemplary embodiment, the system 10 includes at least one first sensor 74 adapted to monitor a position of a head and eyes of the first passenger 14A. The at least one first sensor 74 may include camera and motion sensors adapted to detect the position and movement of the first passenger's head and eyes. As shown, the first sensors 74 include a camera oriented to monitor the position and movement of the head and eyes of the first passenger 14A. The first display 18 and first reflector 20 are adapted to rotate in response to movement of the head and eyes of the first passenger 14A. The system 10 further includes at least one second sensor 76 adapted to monitor a position of a head and eyes of the second passenger 14B. The at least one second sensor 76 may include camera and motion sensors adapted to detect the position and movement of the second passenger's head and eyes. As shown, the second sensors 76 include a camera oriented to monitor the position and movement of the head and eyes of the second passenger 14B. The second display 22 and second reflector 24 are adapted to rotate about the vertical central axis 60 in response to movement of the head and eyes of the second passenger 14B.

Referring again to FIG. 3, a controller 78 of the monitoring system 72 is in communication with the system controller 19 and receives information from the first sensors 74, and in response to detection of head/eye movement by the first passenger 14A, actuates a first motor 80 adapted to rotate the first reflector 20 and first display 18 about the vertical central axis 60. Further, the controller 78 of the monitoring system 72 receives information from the second sensors 76, and in response to detection of head/eye movement by the second passenger 14B, actuates a second motor 82 adapted to rotate the second reflector 24 and second display 22 about the vertical central axis 60.

In addition to rotation of the first display 18 and first reflector 20 and the second display 22 and second reflector 24, the system 10 is adapted to move up and down along the vertical central axis 60 in response to movement of the head and eyes of the first passenger 14A and movement of the head and eyes of the second passenger 14B. The controller 78 of the monitoring system 72 receives information from the first sensors 74 and the second sensors 76, and in response to detection of head/eye movement by the first and second passengers 14A, 14B, actuates a third motor 84 adapted to raise and lower the system 10 along the vertical central axis 60 to maintain optimal vertical position of the system 10 relative to the passengers 14. Preferences may be set within the system 10 such that the system 10 maintains optimal vertical positioning relative to a designated one of the plurality of passengers 14, or alternatively, preferences can be set such that the system 10 maintains a vertical position taking into consideration some or all of the plurality of passengers 14.
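The vertical set-point logic described above, track a designated passenger or balance across all of them, can be expressed in a few lines. This is an illustrative sketch under the assumption that the monitoring system reports an eye height per passenger; the function and parameter names are invented.

```python
def target_height(head_heights, preference=None):
    """Vertical set-point for the assembly along its central axis.

    head_heights maps passenger id -> tracked eye height (metres).
    With a designated passenger, track that passenger's eyes exactly;
    otherwise compromise at the mean of all tracked passengers."""
    if preference is not None:
        return head_heights[preference]
    return sum(head_heights.values()) / len(head_heights)
```

The third motor 84 would then be driven toward this set-point as the monitoring system updates the tracked heights.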

In an exemplary embodiment, the monitoring system 72 is adapted to monitor the position of a head and eyes of each one of the plurality of passengers 14, wherein, for each of the plurality of passengers 14, the system 10 is adapted to display information at a specific location on the transparent display 46 based on a position of the head and eyes of the passenger 14. In another exemplary embodiment, for each of the plurality of passengers 14, the system 10 is adapted to display information at a specific location on the transparent display 46 based on the position of the head and eyes of the passenger 14 relative to the perceived image 12 within the image chamber 16, such that, for each of the plurality of passengers 14, information displayed on the transparent display 46 is properly positioned relative to the perceived image 12 within the image chamber 16.

Referring to FIG. 7, in a schematic view of a passenger 14, an associated transparent display 46 and a floating image 12, the passenger 14 perceives the floating image 12 at a distance behind the transparent display 46. The transparent display 46 displays information related to the floating image 12 at a proper location on the transparent display 46 so the passenger 14 sees the information at a proper location relative to the floating image 12. As shown in FIG. 7, the floating image 12 is of a skyline, and more specifically, of three buildings, a first building 86, a second building 88, and a third building 90. The transparent display 46 displays first building information 92, second building information 94 and third building information 96.

The first building information 92 appears in a text box and may contain information about the first building 86 as well as the option of allowing the passenger 14 to touch the first building information 92 text box to acquire additional information about the first building 86. For example, the first building information 92 text box may contain the name of the first building 86 and the street address. The passenger 14 may opt to touch the first building information 92 text box, wherein additional information will appear on the transparent display 46, such as the date the first building 86 was built, what type of building (office, church, arena, etc.), or statistics such as height, capacity, etc. The second building information 94 and the third building information 96 also appear in text boxes that contain similar information and the option for the passenger 14 to touch the second or third building information 94, 96 text boxes to receive additional information about the second and third buildings 88, 90.

The monitoring system 72 tracks the position of the passenger's 14 head 14H and eyes 14E and positions the first, second and third building information 92, 94, 96 text boxes at a location on the transparent display 46, such that when the passenger 14 looks at the floating image 12 through the reflector 20, 24, 28, 30 and the transparent display 46, the passenger 14 sees the first, second and third building information 92, 94, 96 text boxes at the proper locations relative to the floating image 12. For example, the transparent display 46 positions the first building information 92 in the passenger's line of sight, as indicated by dashed line 98, such that the first building information 92 is perceived by the passenger 14 at a location immediately adjacent the first building 86, as indicated at 100. Correspondingly, the transparent display 46 positions the second building information 94 in the passenger's line of sight, as indicated by dashed line 102, and the third building information 96 in the passenger's line of sight, as indicated by dashed line 104, such that the second and third building information 94, 96 is perceived by the passenger 14 at a location superimposed on the building, in the case of the second building 88, as indicated at 106, and at a location immediately adjacent the building, in the case of the third building 90, as indicated at 108.
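Placing a label on the passenger's line of sight is a simple intersection problem: the label goes where the line from the tracked eye to the target point on the floating image crosses the display surface. The sketch below simplifies the cylindrical display to a flat plane at z = 0, with the eye on one side and the perceived image on the other; it is a geometric illustration, not the disclosed implementation.

```python
def annotation_position(eye, target, display_z=0.0):
    """X coordinate on the display (modelled as a plane at z = display_z)
    where a label must be drawn so it lies on the sight line from the
    passenger's eye to a point on the floating image behind the display.

    eye and target are (x, z) pairs, with the eye at negative z and the
    perceived image point at positive z."""
    (xe, ze), (xt, zt) = eye, target
    t = (display_z - ze) / (zt - ze)  # parametric position of the display plane
    return xe + t * (xt - xe)
```

Because the result depends on the eye position, the monitoring system must re-solve this as the head moves, which is exactly the continuous adjustment the next paragraph describes.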

The monitoring system 72 continuously tracks movement of the head 14H and eyes 14E of the passenger 14 and adjusts the position that the first, second and third building information 92, 94, 96 are displayed on the transparent display 46 to ensure that the passenger 14 always perceives the first, second and third building information 92, 94, 96 at the proper locations 100, 106, 108 relative to the floating image 12.

In an exemplary embodiment, the system 10 is adapted to accept input from a passenger 14 based solely on contact between the passenger 14 and the transparent display 46. For example, when a passenger 14 reaches out to touch a finger-tip to the transparent display 46, the transparent display 46 takes the input based solely on the point of contact between the tip of the finger of the passenger 14 and the transparent display 46.

In another exemplary embodiment, the system 10 is adapted to accept input from a passenger 14 based on contact between the passenger 14 and the transparent display 46 and based on the location of a point of contact between the passenger 14 and the transparent display 46 relative to the perceived image 12. For example, the monitoring system 72 tracks the movement and position of the passenger's 14 eyes 14E and head 14H. The transparent display 46 displays information that is perceived by the passenger 14 relative to the floating image 12, as discussed above. When the passenger 14 touches the transparent display 46, the passenger 14 perceives that they are touching the floating image 12. The system 10 uses parallax compensation to correlate the actual point of contact between the finger-tip of the passenger 14 on the transparent display 46 to the location on the floating image 12 that the passenger 14 perceives they are touching.

The system 10 may display, on the transparent display 46, multiple different blocks of annotated information relative to a floating image 12. As the passenger's 14 head 14H and eyes 14E move, the passenger's head 14H and eyes 14E will be positioned at a different distance and angle relative to the transparent display 46, thus changing the perceived location of displayed information relative to the image 12. By using parallax compensation techniques, such as disclosed in U.S. Pat. No. 10,318,043 to Seder, et al., hereby incorporated by reference herein, the system 10 ensures that when the passenger 14 touches the transparent display 46, the system 10 correctly identifies the intended piece of annotated information that the passenger 14 is selecting.
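Parallax compensation for touch input is the inverse of the label-placement problem: given the actual contact point on the display and the tracked eye position, extend the sight line through the touch out to the image plane to recover which point of the floating image the passenger meant. A minimal sketch in the same simplified flat-plane geometry (illustrative only, not the method of the incorporated patent):

```python
def perceived_touch_point(eye, touch_x, image_z, display_z=0.0):
    """Point on the image plane (at z = image_z) that the passenger
    perceives they are touching, given the eye position (x, z) and the
    actual contact x coordinate on the display plane (z = display_z)."""
    xe, ze = eye
    t = (image_z - ze) / (display_z - ze)  # extend the eye->touch ray to the image plane
    return xe + t * (touch_x - xe)
```

Two passengers touching the same physical spot on the display can therefore be selecting different annotations, which is why the tracked head position must enter the hit-testing.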

In another exemplary embodiment, the system 10 is adapted to accept input from a passenger 14 based on gestures made by the passenger 14 where the passenger 14 does not touch the transparent display 46, for example, when the passenger 14 moves a hand 114 or points to an object that is displayed on the transparent display 46, to an object within the vehicle compartment 26, or to an object outside of the vehicle compartment 26.

Referring again to FIG. 1, in an exemplary embodiment, the system includes at least one first gesture sensor 110 adapted to monitor position and movement of arms, hands and fingers 114 of the first passenger 14A and to gather data related to gestures made by the first passenger 14A. The first gesture sensor 110 may include a camera and motion sensors adapted to detect the position and movement of the first passenger's arms, hands and fingers. As shown, the first gesture sensor 110 includes a camera oriented to monitor the position and movement of the arms, hands and fingers of the first passenger 14A. Further, the system 10 includes a second gesture sensor 112 adapted to monitor position and movement of arms, hands and fingers of the second passenger 14B and to gather data related to gestures made by the second passenger 14B. The second gesture sensor 112 may include a camera and motion sensors adapted to detect the position and movement of the second passenger's arms, hands and fingers. As shown, the second gesture sensor 112 includes a camera oriented to monitor the position and movement of the arms, hands and fingers of the second passenger 14B.

The system 10 uses data collected by the first and second gesture sensors 110, 112 to identify gestures made by the passengers 14A, 14B within the vehicle compartment 26. The system controller 19 uses computer learning algorithms and parallax compensation techniques to interpret such gestures and identify input data, such as when a passenger 14 is pointing to an object outside the vehicle compartment 26. For example, the system controller 19 may determine that the first passenger 14A is looking at a location outside the vehicle compartment 26 based on the orientation of the first passenger's head and eyes, and highlight the location within the first map image 12A, providing the first passenger 14A the opportunity to select and/or share the location.

In another exemplary embodiment, the system 10 is adapted to accept audio input from passengers 14 within the vehicle compartment 26. Referring to FIG. 1, the system 10 includes a first microphone 116, in communication with the system controller 19, that is adapted to collect audio input from the first passenger 14A. Correspondingly, the system 10 includes a second microphone 118, in communication with the system controller 19, that is adapted to collect audio input from the second passenger 14B. The system controller 19 is adapted to recognize pre-determined commands from the passengers 14A, 14B within the vehicle compartment 26 when such commands are vocalized by the passengers 14A, 14B and picked-up by the first and second microphones 116, 118. Referring to the example above, the system controller 19 may identify a location outside the vehicle compartment 26 based on the orientation of the head and eyes of the first passenger 14A, and in response to use of a command word or phrase by the first passenger 14A, highlight the location within the first map image 12A. Further, the first passenger 14A may selectively share the highlighted location by using a command word or phrase which triggers the system controller 19 to share the highlighted location with the other passengers within the vehicle compartment 26.

Further, the system 10 includes a first zonal speaker 120 adapted to broadcast audio output to the first passenger 14A. The first zonal speaker 120 is adapted to broadcast audio output in a manner such that only the first passenger 14A can hear and understand the audio output from the first zonal speaker 120. In this manner, audio information can be broadcast, by the system controller 19, to the first passenger 14A that is private to the first passenger 14A and does not disturb other passengers within the vehicle compartment 26. The system 10 includes a second zonal speaker 122 adapted to broadcast audio output to the second passenger 14B. The second zonal speaker 122 is adapted to broadcast audio output in a manner such that only the second passenger 14B can hear and understand the audio output from the second zonal speaker 122. In this manner, audio information can be broadcast, by the system controller 19, to the second passenger 14B that is private to the second passenger 14B and does not disturb other passengers within the vehicle compartment 26. The first and second zonal speakers 120, 122 may comprise speakers that are mounted within the vehicle compartment 26 and adapted to broadcast audio output directionally to a specified location within the vehicle compartment 26. Further, the first and second zonal speakers 120, 122 may comprise a wireless headset or ear-bud adapted to be worn by the passengers 14A, 14B.

In an exemplary embodiment, the system controller 19 is adapted to provide information to a passenger when the vehicle is nearing or has arrived at a destination relevant to that passenger. For example, as discussed above, the first and second passengers 14A, 14B may select the first drop-off point 178A, but the third passenger 14C selects the second drop-off point 178B because the third passenger 14C is meeting another friend there. The system controller 19 and the navigation system 45 will plot a route that goes to the closest of the first and second drop-off points 178A, 178B first, and then goes to the other of the first and second drop-off points 178A, 178B after.
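The route ordering described above, visiting the closest drop-off point first, can be sketched as follows. This is a minimal illustration using straight-line distance; the navigation system 45 would instead use road-network routing distances, and the function and field names are hypothetical.

```python
import math

def order_drop_offs(current, drop_offs):
    """Order drop-off points nearest-first from the current position.

    `current` is an (x, y) map coordinate and each drop-off is a dict
    with a "pos" coordinate. Straight-line distance stands in for the
    routing distance a real navigation system would compute.
    """
    return sorted(drop_offs, key=lambda p: math.dist(current, p["pos"]))
```

The controller would then issue arrival notifications to each passenger's zonal speaker as the vehicle reaches each stop in the returned order.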

Thus, when the vehicle nears the first drop-off point 178A, the system controller 19 lets the first and second passengers 14A, 14B know that they are nearing their drop-off point, via zonal speakers dedicated to the first and second passengers 14A, 14B, so they can prepare to disembark from the vehicle. When the vehicle arrives at the first drop-off point 178A, the system controller 19 notifies the first and second passengers 14A, 14B, via zonal speakers dedicated to the first and second passengers 14A, 14B, that they have arrived at their destination. Correspondingly, when the vehicle nears the second drop-off point 178B, the system controller 19 lets the third passenger 14C know that the vehicle is nearing the second drop-off point 178B, via a zonal speaker dedicated to the third passenger 14C, and, when the vehicle arrives at the second drop-off point 178B, the system controller 19 notifies the third passenger 14C, via the zonal speaker dedicated to the third passenger 14C, that the vehicle has arrived at the second drop-off point 178B.

In an exemplary embodiment, the system 10 further includes an external scene camera 124 that is in communication with the system controller 19 and is adapted to capture images of an external environment outside the vehicle compartment 26. In this manner, the system controller 19 can collect data and “see” objects, locations, destinations and points of interest immediately outside the vehicle compartment 26.

The system 10 further includes an augmented reality display 125 positioned within the vehicle compartment 26 remotely from the image chamber 16 and adapted to display one of the first and second map images 12A, 12B. Thus, for example, in addition to selectively sharing the first map image 12A with the second and third passengers 14B, 14C within the second and third map images 12B, 12C, the first passenger 14A may selectively share the first map image 12A to the augmented reality display 125 for broader viewing by all of the passengers within the vehicle compartment 26.

Referring to FIG. 8 and FIG. 9, an augmented reality display 125 includes a transparent substrate 138 affixed to a window 127 within the vehicle compartment 26 and including light emitting particles 136 embedded therein. As explained below, the augmented reality display 125 includes a primary graphic projection device 126 and a secondary graphic projection device 128 that work together to provide an augmented reality experience for the passengers 14 within the vehicle compartment 26.

The augmented reality display 125 includes one or more controllers 129 in electronic communication with the system controller 19 and the external scene camera 124, the monitoring system 72, a primary graphics processing unit 130 corresponding to the primary graphic projection device 126, and a secondary graphics processing unit 132 corresponding to the secondary graphic projection device 128. The external scene camera 124 may be a camera that obtains periodic or sequential images representing a view of a surrounding environment outside the vehicle compartment 26. As described above, the monitoring system 72 includes one or more sensors for determining the location of a head of a passenger 14 within the vehicle compartment 26 as well as the orientation or gaze location of the passenger's eyes.

When excitation light is absorbed by the light emitting particles 136, visible light is generated by the light emitting particles 136. In an embodiment, the light emitting particles 136 are red, green, and blue (RGB) phosphors for full color operation; however, it is to be appreciated that monochrome or a two-color phosphor may be used as well. Referring to FIG. 10A, the window 127 includes a primary area 140 and a secondary area 142, where the primary graphic projection device 126 generates a first set of images 144 upon the primary area 140 of the window 127 based on visible light, and the secondary graphic projection device 128 generates a second set of images 146 upon the secondary area 142 of the window 127 based on an excitation light. Specifically, the light emitting particles 136 dispersed within the transparent substrate 138 emit visible light in response to absorbing the excitation light emitted by the secondary graphic projection device 128. The first set of images 144 displayed upon the primary area 140 of the window 127 cooperate with the second set of images 146 displayed upon the secondary area 142 of the window 127 to create an edge-to-edge augmented reality view of the environment outside the vehicle compartment 26.

The primary area 140 of the window 127 only includes a portion of the window 127 having a limited field-of-view, while the secondary area 142 of the window 127 includes a remaining portion of the window 127 that is not included as part of the primary area 140. Combining the primary area 140 with the secondary area 142 results in an augmented reality view of the environment outside the vehicle compartment 26 that spans from opposing side edges 150 of the window 127. The primary graphics processing unit 130 is in electronic communication with the primary graphic projection device 126, where the primary graphics processing unit 130 translates image-based instructions from the one or more controllers 129 into a graphical representation of the first set of images 144 generated by the primary graphic projection device 126. The first set of images 144 are augmented reality graphics 148 that are overlain and aligned with one or more objects of interest located in the environment outside the vehicle compartment 26 to provide a passenger with an augmented reality experience. In the example as shown in FIG. 10A, the augmented reality graphics 148 include a destination pin and text that reads “0.1 mi” that are overlain and aligned with an object of interest, which is a building 152. It is to be appreciated that the destination pin and text as shown in FIG. 10A are merely exemplary in nature and that other types of augmented reality graphics may be used as well. Some examples of augmented reality graphics 148 include, for example, circles or boxes that surround or highlight an object of interest, arrows, warning symbols, text, numbers, colors, lines, indicators, and logos.

The primary graphic projection device 126 includes a visible light source configured to generate the first set of images 144 upon the window 127. The visible light source may be, for example, a laser or light emitting diodes (LEDs). In the embodiment as shown in FIG. 10A, the primary area 140 of the window 127 includes a single image plane 160, and the primary graphic projection device 126 is a single image plane augmented reality head-up display including a digital light projector (DLP) optical system.

In another exemplary embodiment, referring to FIG. 10B, the primary area 140 of the window 127 includes a dual image plane 162, and the primary graphic projection device 126 is a fixed dual image plane holographic augmented reality head-up display. In one embodiment, the dual image plane 162 may include a first, near-field image plane 162A and a second, far-field image plane 162B. In the present example, the first set of images 144 include both cluster content information 154 projected upon the near-field image plane 162A and the augmented reality graphics 148 projected upon the far-field image plane 162B. The cluster content information 154 informs the driver of the vehicle of driving conditions such as, but not limited to, vehicle speed, speed limit, gear position, fuel level, current position, and navigational instructions. In the example as shown in FIG. 10B, the cluster content information 154 includes vehicle speed.

Referring again to FIG. 10A, the secondary graphics processing unit 132 is in electronic communication with the secondary graphic projection device 128, where the secondary graphics processing unit 132 translates image-based instructions from the controllers 129 into a graphical representation of the second set of images 146 generated by the secondary graphic projection device 128. The second set of images 146 include one or more primitive graphic objects 164. In the example as shown in FIG. 10A, the primitive graphic object 164 is a point position or pixel. Other examples of primitive graphic objects 164 include straight lines. As explained below, objects in the surrounding environment first appear in a periphery field of a driver, which is in the secondary area 142 of the window 127, as the primitive graphic object 164. An object of interest located in the secondary area 142 of the window 127 is first brought to the attention of the driver by highlighting the object with the second set of images 146 (i.e., the primitive graphic object 164). Once the object of interest travels from the secondary area 142 of the window 127 into the primary area 140 of the window 127, the object of interest is highlighted by the first set of images 144 (i.e., the augmented reality graphic 148) instead.
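The handoff described above, from a primitive graphic in the peripheral secondary area to a full augmented reality graphic in the primary area, can be sketched as a simple selection based on where the object of interest falls on the window. The boundary coordinates and names below are illustrative assumptions, not values from the disclosure.

```python
def choose_graphic(object_x, primary_left, primary_right):
    """Select the highlight type for an object of interest based on which
    area of the window it currently occupies: a primitive graphic object
    (point or line) in the secondary, peripheral area, or a full augmented
    reality graphic once it enters the primary area's limited field-of-view.
    Coordinates are normalized window positions; boundaries are illustrative.
    """
    if primary_left <= object_x <= primary_right:
        return "augmented_reality_graphic"
    return "primitive_graphic"
```

As the object of interest travels across the window between frames, re-evaluating this selection produces the transition from the secondary-area highlight to the primary-area highlight.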

The secondary graphic projection device 128 includes an excitation light source configured to generate the second set of images upon the window 127. Specifically, the light emitting particles 136 dispersed within the transparent substrate 138 on the window 127 emit visible light in response to absorbing the excitation light emitted by the secondary graphic projection device 128. In embodiments, the excitation light is either a violet light in the visible spectrum (ranging from about 380 to 450 nanometers) or ultraviolet light that induces fluorescence in the light emitting particles 136. It is to be appreciated that since the light emitting particles 136 are dispersed throughout the transparent substrate 138 on the window 127, there is no directionality in the fluorescence irradiated by the light emitting particles 136. Therefore, no matter where a passenger is located within the vehicle compartment 26, the fluorescence is always visible. In other words, no eye-box exists, and therefore the disclosed augmented reality display 125 may be used as a primary instrument. The excitation light source may be, for example, a laser or LEDs. In embodiments, the secondary graphic projection device 128 is a pico-projector having a relatively small package size and weight. A throw distance D is measured from the window 127 to a projection lens 158 of the secondary graphic projection device 128. The throw distance D is dimensioned so that the secondary area 142 of the window 127 spans from opposing side edges 150 of the window 127 and between a top edge 166 of the window 127 to a bottom edge 168 of the window 127.
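The excitation-light ranges described above can be captured in a small classification helper. This is a sketch using only the approximate figures given in the description (violet light at about 380 to 450 nanometers, or ultraviolet light below that); the function name and return values are hypothetical.

```python
def classify_excitation(wavelength_nm):
    """Classify an excitation wavelength per the ranges in the description:
    ultraviolet light (below about 380 nm) or violet light in the visible
    spectrum (about 380-450 nm) induces fluorescence in the light emitting
    particles; longer wavelengths do not. Thresholds are approximate.
    """
    if wavelength_nm < 380:
        return "ultraviolet"
    if wavelength_nm <= 450:
        return "violet"
    return None  # does not induce fluorescence in the particles
```

A 405 nm laser diode, for example, would fall in the violet band, while a 532 nm green source would not excite the particles.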

Further details of the augmented reality display are included in U.S. patent application Ser. No. 17/749,464 to Seder et al., filed on May 20, 2022 and which is hereby incorporated by reference into the present application.

Referring to FIG. 11A, a method 200 of using a system 10 for generating a centrally located floating three-dimensional image display for a plurality of passengers 14 positioned within a vehicle compartment 26 includes, beginning at block 202, displaying, with a first display 18 of an image chamber 16 in communication with a system controller 19, a first map image 12A from a navigation system, moving to block 204, receiving, with a first reflector 20 individually associated with a first passenger 14A, the first map image 12A from the first display 18, and, moving to block 206, reflecting, with the first reflector 20, the first map image 12A to the first passenger 14A, wherein the first passenger 14A perceives the first map image 12A floating at a central location within the image chamber 16.

Moving to block 208, the method 200 further includes displaying, with a second display 22 of the image chamber 16 in communication with the system controller 19, a second map image 12B from the navigation system, moving to block 210, receiving, with a second reflector 24 individually associated with a second passenger 14B, the second map image 12B from the second display 22, and, moving to block 212, reflecting, with the second reflector 24, the second map image 12B to the second passenger 14B, wherein the second passenger 14B perceives the second map image 12B floating at the central location within the image chamber 16.

Moving to block 214, the method 200 includes displaying, with a transparent display 46 in communication with the system controller 19 and positioned between eyes of the first passenger 14A and the first reflector 20 and between the eyes of the second passenger 14B and the second reflector 24, first private information to the first passenger 14A within an image plane 48 positioned in front of the first map image 12A floating at the central location within the image chamber 16 and second private information to the second passenger 14B within an image plane 50 positioned in front of the second map image 12B floating at the central location within the image chamber 16.

Moving to block 216, the method 200 includes receiving, with the system controller 19, input from the first passenger 14A and the second passenger 14B. Moving to block 218, the method 200 includes collecting, with an external scene camera 124, images of an external environment outside the vehicle compartment 26, and, moving to block 220, displaying, with an augmented reality display 125 in communication with the system controller 19 and positioned within the vehicle compartment 26 remotely from the image chamber 16, one of the first map image and the second map image.

In an exemplary embodiment, the receiving, with the system controller 19, input from the first passenger 14A and the second passenger 14B, at block 216, further includes, moving to block 221, receiving input, with the system controller 19, via wireless communication with a mobile device 170, and, moving to block 222, receiving, with the system controller 19, via the transparent display 46, input from the first passenger 14A and the second passenger 14B, moving to block 224, receiving, with the system controller 19, via at least one first sensor 74, input comprising a position of a head and eyes of the first passenger 14A, moving to block 226, receiving, with the system controller 19, via at least one first gesture sensor 110, information related to gestures made by the first passenger 14A, moving to block 228, collecting, with the system controller 19, via a first microphone 116, audio input from the first passenger 14A, and, moving to block 230, collecting, with the system controller 19, via a second microphone 118, audio input from the second passenger 14B.

In an exemplary embodiment, the receiving, with the system controller 19, input from the first passenger 14A and the second passenger 14B, at block 216, further includes, at block 236, receiving, from the first passenger 14A, input to adjust a perspective, zoom level, position and angle of view of the first map image 12A, and, moving to block 238, modifying, with the system controller 19, the perspective, zoom level, position and angle of view of the first map image 12A based on the input from the first passenger 14A, and, at block 240, receiving, from the second passenger 14B, input to adjust a perspective, zoom level, position and angle of view of the second map image 12B and, moving to block 242, modifying, with the system controller 19, the perspective, zoom level, position and angle of view of the second map image 12B based on the input from the second passenger 14B.
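The per-passenger view modification recited above, adjusting the perspective, zoom level, position, and angle of view of each map image independently, can be sketched with an immutable view state. The class, field names, and defaults below are illustrative assumptions.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class MapView:
    """Per-passenger view parameters recited above; defaults are illustrative."""
    zoom: float = 1.0
    angle_deg: float = 0.0
    center: tuple = (0.0, 0.0)

def apply_input(view, **changes):
    """Return a new view with the requested adjustments applied. Because each
    passenger holds an independent view object, modifying one passenger's
    perspective leaves the other passenger's map image unchanged."""
    return replace(view, **changes)
```

Selective sharing, as at blocks 244 and 246, would then amount to copying one passenger's current `MapView` to another passenger's display.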

In an exemplary embodiment, the method 200 further includes, moving to block 244, selectively sharing the first map image 12A, as modified based on input from the first passenger 14A at block 238, with the second passenger 14B, and, moving to block 246, selectively sharing the second map image 12B, as modified based on input from the second passenger 14B at block 242, with the first passenger 14A.

Moving to block 232, the method 200 further includes broadcasting, with the system controller 19, via a first zonal speaker 120, audio output for the first passenger 14A, and, moving to block 234, broadcasting, with the system controller 19, via a second zonal speaker 122, audio output for the second passenger 14B.

Referring to FIG. 11B, in an exemplary embodiment, the displaying, with the first display 18 of the image chamber 16 in communication with the system controller 19, the first map image 12A from the navigation system 45 at block 202 further includes, at block 248, highlighting, within the first map image 12A, points of interest for the first passenger 14A, and highlighting, within the first map image 12A, at least one route between a current location and a point of interest selected by the first passenger 14A, and, the displaying, with the second display 22 of the image chamber 16 in communication with the system controller 19, the second map image 12B from the navigation system 45 at block 208 further includes, at block 250, highlighting, within the second map image 12B, points of interest for the second passenger 14B, and highlighting, within the second map image 12B, at least one route between a current location and a point of interest selected by the second passenger 14B.

In an exemplary embodiment, the highlighting, within the first map image 12A, points of interest for the first passenger 14A at block 248 further includes, moving to block 252, collecting, with the system controller 19, from remote data sources, information related to interests of the first passenger 14A, moving to block 254, receiving, with the system controller 19, input from the first passenger 14A related to points of interest that the first passenger 14A is interested in, and moving to block 256, identifying at least one point of interest that the first passenger 14A may be interested in based on the information collected from remote data sources and input from the first passenger 14A. Further, the highlighting, within the second map image 12B, points of interest for the second passenger 14B at block 250 further includes, at block 258, collecting, with the system controller 19, from remote data sources, information related to interests of the second passenger 14B, moving to block 260, receiving, with the system controller 19, input from the second passenger 14B related to points of interest that the second passenger 14B is interested in, and moving to block 262, identifying at least one point of interest that the second passenger 14B may be interested in based on the information collected from the remote data sources and input from the second passenger 14B.

In an exemplary embodiment, referring again to FIG. 11B, the method 200 further includes, moving to block 264, selectively sharing the highlighted points of interest and the highlighted at least one route between a current location and a point of interest selected by the first passenger 14A, from the first map image 12A, with the second passenger 14B, and moving to block 266, selectively sharing the highlighted points of interest and the highlighted at least one route between a current location and a point of interest selected by the second passenger 14B, from the second map image 12B, with the first passenger 14A.

In still another exemplary embodiment, the receiving, with the system controller 19, input from the first passenger 14A and the second passenger 14B at block 216, further includes, moving to block 268, receiving, from the first passenger 14A, input selecting at least one of a point of interest within the first map image 12A and a highlighted route within the first map image 12A, and, moving to block 270, receiving, from the second passenger 14B, input selecting at least one of a point of interest within the second map image 12B and a highlighted route within the second map image 12B. Moving from block 268 to block 272, the method 200 further includes selectively sharing the at least one of a point of interest within the first map image 12A and a highlighted route within the first map image 12A, selected by the first passenger 14A, with the second passenger 14B, and moving from block 270 to block 274, selectively sharing the at least one of a point of interest within the second map image 12B and a highlighted route within the second map image 12B, selected by the second passenger 14B, with the first passenger 14A.

In another exemplary embodiment, the receiving, with the system controller 19, input from the first passenger 14A and the second passenger 14B at block 216, further includes, moving from block 274 to block 276, receiving, from the first passenger 14A, input for one of voting, ranking, approving and suggesting alternatives related to a selected at least one of a point of interest within the second map image 12B and a highlighted route within the second map image 12B that has been shared, by the second passenger 14B, with the first passenger 14A, and, moving to block 278, sharing the input from the first passenger 14A with the second passenger 14B. Further, moving from block 272 to block 280, the method 200 includes, receiving, from the second passenger 14B, input for one of voting, ranking, approving and suggesting alternatives related to a selected at least one of a point of interest within the first map image 12A and a highlighted route within the first map image 12A that has been shared, by the first passenger 14A, with the second passenger 14B, and, moving to block 282, sharing the input from the second passenger 14B with the first passenger 14A.

In another exemplary embodiment, moving from blocks 278 and 282 to block 284, the method 200 further includes highlighting, with the system controller 19, within the first map image 12A and the second map image 12B, at least one drop off location 178A, 178B at a selected point of interest, and moving to block 286, receiving, from each of the first and second passengers 14A, 14B, input indicating which of the at least one drop-off location the first and second passengers 14A, 14B want to be dropped off at.
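The drop-off selection recited above, where each passenger indicates which highlighted drop-off location they want, can be sketched as a simple tally. This is an illustrative aggregation only; the disclosure does not specify how ties or conflicting choices are resolved, and all names are hypothetical.

```python
from collections import Counter

def tally_drop_off_votes(votes):
    """Tally each passenger's drop-off choice and return the location(s)
    with the most votes; a tie returns all tied locations, sorted.

    `votes` maps a passenger identifier to a drop-off location identifier.
    """
    counts = Counter(votes.values())
    top = max(counts.values())
    return sorted(loc for loc, n in counts.items() if n == top)
```

In the earlier example, passengers 14A and 14B choosing drop-off point 178A and passenger 14C choosing 178B would yield 178A as the majority choice, with the route still visiting both points.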

In another exemplary embodiment, the first map image 12A is a view of a map image from the navigation system 45 from a first perspective for the first passenger 14A, and the second map image 12B is a view of the map image from the navigation system 45 from a second perspective for the second passenger 14B, and, the displaying, with the first display 18 of the image chamber 16 in communication with the system controller 19, the first map image 12A from the navigation system 45 at block 202 further includes, including, within the first map image 12A, an icon 180B representing the second passenger 14B to illustrate, for the first passenger 14A, the second passenger's perspective.

In another exemplary embodiment, the method 200 further includes, moving from blocks 218, 224, 226 and 228 to block 288, receiving, via the first microphone 116, a command from the first passenger 14A that triggers the system 10 to “look” for an object or location which is being observed by the first passenger 14A, and moving to block 290, receiving, via the at least one first sensor 74 and the at least one first gesture sensor 110, data related to gestures made by the first passenger 14A and the direction of a gaze of the first passenger 14A. Moving to block 292, the method 200 includes identifying, based on the data related to gestures made by the first passenger 14A, the direction of the gaze of the first passenger 14A and images of the external environment outside the vehicle compartment 26, a location outside of the vehicle compartment 26 that the first passenger 14A is looking at, and moving to block 294, highlighting the location within the first map image 12A and the second map image 12B.

For example, the first passenger 14A may say a pre-determined phrase, such as “Hey General”, which triggers the system 10. Upon triggering of the system 10, the receiving, with the system controller 19, via at least one first sensor 74, input comprising a position of a head and eyes of the first passenger 14A at block 224 and the receiving, with the system controller 19, via at least one first gesture sensor 110, information related to gestures made by the first passenger 14A at block 226, further includes identifying, based on the data related to gestures made by the first passenger 14A, the direction of the gaze of the first passenger 14A and images of the external environment outside the vehicle compartment 26, collected at block 218, a location outside of the vehicle compartment 26 that the first passenger 14A is looking at.

Gaze and gesture tracking allow the system 10 to generate a vector which defines an array of GPS coordinates. The point where this vector intersects the image from the external scene camera 124 identifies the observed location and allows calculation of a distance to the observed location. Merging the captured image of the external environment with the GPS coordinate vector allows the system to pinpoint a GPS location of the observed location.
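The final step described above, offsetting the vehicle's GPS fix along the gaze bearing by the estimated distance to pinpoint the observed location, can be sketched as follows. This is a small-distance flat-earth approximation for illustration only; a production system would use proper geodesic calculations, and the function name and constants are assumptions.

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius, meters

def observed_gps(vehicle_lat, vehicle_lon, bearing_deg, distance_m):
    """Offset the vehicle's GPS position along the gaze-derived bearing by
    the distance to the observed location (estimated from the external
    scene camera image), returning the observed location's (lat, lon).
    Uses a flat-earth approximation valid for short distances.
    """
    d_lat = (distance_m * math.cos(math.radians(bearing_deg))) / EARTH_RADIUS_M
    d_lon = (distance_m * math.sin(math.radians(bearing_deg))) / (
        EARTH_RADIUS_M * math.cos(math.radians(vehicle_lat))
    )
    return (vehicle_lat + math.degrees(d_lat), vehicle_lon + math.degrees(d_lon))
```

The bearing would come from the gesture and gaze vector, and the distance from intersecting that vector with the external scene camera image, as described above.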

In another exemplary embodiment, the broadcasting, with the system controller 19, via a first zonal speaker 120, audio output for the first passenger 14A at block 232, and the broadcasting, with the system controller 19, via a second zonal speaker 122, audio output for the second passenger 14B at block 234, further includes broadcasting an audible alert that the vehicle is nearing the destination as the vehicle approaches the destination. For example, the system controller 19 may provide an audible alert, such as “Your friend is on your left in 200 yards”.

A system 10 of the present disclosure offers several advantages. These include providing a floating image that is perceived by the passengers at a centrally located position within the vehicle compartment. This provides a campfire-like viewing atmosphere where the passengers can all view a common floating image, or each passenger can view a unique floating image. Further, a system 10 in accordance with the present disclosure provides the ability to allow collaborative navigation by some or all of the passengers within the vehicle compartment. The system 10 also allows a passenger to interact with the virtual image via the touch screen passenger interface (transparent display 46) and uses parallax compensation to ensure the system 10 correctly correlates passenger input via the passenger interface to annotations and information displayed along with the virtual image.

The description of the present disclosure is merely exemplary in nature and variations that do not depart from the gist of the present disclosure are intended to be within the scope of the present disclosure. Such variations are not to be regarded as a departure from the spirit and scope of the present disclosure.

Claims

1. A method of using a system for generating a centrally located floating three-dimensional image display for a plurality of passengers positioned within a vehicle compartment, comprising:

displaying, with a first display of an image chamber in communication with a system controller, a first map image from a navigation system;
receiving, with a first reflector individually associated with a first passenger, the first map image from the first display;
reflecting, with the first reflector, the first map image to the first passenger, wherein the first passenger perceives the first map image floating at a central location within the image chamber;
displaying, with a second display of the image chamber in communication with the system controller, a second map image from the navigation system;
receiving, with a second reflector individually associated with a second passenger, the second map image from the second display;
reflecting, with the second reflector, the second map image to the second passenger, wherein the second passenger perceives the second map image floating at the central location within the image chamber;
displaying, with a transparent display in communication with the system controller and positioned between eyes of the first passenger and the first reflector and between the eyes of the second passenger and the second reflector, first private information to the first passenger within an image plane positioned in front of the first map image floating at the central location within the image chamber and second private information to the second passenger within an image plane positioned in front of the second map image floating at the central location within the image chamber;
receiving, with the system controller, input from the first passenger and the second passenger;
collecting, with an external scene camera, images of an external environment outside the vehicle compartment; and
displaying, with an augmented reality display in communication with the system controller, having a transparent substrate with light emitting particles dispersed therein, and positioned on a window within the vehicle compartment remotely from the image chamber, one of the first map image and the second map image, including: generating, with a primary graphic projection device in communication with the system controller, a first set of images upon the window within the vehicle compartment based on visible light, wherein the first set of images are displayed upon a primary area of the window; and generating, with a secondary graphic projection device in communication with the system controller, a second set of images upon a secondary area of the window based on an excitation light, wherein the light emitting particles in the window emit visible light in response to absorbing the excitation light, and wherein the first set of images displayed upon the primary area of the window cooperate with the second set of images displayed upon the secondary area of the window to create an edge-to-edge augmented reality image.

2. The method of claim 1, wherein, the receiving, with the system controller, input from the first passenger and the second passenger, further includes:

receiving input, with the system controller, via wireless communication with a mobile device;
receiving, with the system controller, via the transparent display, input from the first passenger and the second passenger;
receiving, with the system controller, via at least one first sensor, input comprising a position of a head and eyes of the first passenger;
receiving, with the system controller, via at least one first gesture sensor, information related to gestures made by the first passenger;
collecting, with the system controller, via a first microphone, audio input from the first passenger;
collecting, with the system controller, via a second microphone, audio input from the second passenger; and
the method further including:
broadcasting, with the system controller, via a first zonal speaker, audio output for the first passenger; and
broadcasting, with the system controller, via a second zonal speaker, audio output for the second passenger.

3. The method of claim 2, wherein the receiving, with the system controller, input from the first passenger and the second passenger, further includes:

receiving, from the first passenger, input to adjust a perspective, zoom level, position and angle of view of the first map image, and modifying, with the system controller, the perspective, zoom level, position and angle of view of the first map image based on the input from the first passenger; and
receiving, from the second passenger, input to adjust a perspective, zoom level, position and angle of view of the second map image and modifying, with the system controller, the perspective, zoom level, position and angle of view of the second map image based on the input from the second passenger.

4. The method of claim 3, further including:

selectively sharing the first map image, as modified based on input from the first passenger, with the second passenger; and
selectively sharing the second map image, as modified based on input from the second passenger, with the first passenger.

5. The method of claim 2, wherein the displaying, with the first display of the image chamber in communication with the system controller, the first map image from the navigation system further includes highlighting, within the first map image, points of interest for the first passenger, and highlighting, within the first map image, at least one route between a current location and a point of interest selected by the first passenger, and the displaying, with the second display of the image chamber in communication with the system controller, the second map image from the navigation system further includes highlighting, within the second map image, points of interest for the second passenger, and highlighting, within the second map image, at least one route between a current location and a point of interest selected by the second passenger.

6. The method of claim 5, wherein the highlighting, within the first map image, points of interest for the first passenger further includes:

collecting, with the system controller, from remote data sources, information related to interests of the first passenger;
receiving, with the system controller, input from the first passenger related to points of interest that the first passenger is interested in;
identifying at least one point of interest that the first passenger may be interested in based on the information collected from remote data sources and input from the first passenger; and
the highlighting, within the second map image, points of interest for the second passenger further includes:
collecting, with the system controller, from remote data sources, information related to interests of the second passenger;
receiving, with the system controller, input from the second passenger related to points of interest that the second passenger is interested in; and
identifying at least one point of interest that the second passenger may be interested in based on the information collected from the remote data sources and input from the second passenger.

7. The method of claim 6, further including:

selectively sharing the highlighted points of interest and the highlighted at least one route between a current location and a point of interest selected by the first passenger, from the first map image, with the second passenger; and
selectively sharing the highlighted points of interest and the highlighted at least one route between a current location and a point of interest selected by the second passenger, from the second map image, with the first passenger.

8. The method of claim 6, wherein the receiving, with the system controller, input from the first passenger and the second passenger, further includes:

receiving, from the first passenger, input selecting at least one of a point of interest within the first map image and a highlighted route within the first map image, and receiving, from the second passenger, input selecting at least one of a point of interest within the second map image and a highlighted route within the second map image;
selectively sharing the at least one of a point of interest within the first map image and a highlighted route within the first map image, selected by the first passenger, with the second passenger; and
selectively sharing the at least one of a point of interest within the second map image and a highlighted route within the second map image, selected by the second passenger, with the first passenger.

9. The method of claim 8, wherein the receiving, with the system controller, input from the first passenger and the second passenger, further includes:

receiving, from the first passenger, input for one of voting, ranking, approving and suggesting alternatives related to a selected at least one of a point of interest within the second map image and a highlighted route within the second map image that has been shared, by the second passenger, with the first passenger, and sharing the input from the first passenger with the second passenger; and
receiving, from the second passenger, input for one of voting, ranking, approving and suggesting alternatives related to a selected at least one of a point of interest within the first map image and a highlighted route within the first map image that has been shared, by the first passenger, with the second passenger, and sharing the input from the second passenger with the first passenger.

10. The method of claim 8, further including:

highlighting, with the system controller, within the first map image and the second map image, at least one drop-off location at a selected point of interest; and
receiving, from each of the first and second passengers, input indicating which of the at least one drop-off location the first and second passengers want to be dropped off at.

11. The method of claim 2, wherein the first map image is a view of a map image from the navigation system from a first perspective for the first passenger, and the second map image is a view of the map image from the navigation system from a second perspective for the second passenger.

12. The method of claim 11, wherein the displaying, with the first display of the image chamber in communication with the system controller, the first map image from the navigation system further includes including, within the first map image, an icon representing the second passenger to illustrate the second passenger's perspective.

13. The method of claim 2, further including, with the system controller:

receiving, via the first microphone, a command from the first passenger;
receiving, via the at least one first sensor and the at least one first gesture sensor, data related to gestures made by the first passenger and the direction of a gaze of the first passenger;
identifying, based on the data related to gestures made by the first passenger, the direction of the gaze of the first passenger and images of the external environment outside the vehicle compartment, a location outside of the vehicle compartment that the first passenger is looking at; and
highlighting the location within the first map image and the second map image.

14. A system for generating a centrally located floating three-dimensional image display for a plurality of passengers positioned within a vehicle compartment within a vehicle, comprising:

a system controller in communication with a navigation system;
an image chamber including: a first display adapted to project a first map image from the navigation system; a first reflector individually associated with the first display and a first one of the plurality of passengers, the first reflector adapted to receive the first map image from the first display and to reflect the first map image to the first passenger, wherein the first passenger perceives the first map image floating at a central location within the image chamber; a second display adapted to project a second map image from the navigation system; and a second reflector individually associated with the second display and a second one of the plurality of passengers, the second reflector adapted to receive the second map image from the second display and to reflect the second map image to the second passenger, wherein the second passenger perceives the second map image floating at the central location within the image chamber;
a transparent touch screen display positioned between the first reflector and the first passenger and between the second reflector and the second passenger and adapted to display first private information to the first passenger within an image plane positioned in front of the first map image floating at the central location within the image chamber and to receive input from the first passenger, and adapted to display second private information to the second passenger within an image plane positioned in front of the second map image floating at the central location within the image chamber and to receive input from the second passenger;
an external scene camera adapted to collect images of an external environment outside the vehicle compartment; and
an augmented reality display positioned within the vehicle compartment remotely from the image chamber and adapted to display one of the first map image and the second map image, the augmented reality display including: a transparent substrate, having light emitting particles dispersed therein, positioned on a window within the vehicle compartment; a primary graphic projection device for generating a first set of images upon the window of the vehicle based on visible light, wherein the first set of images are displayed upon a primary area of the window; a secondary graphic projection device for generating a second set of images upon a secondary area of the window of the vehicle based on an excitation light, wherein the light emitting particles in the window emit visible light in response to absorbing the excitation light, and wherein the first set of images displayed upon the primary area of the window cooperate with the second set of images displayed upon the secondary area of the window to create an edge-to-edge augmented reality view of a surrounding environment of the vehicle; a primary graphics processing unit in electronic communication with the primary graphic projection device and the system controller; and a secondary graphics processing unit in electronic communication with the secondary graphic projection device and the system controller.

15. The system of claim 14, wherein the system is selectively moveable vertically up and down along a vertical central axis, the first display and the first reflector are unitarily and selectively rotatable about the vertical central axis, and the second display and the second reflector are unitarily and selectively rotatable about the vertical central axis;

the system further including: first sensors adapted to monitor a position of a head and eyes of the first passenger, wherein the first display and first reflector are adapted to rotate in response to movement of the head and eyes of the first passenger, and second sensors adapted to monitor a position of a head and eyes of the second passenger, wherein the second display and the second reflector are adapted to rotate in response to movement of the head and eyes of the second passenger, the system adapted to move up and down along the vertical central axis in response to movement of the head and eyes of the first passenger and movement of the head and eyes of the second passenger; and a first gesture sensor adapted to gather information related to gestures made by the first passenger, and a second gesture sensor adapted to gather information related to gestures made by the second passenger, wherein the system is adapted to receive input from the first and second passengers via data collected by the first and second gesture sensors.

16. A method of using a system for generating a centrally located floating three-dimensional image display for a plurality of passengers positioned within a vehicle compartment, comprising:

displaying, with a first display of an image chamber in communication with a system controller, a first map image from a navigation system;
receiving, with a first reflector individually associated with a first passenger, the first map image from the first display;
reflecting, with the first reflector, the first map image to the first passenger, wherein the first passenger perceives the first map image floating at a central location within the image chamber;
displaying, with a second display of the image chamber in communication with the system controller, a second map image from the navigation system;
receiving, with a second reflector individually associated with a second passenger, the second map image from the second display;
reflecting, with the second reflector, the second map image to the second passenger, wherein the second passenger perceives the second map image floating at the central location within the image chamber;
displaying, with a transparent display in communication with the system controller and positioned between eyes of the first passenger and the first reflector and between the eyes of the second passenger and the second reflector, first private information to the first passenger within an image plane positioned in front of the first map image floating at the central location within the image chamber and second private information to the second passenger within an image plane positioned in front of the second map image floating at the central location within the image chamber;
receiving, with the system controller, input from the first passenger and the second passenger, including: receiving input, with the system controller, via wireless communication with a mobile device; receiving, with the system controller, via the transparent display, input from the first passenger and the second passenger; receiving, with the system controller, via at least one first sensor, input comprising a position of a head and eyes of the first passenger; receiving, with the system controller, via at least one first gesture sensor, information related to gestures made by the first passenger; collecting, with the system controller, via a first microphone, audio input from the first passenger; and collecting, with the system controller, via a second microphone, audio input from the second passenger; and
the method further including: receiving, from the first passenger and the second passenger, input to adjust a perspective, zoom level, position and angle of view of the first map image and the second map image; modifying, with the system controller, the perspective, zoom level, position and angle of view of the first map image based on the input from the first passenger and the second map image based on the input from the second passenger; selectively sharing the first map image, as modified based on input from the first passenger, with the second passenger and selectively sharing the second map image, as modified based on input from the second passenger, with the first passenger; broadcasting, with the system controller, via a first zonal speaker, audio output for the first passenger; broadcasting, with the system controller, via a second zonal speaker, audio output for the second passenger; and collecting, with an external scene camera, images of an external environment outside the vehicle compartment;
wherein, displaying the first map image and the second map image includes highlighting, within the first map image, points of interest for the first passenger and at least one route between a current location and a point of interest selected by the first passenger, and highlighting, within the second map image, points of interest for the second passenger and at least one route between a current location and a point of interest selected by the second passenger.

17. The method of claim 4, wherein the first map image is a view of a map image from the navigation system from a first perspective for the first passenger, and the second map image is a view of the map image from the navigation system from a second perspective for the second passenger.

18. The method of claim 17, wherein the displaying, with the first display of the image chamber in communication with the system controller, the first map image from the navigation system further includes including, within the first map image, an icon representing the second passenger to illustrate the second passenger's perspective.

19. The system of claim 14, wherein the first map image is a view of a map image from the navigation system from a first perspective for the first passenger, and the second map image is a view of the map image from the navigation system from a second perspective for the second passenger.

20. The system of claim 19, wherein the first map image from the navigation system includes an icon representing the second passenger to illustrate the second passenger's perspective.

Referenced Cited
U.S. Patent Documents
10318043 June 11, 2019 Seder et al.
11077844 August 3, 2021 Szczerba
20180147985 May 31, 2018 Brown
20190243151 August 8, 2019 Hansen
20210023948 January 28, 2021 Knittl
20230039608 February 9, 2023 Ji
20230375829 November 23, 2023 Seder
Other references
  • U.S. Appl. No. 17/749,464 to Seder et al., filed on May 20, 2022.
  • U.S. Appl. No. 17/746,243 to Chang et al., filed on May 17, 2022.
  • U.S. Appl. No. 17/824,210 to Sharma et al., filed on May 25, 2022.
  • U.S. Appl. No. 17/842,253 to Chang et al., filed on Jun. 16, 2022.
  • U.S. Appl. No. 17/842,272 to Seder et al., filed on Jun. 16, 2022.
  • U.S. Appl. No. 17/888,767 to Sharma et al., filed on Aug. 16, 2022.
Patent History
Patent number: 11977243
Type: Grant
Filed: Jan 12, 2023
Date of Patent: May 7, 2024
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC (Detroit, MI)
Inventors: Joseph F. Szczerba (Grand Blanc, MI), John P. Weiss (Shelby Township, MI), Thomas A. Seder (Fraser, MI)
Primary Examiner: Kenneth B Lee, Jr.
Application Number: 18/153,790
Classifications
International Classification: G02B 30/31 (20200101); B60K 35/00 (20060101); G02B 27/01 (20060101); G06F 3/01 (20060101); G06F 3/04815 (20220101); G06F 3/04845 (20220101); G06F 3/16 (20060101); G06T 19/20 (20110101); G09G 3/00 (20060101); B60K 35/23 (20240101); B60K 35/28 (20240101); B60K 35/60 (20240101); G02B 7/182 (20210101); G02B 27/14 (20060101).