Navigation system employing augmented labeling and/or indicia

- BROADCOM CORPORATION

Navigation system employing augmented labeling and/or indicia. Various embodiments provide very accurate and timely information to a user thereby minimizing the amount of time or effort that a user will need to interpret or translate any information and/or directions provided thereby. In one instance, a vehicular based navigational system, which can operate with one or more projectors, provides information to a user such that the user need not turn away from the actual field of vision (e.g., 3D, 2D, etc.). Labeling and/or indicia is/are projected within the actual field of vision of the navigation system user. In another instance, labeling and/or indicia are overlaid or included within photographic and/or video information corresponding to an environment depicted by the navigation system. Such a navigation system may be implemented within a variety of contexts including within a vehicle, within a handheld/portable device, within a headset/eyeglasses device, etc.

Description
CROSS REFERENCE TO RELATED PATENTS/PATENT APPLICATIONS

Provisional Priority Claims

The present U.S. Utility patent application claims priority pursuant to 35 U.S.C. §119(e) to the following U.S. Provisional patent application which is hereby incorporated herein by reference in its entirety and made part of the present U.S. Utility patent application for all purposes:

  • 1. U.S. Provisional Patent Application Ser. No. 61/491,838, entitled “Media communications and signaling within wireless communication systems,” (Attorney Docket No. BP22744), filed May 31, 2011, pending.

INCORPORATION BY REFERENCE

The following standards/draft standards are hereby incorporated herein by reference in their entirety and are made part of the present U.S. Utility patent application for all purposes:

  • 1. WD1: Working Draft 1 of High-Efficiency Video Coding, Joint Collaborative Team on Video Coding (JCT-VC), of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, Thomas Wiegand, et al., 3rd Meeting: Guangzhou, CN, 7-15 Oct., 2010, Document: JCTVC-C403, 137 pages.
  • 2. ISO/IEC 14496-10—MPEG-4 Part 10, AVC (Advanced Video Coding), alternatively referred to as H.264/MPEG-4 Part 10 or AVC (Advanced Video Coding), ITU H.264/MPEG4-AVC, or equivalent.

INCORPORATION BY REFERENCE

The following IEEE standards/draft IEEE standards are hereby incorporated herein by reference in their entirety and are made part of the present U.S. Utility patent application for all purposes:

  • 1. IEEE Std 802.11™—2007, “IEEE Standard for Information technology—Telecommunications and information exchange between systems—Local and metropolitan area networks—Specific requirements; Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications,” IEEE Computer Society, IEEE Std 802.11™—2007, (Revision of IEEE Std 802.11-1999), 1233 pages.
  • 2. IEEE Std 802.11n™—2009, “IEEE Standard for Information technology—Telecommunications and information exchange between systems—Local and metropolitan area networks—Specific requirements; Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications; Amendment 5: Enhancements for Higher Throughput,” IEEE Computer Society, IEEE Std 802.11n™—2009, (Amendment to IEEE Std 802.11™—2007 as amended by IEEE Std 802.11k™—2008, IEEE Std 802.11r™—2008, IEEE Std 802.11y™—2008, and IEEE Std 802.11r™—2009), 536 pages.
  • 3. IEEE P802.11ac™/D1.0, May 2011, “Draft STANDARD for Information Technology—Telecommunications and information exchange between systems—Local and metropolitan area networks—Specific requirements, Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications, Amendment 5: Enhancements for Very High Throughput for Operation in Bands below 6 GHz,” Prepared by the 802.11 Working Group of the 802 Committee, 263 total pages (pp. i-xxi, 1-242).

BACKGROUND OF THE INVENTION

1. Technical Field of the Invention

The invention relates generally to navigation systems; and, more particularly, it relates to providing effective, timely, and detailed information to a user of such navigation systems.

2. Description of Related Art

For many years, navigation systems have been of interest for a variety of applications. For example, many vehicles include some form of navigational system to provide directions to a driver of a vehicle, thereby assisting the driver in reaching an endpoint destination of interest. However, typical prior art vehicular and other navigation systems often fail to provide adequate directions to a user. For example, such navigation systems often fail to provide directions with sufficient specificity in the moments before some type of route adjustment must be made. For example, while driving a vehicle, an indication that a driver should make a particular turn onto a particular road is often given so shortly before the route adjustment must be made that the driver does not have time to make the proper adjustment. Such an instruction may be provided by the navigation system when the vehicle is already too close to the intersection at which the driver should turn. That is to say, the instruction may be provided too late, and the driver may not be able to position the vehicle into the proper position or lane to make the appropriate route adjustment safely.

In typical prior art navigation systems, because of these errors that are oftentimes made, rerouting and establishing a new route to the endpoint destination are commonplace. Of course, this can lead to great frustration for the driver or passengers of the vehicle, and there may be associated lost time as well, since the driver must then take an alternate route to the endpoint destination. Moreover, in the typical situation in which a prior art navigation system provides such information in a non-timely manner, accidents or other unfortunate events are more likely to happen as the driver attempts to comply with very late instructions.

Also, prior art navigation systems are typically implemented to provide such directions to a user in accordance with some form of animated depiction of the physical environment. That is to say, a typical prior art navigation system will oftentimes include cartoon-like animation that is representative of the physical environment. As the reader will understand, such animation is oftentimes, and perhaps inherently, an inaccurate and imperfect representation of the physical environment. This may result in a delay or problem for the user of the navigation system in trying to translate the animated depiction of the physical environment to the actual physical environment. The latency so introduced can result, most importantly, in a sacrifice of safety, and secondarily in frustration and delays for the user. For example, when a driver of a vehicle spends an inordinate amount of time trying to decipher and translate the directions provided by the navigation system, the driver may fail to notice hazards, people, etc. within the actual physical environment and may unfortunately place him or herself or others in a detrimental position.

There appears to be an insatiable desire for better and more effective navigation systems for use by drivers, pedestrians, etc. The present state of the art does not provide an adequate solution that can overcome these and other deficiencies. As such, there seems to be a continual desire for more effective navigation systems and for improvements thereof.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

FIG. 1 and FIG. 2 are diagrams illustrating various embodiments of communication systems.

FIG. 3 is a diagram illustrating an alternative embodiment of a wireless communication system.

FIG. 4 is a diagram illustrating an embodiment of a wireless communication device.

FIG. 5 is a diagram illustrating an alternative embodiment of a wireless communication device.

FIG. 6 is a diagram illustrating an embodiment of a navigation system with a display or projection system in a vehicular context.

FIG. 7 is a diagram illustrating an embodiment of a navigation system operative to include overlays/directions within an in-dash and/or in-steering wheel display.

FIG. 8 is a diagram illustrating an embodiment of a vehicle including at least one projector for effectuating a display or projection of information.

FIG. 9A is a diagram illustrating an embodiment of a vehicle including at least one antenna for supporting wireless communications with at least one wireless communication system and/or network.

FIG. 9B is a diagram illustrating an embodiment of a vehicle including at least one camera for supporting acquisition of image and/or video information.

FIG. 10 is a diagram illustrating an embodiment of a navigation system operative to include overlays/directions within a wireless communication device.

FIG. 11A is a diagram illustrating an embodiment of navigation system operative to include overlays/directions within a headset and/or eyeglasses context.

FIG. 11B is a diagram illustrating an embodiment of navigation system operative to include overlays/directions within a headset and/or eyeglasses context in conjunction with a wireless communication device.

FIG. 12 is a diagram illustrating an embodiment of various types of navigation systems, implemented to support wireless communications, entering/exiting various wireless communication systems and/or networks.

FIG. 13 is a diagram illustrating an embodiment of at least two projectors operative in accordance with a lenticular surface for effectuating three dimensional (3-D) display.

FIG. 14 is a diagram illustrating an embodiment of at least two projectors operative in accordance with a non-uniform surface for effectuating 3-D display.

FIG. 15 is a diagram illustrating an embodiment of an organic light emitting diode (OLED) as may be implemented in accordance with various applications for effectuating 3-D display.

FIG. 16A, FIG. 16B, FIG. 16C, FIG. 17A, FIG. 17B, FIG. 17C, FIG. 18A, FIG. 18B, FIG. 19A, and FIG. 19B illustrate various embodiments of methods as may be performed in accordance with operation of various devices such as various wireless communication devices and/or various navigation systems.

FIG. 20 is a diagram illustrating an embodiment of a vehicle including an audio system for effectuating directional audio indication.

FIG. 21 is a diagram illustrating an embodiment of a navigation system with a display or projection system in a vehicular context, and particularly including at least one localized region of the field of vision in which labeling and/or indicia is provided.

DETAILED DESCRIPTION OF THE INVENTION

Within communication systems, signals are transmitted between various communication devices therein. The goal of digital communications systems is to transmit digital data from one location, or subsystem, to another either error free or with an acceptably low error rate. As shown in FIG. 1, data may be transmitted over a variety of communications channels in a wide variety of communication systems: magnetic media, wired, wireless, fiber, copper, and other types of media as well.

FIG. 1 and FIG. 2 are diagrams illustrating various embodiments of communication systems, 100, and 200, respectively.

Referring to FIG. 1, this embodiment of a communication system 100 is a communication channel 199 that communicatively couples a communication device 110 (including a transmitter 112 having an encoder 114 and including a receiver 116 having a decoder 118) situated at one end of the communication channel 199 to another communication device 120 (including a transmitter 126 having an encoder 128 and including a receiver 122 having a decoder 124) at the other end of the communication channel 199. In some embodiments, either of the communication devices 110 and 120 may only include a transmitter or a receiver. There are several different types of media by which the communication channel 199 may be implemented (e.g., a satellite communication channel 130 using satellite dishes 132 and 134, a wireless communication channel 140 using towers 142 and 144 and/or local antennae 152 and 154, a wired communication channel 150, and/or a fiber-optic communication channel 160 using electrical to optical (E/O) interface 162 and optical to electrical (O/E) interface 164). In addition, more than one type of media may be implemented and interfaced together thereby forming the communication channel 199.

To reduce transmission errors that may undesirably be incurred within a communication system, error correction and channel coding schemes are often employed. Generally, these error correction and channel coding schemes involve the use of an encoder at the transmitter end of the communication channel 199 and a decoder at the receiver end of the communication channel 199.

Any of various types of error correction codes (ECCs) described herein can be employed within any such desired communication system (e.g., including those variations described with respect to FIG. 1), any information storage device (e.g., hard disk drives (HDDs), network information storage devices and/or servers, etc.), or any application in which information encoding and/or decoding is desired.
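
For illustration only, the encoder/decoder relationship described above can be pictured with a trivial rate-1/3 repetition code. The sketch below is minimal and its function names are hypothetical; real systems would use far stronger codes (e.g., convolutional, turbo, or LDPC codes), none of which is reproduced here:

```python
# Illustrative sketch only: a trivial repetition code standing in for the
# encoder (at the transmitter) and decoder (at the receiver) pair.

def encode(bits):
    """Repeat each information bit three times (encoder)."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(coded):
    """Majority-vote each 3-bit group (decoder)."""
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]

tx = [1, 0, 1, 1]
rx = encode(tx)
rx[1] ^= 1                 # flip one channel bit to emulate a transmission error
assert decode(rx) == tx    # the single error is corrected
```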

Generally speaking, when considering a communication system in which video data is communicated from one location, or subsystem, to another, video data encoding may generally be viewed as being performed at a transmitting end of the communication channel 199, and video data decoding may generally be viewed as being performed at a receiving end of the communication channel 199.

Also, while the embodiment of this diagram shows bi-directional communication being capable between the communication devices 110 and 120, it is of course noted that, in some embodiments, the communication device 110 may include only video data encoding capability, and the communication device 120 may include only video data decoding capability, or vice versa (e.g., in a uni-directional communication embodiment such as in accordance with a video broadcast embodiment).

Referring to the communication system 200 of FIG. 2, at a transmitting end of a communication channel 299, information bits 201 (e.g., corresponding particularly to video data in one embodiment) are provided to a transmitter 297 that is operable to perform encoding of these information bits 201 using an encoder and symbol mapper 220 (which may be viewed as being distinct functional blocks 222 and 224, respectively) thereby generating a sequence of discrete-valued modulation symbols 203 that is provided to a transmit driver 230 that uses a DAC (Digital to Analog Converter) 232 to generate a continuous-time transmit signal 204 and a transmit filter 234 to generate a filtered, continuous-time transmit signal 205 that substantially comports with the communication channel 299. At a receiving end of the communication channel 299, continuous-time receive signal 206 is provided to an AFE (Analog Front End) 260 that includes a receive filter 262 (that generates a filtered, continuous-time receive signal 207) and an ADC (Analog to Digital Converter) 264 (that generates discrete-time receive signals 208). A metric generator 270 calculates metrics 209 (e.g., on either a symbol and/or bit basis) that are employed by a decoder 280 to make best estimates of the discrete-valued modulation symbols and information bits encoded therein 210.
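
A minimal sketch of this FIG. 2 signal flow, assuming BPSK symbol mapping and a simple additive-noise channel (neither of which is mandated by the diagram), might look as follows; the reference numerals in the comments tie the hypothetical functions back to the blocks named above:

```python
import random

def symbol_map(bits):                 # encoder and symbol mapper 220 (simplified)
    return [1.0 if b else -1.0 for b in bits]

def channel(symbols, sigma=0.3):      # communication channel 299, modeled discretely
    return [s + random.gauss(0.0, sigma) for s in symbols]

def metrics(samples):                 # metric generator 270: distance to each symbol
    return [{0: (r + 1.0) ** 2, 1: (r - 1.0) ** 2} for r in samples]

def decide(ms):                       # decoder 280: pick the minimum-metric bit
    return [min(m, key=m.get) for m in ms]

bits = [1, 0, 0, 1, 1]
print(decide(metrics(channel(symbol_map(bits)))))  # usually recovers [1, 0, 0, 1, 1]
```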

Within each of the transmitter 297 and the receiver 298, any desired integration of various components, blocks, functional blocks, circuitries, etc. therein may be implemented. For example, this diagram shows a processing module 280a as including the encoder and symbol mapper 220 and all associated, corresponding components therein, and a processing module 280b as including the metric generator 270 and the decoder 280 and all associated, corresponding components therein. Such processing modules 280a and 280b may be respective integrated circuits. Of course, other boundaries and groupings may alternatively be performed without departing from the scope and spirit of the invention. For example, all components within the transmitter 297 may be included within a first processing module or integrated circuit, and all components within the receiver 298 may be included within a second processing module or integrated circuit. Alternatively, any other combination of components within each of the transmitter 297 and the receiver 298 may be made in other embodiments.

As with the previous embodiment, such a communication system 200 may be employed for the communication of video data from one location, or subsystem, to another (e.g., from the transmitter 297 to the receiver 298 via the communication channel 299).

FIG. 3 is a diagram illustrating an embodiment of a wireless communication system 300. The wireless communication system 300 includes a plurality of base stations and/or access points 312, 316, a plurality of wireless communication devices 318-332 and a network hardware component 334. Note that the network hardware 334, which may be a router, switch, bridge, modem, system controller, etc., provides a wide area network connection 342 for the communication system 300. Further note that the wireless communication devices 318-332 may be laptop host computers 318 and 326, personal digital assistant hosts 320 and 330, personal computer hosts 324 and 332 and/or cellular telephone hosts 322 and 328.

Wireless communication devices 322, 323, and 324 are located within an independent basic service set (IBSS) area and communicate directly (i.e., point to point). In this configuration, these devices 322, 323, and 324 may only communicate with each other. To communicate with other wireless communication devices within the system 300 or to communicate outside of the system 300, the devices 322, 323, and/or 324 need to affiliate with one of the base stations or access points 312 or 316.

The base stations or access points 312, 316 are located within basic service set (BSS) areas 311 and 313, respectively, and are operably coupled to the network hardware 334 via local area network connections 336, 338. Such a connection provides the base station or access point 312-316 with connectivity to other devices within the system 300 and provides connectivity to other networks via the WAN connection 342. To communicate with the wireless communication devices within its BSS 311 or 313, each of the base stations or access points 312-316 has an associated antenna or antenna array. For instance, base station or access point 312 wirelessly communicates with wireless communication devices 318 and 320 while base station or access point 316 wirelessly communicates with wireless communication devices 326-332. Typically, the wireless communication devices register with a particular base station or access point 312, 316 to receive services from the communication system 300.

Typically, base stations are used for cellular telephone systems (e.g., advanced mobile phone services (AMPS), digital AMPS, global system for mobile communications (GSM), code division multiple access (CDMA), local multi-point distribution systems (LMDS), multi-channel-multi-point distribution systems (MMDS), Enhanced Data rates for GSM Evolution (EDGE), General Packet Radio Service (GPRS), high-speed downlink packet access (HSDPA), high-speed uplink packet access (HSUPA), and/or variations thereof) and like-type systems, while access points are used for in-home or in-building wireless networks (e.g., IEEE 802.11, Bluetooth, ZigBee, any other type of radio frequency based network protocol, and/or variations thereof). Regardless of the particular type of communication system, each wireless communication device includes a built-in radio and/or is coupled to a radio.

FIG. 4 is a diagram illustrating an embodiment 400 of a wireless communication device that includes the host device 318-332 and an associated radio 460. For cellular telephone hosts, the radio 460 is a built-in component. For personal digital assistant hosts, laptop hosts, and/or personal computer hosts, the radio 460 may be built-in or an externally coupled component.

As illustrated, the host device 318-332 includes a processing module 450, memory 452, a radio interface 454, an input interface 458, and an output interface 456. The processing module 450 and memory 452 execute the corresponding instructions that are typically done by the host device. For example, for a cellular telephone host device, the processing module 450 performs the corresponding communication functions in accordance with a particular cellular telephone standard.

The radio interface 454 allows data to be received from and sent to the radio 460. For data received from the radio 460 (e.g., inbound data), the radio interface 454 provides the data to the processing module 450 for further processing and/or routing to the output interface 456. The output interface 456 provides connectivity to an output display device such as a display, monitor, speakers, etc., such that the received data may be displayed. The radio interface 454 also provides data from the processing module 450 to the radio 460. The processing module 450 may receive the outbound data from an input device such as a keyboard, keypad, microphone, etc., via the input interface 458 or generate the data itself. For data received via the input interface 458, the processing module 450 may perform a corresponding host function on the data and/or route it to the radio 460 via the radio interface 454.

Radio 460 includes a host interface 462, digital receiver processing module 464, an analog-to-digital converter 466, a high pass and low pass filter module 468, an IF mixing down conversion stage 470, a receiver filter 471, a low noise amplifier 472, a transmitter/receiver switch 473, a local oscillation module 474 (which may be implemented, at least in part, using a voltage controlled oscillator (VCO)), memory 475, a digital transmitter processing module 476, a digital-to-analog converter 478, a filtering/gain module 480, an IF mixing up conversion stage 482, a power amplifier 484, a transmitter filter module 485, a channel bandwidth adjust module 487, and an antenna 486. The antenna 486 may be a single antenna that is shared by the transmit and receive paths as regulated by the Tx/Rx switch 473, or may include separate antennas for the transmit path and receive path. The antenna implementation will depend on the particular standard to which the wireless communication device is compliant.

The digital receiver processing module 464 and the digital transmitter processing module 476, in combination with operational instructions stored in memory 475, execute digital receiver functions and digital transmitter functions, respectively. The digital receiver functions include, but are not limited to, digital intermediate frequency to baseband conversion, demodulation, constellation demapping, decoding, and/or descrambling. The digital transmitter functions include, but are not limited to, scrambling, encoding, constellation mapping, modulation, and/or digital baseband to IF conversion. The digital receiver and transmitter processing modules 464 and 476 may be implemented using a shared processing device, individual processing devices, or a plurality of processing devices. Such a processing device may be a microprocessor, micro-controller, digital signal processor, microcomputer, central processing unit, field programmable gate array, programmable logic device, state machine, logic circuitry, analog circuitry, digital circuitry, and/or any device that manipulates signals (analog and/or digital) based on operational instructions. The memory 475 may be a single memory device or a plurality of memory devices. Such a memory device may be a read-only memory, random access memory, volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, and/or any device that stores digital information. Note that when the processing module 464 and/or 476 implements one or more of its functions via a state machine, analog circuitry, digital circuitry, and/or logic circuitry, the memory storing the corresponding operational instructions is embedded with the circuitry comprising the state machine, analog circuitry, digital circuitry, and/or logic circuitry.
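
As one hedged example of a listed transmitter function, the sketch below implements scrambling with the 7-bit frame-synchronous generator x^7 + x^4 + 1 used by IEEE 802.11 OFDM PHYs; the framing around it is omitted, and the seed shown is only an example value:

```python
# Minimal LFSR scrambler sketch. Because the keystream depends only on the
# register state (not the data), descrambling is the same operation with
# the same seed.

def scramble(bits, seed=0b1011101):
    state = seed
    out = []
    for b in bits:
        fb = ((state >> 6) ^ (state >> 3)) & 1   # taps at x^7 and x^4
        out.append(b ^ fb)
        state = ((state << 1) | fb) & 0x7F       # shift in the feedback bit
    return out

data = [1, 0, 1, 1, 0, 0, 1, 0]
assert scramble(scramble(data)) == data          # round-trips back to the input
```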

In operation, the radio 460 receives outbound data 494 from the host device via the host interface 462. The host interface 462 routes the outbound data 494 to the digital transmitter processing module 476, which processes the outbound data 494 in accordance with a particular wireless communication standard (e.g., IEEE 802.11, Bluetooth, ZigBee, WiMAX (Worldwide Interoperability for Microwave Access), any other type of radio frequency based network protocol and/or variations thereof etc.) to produce outbound baseband signals 496. The outbound baseband signals 496 will be digital base-band signals (e.g., have a zero IF) or digital low IF signals, where the low IF typically will be in the frequency range of one hundred kHz (kilo-Hertz) to a few MHz (Mega-Hertz).

The digital-to-analog converter 478 converts the outbound baseband signals 496 from the digital domain to the analog domain. The filtering/gain module 480 filters and/or adjusts the gain of the analog signals prior to providing it to the IF mixing stage 482. The IF mixing stage 482 converts the analog baseband or low IF signals into RF signals based on a transmitter local oscillation 483 provided by local oscillation module 474. The power amplifier 484 amplifies the RF signals to produce outbound RF signals 498, which are filtered by the transmitter filter module 485. The antenna 486 transmits the outbound RF signals 498 to a targeted device such as a base station, an access point and/or another wireless communication device.
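
The up-conversion step can be sketched numerically as follows, using made-up round-number frequencies chosen so everything is representable at a modest sample rate; this is an illustrative model of mixing with the transmitter local oscillation 483, not a description of the actual circuitry:

```python
import numpy as np

fs = 1e6                       # sample rate in Hz (assumed for the sketch)
f_lo = 250e3                   # "RF" carrier, kept low so it fits within fs/2
t = np.arange(0, 1e-3, 1 / fs)

baseband = np.exp(2j * np.pi * 10e3 * t)                # 10 kHz complex baseband tone
rf = np.real(baseband * np.exp(2j * np.pi * f_lo * t))  # mixed up to 260 kHz

# The power amplifier 484 would then scale `rf` before it reaches the antenna.
print(np.round(rf[:4], 3))
```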

The radio 460 also receives inbound RF signals 488 via the antenna 486, which were transmitted by a base station, an access point, or another wireless communication device. The antenna 486 provides the inbound RF signals 488 to the receiver filter module 471 via the Tx/Rx switch 473, where the Rx filter 471 bandpass filters the inbound RF signals 488. The Rx filter 471 provides the filtered RF signals to low noise amplifier 472, which amplifies the signals 488 to produce an amplified inbound RF signals. The low noise amplifier 472 provides the amplified inbound RF signals to the IF mixing module 470, which directly converts the amplified inbound RF signals into an inbound low IF signals or baseband signals based on a receiver local oscillation 481 provided by local oscillation module 474. The down conversion module 470 provides the inbound low IF signals or baseband signals to the filtering/gain module 468. The high pass and low pass filter module 468 filters, based on settings provided by the channel bandwidth adjust module 487, the inbound low IF signals or the inbound baseband signals to produce filtered inbound signals.

The analog-to-digital converter 466 converts the filtered inbound signals from the analog domain to the digital domain to produce inbound baseband signals 490, where the inbound baseband signals 490 will be digital base-band signals or digital low IF signals, where the low IF typically will be in the frequency range of one hundred kHz to a few MHz. The digital receiver processing module 464, based on settings provided by the channel bandwidth adjust module 487, decodes, descrambles, demaps, and/or demodulates the inbound baseband signals 490 to recapture inbound data 492 in accordance with the particular wireless communication standard being implemented by radio 460. The host interface 462 provides the recaptured inbound data 492 to the host device 318-332 via the radio interface 454.
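
A companion sketch of the receive path, under the same assumed frequencies as the transmit sketch above, mixes the signal back down with the receiver local oscillation and applies a crude moving-average low-pass filter standing in for the high pass and low pass filter module 468:

```python
import numpy as np

fs, f_lo = 1e6, 250e3
t = np.arange(0, 1e-3, 1 / fs)
rf = np.cos(2 * np.pi * (f_lo + 10e3) * t)        # received tone at LO + 10 kHz

mixed = rf * np.exp(-2j * np.pi * f_lo * t)       # down conversion module 470
kernel = np.ones(32) / 32                         # crude low-pass filter stand-in
baseband = np.convolve(mixed, kernel, mode="same")

# `baseband` now rotates at roughly 10 kHz, ready for the ADC/decoding stages.
print(np.round(baseband[100:103], 3))
```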

As one of average skill in the art will appreciate, the wireless communication device of the embodiment 400 of FIG. 4 may be implemented using one or more integrated circuits. For example, the host device may be implemented on one integrated circuit, the digital receiver processing module 464, the digital transmitter processing module 476 and memory 475 may be implemented on a second integrated circuit, and the remaining components of the radio 460, less the antenna 486, may be implemented on a third integrated circuit. As an alternate example, the radio 460 may be implemented on a single integrated circuit. As yet another example, the processing module 450 of the host device and the digital receiver and transmitter processing modules 464 and 476 may be a common processing device implemented on a single integrated circuit. Further, the memory 452 and memory 475 may be implemented on a single integrated circuit and/or on the same integrated circuit as the common processing modules of processing module 450 and the digital receiver and transmitter processing module 464 and 476.

Any of the various embodiments of communication device that may be implemented within various communication systems can incorporate functionality to perform communication via more than one standard, protocol, or other predetermined means of communication. For example, a single communication device, designed in accordance with certain aspects of the invention, can include functionality to perform communication in accordance with a first protocol, a second protocol, and/or a third protocol, and so on. These various protocols may be WiMAX (Worldwide Interoperability for Microwave Access) protocol, a protocol that complies with a wireless local area network (WLAN/WiFi) (e.g., one of the IEEE (Institute of Electrical and Electronics Engineer) 802.11 protocols such as 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, etc.), a Bluetooth protocol, or any other predetermined means by which wireless communication may be effectuated.

FIG. 5 is a diagram illustrating an alternative embodiment of a wireless communication device that includes the host device 318-332 and at least one associated radio 560. For cellular telephone hosts, the radio 560 is a built-in component. For personal digital assistant hosts, laptop hosts, and/or personal computer hosts, the radio 560 may be built-in or an externally coupled component. For access points or base stations, the components are typically housed in a single structure.

As illustrated, the host device 318-332 includes a processing module 550, memory 552, radio interface 554, input interface 558 and output interface 556. The processing module 550 and memory 552 execute the corresponding instructions that are typically done by the host device. For example, for a cellular telephone host device, the processing module 550 performs the corresponding communication functions in accordance with a particular cellular telephone standard.

The radio interface 554 allows data to be received from and sent to the radio 560. For data received from the radio 560 (e.g., inbound data), the radio interface 554 provides the data to the processing module 550 for further processing and/or routing to the output interface 556. The output interface 556 provides connectivity to an output display device such as a display, monitor, speakers, et cetera such that the received data may be displayed. The radio interface 554 also provides data from the processing module 550 to the radio 560. The processing module 550 may receive the outbound data from an input device such as a keyboard, keypad, microphone, et cetera via the input interface 558 or generate the data itself. For data received via the input interface 558, the processing module 550 may perform a corresponding host function on the data and/or route it to the radio 560 via the radio interface 554.

Radio 560 includes a host interface 562, a baseband processing module 564, memory 566, a plurality of radio frequency (RF) transmitters 568-572, a transmit/receive (T/R) module 574, a plurality of antennae 582-586, a plurality of RF receivers 576-580, and a local oscillation module 5100 (which may be implemented, at least in part, using a VCO). The baseband processing module 564, in combination with operational instructions stored in memory 566, executes digital receiver functions and digital transmitter functions. The digital receiver functions include, but are not limited to, digital intermediate frequency to baseband conversion, demodulation, constellation demapping, decoding, de-interleaving, fast Fourier transform, cyclic prefix removal, space and time decoding, and/or descrambling. The digital transmitter functions include, but are not limited to, scrambling, encoding, interleaving, constellation mapping, modulation, inverse fast Fourier transform, cyclic prefix addition, space and time encoding, and/or digital baseband to IF conversion. The baseband processing module 564 may be implemented using one or more processing devices. Such a processing device may be a microprocessor, micro-controller, digital signal processor, microcomputer, central processing unit, field programmable gate array, programmable logic device, state machine, logic circuitry, analog circuitry, digital circuitry, and/or any device that manipulates signals (analog and/or digital) based on operational instructions. The memory 566 may be a single memory device or a plurality of memory devices. Such a memory device may be a read-only memory, random access memory, volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, and/or any device that stores digital information. Note that when the processing module 564 implements one or more of its functions via a state machine, analog circuitry, digital circuitry, and/or logic circuitry, the memory storing the corresponding operational instructions is embedded with the circuitry comprising the state machine, analog circuitry, digital circuitry, and/or logic circuitry.

In operation, the radio 560 receives outbound data 588 from the host device via the host interface 562. The baseband processing module 564 receives the outbound data 588 and, based on a mode selection signal 5102, produces one or more outbound symbol streams 590. The mode selection signal 5102 will indicate a particular mode as illustrated in the mode selection tables, which appear at the end of the detailed discussion. Such operation as described herein is exemplary with respect to at least one possible embodiment, and it is of course noted that the various aspects and principles, and their equivalents, of the invention may be extended to other embodiments without departing from the scope and spirit of the invention.

For example, the mode selection signal 5102, with reference to table 1, may indicate a frequency band of 2.4 GHz or 5 GHz, a channel bandwidth of 20 or 22 MHz (e.g., channels of 20 or 22 MHz width) and a maximum bit rate of 54 megabits-per-second. In other embodiments, the channel bandwidth may extend up to 1.28 GHz or wider with supported maximum bit rates extending to 1 gigabit-per-second or greater. In this general category, the mode selection signal will further indicate a particular rate ranging from 1 megabit-per-second to 54 megabits-per-second. In addition, the mode selection signal will indicate a particular type of modulation, which includes, but is not limited to, Barker Code Modulation, BPSK, QPSK, CCK, 16 QAM and/or 64 QAM. As is further illustrated in table 1, a code rate is supplied as well as number of coded bits per subcarrier (NBPSC), coded bits per OFDM symbol (NCBPS), and data bits per OFDM symbol (NDBPS).

The mode selection signal may also indicate a particular channelization for the corresponding mode which for the information in table 1 is illustrated in table 2. As shown, table 2 includes a channel number and corresponding center frequency. The mode select signal may further indicate a power spectral density mask value which for table 1 is illustrated in table 3. The mode select signal may alternatively indicate rates within table 4 that has a 5 GHz frequency band, 20 MHz channel bandwidth and a maximum bit rate of 54 megabits-per-second. If this is the particular mode select, the channelization is illustrated in table 5. As a further alternative, the mode select signal 5102 may indicate a 2.4 GHz frequency band, 20 MHz channels and a maximum bit rate of 192 megabits-per-second as illustrated in table 6. In table 6, a number of antennae may be utilized to achieve the higher bit rates. In this instance, the mode select would further indicate the number of antennae to be utilized. Table 7 illustrates the channelization for the set-up of table 6. Table 8 illustrates yet another mode option where the frequency band is 2.4 GHz, the channel bandwidth is 20 MHz and the maximum bit rate is 192 megabits-per-second. The corresponding table 8 includes various bit rates ranging from 12 megabits-per-second to 216 megabits-per-second utilizing 2-4 antennae and a spatial time encoding rate as indicated. Table 9 illustrates the channelization for table 8. The mode select signal 5102 may further indicate a particular operating mode as illustrated in table 10, which corresponds to a 5 GHz frequency band having 40 MHz channels and a maximum bit rate of 486 megabits-per-second. As shown in table 10, the bit rate may range from 13.5 megabits-per-second to 486 megabits-per-second utilizing 1-4 antennae and a corresponding spatial time code rate. Table 10 further illustrates a particular modulation scheme code rate and NBPSC values. Table 11 provides the power spectral density mask for table 10 and table 12 provides the channelization for table 10.
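
To make the table quantities concrete, the following worked example computes NCBPS, NDBPS, and the resulting bit rate for the familiar legacy 54 megabits-per-second mode (64 QAM over 48 data subcarriers, rate-3/4 coding, 4 microsecond OFDM symbols). These particular constants come from IEEE 802.11a/g and are used purely for illustration, since the tables themselves are not reproduced here:

```python
n_subcarriers = 48            # data subcarriers per OFDM symbol
nbpsc = 6                     # coded bits per subcarrier (64 QAM)
code_rate = 3 / 4
symbol_time = 4e-6            # seconds per OFDM symbol

ncbps = n_subcarriers * nbpsc            # 288 coded bits per OFDM symbol
ndbps = int(ncbps * code_rate)           # 216 data bits per OFDM symbol
bit_rate = ndbps / symbol_time           # 54,000,000 bits per second

print(ncbps, ndbps, bit_rate / 1e6)      # 288 216 54.0
```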

It is of course noted that other types of channels, having different bandwidths, may be employed in other embodiments without departing from the scope and spirit of the invention. For example, various other channels such as those having 80 MHz, 120 MHz, and/or 160 MHz of bandwidth may alternatively be employed such as in accordance with IEEE Task Group ac (TGac VHTL6).

The baseband processing module 564, based on the mode selection signal 5102, produces the one or more outbound symbol streams 590 from the outbound data 588. For example, if the mode selection signal 5102 indicates that a single transmit antenna is being utilized for the particular mode that has been selected, the baseband processing module 564 will produce a single outbound symbol stream 590. Alternatively, if the mode select signal indicates 2, 3 or 4 antennae, the baseband processing module 564 will produce 2, 3 or 4 outbound symbol streams 590 corresponding to the number of antennae from the outbound data 588.

Depending on the number of outbound streams 590 produced by the baseband module 564, a corresponding number of the RF transmitters 568-572 will be enabled to convert the outbound symbol streams 590 into outbound RF signals 592. The transmit/receive module 574 receives the outbound RF signals 592 and provides each outbound RF signal to a corresponding antenna 582-586.

When the radio 560 is in the receive mode, the transmit/receive module 574 receives one or more inbound RF signals via the antennae 582-586. The T/R module 574 provides the inbound RF signals 594 to one or more RF receivers 576-580. The RF receivers 576-580 convert the inbound RF signals 594 into a corresponding number of inbound symbol streams 596. The number of inbound symbol streams 596 will correspond to the particular mode in which the data was received (recall that the mode may be any one of the modes illustrated in tables 1-12). The baseband processing module 564 receives the inbound symbol streams 596 and converts them into inbound data 598, which is provided to the host device 318-332 via the host interface 562.

In one embodiment, the radio 560 includes a transmitter and a receiver. The transmitter may include a MAC module, a PLCP module, and a PMD module. The Medium Access Control (MAC) module, which may be implemented with the processing module 564, is operably coupled to convert a MAC Service Data Unit (MSDU) into a MAC Protocol Data Unit (MPDU) in accordance with a WLAN protocol. The Physical Layer Convergence Procedure (PLCP) Module, which may be implemented in the processing module 564, is operably coupled to convert the MPDU into a PLCP Protocol Data Unit (PPDU) in accordance with the WLAN protocol. The Physical Medium Dependent (PMD) module is operably coupled to convert the PPDU into a plurality of radio frequency (RF) signals in accordance with one of a plurality of operating modes of the WLAN protocol, wherein the plurality of operating modes includes multiple input and multiple output combinations.
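
A minimal sketch of this MAC/PLCP/PMD layering is shown below; the header bytes are invented placeholders, as real WLAN framing carries many more fields (addresses, FCS, rate and length signaling, etc.):

```python
def msdu_to_mpdu(payload: bytes) -> bytes:
    mac_header = b"\x08\x00" + b"\x00" * 22      # stand-in 24-byte MAC header
    return mac_header + payload                  # MAC module: MSDU -> MPDU

def mpdu_to_ppdu(mpdu: bytes) -> bytes:
    plcp_preamble_and_header = b"\xAA" * 5       # stand-in PLCP preamble/header
    return plcp_preamble_and_header + mpdu       # PLCP module: MPDU -> PPDU

ppdu = mpdu_to_ppdu(msdu_to_mpdu(b"hello"))
print(len(ppdu))   # 34 bytes in this toy example

# The PMD module would then convert `ppdu` into one or more RF signals,
# splitting it across antennas in the multiple-input multiple-output modes.
```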

An embodiment of the Physical Medium Dependent (PMD) module includes an error protection module, a demultiplexing module, and a plurality of direct conversion modules. The error protection module, which may be implemented in the processing module 564, is operably coupled to restructure a PPDU (PLCP (Physical Layer Convergence Procedure) Protocol Data Unit) to reduce transmission errors, producing error protected data. The demultiplexing module is operably coupled to divide the error protected data into a plurality of error protected data streams. The plurality of direct conversion modules is operably coupled to convert the plurality of error protected data streams into a plurality of radio frequency (RF) signals.

It is also noted that the wireless communication device of this diagram, as well as others described herein, may be implemented using one or more integrated circuits. For example, the host device may be implemented on one integrated circuit, the baseband processing module 564 and memory 566 may be implemented on a second integrated circuit, and the remaining components of the radio 560, less the antennae 582-586, may be implemented on a third integrated circuit. As an alternate example, the radio 560 may be implemented on a single integrated circuit. As yet another example, the processing module 550 of the host device and the baseband processing module 564 may be a common processing device implemented on a single integrated circuit. Further, the memory 552 and memory 566 may be implemented on a single integrated circuit and/or on the same integrated circuit as the common processing modules of processing module 550 and the baseband processing module 564.

The previous diagrams and their associated written description illustrate some possible embodiments by which a wireless communication device may be constructed and implemented. In some embodiments, more than one radio (e.g., such as multiple instantiations of the radio 460, the radio 560, a combination thereof, or even another implementation of a radio) is implemented within a wireless communication device. For example, a single wireless communication device can include multiple radios therein to effectuate simultaneous transmission of two or more signals. Also, multiple radios within a wireless communication device can effectuate simultaneous reception of two or more signals, or transmission of one or more signals at the same time as reception of one or more other signals (e.g., simultaneous transmission/reception).

Within the various diagrams and embodiments described and depicted herein, wireless communication devices may generally be referred to as WDEVs, DEVs, TXs, and/or RXs. It is noted that such wireless communication devices may be wireless stations (STAs), access points (APs), or any other type of wireless communication device without departing from the scope and spirit of the invention. Generally speaking, wireless communication devices that are APs may be referred to as transmitting or transmitter wireless communication devices, and wireless communication devices that are STAs may be referred to as receiving or receiver wireless communication devices in certain contexts.

Of course, it is noted that the general nomenclature employed herein is one wherein a transmitting wireless communication device (e.g., such as being an AP, or a STA operating as an ‘AP’ with respect to other STAs) initiates communications, and/or operates as a network controller type of wireless communication device, with respect to a number of other, receiving wireless communication devices (e.g., such as being STAs), and the receiving wireless communication devices (e.g., such as being STAs) respond to and cooperate with the transmitting wireless communication device in supporting such communications.

Of course, while this general nomenclature of transmitting wireless communication device(s) and receiving wireless communication device(s) may be employed to differentiate the operations as performed by such different wireless communication devices within a communication system, all such wireless communication devices within such a communication system may of course support bi-directional communications to and from other wireless communication devices within the communication system. In other words, the various types of transmitting wireless communication device(s) and receiving wireless communication device(s) may all support bi-directional communications to and from other wireless communication devices within the communication system.

Various aspects and principles, and their equivalents, of the invention as presented herein may be adapted for use in various standards, protocols, and/or recommended practices (including those currently under development) such as those in accordance with IEEE 802.11x (e.g., where x is a, b, g, n, ac, ah, ad, af, etc.).

FIG. 6 is a diagram illustrating an embodiment 600 of a navigation system with a display or projection system in a vehicular context. With respect to navigation systems, a novel approach and architecture is presented herein by which information is provided to a driver of a vehicle within the driver's field of vision. For example, instead of providing information to a driver in a way that the driver must take his or her eyes off of the road, information related to the driving experience is provided within the driver's field of vision (e.g., so the driver need not take his or her eyes off of the road). In addition, by providing such information within the field of vision, the navigation system becomes much more effectual.

For example, by providing such information to a driver within the actual field of vision, much greater specificity and clarification may be provided to a driver in regards to directions. For example, when approaching a relatively complex interchange of roads, if specific information is provided regarding which of a number of potential roads is the correct one, the driver will be able to make better or more accurate decisions. Generally speaking, such a navigation system, operating in conjunction with one or more projectors, provides information to help ensure that the driver is able to keep his or her head up while driving.

As may be seen with respect to this embodiment, a navigation system is implemented using overlays or directions that are included within the actual field of vision. For example, in the context of a vehicle application, a driver (or user) is provided with overlays or directions within the actual field of vision rather than on a display such as may be included within the dashboard of a vehicle. Also, various feature labeling may be included with respect to elements within the field of vision. For example, a label for a building or other feature within the field of vision may be appropriately placed thereon. Street names may also be overlaid on the actual streets within the field of vision.

In addition, any of a variety of types of information related to a driving experience may similarly be displayed within the field of vision. For example, the actual vehicle's location (e.g., such as in accordance with a global positioning system (GPS) location), the vehicle's relative location (e.g., such as relative to a point of destination to which the vehicle is heading), the vehicle's speed (velocity), the vehicle's directionality (e.g., North, South, Southwest, etc.), time information (e.g., a clock), chronometer information (e.g., a timer), temperature information (e.g., external to the vehicle and/or internal of the vehicle), and/or any other additional information which may be used to enhance a driver's experience may be included within the field of vision.

Of course, a user is given wide latitude to select which particular information is to be provided within the field of vision. Using a user interface, display, and/or control module of the vehicle, a user may select any group or combination of information to be displayed within the field of vision. It is noted that such user interface, display, and/or control module may be implemented in any of a variety of ways including by using an already implemented navigation or GPS system of the vehicle.
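
As a hypothetical sketch of such user selection (the class and field names below are illustrative inventions, not anything specified by this disclosure), the navigation system might carry a configuration object describing which items to render within the field of vision:

```python
from dataclasses import dataclass

@dataclass
class OverlayConfig:
    show_speed: bool = True        # each flag maps to one selectable item
    show_heading: bool = True
    show_clock: bool = False
    show_temperature: bool = False

def build_overlay(cfg, vehicle_state):
    """Return the list of text items to project, per the user's selections."""
    items = []
    if cfg.show_speed:
        items.append(f"{vehicle_state['speed_kmh']:.0f} km/h")
    if cfg.show_heading:
        items.append(vehicle_state["heading"])
    if cfg.show_clock:
        items.append(vehicle_state["time"])
    return items

state = {"speed_kmh": 57.3, "heading": "NW", "time": "14:02"}
print(build_overlay(OverlayConfig(show_clock=True), state))
```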

Also, a user may select or identify feature labels to be included within the field of vision. Of course, with respect to feature labels, those labels are only included within the field of vision when the vehicle is within proximity of those features. For example, when the vehicle is near a particular street, such that the user should be able to see that street, the feature label would then be overlaid on the street within the actual field of vision. Analogously, when the vehicle is within a sufficient proximity of a particular feature such as a building, the label for that particular building would then be displayed within the field of vision. In certain embodiments, the user may also select the proximity at which various features will cause such labeling to appear. For example, a user may select a proximity of X number of distance measure (e.g., where distance measure may be in meters, feet, etc. and/or any other measuring unit) to a particular feature as being the triggering event that will cause such labeling to appear within the field of vision.
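
A minimal sketch of this proximity trigger, assuming a haversine great-circle distance (a standard choice, though not one mandated here) and invented feature data, might look as follows:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6371000.0                                # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def visible_labels(vehicle, features, threshold_m):
    """Only labels within the user-selected proximity threshold are shown."""
    return [f["name"] for f in features
            if haversine_m(vehicle[0], vehicle[1], f["lat"], f["lon"]) <= threshold_m]

features = [{"name": "Y Avenue", "lat": 37.7750, "lon": -122.4180},
            {"name": "Stadium",  "lat": 37.7790, "lon": -122.3890}]
print(visible_labels((37.7749, -122.4194), features, threshold_m=500))  # ['Y Avenue']
```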

As the reader will understand, any other variety of different information may be displayed within the field of vision to enhance driver experience. To effectuate such labeling, overlays, identifiers, and/or directions, etc. within the actual field of vision, the windshields of the vehicle may provide the background on which such information is projected. For example, various imaging devices such as projectors may be implemented within a vehicle and can use various translucent surfaces around the driver as the backdrop(s) or background on which images are projected for visual consumption by the driver in a vehicular context. That is to say, the windshields and windows of the vehicle may themselves serve as the background on which such information may be projected. The glass or glasslike materials employed for windshields may serve as the actual background for projection of such labeling and/or indicia for consumption by the driver and/or passengers. With respect to such labeling and/or indicia as described with respect to this embodiment, as well as with respect to other embodiments and/or diagrams, it is noted that such terminology of labeling and/or indicia may be viewed as generally referring to information that is provided to enhance the user experience. For example, such labeling and/or indicia may correspond to any of a variety of formats, including arrows, markers, labels, alphanumeric characters, etc. and/or any combination thereof.

As the reader will understand with respect to other embodiments, such information may be provided in accordance with three-dimensional (3-D) projection such that the various labels etc. will appear as being extended out and into the field of vision.

Referring specifically to this diagram, overlays or directions are provided by arrows indicating the path by which the driver should proceed. In the diagram, one arrow indicates to proceed forward until reaching the intersection of X Street and Y Avenue. A second arrow indicates that the driver should turn right on Y Avenue. As may be seen, by providing such directions particularly within the field of vision, there is a very small possibility that the driver will make a wrong turn in trying to reach his or her destination. In addition, by providing such labels or features within the field of vision (e.g., labels of landmarks, buildings, stadiums, airports, parks, historical features, and/or any other features that a driver may come across, etc.), additional information is provided to help a driver navigate in an attempt to reach his or her destination. Such labeling of these features may operate as guideposts or reference points that will help the driver navigate along the appropriate roads to reach his or her destination. Generally speaking, such labeling and/or indicia may be viewed as markers that reach out into the field of vision at perspective depths within the field of vision to highlight certain pathways, features, destinations, etc. for a driver.
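
One way to picture how a marker can be anchored at a perspective depth is a standard pinhole projection from vehicle coordinates onto normalized windshield coordinates; the geometry and the constants below are assumptions made for the sketch, not specifics of this disclosure:

```python
def project_to_windshield(x, y, z, focal=1.2, cx=0.5, cy=0.35):
    """x: meters right of the driver, y: meters up, z: meters ahead (z > 0).

    Returns normalized (u, v) coordinates on the glass, origin at top-left.
    """
    u = cx + focal * (x / z)      # horizontal position on the windshield
    v = cy - focal * (y / z)      # vertical position (y up maps to v decreasing)
    return u, v

# An arrow anchored 40 m ahead at the right-turn lane lands right of center
# and near the horizon, shrinking as z grows: the perspective-depth effect.
print(project_to_windshield(3.0, -1.0, 40.0))   # (0.59, 0.38)
```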

With respect to providing such labeling and/or indicia within the field of vision, a user also has a wide degree of latitude in selecting when such labeling and/or indicia are to be provided to the user. For example, a user may select that desired information be provided at all times. In an alternative embodiment, such labeling and/or indicia may be provided when the vehicle is within certain proximity of the destination (e.g., which may be selected by the user) or only when the vehicle is to make a particular road change, lane change, turn, etc. along the path which will lead to the destination. That is to say, when a particular action must be taken by the driver to ensure that the vehicle remains on the proper path to the endpoint destination, at that point such labeling and/or indicia would then be provided to the driver within the driver's field of vision. Of course, such labeling and/or indicia may be provided when selected by the user (e.g., when the user enables such labeling and/or indicia to be provided).

In even other embodiments, such labeling and/or indicia may be provided within the field of vision as a function of the complexity of a particular segment or portion of the path that leads to the endpoint destination. For example, while driving along a country road in a predominantly rural environment, there may be very little need to provide very specific indicia related to the drive. As the reader will understand, while driving in a very rural environment in which there are very few roads, very few interchanges, etc., very little indication and information may need to be provided to the driver. However, while driving in a very congested, complex, urban, etc. environment, there may be a multiplicity of options by which a driver may take the wrong path. As the reader will understand, while driving along a road in a large city and when approaching a very complex interchange of roads in which there are many different routes, lanes, exits, etc. that may be taken, greater specificity and labeling of the various roads, and more specifically the correct road on which the driver should proceed, will help ensure that the driver takes the correct road.
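
A hedged sketch of such complexity-dependent behavior might score the upcoming segment and choose a guidance level accordingly; the weights and thresholds below are invented purely for illustration:

```python
def guidance_level(intersections_ahead, lanes, exits_ahead):
    """Map a crude segment-complexity score to a labeling density."""
    score = 2 * intersections_ahead + lanes + 3 * exits_ahead
    if score < 4:
        return "minimal"      # rural road: little or no indicia
    if score < 10:
        return "standard"     # ordinary streets: turn arrows only
    return "detailed"         # complex interchange: label every candidate road

print(guidance_level(intersections_ahead=0, lanes=2, exits_ahead=0))  # minimal
print(guidance_level(intersections_ahead=3, lanes=4, exits_ahead=2))  # detailed
```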

As may be seen with respect to this diagram, such labeling and/or indicia is provided within the actual field of vision and is provided with respect to the actual physical environment. That is to say, such labeling and/or indicia is not provided within an artificially reconstructed environment, but such labeling and/or indicia are within the actual field of vision of the driver. Generally speaking, such labeling and/or indicia is provided and tailored particularly for the frame of reference of the driver. However, such labeling and/or indicia could alternatively be provided and tailored particularly with respect to the frame of reference of any other location within the vehicle (e.g., with respect to the passenger seat next to the driver, a seat in the back of the vehicle, etc.). For example, in a situation in which the driver does not necessarily want such labeling and/or indicia to be provided to him or her, yet when a passenger in the vehicle does want such information provided to him or her, such labeling and/or indicia may be provided only to those occupants of the vehicle for whom such information is desired to be provided.

FIG. 7 is a diagram illustrating an embodiment 700 of a navigation system operative to include overlays/directions within an in-dash and/or in-steering wheel display. With respect to this diagram, instead of providing such labeling and/or indicia within the actual field of vision of a driver and/or occupant of the vehicle, such labeling and/or indicia is instead included within actual photographic image and/or video information that is provided via a display within the vehicle. For example, actual photographic image and/or video may be provided via a display within the vehicle (e.g., such as via a small screen implemented within the dashboard of the vehicle, on the steering wheel of the vehicle, etc.), and such labeling and/or indicia is included within that actual photographic image and/or video information. As the reader will understand, such labeling and/or indicia will correspond particularly to the actual physical environment as represented by the actual photographic image and/or video information.

As the reader will understand, by using actual photographic and/or video information corresponding to the actual physical environment, there is a much lower likelihood that a driver will incorrectly interpret the directions provided by such a navigation system. While this variant does not particularly project labeling and/or indicia into the driver's field of vision, the use of actual photographic and/or video information nevertheless does provide a very accurate depiction of the physical environment. There is significantly less likelihood that a driver will have difficulty in translating the directions provided by the navigation system. For example, the fact that actual photo and/or video information of the physical environment is itself employed as the medium within which labeling and/or indicia is placed (as opposed to some animated or cartoonlike representation of the physical environment) will effectively reduce the likelihood of misinterpretation of the directions provided by the navigation system.

Analogous to the previous embodiment, any particular information which may be operative to enhance the experience of the driver or any occupant of the vehicle may be provided within the actual photographic image and/or video information. As will be seen with respect to other diagrams and/or embodiments described herein, such photographic image and/or video information may be acquired in real-time. Alternatively, such photographic image and/or video information may be retrieved beforehand, such as from a database including such photographic image and/or video information, maps, etc., and such information may be stored within any desired memory device within the vehicle and/or the navigation system thereof. In some instances, such information is retrieved from one or more networks (such as one or more wireless networks) whenever the vehicle is stationary/not being driven and has access to such one or more networks. This way, very recent/up-to-date (or at least relatively recent/up-to-date) information may be employed even if not retrieved exactly in real time.
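
A minimal sketch of such opportunistic updating follows, assuming a hypothetical tile/version scheme for the stored imagery; nothing in this sketch names an actual database or interface of the disclosed system.

```python
def refresh_media_cache(local_tiles, remote_tiles, is_parked, network_up):
    """Opportunistically refresh locally stored imagery tiles while the vehicle
    is parked and a network is reachable. Tiles are {tile_id: (version, data)};
    the tile/version scheme is a hypothetical stand-in for the databases above."""
    if not (is_parked and network_up):
        return 0
    updated = 0
    for tile_id, (remote_ver, data) in remote_tiles.items():
        local_ver = local_tiles.get(tile_id, (-1, None))[0]
        if remote_ver > local_ver:            # newer imagery available upstream
            local_tiles[tile_id] = (remote_ver, data)
            updated += 1
    return updated

# Example: one stale tile is replaced and one new tile is pulled while parked.
local = {"tile_a": (1, "old.jpg")}
remote = {"tile_a": (2, "new.jpg"), "tile_b": (1, "b.jpg")}
print(refresh_media_cache(local, remote, is_parked=True, network_up=True))  # 2
```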

FIG. 8 is a diagram illustrating an embodiment 800 of a vehicle including at least one projector for effectuating a display or projection of information. This diagram illustrates how one or more projectors may be implemented within and around the vehicle for effectuating the display of labeling and/or indicia within the field of vision of the driver or a passenger of the vehicle. In some embodiments, only a single projector is employed for displaying labeling and/or indicia for one or more people within the vehicle. In an alternative embodiment, two or more projectors may be employed to effectuate three-dimensional or visual stereo labeling and/or indicia within the field of vision. This embodiment particularly shows how various projectors may be implemented to use the windshields or windows of a vehicle as the background on which various labeling and/or indicia are projected for visual consumption by the driver or passengers of the vehicle.

This diagram represents a top view of a generic vehicle, and the projectors may be implemented within the vehicle to ensure their protection from the elements, etc. Of course, various embodiments may implement the one or more projectors in entirely different positions within the vehicle. That is to say, one or more projectors may be appropriately placed to provide a better user experience for someone located within the driver position of the vehicle in one embodiment. In another embodiment, the one or more projectors may be appropriately placed to provide a better user experience for a passenger or non-driver of the vehicle.

Also, as the reader will understand with respect to other embodiments described herein, the labeling and/or indicia may be provided within any directionality or field of vision of the driver or any passenger of the vehicle. That is to say, the labeling and/or indicia need not be provided only via the front windshield of the vehicle. Labeling and/or indicia may also be provided via the rear windshield, any side windows, any sun or moon roof, etc. that may provide a field of vision for the driver or any passenger of the vehicle.

As such, such labeling and/or indicia may be employed to provide an enhanced experience for passengers of the vehicle. That is to say, in addition to merely providing directions and information for use by the driver of the vehicle to assist that driver in reaching the appropriate endpoint destination, labeling and/or indicia may be provided for consumption by the passengers of the vehicle. One embodiment may include providing information for consumption by participants of a tour who happen to be riding along in the vehicle. For example, within a relatively large vehicle such as a van or bus, there may be a very large number of windows by which various passengers of the vehicle may have respective fields of vision. Different and respective labeling and/or indicia may be provided for selective consumption by the various passengers in the vehicle. For example, a first passenger riding near the back of the vehicle may have labeling and/or indicia for a particular feature within the physical environment projected into his respective field of vision that is different than the labeling and/or indicia for that very same feature from the perspective of a second passenger riding in the vehicle. As the reader will understand, such labeling and/or indicia may be very useful in tour guide industries such that each of the respective participants of the tour can be provided with individual and selective information regarding the physical environment through which they are traveling for greater understanding and enjoyment of that physical environment.
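
For illustration, such per-occupant tailoring reduces to simple geometry: a label lines up with a real-world feature when it is placed where that occupant's sight line to the feature crosses the glass. The sketch below computes that crossing point; the coordinates and plane parameters are hypothetical.

```python
import numpy as np

def label_anchor_on_window(eye, feature, plane_point, plane_normal):
    """Where a projected label should sit on a window so that, from a given eye
    position, it lines up with a feature: the intersection of the eye->feature
    ray with the window plane (a purely geometric sketch)."""
    eye, feature = np.asarray(eye, float), np.asarray(feature, float)
    p0, n = np.asarray(plane_point, float), np.asarray(plane_normal, float)
    direction = feature - eye
    denom = n @ direction
    if abs(denom) < 1e-9:
        return None                       # sight line parallel to the window
    t = (n @ (p0 - eye)) / denom
    return eye + t * direction if 0.0 < t < 1.0 else None

# Two passengers see the same landmark anchored at different window positions.
landmark = [40.0, 5.0, 1.5]
window = ([2.0, 0.0, 0.0], [1.0, 0.0, 0.0])       # plane x = 2 (right-side glass)
print(label_anchor_on_window([0.0, 0.4, 1.2], landmark, *window))
print(label_anchor_on_window([0.0, -0.4, 1.2], landmark, *window))
```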

FIG. 9A is a diagram illustrating an embodiment 900 of a vehicle including at least one antenna for supporting wireless communications with at least one wireless communication system and/or network. As may be seen within this diagram, one or more antennas may be included within a vehicle to help effectuate wireless communications with one or more wireless communication systems and/or networks. For example, the appropriate and respective means, functionality, circuitry, etc. for supporting wireless communications with any desired wireless communication systems and/or networks may be included within the vehicle (e.g., satellite, wireless local area network (WLAN) such as WiFi, wide area network (WAN) such as WiMAX, etc.). This diagram also shows a top view of the vehicle. The one or more antennas for supporting wireless communications may be placed in any desired pattern or configuration within the vehicle.

By including such wireless communication capability within a vehicle, a navigation system of the vehicle may receive information from any wireless communication system and/or network with which the vehicle may communicate. Of course, the navigation system of the vehicle may also access other communication systems and/or networks through any intervening wireless communication system and/or network with which the navigation system may communicate. For example, the navigation system may be able to communicate wirelessly with an access point (AP) of a wireless local area network, and via that access point connectivity, the navigation system may be able to access the Internet to access or retrieve any information therefrom. For example, various databases available via the Internet would be accessible by the navigation system of the vehicle.

By being able to provide such real-time communications to the navigation system on the vehicle, real-time information may be provided to the driver via the navigation system. For example, real-time information regarding traffic flow, accidents, road closures, detours, etc. may be provided to the driver. Moreover, any locally stored database corresponding to the navigation system on the vehicle may be adapted or modified as a function of new information received via such wireless communications.

Referring to a previous embodiment in which photographic and/or video information regarding the actual physical environment is employed by the navigation system, certain other embodiments may operate by which such photographic and/or video information may be updated, corrected, enhanced, etc. by information that is received via such wireless communications. For example, by providing such wireless communication capability for the navigation system of the vehicle, such updating of any information locally stored for use by the navigation system may be performed without requiring specific interaction by the driver or user of the navigation system. In addition, selective updating of the photographic and/or video information employed by the navigation system may need only be performed when the vehicle is within a certain location. That is to say, locally stored photographic and/or video information corresponding to the physical environment of a given locale could be updated when the vehicle enters into or near that given locale. As such, the updating of any locally stored information of the navigation system could be performed as a function of the location of the vehicle. As the vehicle moves around, any photographic and/or video information pertaining to its current location could be updated in real time.

It is also noted that such updating of locally stored information of the navigation system may be triggered based on a comparison of such locally stored information to that which may be retrieved from a database with which the navigation system may communicate. For example, as the vehicle moves to a location for which the navigation system has some locally stored information, the navigation system would then automatically compare its locally stored information to other information, corresponding to that same location, which may be retrieved from some remote database (e.g., from a remote database accessible via the Internet). Of course, the implementation of such image processing comparison functionality, circuitry, etc. may not be desirable in all embodiments. For example, in an embodiment in which the provisioning of such functionality, circuitry, etc. may be undesirable (e.g., for any number of reasons, including cost, complexity, space, etc.), information that is received via one or more wireless networks may simply be provided for consumption by the user “as is” (such as without undergoing any comparison operations).
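
A content-hash comparison is one inexpensive stand-in for the comparison functionality just described; the sketch below is illustrative only and assumes byte-level imagery for a given locale.

```python
import hashlib

def needs_update(local_bytes, remote_bytes):
    """Trigger an update only when locally stored imagery differs from what the
    remote database holds for the same location; a hash comparison is one cheap
    stand-in for the image-comparison circuitry described in the text."""
    return hashlib.sha256(local_bytes).digest() != hashlib.sha256(remote_bytes).digest()

local_img = b"...stored imagery for this locale..."
remote_img = b"...freshly retrieved imagery for the same locale..."
if needs_update(local_img, remote_img):
    local_img = remote_img   # adopt the newer depiction of this locale
```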

FIG. 9B is a diagram illustrating an embodiment 901 of a vehicle including at least one camera for supporting acquisition of image and/or video information. As may be seen with respect to this diagram, one or more cameras may be included within the vehicle to perform photo and/or video capture of the physical environment around the vehicle. By using one or more cameras that are physically implemented around the vehicle, very accurate and completely up-to-date information may be employed by the navigation system. For example, as the vehicle is approaching a particular intersection in which any given route adjustment may need to be made, photographic and/or video information may be acquired in real time by the one or more cameras implemented around the vehicle. Then, this currently acquired information may be used by the navigation system such that labeling and/or indicia may be included therein.

The various operational parameters and constraints by which these one or more cameras operate may be adaptive and/or configurable. For example, via a user interface of the vehicle's navigation system, a user may select the various operational parameters and constraints by which photographic and/or video information is acquired by the one or more cameras. Alternatively, the navigation system may include adaptive capability such that the one or more cameras perform photographic and/or video capture and compare that acquired information to that which is stored within memory of the navigation system or which may be retrieved via one or more communication networks with which the navigation system may communicate. When the real-time acquired information differs from that which is either locally stored or retrieved from a remote database, the navigation system may update its respective information using that information which has been acquired in real time by the one or more cameras. It is also noted that such updating of information may be performed when a vehicle is not being driven (e.g., such as when it is parked).

As the reader will understand, a navigation system operating in conjunction with at least one camera implemented with respect to the vehicle may provide for acquisition of imagery of the physical environment through which the vehicle passes, contributing significantly to the accuracy of the photographic and/or video information that is employed to provide directions and instructions to a user of the navigation system. For example, the driver may be provided with much more accurate information, corresponding specifically to the environment in which the vehicle presently is, thereby minimizing any possibility that the driver of the vehicle will improperly translate instructions provided via the navigation system.

As described with respect to other embodiments herein, using photographic and/or video information accompanied with labeling and/or indicia, and specifically using up-to-date and fully accurate photographic and/or video information, very accurate and comprehensive directions may be provided to the driver of the vehicle via the navigation system.

Somewhat analogously as described with respect to other embodiments that include various components therein (such as the previous embodiment that includes one or more antennas), the one or more cameras that may be implemented with respect to a vehicle may be implemented in any desired location. In some embodiments, the one or more cameras will be appropriately implemented to provide for photographic and/or video information acquisition in the direction of the forward field of vision with respect to the perspective of the driver. In other embodiments, additional cameras may be implemented to provide acquisition of such information in any directional view extending from the vehicle.

For example, with respect to an embodiment including a very large vehicle such as a bus, multiple cameras may be implemented around the perimeter or periphery of the bus to allow for acquisition of such information from any perspective of any particular occupant of such a vehicle.

It is also noted that any desired capability of the one or more cameras may be selected for and implemented within a given embodiment. For example, in some instances, the one or more cameras are three-dimension (3-D) capable. In other embodiments, the cameras may have any one or combination of a particular or preferred resolution, focus or auto-focus capabilities, image acquisition size, widescreen capability, panoramic capability, etc. By implementing one or more cameras having such desired capabilities for a given application, the operational capabilities and quality of the navigation system may be specifically tailored.

With respect to such an embodiment that includes one or more cameras operative to perform photographic and/or image acquisition, various types of buffering may be performed. For example, such acquisition may be performed in a manner in which only a last or most recent period of time of photographic and/or image information is kept (e.g., depending upon the amount of memory employed within such a system for buffering of photographic and/or image information).
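
Such keep-only-the-most-recent buffering is naturally expressed as a bounded queue; the sketch below assumes a hypothetical seconds/frames-per-second sizing tied to the available memory.

```python
from collections import deque

class FrameBuffer:
    """Keep only the most recent window of acquired frames; capacity follows
    from available memory as seconds * frames-per-second (values illustrative)."""
    def __init__(self, seconds=30, fps=15):
        self.frames = deque(maxlen=seconds * fps)   # old frames fall off the front
    def push(self, frame):
        self.frames.append(frame)
    def last(self):
        return self.frames[-1] if self.frames else None

buf = FrameBuffer(seconds=2, fps=3)
for i in range(10):
    buf.push(f"frame{i}")
print(len(buf.frames), buf.last())   # 6 frame9 -- only the last 2 s retained
```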

FIG. 10 is a diagram illustrating an embodiment 1000 of a navigation system operative to include overlays/directions within a wireless communication device. The navigation system of this particular diagram is implemented within a wireless communication device (e.g., a wireless terminal 1010), such as a portable wireless communication device as may be employed by a user. Examples of such wireless communication devices or wireless terminals are many, including cell phones, personal digital assistants, laptop computers, and/or, generally speaking, any wireless communication device that may be portable.

Such a wireless communication device includes a user interface that is operative to provide information to a user and may also be operative to receive input from the user. For example, the user interface includes a display by which photographic image and/or video information may be displayed for consumption by the user. Somewhat analogous to previous embodiments, labeling and/or indicia may be included within such photographic image and/or video information that is provided via the display for consumption by the user in accordance with navigational directions and instructions. Also, somewhat analogous to certain previous embodiments, any desired additional information which may enhance the user experience may be included or overlaid with respect to the actual photographic and/or video information. Some examples of such user experience enhancing information may include the direction or bearing (such as North, South, etc.) in which the wireless communication device or user is moving or facing, the actual location of the wireless communication device or user, and arrows indicating certain routes and/or route adjustments that will get the user to the appropriate endpoint destination.
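
The overlay of such user-experience-enhancing information onto a captured frame might look like the following sketch, which uses OpenCV purely as one plausible toolkit (the disclosure names no particular library); the heading, coordinates, and arrow endpoints are hypothetical.

```python
import numpy as np
import cv2  # OpenCV: one plausible toolkit for this, not one named by the text

def annotate_frame(frame, heading, location, turn_from, turn_to):
    """Overlay illustrative navigation indicia -- heading, current location, and
    a route-adjustment arrow -- onto a frame before it is shown on the display."""
    out = frame.copy()
    cv2.putText(out, f"Heading: {heading}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 255, 255), 2)
    cv2.putText(out, f"Lat/Lon: {location[0]:.5f}, {location[1]:.5f}", (10, 60),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 255, 255), 2)
    cv2.arrowedLine(out, turn_from, turn_to, (0, 255, 0), 4)  # route adjustment
    return out

frame = np.zeros((240, 320, 3), dtype=np.uint8)               # stand-in video frame
annotated = annotate_frame(frame, "NE", (37.77490, -122.41940),
                           (160, 220), (220, 120))
```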

FIG. 11A is a diagram illustrating an embodiment 1100 of a navigation system operative to include overlays/directions within a headset and/or eyeglasses context. With respect to this diagram, an appropriately implemented headset and/or eyeglass-based system may include functionality in which labeling and/or indicia may be projected into the user's field of vision. Such labeling and/or indicia may be provided in accordance with the operation of the navigation system. An appropriately implemented headset and/or eyeglasses having such functionality to include labeling and/or indicia within a user's field of vision will include the appropriate circuitry to effectuate such labeling and/or indicia.

In one embodiment, one or more projectors are implemented within the headset and/or eyeglass-based system such that the labeling and/or indicia may be projected onto the actual transparent or translucent surfaces through which a user views the respective field of vision. When the user does view the field of vision, the labeling and/or indicia being projected appropriately onto the transparent or translucent surface of the headset and/or eyeglass-based system will effectuate such labeling and/or indicia as being projected into or overlaid onto the field of vision.

It is also noted, with respect to this diagram and embodiment 1100 as well as with respect to other diagrams and embodiments herein, that such labeling and/or indicia as may be provided for consumption by a user need not necessarily be provided within the visible spectrum. For example, by using appropriately implemented eyeglasses, headgear, etc., certain labeling and/or indicia may be provided within the non-visible spectrum. For example, such appropriately implemented eyeglasses, headgear, etc. may enable a user to perceive infrared-provided labeling and/or indicia. For example, in accordance with a night vision-based system, such infrared-provided labeling and/or indicia could be provided for consumption by a user that would not be perceptible by others (e.g., those not employing such appropriately implemented eyeglasses, headgear, etc.).

FIG. 11B is a diagram illustrating an embodiment 1101 of navigation system operative to include overlays/directions within a headset and/or eyeglasses context in conjunction with a wireless communication device. With respect to this diagram, a headset and/or eyeglass-based system may include wireless communication capability. For example, the component placed upon a user's head or in front of the user's eyes may include a wireless terminal therein. Such a wireless terminal or wireless communication device may be implemented to operate in accordance with any desired wireless communication protocol, standard, and/or recommended practice, etc. as may be desired.

In addition, such a wireless terminal as included within a headset and/or eyeglass-based system may operate cooperatively with a handheld or portable wireless terminal of the user. In such an embodiment, the headset and/or eyeglass-based system and the portable wireless terminal may generally be viewed as a combined system. For example, the handheld or portable wireless terminal may include significantly more processing capability, hardware, memory, etc. with respect to that which may be included within the component placed upon the user's head or in front of the user's eyes. By utilizing the wireless communications effectuated between the two respective components of the overall system, a significantly reduced amount of processing capability, hardware, memory, etc. may be implemented within the component placed upon the user's head or in front of the user's eyes.

FIG. 12 is a diagram illustrating an embodiment 1200 of various types of navigation systems, implemented to support wireless communications, entering/exiting various wireless communication systems and/or networks. As may be seen with respect to this embodiment, certain such devices and/or systems may include wireless communication capability. As the reader will understand, such devices and/or systems may enter and exit various wireless network service areas. In this diagram, it may be seen that a vehicle including wireless communication capability may move among various wireless network service areas. Specifically, with respect to a time 1 and a time 2, the vehicle is shown as being within two respective wireless network service areas. As the vehicle enters and exits various wireless networks, it may acquire information respectively from those wireless networks.

With respect to a pedestrian as depicted below the vehicle, such a user may also enter and exit such wireless network service areas. However, it is likely that such a user while moving on foot would move at a rate that is less than that of the vehicle. As such, the respective times at which such a user may communicate with the respective wireless network service areas may differ from that of the vehicle.

Regardless of the particular mode of transportation of a user, as a device including such a navigation system having such wireless communication capability moves among various wireless network service areas, different respective information may be retrieved respectively therefrom and provided to the user of the navigation system. For example, certain wireless network service areas may include information specific to the locale in which the wireless network operates. One embodiment may correspond to the locale of a traffic intersection or road near which the wireless network operates. The wireless network may include at least one device therein (e.g., an access point (AP)) that includes capability for monitoring traffic flow. For example, an access point may include photographic and/or video acquisition capability (e.g., one or more cameras) such that information related to traffic flow is acquired and monitored. As a navigation system enters such a wireless network service area, the navigation system could then retrieve information related to such traffic flow. Such information may be current information, historical information such as trends of traffic flow as a function of time of day (e.g., rush hour, midday, etc.), etc. Then, when the navigation system having such wireless communication capability effectuates communication with such a wireless network service area, such information may be retrieved and utilized within and in conjunction with the navigation system.

As the reader will understand, by including such capability, by which real-time retrieved information may be incorporated within a navigation system as a function of the movement of the navigation system, such more detailed information will further enhance the user's experience.

It is also noted that while such functionality and capability may be included within a navigation system having such wireless communication capability, there may be selectivity by which some real-time acquired information is provided to a user of the navigation system. For example, one embodiment may correspond to always providing such information to a user. Another embodiment may include selectivity in which such information is provided to the user only when switched on, such as via a switch, a voice command, etc.

In yet another embodiment, such acquired information may be provided to a user as a function of the proximity of the navigation system or user to the endpoint destination. For example, when the user is within a particular distance of the endpoint destination (e.g., such as determined by a threshold or proximity which may be set by the user), only then is such information provided to the user.

In yet another embodiment, such acquired information may be provided to a user as a function of the complexity of the current environment in which the user is. For example, while driving a vehicle, when approaching a particular interchange having a relatively high complexity, such capability would be automatically turned on. The settings and constraints by which such decision-making may be made could be user-defined, predetermined, etc.

In yet another embodiment, such acquired information may be provided as a function of the congestion or traffic in which the vehicle may currently be or that lies along the route on which the vehicle is proceeding. For example, one or more sensors may be included within the vehicle to detect the existence of one or more additional vehicles nearby (e.g., collision detection capability may be employed to detect proximity of one or more other vehicles). Such information acquired by sensors may be accompanied with monitoring of the velocity of the vehicle; for example, when the velocity of the vehicle is below a particular threshold and one or more other vehicles are detected within a particular proximity of the vehicle (which may indicate relatively congested traffic or a traffic jam), then such information may be provided for consumption by the user. In some embodiments, one or more cameras may be employed to acquire photographic and/or video information that may be used to determine congestion or traffic information, proximity of one or more other vehicles, etc. In addition, by employing such wireless communication capability, real-time acquisition of information from one or more wireless networks with which the navigation system may communicate may be used to provide information related to congestion or traffic in which the vehicle may currently be or that lies along the route on which the vehicle is proceeding.
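
The velocity-plus-proximity heuristic just described could be reduced to a few lines; the thresholds below are illustrative assumptions, not values taken from any embodiment.

```python
def congestion_detected(speed_mps, nearby_vehicle_distances_m,
                        speed_threshold_mps=4.0, proximity_m=10.0, min_neighbors=2):
    """Flag likely congestion when the vehicle is slow AND several other vehicles
    sit within close range, per the sensor-plus-velocity heuristic in the text
    (the specific threshold values here are hypothetical)."""
    close = sum(1 for d in nearby_vehicle_distances_m if d <= proximity_m)
    return speed_mps <= speed_threshold_mps and close >= min_neighbors

print(congestion_detected(2.0, [4.0, 7.5, 18.0]))  # True: slow with 2 close vehicles
print(congestion_detected(2.0, [25.0]))            # False: slow but the road is clear
```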

FIG. 13 is a diagram illustrating an embodiment 1300 of at least two projectors operative in accordance with a lenticular surface for effectuating three dimensional (3-D) display. Generally speaking, a lenticular surface may be viewed as an array or series of cylindrical molded lenses. Such a lenticular surface may be included within any of the transparent or translucent surfaces described herein (e.g., any windshield or window of a vehicle, headset and/or eyeglasses, a transparent or translucent portion of a headset, etc.).

Such a lenticular surface may be constructed such that it is generally transparent and non-perceptible to a user. That is to say, the cylindrical lenses of such a lenticular surface will not deleteriously affect the perceived quality or vision of a user when looking through such a lenticular surface. The cylindrical lenses may be visually non-perceptible to a user, yet tactilely felt by a user (e.g., when a user slides a finger across the surface).

However, the particular features of such a lenticular surface can provide one means by which three-dimensional imaging may be effectuated. For example, with respect to an embodiment of a navigation system implemented within a vehicle such that labeling and/or indicia is projected into a field of vision of the driver or any other passenger of the vehicle, when one or more of the transparent or translucent surfaces of the vehicle (e.g., any windshield or window of a vehicle) is implemented in accordance with the lenticular surface, three-dimensional display of such labeling and/or indicia may be made. Again, the use of such a lenticular surface will not obstruct the view or vision capabilities of a driver or passenger of the vehicle. However, by having such a lenticular surface implemented on the transparent or translucent surface, the effect of three-dimensional projection of labeling and/or indicia may be made with respect to a field of vision of a driver or passenger of the vehicle.

As may be seen with respect to this diagram, two or more projectors may be employed in different locations for effectuating three-dimensional imaging. In this particular instance, a first projector could be implemented on a left-hand side and a second projector could be implemented on a right-hand side. Each respective projector is then operative to project image and/or video information towards the lenticular surface. Each of the respective cylindrical lenses of the lenticular surface then appropriately processes the respective received image and/or video information such that a driver or passenger of the vehicle will perceive a three-dimensional effect. That is to say, the cylindrical lenses and their curvature operate to refract light rays received via the at least two respective directions corresponding to the at least two respective projectors. As the reader will understand, the curvature of the respective cylindrical lenses along the lenticular surface effectively refracts the received light rays as a function of their angle of incidence at the surface. In a preferred embodiment, such a lenticular surface may be designed to have a rear focal plane coincident with the backplane of the transparent or translucent surface. In the instance of the lenticular surface being implemented on a windshield, the rear focal plane of the lenticular surface may be designed to coincide with the backplane of the windshield (e.g., the surface of the windshield which is exposed to the outside, which is the backplane of the translucent or transparent surface through which a driver or passenger looks).
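
As background (a standard paraxial-optics result, offered here for context rather than as language from the disclosure), the rear-focal-plane condition just described fixes the sheet thickness: parallel rays entering a cylindrical lenslet of radius R from air into a medium of refractive index n focus at a depth

```latex
\[
  t \;=\; \frac{nR}{\,n-1\,}
\]
```

behind the curved surface. For example, with n = 1.5 and R = 0.4 mm, t = 1.2 mm, so a sheet of that thickness places its rear focal plane on its own backplane, as the preferred embodiment describes for the windshield.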

It is noted that such a lenticular surface may be implemented along any desired surface through which a user may look. For example, in another embodiment, such a lenticular surface may be implemented within eyeglasses, a headset type display, etc. By using two or more appropriately placed projectors, three-dimensional imaging may be effectuated using such transparent or translucent material.

FIG. 14 is a diagram illustrating an embodiment 1400 of at least two projectors operative in accordance with a non-uniform surface for effectuating 3-D display. This diagram depicts an alternative embodiment by which three-dimensional imaging may be effectuated. Instead of employing a lenticular surface as with respect to a previous embodiment, a non-uniform surface may be employed. Somewhat analogous to the previous embodiment, two or more projectors may be implemented at different respective locations such that they respectively project photographic and/or video information towards the non-uniform surface. Therefore, from the perspective of a driver or passenger of the vehicle, three-dimensional projected labeling and/or indicia will appear to extend out into the respective field of vision.

Generally speaking, any desired surface may be employed to assist in effectuating three-dimensional imaging for projecting labeling and/or indicia into a user's field of vision. The lenticular surface as described with respect to a previous embodiment and the sawtooth-appearing non-uniform surface of this respective embodiment are two possible examples. Of course, other such surfaces may alternatively be employed to help effectuate three-dimensional imaging.

FIG. 15 is a diagram illustrating an embodiment 1500 of an organic light emitting diode (organic LED or OLED) as may be implemented in accordance with various applications for effectuating 3-D display. This diagram depicts yet another means by which three-dimensional imaging may be effectuated for an enhanced user experience. An organic LED is a light emitting diode in which the emissive electroluminescent layer is a film of organic compounds which emits light in response to an electric current. An OLED may be implemented using a flexible type material such that the OLED may be placed over any desired surface having any of a variety of different shapes. For example, an OLED may be implemented over any translucent or transparent surface. Again, as with respect to various other embodiments described herein, some examples of such transparent or translucent surfaces include those which may appear in a vehicular context (e.g., windshield, windows, sun roofs, moon roofs, etc.). With respect to eyeglasses and/or headset applications, examples of transparent or translucent surfaces include the lenses of the eyeglasses and/or headwear. Generally speaking, an OLED implemented using a flexible material may be employed within a variety of applications, including those such as may be used within navigation systems to provide labeling and/or indicia to a user.

By appropriately implementing an OLED flexible material over a translucent or transparent surface through which a user views a respective field of vision, indicia and/or labeling may be included within a user's respective field of vision. As the reader will understand, different OLEDs may be operative to emit light having different colors. By employing a matrix type display structure in which different respectively located OLEDs are arranged to emit different respective colors (e.g., red, green, blue), such an overall display structure would then be operative to display any desired photographic and/or video information. In certain alternative embodiments, a common OLED material may be selectively and differentially energized using different voltages appropriately applied across different portions of the overall OLED structure to effectuate the emission of different respective colors. Generally speaking, the use of an OLED flexible material over a translucent or transparent surface allows for the inclusion of labeling and/or indicia within a user's field of vision.

The composition of an OLED includes a layer of organic material situated between two electrodes, the anode and cathode, respectively. The organic molecules are electrically conductive as a result of delocalization of pi electrons caused by conjugation over all or part of the molecule. These materials have conductivity levels ranging from insulators to conductors, and therefore are considered organic semiconductors. The highest occupied and lowest unoccupied molecular orbitals (HOMO and LUMO) of such organic semiconductors may be viewed as being analogous to the valence and conduction bands of inorganic semiconductors.

During operation, a voltage is applied across the OLED such that the anode is positive with respect to the cathode. A current of electrons flows through the device from cathode to anode, as electrons are injected into the LUMO of the organic layer at the cathode and withdrawn from the HOMO at the anode. This process may also be described as the injection of electron holes into the HOMO. Electrostatic forces bring the electrons and the holes toward each other, and they recombine forming an exciton, a bound state of the electron and hole. This will typically occur closer to the emissive layer, because in organic semiconductors, holes are generally more mobile than electrons. The decay of this excited state (e.g., decay of the exciton) results in a relaxation of the energy levels of the electron, accompanied by emission of radiation whose frequency is in the visible spectrum. As the reader will understand, the frequency of this radiation depends on the bandgap of the material, particularly the difference in energy levels between the HOMO and LUMO. Such OLEDs may be implemented using a variety of different materials. For example, indium tin oxide (ITO) is one material that may be used as the anode material within OLEDs.
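
The dependence of the emitted frequency on the HOMO-LUMO gap noted above is simply the Planck relation; as a worked example (the gap values below are illustrative):

```latex
\[
  \nu \;=\; \frac{\Delta E}{h},
  \qquad
  \lambda \;=\; \frac{hc}{\Delta E} \;\approx\; \frac{1240\ \mathrm{eV\cdot nm}}{\Delta E}
\]
```

so a gap of ΔE = 2.3 eV corresponds to emission near 540 nm (green), while ΔE = 2.0 eV lands near 620 nm (red).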

Generally speaking, the use of OLEDs on the transparent or translucent surfaces through which a user looks to perceive a field of vision when using a navigation system can provide for three-dimensional imaging of labeling and/or indicia to assist the user with directions or instructions.

As the reader will understand with respect to the various embodiments described herein, a novel means is presented by which labeling and/or indicia may be provided within a field of vision of a user. In many instances, such labeling and/or indicia is provided with respect to the operation of a navigation system in which such labeling and/or indicia assists the user to navigate through a particular region or environment in an effort to arrive at a desired endpoint destination. In some embodiments, the inclusion of such labeling and/or indicia is made in accordance with three-dimensional imaging. A variety of means may be employed to effectuate such three-dimensional imaging, including the use of OLEDs, the use of appropriately designed transparent or translucent surfaces, etc. In addition, any desired information may be provided for consumption by a user to enhance the overall experience. For example, in the context of a vehicular system, labeling and/or indicia within a user's field of vision may be accompanied by additional information such as directionality, location, vehicular velocity, time, estimated time of arrival, etc. By providing such information in a manner that does not require a driver or passenger of the vehicle to take his or her eyes out of the field of vision, such information may be provided to the user without incurring a potential safety risk.

With respect to those embodiments that include labeling and/or indicia within or overlaid onto photographic and/or video information, the improved accuracy provided by such photographic and/or video information can reduce or eliminate the amount of time for a user to translate what is depicted within the display of such a system to the actual physical environment. Again, the use of such photographic and/or video information will provide much greater specificity, thereby assisting a user to reach a desired endpoint destination more safely, more quickly, and with less frustration.

FIG. 16A, FIG. 16B, FIG. 16C, FIG. 17A, FIG. 17B, FIG. 17C, FIG. 18A, FIG. 18B, FIG. 19A, and FIG. 19B illustrate various embodiments of methods as may be performed in accordance with operation of various devices such as various wireless communication devices and/or various navigation systems.

Referring to method 1600 of FIG. 16A, the method 1600 begins by identifying a feature within an actual physical environment, as shown in a block 1610. Such a feature may be any as described herein including a landmark, a building, a stadium, an airport, a park, a historical feature, a geological feature, and/or any other feature within a physical environment. The method 1600 continues by projecting an overlay/label into a user's field of vision based on the identified feature, as shown in a block 1620. Such projection may be made in accordance with three-dimensional (3D) projection. For example, one or more projectors may be implemented within a vehicle to project labeling and/or indicia into the actual physical environment from the perspective of a user (e.g., a driver and/or passenger of the vehicle).

Referring to method 1601 of FIG. 16B, the method 1601 begins by identifying at least one of a user's current position and an endpoint destination, as shown in a block 1611. For example, any of a variety of means may be employed to identify a user's current location. In some embodiments, a user enters information corresponding to that user's current location via a user interface of a given device. In other embodiments, certain location determining means, functionality, circuitry, etc. are employed that do not necessarily require the interaction of a user (e.g., global positioning system (GPS), detection of one or more fixed wireless access points within a wireless communication network [e.g., such as one or more access points (APs) within a wireless local area network (WLAN)], etc.).

The method 1601 then operates by identifying a feature within an actual physical environment corresponding to a user's current location and/or an endpoint destination, as shown in a block 1621. For example, the feature identified may correspond particularly to a pathway between a user's current location and an endpoint destination. That is to say, the method 1601 may operate particularly to identify features based upon a user's current location, an endpoint destination, and/or one or more pathways between the user's current location and the endpoint destination. For example, the method 1601 may operate to identify different respective features that may be encountered along different respective pathways between a user's current location and an endpoint destination. Such different pathways, and their respective features identified there along, could be presented to a user of a given device for selection by the user of a particular one of the possible pathways. As may be seen, selectivity of pathways, such as in accordance with a method operating in accordance with navigation, may be based upon respective features that may be encountered along different respective pathways.

The method 1601 continues by projecting an overlay/label into the user's field of vision based on the identified feature, the user's current location, and/or the endpoint destination, as shown in a block 1631. Any one or any combination of the identified feature, the user's current location, and the endpoint destination may govern, at least in part, an overlay/label that may be projected into a user's field of vision. As described also with respect to other diagrams and embodiments herein, any such overlay/label may take the form of any labeling and/or indicia that may correspond to any of a variety of formats, including arrows, markers, labels, alphanumeric characters, etc. and/or any combination thereof.

Referring to method 1602 of FIG. 16C, the method 1602 begins by receiving user input corresponding to an endpoint destination, as shown in a block 1612. For example, the method 1602 may operate in accordance with navigation functionality such that a user may input different types of information thereto.

The method 1602 continues by identifying the user's current location and the endpoint destination, as shown in a block 1622. The user's current location may be provided by information that is entered via a user interface by the user. Alternatively, other means may be employed to identify the user's current location (e.g., GPS, detection of one or more fixed wireless access points within a wireless communication network [e.g., such as one or more access points (APs) within a wireless local area network (WLAN)], etc.). The endpoint destination may be provided based upon information that is entered via a user interface by the user.

The method 1602 then operates by determining a pathway between the user's current location and the endpoint destination, as shown in a block 1632. In certain embodiments, more than one pathway is presented to a user, and a user is provided an opportunity to select one of those pathways.

The method 1602 continues by identifying a feature within an actual physical environment corresponding to the pathway, as shown in a block 1642. In certain embodiments, different respective features are identified as corresponding to different respective pathways between a user's current location and an endpoint destination.

The method 1602 then operates by projecting an overlay/label into the user's field of vision based on the identified feature, as shown in a block 1652. In alternative embodiments, the projection of an overlay/label may be based upon additional considerations as well (e.g., the user's current location, the endpoint destination, etc.).
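
Gathering the blocks of method 1602 into one pipeline, a minimal sketch follows; every callable is a hypothetical stand-in for the corresponding block, and the toy wiring at the end exists only to make the flow concrete.

```python
def method_1602(user_input, locate, plan_paths, find_features, project):
    """A minimal sketch of the FIG. 16C flow; all callables are hypothetical."""
    destination = user_input()                   # block 1612: endpoint destination
    current = locate()                           # block 1622: user's current location
    pathways = plan_paths(current, destination)  # block 1632: candidate pathway(s)
    pathway = pathways[0]                        # e.g., the user-selected route
    for feature in find_features(pathway):       # block 1642: features along the path
        project(feature)                         # block 1652: overlay/label into view

# Toy wiring of the blocks with stand-in callables:
method_1602(user_input=lambda: "stadium",
            locate=lambda: (37.7749, -122.4194),
            plan_paths=lambda a, b: [["seg1", "seg2"]],
            find_features=lambda path: ["water tower", "interchange 5"],
            project=print)
```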

Referring to method 1700 of FIG. 17A, the method 1700 begins by identifying a feature within media, as shown in a block 1710. Various examples of such media may include photographic and/or video media depicting an actual physical environment. Such an actual physical environment may correspond to the environment in which a user currently is. Alternatively, the physical environment may correspond to an environment which a user intends or plans to enter (e.g., such as in accordance with following navigational directions between a current location and an endpoint destination).

The method 1700 continues by outputting the media with an overlay/label therein based on the identified feature, as shown in a block 1720. For example, the media, after having undergone processing for identification of at least one feature therein, may be output for consumption by a user with at least one overlay/label therein. By providing such labeling and/or indicia particularly within media that corresponds particularly to an actual physical environment, highly accurate information may be provided to a user.

Referring to method 1701 of FIG. 17B, the method 1701 begins by identifying a user's current location and/or an endpoint destination, as shown in a block 1711. Again, as with respect to other embodiments herein, various means may be employed to identify the user's current location (e.g., GPS, user provided information entered via a user interface, etc.) and the endpoint destination (e.g., user provided information entered via a user interface, etc.).

The method 1701 then operates by identifying at least one feature within media corresponding to the user's current location and/or an endpoint destination, as shown in a block 1721. For example, within media including photographic and/or video media, certain processing of that media may provide indication of respective features included therein. Various types of pattern recognition, image processing, video processing, etc. may be employed to identify one or more features within media. In some embodiments, the identification of at least one feature within the media is based upon a pathway between the user's current location and the endpoint destination. As also described with respect to other embodiments, more than one pathway may be identified between a user's current location and an endpoint destination. In such instances, different respective features may be identified along the different respective pathways between the user's current location and the endpoint destination.
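
Normalized template matching is one concrete form such pattern recognition could take; the sketch below uses OpenCV as one plausible toolkit (the disclosure names none) and a synthetic scene in place of real imagery.

```python
import cv2
import numpy as np

def find_feature(scene_gray, template_gray, threshold=0.8):
    """Locate a known landmark patch within scene imagery via normalized
    template matching; the threshold is an illustrative choice."""
    result = cv2.matchTemplate(scene_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_loc if max_val >= threshold else None

scene = np.random.randint(0, 255, (240, 320), dtype=np.uint8)
template = scene[100:140, 150:200].copy()       # a "landmark" patch from the scene
print(find_feature(scene, template))            # -> (150, 100)
```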

The method 1701 continues by outputting the media with an overlay/label therein based on any one or any combination of the identified feature, the user's current location, and/or the endpoint destination, as shown in a block 1731.

Referring to method 1702 of FIG. 17C, the method 1702 begins by receiving user input corresponding to an endpoint destination, as shown in a block 1712. Such information may be provided via a user interface of a given device.

The method 1702 continues by identifying the user's current location and the endpoint destination, as shown in a block 1722. As also mentioned with respect to various embodiments, various means may be employed to identify the user's current location (e.g., GPS, user provided information entered via a user interface, etc.) and the endpoint destination (e.g., user provided information entered via a user interface, etc.).

The method 1702 then operates by determining a pathway between the user's current location and the endpoint destination, as shown in a block 1732. In alternative embodiments, more than one pathway may be identified between the user's current location and the endpoint destination, and those respective pathways may be provided to a user for selection of at least one thereof.

The method 1702 continues by identifying at least one feature within media corresponding to the pathway, as shown in a block 1742. For example, within media including photographic and/or video media, certain processing of that media may provide indication of respective features included therein. Various types of pattern recognition, image processing, video processing, etc. may be employed to identify one or more features within media.

The method 1702 then operates by outputting the media with an overlay/label therein based on the identified feature, as shown in a block 1752. Of course, multiple overlays/labels may be included within the media in accordance with more than one respective identified feature.

Referring to method 1800 of FIG. 18A, the method 1800 begins by acquiring media corresponding to an actual physical environment, as shown in a block 1810.

In certain embodiments, such acquisition is performed in accordance with accessing one or more databases via one or more networks, as shown in a block 1810a. For example, via some wireless communication network and/or via the Internet, one or more databases may be accessed to acquire media corresponding to an actual physical location. Such an actual physical location may correspond to an environment in which a user currently is, an environment in which a user plans to be, etc.

In certain other embodiments, such acquisition is performed in accordance with one or more cameras, as shown in a block 1810b. For example, one or more cameras may be employed to perform real-time acquisition of such media. As the reader will understand, such real-time acquired information will hopefully be extremely accurate, up-to-date, etc. and properly depict the actual physical environment.

The method 1800 continues by processing the media in accordance with identifying a feature therein, as shown in a block 1820. Again, within this embodiment as well as within others, various types of pattern recognition, image processing, video processing, etc. may be employed for processing of media to identify one or more features therein.

The method 1800 then operates by outputting the media with an overlay/label therein based on the identified feature, as shown in a block 1830. Of course, within this embodiment as within others, multiple overlays/labels may be included within the media in accordance with more than one respective identified feature.

Referring to method 1801 of FIG. 18B, the method 1801 begins by identifying a user's current location and/or an endpoint destination, as shown in a block 1811. Again, various means may be employed to identify the user's current location (e.g., GPS, user provided information entered via a user interface, etc.) and the endpoint destination (e.g., user provided information entered via a user interface, etc.).

The method 1801 then operates by acquiring media based on one or both of the user's current location and the endpoint destination, as shown in a block 1821.

In certain embodiments, such acquisition is performed in accordance with accessing one or more databases via one or more networks, as shown in a block 1821a. For example, via some wireless communication network and/or via the Internet, one or more databases may be accessed to acquire media corresponding to an actual physical location. Such an actual physical location may correspond to an environment in which a user currently is, an environment in which a user plans to be, etc. As the reader will understand, such acquisition of media is particularly based on one or both of the user's current location and the endpoint destination. In other words, from certain perspectives, the search window is particularly tailored and/or narrowed based upon one or both of the user's current location and the endpoint destination. In alternative embodiments in which multiple pathways may be identified corresponding to multiple paths between the user's current location and the endpoint destination, different respective media may be acquired for more than one of the respective pathways. For example, a first set of media may be acquired for a first pathway, a second set of media may be acquired for a second pathway, etc.

In certain other embodiments, such acquisition is performed in accordance with one or more cameras, as shown in a block 1821b. For example, one or more cameras may be employed to perform real-time acquisition of such media. As the reader will understand, such real-time acquired information will hopefully be extremely accurate, up-to-date, etc. and properly depict the actual physical environment.

The method 1801 continues by processing the media in accordance with identifying a feature therein, as shown in a block 1831. Again, various types of pattern recognition, image processing, video processing, etc. may be employed to identify one or more features within media.

The method 1801 then operates by outputting the media with an overlay/label therein based on the identified feature, as shown in a block 1841. Of course, within this embodiment as within others, multiple overlays/labels may be included within the media in accordance with more than one respective identified feature.

Referring to method 1900 of FIG. 19A, the method 1900 begins by acquiring first information via a first network, as shown in a block 1910. For example, any of a variety of types of information may be acquired via a given network, and certain networks may be operative to provide different information, different levels of specificity, etc. As the reader will understand, as a user and/or device moves throughout a physical environment, and particularly into and out of different respective networks (e.g., such as into and out of different respective wireless networks, which may include any one or more, or any combination, of satellite networks, wireless local area networks (WLANs), wide area networks (WANs), and/or any other type of wireless networks), connectivity to those different respective networks may be made, and by which, different respective information may be acquired.

In certain embodiments, such acquisition is performed in accordance with acquiring real-time traffic information, information corresponding to one or more features within a physical environment, media corresponding to an actual physical environment, etc., as shown in a block 1910a.

In certain embodiments, the method 1900 operates by acquiring second information via a second network, as shown in a block 1920. That is to say, a user and/or device may move throughout a physical environment including moving in and out of different respective service areas of different respective wireless networks.

The method 1900 then operates by selectively outputting the first information (and/or the second information) based on a user's current location and/or an endpoint destination, as shown in a block 1930.

Referring to method 1901 of FIG. 19B, the method 1901 begins by receiving user input corresponding to a first operational mode, as shown in a block 1911. Such an operational mode may indicate any one or more different parameters corresponding to the type and/or manner of information to be provided to a user. For example, a given operational mode may correspond to a particular display format, indicating what particular information to display, when to display such information (e.g., displaying first information at or during a first time, displaying second information at or during a second time, etc.), etc.

The method 1901 then operates by providing one or more overlays/labels within media and/or a user's field of vision in accordance with the first operational mode, as shown in a block 1921. For example, a great deal of selectivity is provided by which such information may be provided to a user in any of a given number of applications. For example, certain embodiments relate to providing such labeling and/or indicia within a user's field of vision, other embodiments relate to providing such labeling and/or indicia within media, and even other embodiments relate to providing such labeling and/or indicia within both a user's field of vision and media. For example, within the navigation system type application, a method may operate by not only projecting labeling and/or indicia within a user's field of vision, but a display (e.g., within the vehicle, on a portable device, etc.) may also output media including one or more overlays/labels therein. For example, a redundant type system may operate in which not only is labeling and/or indicia projected within a user's field of vision, but a display is operative to output media including one or more overlays/labels therein simultaneously or in parallel with a projection of the labeling and/or indicia within the user's field of vision.

The method 1901 continues by receiving user input corresponding to a second operational mode, as shown in a block 1931. In certain embodiments, the second operational mode may include one or more operational parameters in common with the first operational mode. In other embodiments, the second operational mode and the first operational mode are mutually exclusive such that they do not include any common operational parameters.

The method 1901 then operates by providing one or more overlays/labels within media and/or a user's field of vision in accordance with the second operational mode, as shown in a block 1941. As the reader will understand, different respective operational modes may govern the manner by which such information is provided. Also, by allowing for different respective operational modes, different operation may be effectuated at different times. As described elsewhere herein, various types of indicia may be used to trigger or select when to operate within different operational modes. For example, a first operational mode may be selected when a user is relatively far from an endpoint destination. Then, a second operational mode may be selected when that user is within a given proximity of the endpoint destination (e.g., within X number of miles).
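
The proximity trigger just mentioned can be illustrated with a short Python sketch; the two-mile threshold, the mode names, and the use of the haversine formula are assumptions made for illustration only:

    import math

    PROXIMITY_THRESHOLD_MILES = 2.0  # "X number of miles" (assumed value)

    def distance_miles(lat1, lon1, lat2, lon2):
        # Great-circle distance between two lat/lon points (haversine formula).
        r = 3958.8  # mean Earth radius in miles
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def select_mode(user, destination):
        # First operational mode while relatively far from the endpoint
        # destination; second operational mode once within the threshold.
        d = distance_miles(*user, *destination)
        return "second_mode" if d <= PROXIMITY_THRESHOLD_MILES else "first_mode"

    print(select_mode((37.77, -122.42), (37.62, -122.38)))  # -> first_mode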

FIG. 20 is a diagram illustrating an embodiment 2000 of a vehicle including an audio system for effectuating directional audio indication. A vehicle may be implemented to have an audio system that supports stereo, surround sound, and other functionality. For example, a number of speakers may be implemented at different locations within the vehicle. Using appropriate audio signaling, sound may be provided from the speakers in such a manner as to have a given directionality. For example, within such audio systems, oftentimes multiple audio channels (and oftentimes multiple corresponding speakers, such as may be serviced by the multiple audio channels) are implemented for providing sound to the respective speakers of the audio system. Depending upon the particular manner in which the audio signaling is provided to the speakers, the perceptual effect of the sound coming from a particular location or direction may be achieved. In the context of a navigation system implemented within a vehicle, an audio system may be coordinated such that audio prompts or audio indications may be provided via the audio system in such a way as to correspond to the respective location of a particular feature. For example, a label associated with a feature located on a particular side of the vehicle may be audibly provided via the audio system such that the label is perceived as being sourced from the location of that particular feature.

As one possible example in the context of operation in conjunction with a navigation system, when approaching an intersection at which a driver should change route and turn to the right (e.g., based upon a given route that will lead the driver to an endpoint destination), an audio prompt may be provided via the audio system such that, from the perspective of the driver, the audio prompt is perceived as directionally coming from the forward, right-hand side portion of the vehicle. Alternatively, when approaching an intersection at which a driver should change route and turn to the left, such an audio prompt may be provided via the audio system such that, from the perspective of the driver, the audio prompt is perceived as directionally coming from the forward, left-hand side portion of the vehicle. Generally speaking, by operating using a given audio system having at least two speakers and associated functionality, circuitry, etc. for performing any of a number of different audio processing operations (e.g., DSP audio processing [such as for effectuating such audio effects as reverb or chorus effects, equalizer settings, matrixing of mono/stereo signaling to different channels, parametric control of audio DSP effects, etc.], balance adjustment, fader adjustment, selectively driving one or more speakers, etc.), such directional audio prompts may be effectuated.

As another possible example in the context of operation in conjunction with a navigation system, consider an instance in which a driver has missed a turn (e.g., based upon a given route that will lead the driver to an endpoint destination); an audio prompt may be provided via the audio system such that, from the perspective of the driver, the audio prompt is perceived as directionally coming from the rear of the vehicle. As the reader will understand, coordination between various labeling and/or indicia, which may be included and presented within various respective fields of vision, and corresponding audio prompts associated with that labeling and/or indicia may further augment or enhance the driver experience.
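
One conventional way to realize such a perceived direction with a small number of speakers is constant-power panning, in which per-speaker gains are weighted by the bearing of the prompt relative to the vehicle's heading. The Python sketch below is illustrative only; a four-speaker layout and this particular panning law are assumptions, not the specific DSP of any embodiment:

    import math

    def speaker_gains(bearing_deg):
        # bearing_deg: 0 = straight ahead, 90 = right, 180 = behind, 270 = left.
        b = math.radians(bearing_deg)
        pan = (math.sin(b) + 1) / 2    # 0 = full left, 1 = full right
        fade = (math.cos(b) + 1) / 2   # 0 = full rear, 1 = full front
        # Constant-power pairs keep overall loudness roughly constant as the
        # prompt moves around the listener (balance and fader adjustment).
        left, right = math.cos(pan * math.pi / 2), math.sin(pan * math.pi / 2)
        front, rear = math.sin(fade * math.pi / 2), math.cos(fade * math.pi / 2)
        return {"FL": left * front, "FR": right * front,
                "RL": left * rear, "RR": right * rear}

    print(speaker_gains(45))   # upcoming right turn: dominated by front-right
    print(speaker_gains(180))  # missed turn: prompt appears to come from the rear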

FIG. 21 is a diagram illustrating an embodiment 2100 of a navigation system with a display or projection system in a vehicular context, and particularly including at least one localized region of the field of vision in which labeling and/or indicia is provided. The reader is referred to FIG. 6 for comparison with this diagram. As can be seen with respect to this diagram, instead of providing such labeling and/or indicia across an entirety of the one or more respective fields of vision of the driver, such labeling and/or indicia is provided within a localized region (e.g., a subset of the field of vision). That is to say, instead of providing such labeling and/or indicia within any particular location of one or more fields of vision, such labeling and/or indicia is provided within only one or more localized regions.

As may be understood, a driver of such a vehicle will then be able to focus attention on only those one or more localized regions instead of surveying all of the respective fields of vision in multiple directions. If desired, certain feature labeling may be provided only when visible within one or more of these localized regions. For example, feature labeling associated with a street would appear only when that street may be perceived within a given localized region.

It is also noted that such a localized region may be appropriately selected from the perspective of the user to whom such labeling and/or indicia is being provided. For example, a driver of a vehicle may be provided a first respective localized region (e.g., located within the windshield in front of the driver), a front seat passenger may be provided a second respective localized region (e.g., located within the windshield in front of the front seat passenger), a backseat passenger may be provided a third respective localized region (e.g., located within a side window nearest to the backseat passenger, on the right-hand or left-hand side), etc.
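
The localized-region behavior can be sketched in a few lines of Python; the normalized coordinates and the particular region boundaries below are assumptions made for illustration:

    from dataclasses import dataclass

    @dataclass
    class Region:
        # Bounds in normalized field-of-vision coordinates (0..1 on each axis).
        x_min: float
        x_max: float
        y_min: float
        y_max: float

        def contains(self, x, y):
            return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

    # One localized region per occupant (e.g., driver-side windshield area,
    # passenger-side windshield area); the boundaries here are assumed values.
    REGIONS = {
        "driver": Region(0.05, 0.45, 0.30, 0.70),
        "front_passenger": Region(0.55, 0.95, 0.30, 0.70),
    }

    def labels_to_show(features, occupant):
        # A feature's labeling appears only when the feature is perceivable
        # within that occupant's localized region.
        region = REGIONS[occupant]
        return [f["label"] for f in features if region.contains(f["x"], f["y"])]

    features = [{"label": "Main St", "x": 0.2, "y": 0.5},
                {"label": "Cafe", "x": 0.8, "y": 0.5}]
    print(labels_to_show(features, "driver"))           # -> ['Main St']
    print(labels_to_show(features, "front_passenger"))  # -> ['Cafe']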

It is also noted that the various operations and functions as described with respect to various methods herein may be performed within a navigation system including various components, a wireless communication device, and/or any of a number of other devices. For example, within certain embodiments, one or more modules and/or circuitries within a given device (e.g., such as a baseband processing module implemented in accordance with the baseband processing module as described with reference to FIG. 2) and/or other components therein may operate to perform any of the various methods herein.

It is noted that the various modules and/or circuitries (baseband processing modules and/or circuitries, encoding modules and/or circuitries, decoding modules and/or circuitries, etc.) described herein may be a single processing device or a plurality of processing devices. Such a processing device may be a microprocessor, micro-controller, digital signal processor, microcomputer, central processing unit, field programmable gate array, programmable logic device, state machine, logic circuitry, analog circuitry, digital circuitry, and/or any device that manipulates signals (analog and/or digital) based on operational instructions. The operational instructions may be stored in a memory. The memory may be a single memory device or a plurality of memory devices. Such a memory device may be a read-only memory (ROM), random access memory (RAM), volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, and/or any device that stores digital information. It is also noted that when the processing module implements one or more of its functions via a state machine, analog circuitry, digital circuitry, and/or logic circuitry, the memory storing the corresponding operational instructions is embedded within the circuitry comprising the state machine, analog circuitry, digital circuitry, and/or logic circuitry. In such an embodiment, a memory stores, and a processing module coupled thereto executes, operational instructions corresponding to at least some of the steps and/or functions illustrated and/or described herein.

It is also noted that any of the connections or couplings between the various modules, circuits, functional blocks, components, devices, etc. within any of the various diagrams or as described herein may be differently implemented in different embodiments. For example, in one embodiment, such connections or couplings may be direct connections or direct couplings there between. In another embodiment, such connections or couplings may be indirect connections or indirect couplings there between (e.g., with one or more intervening components there between). Of course, certain other embodiments may have some combinations of such connections or couplings therein such that some of the connections or couplings are direct, while others are indirect. Different implementations may be employed for effectuating communicative coupling between modules, circuits, functional blocks, components, devices, etc. without departing from the scope and spirit of the invention.

Various aspects of the present invention have also been described above with the aid of method steps illustrating the performance of specified functions and relationships thereof. The boundaries and sequence of these functional building blocks and method steps have been arbitrarily defined herein for convenience of description. Alternate boundaries and sequences can be defined so long as the specified functions and relationships are appropriately performed. Any such alternate boundaries or sequences are thus within the scope and spirit of the claimed invention.

Various aspects of the present invention have been described above with the aid of functional building blocks illustrating the performance of certain significant functions. The boundaries of these functional building blocks have been arbitrarily defined for convenience of description. Alternate boundaries could be defined as long as the certain significant functions are appropriately performed. Similarly, flow diagram blocks may also have been arbitrarily defined herein to illustrate certain significant functionality. To the extent used, the flow diagram block boundaries and sequence could have been defined otherwise and still perform the certain significant functionality. Such alternate definitions of both functional building blocks and flow diagram blocks and sequences are thus within the scope and spirit of the claimed invention.

One of average skill in the art will also recognize that the functional building blocks, and other illustrative blocks, modules and components herein, can be implemented as illustrated or by discrete components, application specific integrated circuits, processors executing appropriate software and the like or any combination thereof.

Moreover, although described in detail for purposes of clarity and understanding by way of the aforementioned embodiments, various aspects of the present invention are not limited to such embodiments. It will be obvious to one of average skill in the art that various changes and modifications may be practiced within the spirit and scope of the invention, as limited only by the scope of the appended claims.

Mode Selection Tables:

TABLE 1
2.4 GHz, 20/22 MHz channel BW, 54 Mbps max bit rate

Rate     Modulation    Code    NBPSC   NCBPS   NDBPS   EVM     Sensitivity   ACR     AACR
(Mbps)                 Rate                            (dB)    (dBm)         (dB)    (dB)
1        Barker BPSK
2        Barker QPSK
5.5      CCK
6        BPSK          0.5     1       48      24      −5      −82           16      32
9        BPSK          0.75    1       48      36      −8      −81           15      31
11       CCK
12       QPSK          0.5     2       96      48      −10     −79           13      29
18       QPSK          0.75    2       96      72      −13     −77           11      27
24       16-QAM        0.5     4       192     96      −16     −74           8       24
36       16-QAM        0.75    4       192     144     −19     −70           4       20
48       64-QAM        0.666   6       288     192     −22     −66           0       16
54       64-QAM        0.75    6       288     216     −25     −65           −1      15
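
The OFDM rows of Table 1 are internally consistent: with 48 data subcarriers and a 4 microsecond OFDM symbol (802.11a/g timing), NCBPS = NBPSC × 48, NDBPS = NCBPS × code rate, and the data rate in Mbps is NDBPS/4. A short Python check (exact fractions are used so that the 2/3 code rate printed as 0.666 divides cleanly):

    from fractions import Fraction

    DATA_SUBCARRIERS = 48
    SYMBOL_US = 4  # OFDM symbol duration in microseconds

    rows = [  # (rate in Mbps, NBPSC, code rate) for the OFDM rows of Table 1
        (6, 1, Fraction(1, 2)), (9, 1, Fraction(3, 4)),
        (12, 2, Fraction(1, 2)), (18, 2, Fraction(3, 4)),
        (24, 4, Fraction(1, 2)), (36, 4, Fraction(3, 4)),
        (48, 6, Fraction(2, 3)), (54, 6, Fraction(3, 4)),
    ]
    for rate, nbpsc, code_rate in rows:
        ncbps = nbpsc * DATA_SUBCARRIERS
        ndbps = ncbps * code_rate
        assert ndbps / SYMBOL_US == rate
    print("Table 1 OFDM rows are self-consistent")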

TABLE 2
Channelization for Table 1

Channel   Frequency (MHz)
1         2412
2         2417
3         2422
4         2427
5         2432
6         2437
7         2442
8         2447
9         2452
10        2457
11        2462
12        2467

TABLE 3
Power Spectral Density (PSD) Mask for Table 1

PSD Mask 1
Frequency Offset          dBr
−9 MHz to 9 MHz           0
+/−11 MHz                 −20
+/−20 MHz                 −28
+/−30 MHz and greater     −50
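
Table 3's mask can be read as a limit on power (in dB relative to the in-band level) versus absolute frequency offset from the channel center. The Python sketch below interpolates linearly between the tabulated breakpoints; that interpolation convention is an assumption of the sketch, not something stated in the table:

    BREAKPOINTS = [(9, 0), (11, -20), (20, -28), (30, -50)]  # (MHz, dBr)

    def psd_mask_1_dbr(offset_mhz):
        off = abs(offset_mhz)
        if off <= BREAKPOINTS[0][0]:
            return 0.0  # flat region from -9 MHz to 9 MHz
        for (f1, v1), (f2, v2) in zip(BREAKPOINTS, BREAKPOINTS[1:]):
            if off <= f2:
                return v1 + (v2 - v1) * (off - f1) / (f2 - f1)
        return -50.0  # +/-30 MHz and greater

    print(psd_mask_1_dbr(10))  # -10.0 dBr, midway between the 9 and 11 MHz points
    print(psd_mask_1_dbr(40))  # -50.0 dBr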

TABLE 4
5 GHz, 20 MHz channel BW, 54 Mbps max bit rate

Rate     Modulation    Code    NBPSC   NCBPS   NDBPS   EVM     Sensitivity   ACR     AACR
(Mbps)                 Rate                            (dB)    (dBm)         (dB)    (dB)
6        BPSK          0.5     1       48      24      −5      −82           16      32
9        BPSK          0.75    1       48      36      −8      −81           15      31
12       QPSK          0.5     2       96      48      −10     −79           13      29
18       QPSK          0.75    2       96      72      −13     −77           11      27
24       16-QAM        0.5     4       192     96      −16     −74           8       24
36       16-QAM        0.75    4       192     144     −19     −70           4       20
48       64-QAM        0.666   6       288     192     −22     −66           0       16
54       64-QAM        0.75    6       288     216     −25     −65           −1      15

TABLE 5
Channelization for Table 4

Channel   Frequency (MHz)   Country
240       4920              Japan
244       4940              Japan
248       4960              Japan
252       4980              Japan
8         5040              Japan
12        5060              Japan
16        5080              Japan
34        5170              Japan
36        5180              USA/Europe
38        5190              Japan
40        5200              USA/Europe
42        5210              Japan
44        5220              USA/Europe
46        5230              Japan
48        5240              USA/Europe
52        5260              USA/Europe
56        5280              USA/Europe
60        5300              USA/Europe
64        5320              USA/Europe
100       5500              USA/Europe
104       5520              USA/Europe
108       5540              USA/Europe
112       5560              USA/Europe
116       5580              USA/Europe
120       5600              USA/Europe
124       5620              USA/Europe
128       5640              USA/Europe
132       5660              USA/Europe
136       5680              USA/Europe
140       5700              USA/Europe
149       5745              USA
153       5765              USA
157       5785              USA
161       5805              USA
165       5825              USA

TABLE 6
2.4 GHz, 20 MHz channel BW, 192 Mbps max bit rate

Rate     TX         ST Code   Modulation   Code    NBPSC   NCBPS   NDBPS
(Mbps)   Antennas   Rate                   Rate
12       2          1         BPSK         0.5     1       48      24
24       2          1         QPSK         0.5     2       96      48
48       2          1         16-QAM       0.5     4       192     96
96       2          1         64-QAM       0.666   6       288     192
108      2          1         64-QAM       0.75    6       288     216
18       3          1         BPSK         0.5     1       48      24
36       3          1         QPSK         0.5     2       96      48
72       3          1         16-QAM       0.5     4       192     96
144      3          1         64-QAM       0.666   6       288     192
162      3          1         64-QAM       0.75    6       288     216
24       4          1         BPSK         0.5     1       48      24
48       4          1         QPSK         0.5     2       96      48
96       4          1         16-QAM       0.5     4       192     96
192      4          1         64-QAM       0.666   6       288     192
216      4          1         64-QAM       0.75    6       288     216
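
The multi-antenna rates of Table 6 are the 20 MHz single-stream rates scaled by the number of transmit antennas, and each row still satisfies rate = antennas × NDBPS/4. A brief Python check (illustrative only):

    from fractions import Fraction

    def rate_mbps(nbpsc, code_rate, antennas, subcarriers=48):
        ndbps = subcarriers * nbpsc * code_rate  # data bits per symbol, per stream
        return antennas * ndbps / 4              # 4 microsecond symbol -> Mbps

    assert rate_mbps(1, Fraction(1, 2), 2) == 12   # 2 antennas, BPSK 1/2
    assert rate_mbps(6, Fraction(2, 3), 3) == 144  # 3 antennas, 64-QAM 2/3
    assert rate_mbps(6, Fraction(3, 4), 4) == 216  # 4 antennas, 64-QAM 3/4
    print("Table 6 rates are antenna-scaled single-stream rates")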

TABLE 7
Channelization for Table 6

Channel   Frequency (MHz)
1         2412
2         2417
3         2422
4         2427
5         2432
6         2437
7         2442
8         2447
9         2452
10        2457
11        2462
12        2467

TABLE 8
5 GHz, 20 MHz channel BW, 192 Mbps max bit rate

Rate     TX         ST Code   Modulation   Code    NBPSC   NCBPS   NDBPS
(Mbps)   Antennas   Rate                   Rate
12       2          1         BPSK         0.5     1       48      24
24       2          1         QPSK         0.5     2       96      48
48       2          1         16-QAM       0.5     4       192     96
96       2          1         64-QAM       0.666   6       288     192
108      2          1         64-QAM       0.75    6       288     216
18       3          1         BPSK         0.5     1       48      24
36       3          1         QPSK         0.5     2       96      48
72       3          1         16-QAM       0.5     4       192     96
144      3          1         64-QAM       0.666   6       288     192
162      3          1         64-QAM       0.75    6       288     216
24       4          1         BPSK         0.5     1       48      24
48       4          1         QPSK         0.5     2       96      48
96       4          1         16-QAM       0.5     4       192     96
192      4          1         64-QAM       0.666   6       288     192
216      4          1         64-QAM       0.75    6       288     216

TABLE 9
Channelization for Table 8

Channel   Frequency (MHz)   Country
240       4920              Japan
244       4940              Japan
248       4960              Japan
252       4980              Japan
8         5040              Japan
12        5060              Japan
16        5080              Japan
34        5170              Japan
36        5180              USA/Europe
38        5190              Japan
40        5200              USA/Europe
42        5210              Japan
44        5220              USA/Europe
46        5230              Japan
48        5240              USA/Europe
52        5260              USA/Europe
56        5280              USA/Europe
60        5300              USA/Europe
64        5320              USA/Europe
100       5500              USA/Europe
104       5520              USA/Europe
108       5540              USA/Europe
112       5560              USA/Europe
116       5580              USA/Europe
120       5600              USA/Europe
124       5620              USA/Europe
128       5640              USA/Europe
132       5660              USA/Europe
136       5680              USA/Europe
140       5700              USA/Europe
149       5745              USA
153       5765              USA
157       5785              USA
161       5805              USA
165       5825              USA

TABLE 10
5 GHz, with 40 MHz channels and max bit rate of 486 Mbps

Rate          TX         ST Code   Modulation   Code    NBPSC
              Antennas   Rate                   Rate
13.5 Mbps     1          1         BPSK         0.5     1
27 Mbps       1          1         QPSK         0.5     2
54 Mbps       1          1         16-QAM       0.5     4
108 Mbps      1          1         64-QAM       0.666   6
121.5 Mbps    1          1         64-QAM       0.75    6
27 Mbps       2          1         BPSK         0.5     1
54 Mbps       2          1         QPSK         0.5     2
108 Mbps      2          1         16-QAM       0.5     4
216 Mbps      2          1         64-QAM       0.666   6
243 Mbps      2          1         64-QAM       0.75    6
40.5 Mbps     3          1         BPSK         0.5     1
81 Mbps       3          1         QPSK         0.5     2
162 Mbps      3          1         16-QAM       0.5     4
324 Mbps      3          1         64-QAM       0.666   6
364.5 Mbps    3          1         64-QAM       0.75    6
54 Mbps       4          1         BPSK         0.5     1
108 Mbps      4          1         QPSK         0.5     2
216 Mbps      4          1         16-QAM       0.5     4
432 Mbps      4          1         64-QAM       0.666   6
486 Mbps      4          1         64-QAM       0.75    6
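
Table 10 follows the same arithmetic with the wider 40 MHz channel; assuming the usual 802.11n value of 108 data subcarriers per 40 MHz channel, rate = antennas × 108 × NBPSC × code rate / 4, which reproduces every row, including 364.5 Mbps for three antennas with 64-QAM rate 3/4:

    from fractions import Fraction

    def rate_mbps_40(nbpsc, code_rate, antennas, subcarriers=108):
        # 108 data subcarriers per 40 MHz channel is an assumed (usual) value.
        return antennas * subcarriers * nbpsc * code_rate / 4

    assert rate_mbps_40(1, Fraction(1, 2), 1) == Fraction(27, 2)   # 13.5 Mbps
    assert rate_mbps_40(6, Fraction(3, 4), 1) == Fraction(243, 2)  # 121.5 Mbps
    assert rate_mbps_40(6, Fraction(3, 4), 3) == Fraction(729, 2)  # 364.5 Mbps
    assert rate_mbps_40(6, Fraction(3, 4), 4) == 486               # table maximum
    print("Table 10 rates match the 40 MHz arithmetic")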

TABLE 11
Power Spectral Density (PSD) Mask for Table 10

PSD Mask 2
Frequency Offset          dBr
−19 MHz to 19 MHz         0
+/−21 MHz                 −20
+/−30 MHz                 −28
+/−40 MHz and greater     −50

TABLE 12
Channelization for Table 10

Channel   Frequency (MHz)   Country
242       4930              Japan
250       4970              Japan
12        5060              Japan
36        5180              Japan
38        5190              USA/Europe
44        5220              Japan
46        5230              USA/Europe
54        5270              USA/Europe
62        5310              USA/Europe
102       5510              USA/Europe
110       5550              USA/Europe
118       5590              USA/Europe
126       5630              USA/Europe
134       5670              USA/Europe
151       5755              USA
159       5795              USA

Claims

1. An apparatus, comprising:

a navigation system for identifying at least one parameter corresponding to at least one of a first location and a second location associated with a user; and
a plurality of projectors for projecting three-dimensional (3D) information corresponding to the at least one parameter within a field of vision of the user at a perceptual depth of the at least one parameter within the field of vision of the user.

2. The apparatus of claim 1, wherein:

the first location being associated with a current location of the user; and
the second location being associated with a future location selected by the user or an endpoint destination that the user intends to reach.

3. The apparatus of claim 1, wherein:

the navigation system being implemented within a vehicle;
the first location being associated with a current location of the vehicle;
the second location being associated with a future location selected by the user or an endpoint destination that the user intends to reach;
at least one of the plurality of projectors projecting at least one additional information corresponding to at least one of the current location of the vehicle, a velocity of the vehicle, a directionality of the vehicle, and a temperature inside or outside of the vehicle.

4. The apparatus of claim 1, wherein:

the plurality of projectors for projecting three-dimensional (3D) information corresponding to the at least one parameter within a localized region of the field of vision of the user.

5. The apparatus of claim 1, wherein:

the at least one parameter corresponding to a feature within the physical environment; and further comprising:
an audio system including a plurality of speakers for directionally providing an audio prompt corresponding to a location of the feature.

6. The apparatus of claim 1, wherein:

the plurality of projectors for operating in accordance with a first operational mode for projecting the 3D information corresponding to the at least one parameter during a first time; and
the plurality of projectors for operating in accordance with a second operational mode for projecting the 3D information corresponding to the at least one parameter or at least one additional 3D information corresponding to at least one additional parameter during a second time.

7. The apparatus of claim 1, further comprising:

a processor for modifying media depicting the at least a portion or at least one additional portion of the physical environment by including at least one overlay within the media based on the 3D information or at least one additional information corresponding to the at least one parameter; and
a display for outputting the media including the at least one overlay therein.

8. An apparatus, comprising:

a navigation system for identifying at least one parameter corresponding to at least one of a first location and a second location associated with a user; and
a projector for projecting information corresponding to the at least one parameter within a field of vision of the user corresponding to at least a portion of a physical environment associated with the user.

9. The apparatus of claim 8, wherein:

the projector being a three-dimensional (3D) projector for projecting the information corresponding to the at least one parameter into the field of vision of the user.

10. The apparatus of claim 8, wherein:

the projector being a three-dimensional (3D) projector for projecting the information corresponding to the at least one parameter into the field of vision of the user at a perceptual depth of the at least one parameter within the field of vision of the user.

11. The apparatus of claim 8, wherein:

the first location being associated with a current location of the user; and
the second location being associated with a future location selected by the user or an endpoint destination that the user intends to reach.

12. The apparatus of claim 8, wherein:

the navigation system being implemented within a vehicle;
the first location being associated with a current location of the vehicle;
the second location being associated with a future location selected by the user or an endpoint destination that the user intends to reach;
the projector projecting at least one additional information corresponding to at least one of the current location of the vehicle, a velocity of the vehicle, a directionality of the vehicle, and a temperature inside or outside of the vehicle.

13. The apparatus of claim 8, wherein:

the projector for projecting information corresponding to the at least one parameter within a localized region of the field of vision of the user.

14. The apparatus of claim 8, wherein:

the at least one parameter corresponding to a feature within the physical environment; and further comprising:
an audio system including a plurality of speakers for directionally providing an audio prompt corresponding to a location of the feature.

15. The apparatus of claim 8, wherein:

the at least one parameter corresponding to a feature within the physical environment; and
the information corresponding to the at least one parameter being at least one of labeling and indicia for the feature.

16. The apparatus of claim 8, wherein:

the at least one parameter corresponding to at least a portion of a pathway between the first location and the second location within the physical environment; and
the information corresponding to the at least one parameter being at least one of labeling and indicia for the at least a portion of a pathway.

17. The apparatus of claim 8, wherein:

the projector operating in accordance with a first operational mode for projecting the information corresponding to the at least one parameter during a first time; and
the projector operating in accordance with a second operational mode for projecting the information corresponding to the at least one parameter or information corresponding to at least one additional parameter during a second time.

18. The apparatus of claim 17, further comprising:

a user interface for receiving user input corresponding to selection of the first operational mode or the second operational mode.

19. The apparatus of claim 8, further comprising:

a processor for modifying media depicting the at least a portion or at least one additional portion of the physical environment by including at least one overlay within the media based on the information corresponding to the at least one parameter; and
a display for outputting the media including the at least one overlay therein.

20. An apparatus, comprising:

a navigation system for identifying at least one parameter corresponding to at least one of a first location and a second location associated with a user; and
a plurality of projectors for projecting: three-dimensional (3D) information corresponding to the at least one parameter within a field of vision of the user at a perceptual depth of the at least one parameter within the field of vision of the user; and at least one additional information corresponding to at least one of the current location of the navigation system, a velocity of the navigation system, and a directionality of the navigation system.

21. The apparatus of claim 20, wherein:

the navigation system being implemented within a vehicle;
the first location being associated with a current location of the vehicle; and
the second location being associated with a future location selected by the user or an endpoint destination that the user intends to reach.

22. The apparatus of claim 20, wherein:

the plurality of projectors for projecting three-dimensional (3D) information corresponding to the at least one parameter within a localized region of the field of vision of the user.

23. The apparatus of claim 20, wherein:

the at least one parameter corresponding to a feature within a physical environment associated with the user; and further comprising:
an audio system including a plurality of speakers for directionally providing an audio prompt corresponding to a location of the feature.

24. The apparatus of claim 20, wherein:

the plurality of projectors for operating in accordance with a first operational mode for projecting the 3D information corresponding to the at least one parameter during a first time; and
the plurality of projectors for operating in accordance with a second operational mode for projecting the 3D information corresponding to the at least one parameter or at least one additional 3D information corresponding to at least one additional parameter during a second time.

25. The apparatus of claim 24, further comprising:

a user interface for receiving user input corresponding to selection of the first operational mode or the second operational mode.

26. The apparatus of claim 20, further comprising:

a processor for modifying media depicting the at least a portion or at least one additional portion of the physical environment by including at least one overlay within the media based on the 3D information or at least one additional information corresponding to the at least one parameter; and
a display for outputting the media including the at least one overlay therein.
Patent History
Publication number: 20120310531
Type: Application
Filed: Jul 15, 2011
Publication Date: Dec 6, 2012
Applicant: BROADCOM CORPORATION (IRVINE, CA)
Inventors: Peyush Agarwal (Milpitas, CA), Yasantha N. Rajakarunanayake (San Ramon, CA)
Application Number: 13/183,613